Jan 31 01:24:56 np0005603621 kernel: Linux version 5.14.0-665.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026
Jan 31 01:24:56 np0005603621 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 31 01:24:56 np0005603621 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 01:24:56 np0005603621 kernel: BIOS-provided physical RAM map:
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 31 01:24:56 np0005603621 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 31 01:24:56 np0005603621 kernel: NX (Execute Disable) protection: active
Jan 31 01:24:56 np0005603621 kernel: APIC: Static calls initialized
Jan 31 01:24:56 np0005603621 kernel: SMBIOS 2.8 present.
Jan 31 01:24:56 np0005603621 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 31 01:24:56 np0005603621 kernel: Hypervisor detected: KVM
Jan 31 01:24:56 np0005603621 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 31 01:24:56 np0005603621 kernel: kvm-clock: using sched offset of 5739200973 cycles
Jan 31 01:24:56 np0005603621 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 31 01:24:56 np0005603621 kernel: tsc: Detected 2800.000 MHz processor
Jan 31 01:24:56 np0005603621 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 31 01:24:56 np0005603621 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 31 01:24:56 np0005603621 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 31 01:24:56 np0005603621 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 31 01:24:56 np0005603621 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 31 01:24:56 np0005603621 kernel: Using GB pages for direct mapping
Jan 31 01:24:56 np0005603621 kernel: RAMDISK: [mem 0x2d410000-0x329fffff]
Jan 31 01:24:56 np0005603621 kernel: ACPI: Early table checksum verification disabled
Jan 31 01:24:56 np0005603621 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 31 01:24:56 np0005603621 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:24:56 np0005603621 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:24:56 np0005603621 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:24:56 np0005603621 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 31 01:24:56 np0005603621 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:24:56 np0005603621 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 31 01:24:56 np0005603621 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 31 01:24:56 np0005603621 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 31 01:24:56 np0005603621 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 31 01:24:56 np0005603621 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 31 01:24:56 np0005603621 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 31 01:24:56 np0005603621 kernel: No NUMA configuration found
Jan 31 01:24:56 np0005603621 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 31 01:24:56 np0005603621 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 31 01:24:56 np0005603621 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 31 01:24:56 np0005603621 kernel: Zone ranges:
Jan 31 01:24:56 np0005603621 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 31 01:24:56 np0005603621 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 31 01:24:56 np0005603621 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 01:24:56 np0005603621 kernel:  Device   empty
Jan 31 01:24:56 np0005603621 kernel: Movable zone start for each node
Jan 31 01:24:56 np0005603621 kernel: Early memory node ranges
Jan 31 01:24:56 np0005603621 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 31 01:24:56 np0005603621 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 31 01:24:56 np0005603621 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 31 01:24:56 np0005603621 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 31 01:24:56 np0005603621 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 31 01:24:56 np0005603621 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 31 01:24:56 np0005603621 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 31 01:24:56 np0005603621 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 31 01:24:56 np0005603621 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 31 01:24:56 np0005603621 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 31 01:24:56 np0005603621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 31 01:24:56 np0005603621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 31 01:24:56 np0005603621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 31 01:24:56 np0005603621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 31 01:24:56 np0005603621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 31 01:24:56 np0005603621 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 31 01:24:56 np0005603621 kernel: TSC deadline timer available
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Max. logical packages:   8
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Max. logical dies:       8
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Max. dies per package:   1
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Max. threads per core:   1
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Num. cores per package:     1
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Num. threads per package:   1
Jan 31 01:24:56 np0005603621 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 31 01:24:56 np0005603621 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 31 01:24:56 np0005603621 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 31 01:24:56 np0005603621 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 31 01:24:56 np0005603621 kernel: Booting paravirtualized kernel on KVM
Jan 31 01:24:56 np0005603621 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 31 01:24:56 np0005603621 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 31 01:24:56 np0005603621 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 31 01:24:56 np0005603621 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 31 01:24:56 np0005603621 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 01:24:56 np0005603621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64", will be passed to user space.
Jan 31 01:24:56 np0005603621 kernel: random: crng init done
Jan 31 01:24:56 np0005603621 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: Fallback order for Node 0: 0 
Jan 31 01:24:56 np0005603621 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 31 01:24:56 np0005603621 kernel: Policy zone: Normal
Jan 31 01:24:56 np0005603621 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 31 01:24:56 np0005603621 kernel: software IO TLB: area num 8.
Jan 31 01:24:56 np0005603621 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 31 01:24:56 np0005603621 kernel: ftrace: allocating 49438 entries in 194 pages
Jan 31 01:24:56 np0005603621 kernel: ftrace: allocated 194 pages with 3 groups
Jan 31 01:24:56 np0005603621 kernel: Dynamic Preempt: voluntary
Jan 31 01:24:56 np0005603621 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 31 01:24:56 np0005603621 kernel: rcu: 	RCU event tracing is enabled.
Jan 31 01:24:56 np0005603621 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 31 01:24:56 np0005603621 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 31 01:24:56 np0005603621 kernel: 	Rude variant of Tasks RCU enabled.
Jan 31 01:24:56 np0005603621 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 31 01:24:56 np0005603621 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 31 01:24:56 np0005603621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 31 01:24:56 np0005603621 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 01:24:56 np0005603621 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 01:24:56 np0005603621 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 31 01:24:56 np0005603621 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 31 01:24:56 np0005603621 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 31 01:24:56 np0005603621 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 31 01:24:56 np0005603621 kernel: Console: colour VGA+ 80x25
Jan 31 01:24:56 np0005603621 kernel: printk: console [ttyS0] enabled
Jan 31 01:24:56 np0005603621 kernel: ACPI: Core revision 20230331
Jan 31 01:24:56 np0005603621 kernel: APIC: Switch to symmetric I/O mode setup
Jan 31 01:24:56 np0005603621 kernel: x2apic enabled
Jan 31 01:24:56 np0005603621 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 31 01:24:56 np0005603621 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 31 01:24:56 np0005603621 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 31 01:24:56 np0005603621 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 31 01:24:56 np0005603621 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 31 01:24:56 np0005603621 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 31 01:24:56 np0005603621 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Jan 31 01:24:56 np0005603621 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 31 01:24:56 np0005603621 kernel: Spectre V2 : Mitigation: Retpolines
Jan 31 01:24:56 np0005603621 kernel: RETBleed: Mitigation: untrained return thunk
Jan 31 01:24:56 np0005603621 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Jan 31 01:24:56 np0005603621 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 31 01:24:56 np0005603621 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 31 01:24:56 np0005603621 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 31 01:24:56 np0005603621 kernel: active return thunk: retbleed_return_thunk
Jan 31 01:24:56 np0005603621 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 31 01:24:56 np0005603621 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 31 01:24:56 np0005603621 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 31 01:24:56 np0005603621 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 31 01:24:56 np0005603621 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 31 01:24:56 np0005603621 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 31 01:24:56 np0005603621 kernel: Freeing SMP alternatives memory: 40K
Jan 31 01:24:56 np0005603621 kernel: pid_max: default: 32768 minimum: 301
Jan 31 01:24:56 np0005603621 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 31 01:24:56 np0005603621 kernel: landlock: Up and running.
Jan 31 01:24:56 np0005603621 kernel: Yama: becoming mindful.
Jan 31 01:24:56 np0005603621 kernel: SELinux:  Initializing.
Jan 31 01:24:56 np0005603621 kernel: LSM support for eBPF active
Jan 31 01:24:56 np0005603621 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 31 01:24:56 np0005603621 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 31 01:24:56 np0005603621 kernel: ... version:                0
Jan 31 01:24:56 np0005603621 kernel: ... bit width:              48
Jan 31 01:24:56 np0005603621 kernel: ... generic registers:      6
Jan 31 01:24:56 np0005603621 kernel: ... value mask:             0000ffffffffffff
Jan 31 01:24:56 np0005603621 kernel: ... max period:             00007fffffffffff
Jan 31 01:24:56 np0005603621 kernel: ... fixed-purpose events:   0
Jan 31 01:24:56 np0005603621 kernel: ... event mask:             000000000000003f
Jan 31 01:24:56 np0005603621 kernel: signal: max sigframe size: 1776
Jan 31 01:24:56 np0005603621 kernel: rcu: Hierarchical SRCU implementation.
Jan 31 01:24:56 np0005603621 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 31 01:24:56 np0005603621 kernel: smp: Bringing up secondary CPUs ...
Jan 31 01:24:56 np0005603621 kernel: smpboot: x86: Booting SMP configuration:
Jan 31 01:24:56 np0005603621 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 31 01:24:56 np0005603621 kernel: smp: Brought up 1 node, 8 CPUs
Jan 31 01:24:56 np0005603621 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 31 01:24:56 np0005603621 kernel: node 0 deferred pages initialised in 22ms
Jan 31 01:24:56 np0005603621 kernel: Memory: 7763824K/8388068K available (16384K kernel code, 5801K rwdata, 13928K rodata, 4196K init, 7192K bss, 618408K reserved, 0K cma-reserved)
Jan 31 01:24:56 np0005603621 kernel: devtmpfs: initialized
Jan 31 01:24:56 np0005603621 kernel: x86/mm: Memory block size: 128MB
Jan 31 01:24:56 np0005603621 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 31 01:24:56 np0005603621 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 31 01:24:56 np0005603621 kernel: pinctrl core: initialized pinctrl subsystem
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 31 01:24:56 np0005603621 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 31 01:24:56 np0005603621 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 31 01:24:56 np0005603621 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 31 01:24:56 np0005603621 kernel: audit: initializing netlink subsys (disabled)
Jan 31 01:24:56 np0005603621 kernel: audit: type=2000 audit(1769840694.383:1): state=initialized audit_enabled=0 res=1
Jan 31 01:24:56 np0005603621 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 31 01:24:56 np0005603621 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 31 01:24:56 np0005603621 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 31 01:24:56 np0005603621 kernel: cpuidle: using governor menu
Jan 31 01:24:56 np0005603621 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 31 01:24:56 np0005603621 kernel: PCI: Using configuration type 1 for base access
Jan 31 01:24:56 np0005603621 kernel: PCI: Using configuration type 1 for extended access
Jan 31 01:24:56 np0005603621 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 31 01:24:56 np0005603621 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 31 01:24:56 np0005603621 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 31 01:24:56 np0005603621 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 31 01:24:56 np0005603621 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 31 01:24:56 np0005603621 kernel: Demotion targets for Node 0: null
Jan 31 01:24:56 np0005603621 kernel: cryptd: max_cpu_qlen set to 1000
Jan 31 01:24:56 np0005603621 kernel: ACPI: Added _OSI(Module Device)
Jan 31 01:24:56 np0005603621 kernel: ACPI: Added _OSI(Processor Device)
Jan 31 01:24:56 np0005603621 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 31 01:24:56 np0005603621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 31 01:24:56 np0005603621 kernel: ACPI: Interpreter enabled
Jan 31 01:24:56 np0005603621 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 31 01:24:56 np0005603621 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 31 01:24:56 np0005603621 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 31 01:24:56 np0005603621 kernel: PCI: Using E820 reservations for host bridge windows
Jan 31 01:24:56 np0005603621 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 31 01:24:56 np0005603621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 31 01:24:56 np0005603621 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [3] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [4] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [5] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [6] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [7] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [8] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [9] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [10] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [11] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [12] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [13] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [14] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [15] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [16] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [17] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [18] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [19] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [20] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [21] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [22] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [23] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [24] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [25] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [26] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [27] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [28] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [29] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [30] registered
Jan 31 01:24:56 np0005603621 kernel: acpiphp: Slot [31] registered
Jan 31 01:24:56 np0005603621 kernel: PCI host bridge to bus 0000:00
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 31 01:24:56 np0005603621 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 31 01:24:56 np0005603621 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 31 01:24:56 np0005603621 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 31 01:24:56 np0005603621 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 31 01:24:56 np0005603621 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 31 01:24:56 np0005603621 kernel: iommu: Default domain type: Translated
Jan 31 01:24:56 np0005603621 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 31 01:24:56 np0005603621 kernel: SCSI subsystem initialized
Jan 31 01:24:56 np0005603621 kernel: ACPI: bus type USB registered
Jan 31 01:24:56 np0005603621 kernel: usbcore: registered new interface driver usbfs
Jan 31 01:24:56 np0005603621 kernel: usbcore: registered new interface driver hub
Jan 31 01:24:56 np0005603621 kernel: usbcore: registered new device driver usb
Jan 31 01:24:56 np0005603621 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 31 01:24:56 np0005603621 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 31 01:24:56 np0005603621 kernel: PTP clock support registered
Jan 31 01:24:56 np0005603621 kernel: EDAC MC: Ver: 3.0.0
Jan 31 01:24:56 np0005603621 kernel: NetLabel: Initializing
Jan 31 01:24:56 np0005603621 kernel: NetLabel:  domain hash size = 128
Jan 31 01:24:56 np0005603621 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 31 01:24:56 np0005603621 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 31 01:24:56 np0005603621 kernel: PCI: Using ACPI for IRQ routing
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 31 01:24:56 np0005603621 kernel: vgaarb: loaded
Jan 31 01:24:56 np0005603621 kernel: clocksource: Switched to clocksource kvm-clock
Jan 31 01:24:56 np0005603621 kernel: VFS: Disk quotas dquot_6.6.0
Jan 31 01:24:56 np0005603621 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 31 01:24:56 np0005603621 kernel: pnp: PnP ACPI init
Jan 31 01:24:56 np0005603621 kernel: pnp: PnP ACPI: found 5 devices
Jan 31 01:24:56 np0005603621 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_INET protocol family
Jan 31 01:24:56 np0005603621 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 31 01:24:56 np0005603621 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_XDP protocol family
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 31 01:24:56 np0005603621 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 31 01:24:56 np0005603621 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 31 01:24:56 np0005603621 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 29684 usecs
Jan 31 01:24:56 np0005603621 kernel: PCI: CLS 0 bytes, default 64
Jan 31 01:24:56 np0005603621 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 31 01:24:56 np0005603621 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 31 01:24:56 np0005603621 kernel: ACPI: bus type thunderbolt registered
Jan 31 01:24:56 np0005603621 kernel: Trying to unpack rootfs image as initramfs...
Jan 31 01:24:56 np0005603621 kernel: Initialise system trusted keyrings
Jan 31 01:24:56 np0005603621 kernel: Key type blacklist registered
Jan 31 01:24:56 np0005603621 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 31 01:24:56 np0005603621 kernel: zbud: loaded
Jan 31 01:24:56 np0005603621 kernel: integrity: Platform Keyring initialized
Jan 31 01:24:56 np0005603621 kernel: integrity: Machine keyring initialized
Jan 31 01:24:56 np0005603621 kernel: Freeing initrd memory: 88000K
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_ALG protocol family
Jan 31 01:24:56 np0005603621 kernel: xor: automatically using best checksumming function   avx
Jan 31 01:24:56 np0005603621 kernel: Key type asymmetric registered
Jan 31 01:24:56 np0005603621 kernel: Asymmetric key parser 'x509' registered
Jan 31 01:24:56 np0005603621 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 31 01:24:56 np0005603621 kernel: io scheduler mq-deadline registered
Jan 31 01:24:56 np0005603621 kernel: io scheduler kyber registered
Jan 31 01:24:56 np0005603621 kernel: io scheduler bfq registered
Jan 31 01:24:56 np0005603621 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 31 01:24:56 np0005603621 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 31 01:24:56 np0005603621 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 31 01:24:56 np0005603621 kernel: ACPI: button: Power Button [PWRF]
Jan 31 01:24:56 np0005603621 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 31 01:24:56 np0005603621 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 31 01:24:56 np0005603621 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 31 01:24:56 np0005603621 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 31 01:24:56 np0005603621 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 31 01:24:56 np0005603621 kernel: Non-volatile memory driver v1.3
Jan 31 01:24:56 np0005603621 kernel: rdac: device handler registered
Jan 31 01:24:56 np0005603621 kernel: hp_sw: device handler registered
Jan 31 01:24:56 np0005603621 kernel: emc: device handler registered
Jan 31 01:24:56 np0005603621 kernel: alua: device handler registered
Jan 31 01:24:56 np0005603621 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 31 01:24:56 np0005603621 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 31 01:24:56 np0005603621 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 31 01:24:56 np0005603621 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 31 01:24:56 np0005603621 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 31 01:24:56 np0005603621 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 31 01:24:56 np0005603621 kernel: usb usb1: Product: UHCI Host Controller
Jan 31 01:24:56 np0005603621 kernel: usb usb1: Manufacturer: Linux 5.14.0-665.el9.x86_64 uhci_hcd
Jan 31 01:24:56 np0005603621 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 31 01:24:56 np0005603621 kernel: hub 1-0:1.0: USB hub found
Jan 31 01:24:56 np0005603621 kernel: hub 1-0:1.0: 2 ports detected
Jan 31 01:24:56 np0005603621 kernel: usbcore: registered new interface driver usbserial_generic
Jan 31 01:24:56 np0005603621 kernel: usbserial: USB Serial support registered for generic
Jan 31 01:24:56 np0005603621 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 31 01:24:56 np0005603621 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 31 01:24:56 np0005603621 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 31 01:24:56 np0005603621 kernel: mousedev: PS/2 mouse device common for all mice
Jan 31 01:24:56 np0005603621 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 31 01:24:56 np0005603621 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 31 01:24:56 np0005603621 kernel: rtc_cmos 00:04: registered as rtc0
Jan 31 01:24:56 np0005603621 kernel: rtc_cmos 00:04: setting system clock to 2026-01-31T06:24:55 UTC (1769840695)
Jan 31 01:24:56 np0005603621 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 31 01:24:56 np0005603621 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 31 01:24:56 np0005603621 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 31 01:24:56 np0005603621 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 31 01:24:56 np0005603621 kernel: usbcore: registered new interface driver usbhid
Jan 31 01:24:56 np0005603621 kernel: usbhid: USB HID core driver
Jan 31 01:24:56 np0005603621 kernel: drop_monitor: Initializing network drop monitor service
Jan 31 01:24:56 np0005603621 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 31 01:24:56 np0005603621 kernel: Initializing XFRM netlink socket
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_INET6 protocol family
Jan 31 01:24:56 np0005603621 kernel: Segment Routing with IPv6
Jan 31 01:24:56 np0005603621 kernel: NET: Registered PF_PACKET protocol family
Jan 31 01:24:56 np0005603621 kernel: mpls_gso: MPLS GSO support
Jan 31 01:24:56 np0005603621 kernel: IPI shorthand broadcast: enabled
Jan 31 01:24:56 np0005603621 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 31 01:24:56 np0005603621 kernel: AES CTR mode by8 optimization enabled
Jan 31 01:24:56 np0005603621 kernel: sched_clock: Marking stable (1501007046, 153845155)->(1860302715, -205450514)
Jan 31 01:24:56 np0005603621 kernel: registered taskstats version 1
Jan 31 01:24:56 np0005603621 kernel: Loading compiled-in X.509 certificates
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 31 01:24:56 np0005603621 kernel: Demotion targets for Node 0: null
Jan 31 01:24:56 np0005603621 kernel: page_owner is disabled
Jan 31 01:24:56 np0005603621 kernel: Key type .fscrypt registered
Jan 31 01:24:56 np0005603621 kernel: Key type fscrypt-provisioning registered
Jan 31 01:24:56 np0005603621 kernel: Key type big_key registered
Jan 31 01:24:56 np0005603621 kernel: Key type encrypted registered
Jan 31 01:24:56 np0005603621 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 31 01:24:56 np0005603621 kernel: Loading compiled-in module X.509 certificates
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8d408fd8f954b245ea1a4231fd25ac56c328a9b5'
Jan 31 01:24:56 np0005603621 kernel: ima: Allocated hash algorithm: sha256
Jan 31 01:24:56 np0005603621 kernel: ima: No architecture policies found
Jan 31 01:24:56 np0005603621 kernel: evm: Initialising EVM extended attributes:
Jan 31 01:24:56 np0005603621 kernel: evm: security.selinux
Jan 31 01:24:56 np0005603621 kernel: evm: security.SMACK64 (disabled)
Jan 31 01:24:56 np0005603621 kernel: evm: security.SMACK64EXEC (disabled)
Jan 31 01:24:56 np0005603621 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 31 01:24:56 np0005603621 kernel: evm: security.SMACK64MMAP (disabled)
Jan 31 01:24:56 np0005603621 kernel: evm: security.apparmor (disabled)
Jan 31 01:24:56 np0005603621 kernel: evm: security.ima
Jan 31 01:24:56 np0005603621 kernel: evm: security.capability
Jan 31 01:24:56 np0005603621 kernel: evm: HMAC attrs: 0x1
Jan 31 01:24:56 np0005603621 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 31 01:24:56 np0005603621 kernel: Running certificate verification RSA selftest
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 31 01:24:56 np0005603621 kernel: Running certificate verification ECDSA selftest
Jan 31 01:24:56 np0005603621 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 31 01:24:56 np0005603621 kernel: clk: Disabling unused clocks
Jan 31 01:24:56 np0005603621 kernel: Freeing unused decrypted memory: 2028K
Jan 31 01:24:56 np0005603621 kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 31 01:24:56 np0005603621 kernel: Write protecting the kernel read-only data: 30720k
Jan 31 01:24:56 np0005603621 kernel: Freeing unused kernel image (rodata/data gap) memory: 408K
Jan 31 01:24:56 np0005603621 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 31 01:24:56 np0005603621 kernel: Run /init as init process
Jan 31 01:24:56 np0005603621 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 01:24:56 np0005603621 systemd: Detected virtualization kvm.
Jan 31 01:24:56 np0005603621 systemd: Detected architecture x86-64.
Jan 31 01:24:56 np0005603621 systemd: Running in initrd.
Jan 31 01:24:56 np0005603621 systemd: No hostname configured, using default hostname.
Jan 31 01:24:56 np0005603621 systemd: Hostname set to <localhost>.
Jan 31 01:24:56 np0005603621 systemd: Initializing machine ID from VM UUID.
Jan 31 01:24:56 np0005603621 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 31 01:24:56 np0005603621 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 31 01:24:56 np0005603621 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 31 01:24:56 np0005603621 kernel: usb 1-1: Manufacturer: QEMU
Jan 31 01:24:56 np0005603621 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 31 01:24:56 np0005603621 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 31 01:24:56 np0005603621 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 31 01:24:56 np0005603621 systemd: Queued start job for default target Initrd Default Target.
Jan 31 01:24:56 np0005603621 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 01:24:56 np0005603621 systemd: Reached target Local Encrypted Volumes.
Jan 31 01:24:56 np0005603621 systemd: Reached target Initrd /usr File System.
Jan 31 01:24:56 np0005603621 systemd: Reached target Local File Systems.
Jan 31 01:24:56 np0005603621 systemd: Reached target Path Units.
Jan 31 01:24:56 np0005603621 systemd: Reached target Slice Units.
Jan 31 01:24:56 np0005603621 systemd: Reached target Swaps.
Jan 31 01:24:56 np0005603621 systemd: Reached target Timer Units.
Jan 31 01:24:56 np0005603621 systemd: Listening on D-Bus System Message Bus Socket.
Jan 31 01:24:56 np0005603621 systemd: Listening on Journal Socket (/dev/log).
Jan 31 01:24:56 np0005603621 systemd: Listening on Journal Socket.
Jan 31 01:24:56 np0005603621 systemd: Listening on udev Control Socket.
Jan 31 01:24:56 np0005603621 systemd: Listening on udev Kernel Socket.
Jan 31 01:24:56 np0005603621 systemd: Reached target Socket Units.
Jan 31 01:24:56 np0005603621 systemd: Starting Create List of Static Device Nodes...
Jan 31 01:24:56 np0005603621 systemd: Starting Journal Service...
Jan 31 01:24:56 np0005603621 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 01:24:56 np0005603621 systemd: Starting Apply Kernel Variables...
Jan 31 01:24:56 np0005603621 systemd: Starting Create System Users...
Jan 31 01:24:56 np0005603621 systemd: Starting Setup Virtual Console...
Jan 31 01:24:56 np0005603621 systemd: Finished Create List of Static Device Nodes.
Jan 31 01:24:56 np0005603621 systemd: Finished Apply Kernel Variables.
Jan 31 01:24:56 np0005603621 systemd: Finished Create System Users.
Jan 31 01:24:56 np0005603621 systemd-journald[304]: Journal started
Jan 31 01:24:56 np0005603621 systemd-journald[304]: Runtime Journal (/run/log/journal/4e4154824f5140cdacc4a0d3058a31bb) is 8.0M, max 153.6M, 145.6M free.
Jan 31 01:24:56 np0005603621 systemd-sysusers[308]: Creating group 'users' with GID 100.
Jan 31 01:24:56 np0005603621 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Jan 31 01:24:56 np0005603621 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 31 01:24:56 np0005603621 systemd: Started Journal Service.
Jan 31 01:24:56 np0005603621 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 01:24:56 np0005603621 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 01:24:56 np0005603621 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 01:24:56 np0005603621 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 01:24:56 np0005603621 systemd[1]: Finished Setup Virtual Console.
Jan 31 01:24:56 np0005603621 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 31 01:24:56 np0005603621 systemd[1]: Starting dracut cmdline hook...
Jan 31 01:24:56 np0005603621 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Jan 31 01:24:56 np0005603621 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-665.el9.x86_64 root=UUID=822f14ea-6e7e-41df-b0d8-fbe282d9ded8 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 31 01:24:56 np0005603621 systemd[1]: Finished dracut cmdline hook.
Jan 31 01:24:56 np0005603621 systemd[1]: Starting dracut pre-udev hook...
Jan 31 01:24:56 np0005603621 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 31 01:24:56 np0005603621 kernel: device-mapper: uevent: version 1.0.3
Jan 31 01:24:56 np0005603621 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 31 01:24:56 np0005603621 kernel: RPC: Registered named UNIX socket transport module.
Jan 31 01:24:56 np0005603621 kernel: RPC: Registered udp transport module.
Jan 31 01:24:56 np0005603621 kernel: RPC: Registered tcp transport module.
Jan 31 01:24:56 np0005603621 kernel: RPC: Registered tcp-with-tls transport module.
Jan 31 01:24:56 np0005603621 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 31 01:24:56 np0005603621 rpc.statd[441]: Version 2.5.4 starting
Jan 31 01:24:56 np0005603621 rpc.statd[441]: Initializing NSM state
Jan 31 01:24:56 np0005603621 rpc.idmapd[446]: Setting log level to 0
Jan 31 01:24:56 np0005603621 systemd[1]: Finished dracut pre-udev hook.
Jan 31 01:24:56 np0005603621 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 01:24:56 np0005603621 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 01:24:56 np0005603621 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 01:24:56 np0005603621 systemd[1]: Starting dracut pre-trigger hook...
Jan 31 01:24:57 np0005603621 systemd[1]: Finished dracut pre-trigger hook.
Jan 31 01:24:57 np0005603621 systemd[1]: Starting Coldplug All udev Devices...
Jan 31 01:24:57 np0005603621 systemd[1]: Created slice Slice /system/modprobe.
Jan 31 01:24:57 np0005603621 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 01:24:57 np0005603621 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 01:24:57 np0005603621 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 01:24:57 np0005603621 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 01:24:57 np0005603621 systemd[1]: Mounting Kernel Configuration File System...
Jan 31 01:24:57 np0005603621 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target Network.
Jan 31 01:24:57 np0005603621 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 31 01:24:57 np0005603621 systemd[1]: Starting dracut initqueue hook...
Jan 31 01:24:57 np0005603621 systemd[1]: Mounted Kernel Configuration File System.
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target System Initialization.
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target Basic System.
Jan 31 01:24:57 np0005603621 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 31 01:24:57 np0005603621 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 31 01:24:57 np0005603621 kernel: vda: vda1
Jan 31 01:24:57 np0005603621 kernel: scsi host0: ata_piix
Jan 31 01:24:57 np0005603621 kernel: scsi host1: ata_piix
Jan 31 01:24:57 np0005603621 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 31 01:24:57 np0005603621 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 31 01:24:57 np0005603621 systemd[1]: Found device /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target Initrd Root Device.
Jan 31 01:24:57 np0005603621 kernel: ata1: found unknown device (class 0)
Jan 31 01:24:57 np0005603621 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 31 01:24:57 np0005603621 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 31 01:24:57 np0005603621 systemd-udevd[480]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:24:57 np0005603621 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 31 01:24:57 np0005603621 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 31 01:24:57 np0005603621 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 31 01:24:57 np0005603621 systemd[1]: Finished dracut initqueue hook.
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 31 01:24:57 np0005603621 systemd[1]: Reached target Remote File Systems.
Jan 31 01:24:57 np0005603621 systemd[1]: Starting dracut pre-mount hook...
Jan 31 01:24:57 np0005603621 systemd[1]: Finished dracut pre-mount hook.
Jan 31 01:24:57 np0005603621 systemd[1]: Starting File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8...
Jan 31 01:24:57 np0005603621 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 31 01:24:57 np0005603621 systemd[1]: Finished File System Check on /dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8.
Jan 31 01:24:57 np0005603621 systemd[1]: Mounting /sysroot...
Jan 31 01:24:57 np0005603621 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 31 01:24:57 np0005603621 kernel: XFS (vda1): Mounting V5 Filesystem 822f14ea-6e7e-41df-b0d8-fbe282d9ded8
Jan 31 01:24:58 np0005603621 kernel: XFS (vda1): Ending clean mount
Jan 31 01:24:58 np0005603621 systemd[1]: Mounted /sysroot.
Jan 31 01:24:58 np0005603621 systemd[1]: Reached target Initrd Root File System.
Jan 31 01:24:58 np0005603621 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 31 01:24:58 np0005603621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 31 01:24:58 np0005603621 systemd[1]: Reached target Initrd File Systems.
Jan 31 01:24:58 np0005603621 systemd[1]: Reached target Initrd Default Target.
Jan 31 01:24:58 np0005603621 systemd[1]: Starting dracut mount hook...
Jan 31 01:24:58 np0005603621 systemd[1]: Finished dracut mount hook.
Jan 31 01:24:58 np0005603621 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 31 01:24:58 np0005603621 rpc.idmapd[446]: exiting on signal 15
Jan 31 01:24:58 np0005603621 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 31 01:24:58 np0005603621 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Network.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Timer Units.
Jan 31 01:24:58 np0005603621 systemd[1]: dbus.socket: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Initrd Default Target.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Basic System.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Initrd Root Device.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Initrd /usr File System.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Path Units.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Remote File Systems.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Slice Units.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Socket Units.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target System Initialization.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Local File Systems.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Swaps.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut mount hook.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut pre-mount hook.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut initqueue hook.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Coldplug All udev Devices.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut pre-trigger hook.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Setup Virtual Console.
Jan 31 01:24:58 np0005603621 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Closed udev Control Socket.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Closed udev Kernel Socket.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut pre-udev hook.
Jan 31 01:24:58 np0005603621 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped dracut cmdline hook.
Jan 31 01:24:58 np0005603621 systemd[1]: Starting Cleanup udev Database...
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 31 01:24:58 np0005603621 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 31 01:24:58 np0005603621 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Stopped Create System Users.
Jan 31 01:24:58 np0005603621 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 31 01:24:58 np0005603621 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 31 01:24:58 np0005603621 systemd[1]: Finished Cleanup udev Database.
Jan 31 01:24:58 np0005603621 systemd[1]: Reached target Switch Root.
Jan 31 01:24:58 np0005603621 systemd[1]: Starting Switch Root...
Jan 31 01:24:58 np0005603621 systemd[1]: Switching root.
Jan 31 01:24:58 np0005603621 systemd-journald[304]: Journal stopped
Jan 31 01:24:59 np0005603621 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 31 01:24:59 np0005603621 kernel: audit: type=1404 audit(1769840698.496:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:24:59 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:24:59 np0005603621 kernel: audit: type=1403 audit(1769840698.612:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 31 01:24:59 np0005603621 systemd: Successfully loaded SELinux policy in 120.772ms.
Jan 31 01:24:59 np0005603621 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 34.690ms.
Jan 31 01:24:59 np0005603621 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 31 01:24:59 np0005603621 systemd: Detected virtualization kvm.
Jan 31 01:24:59 np0005603621 systemd: Detected architecture x86-64.
Jan 31 01:24:59 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:24:59 np0005603621 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd: Stopped Switch Root.
Jan 31 01:24:59 np0005603621 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 31 01:24:59 np0005603621 systemd: Created slice Slice /system/getty.
Jan 31 01:24:59 np0005603621 systemd: Created slice Slice /system/serial-getty.
Jan 31 01:24:59 np0005603621 systemd: Created slice Slice /system/sshd-keygen.
Jan 31 01:24:59 np0005603621 systemd: Created slice User and Session Slice.
Jan 31 01:24:59 np0005603621 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 31 01:24:59 np0005603621 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 31 01:24:59 np0005603621 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 31 01:24:59 np0005603621 systemd: Reached target Local Encrypted Volumes.
Jan 31 01:24:59 np0005603621 systemd: Stopped target Switch Root.
Jan 31 01:24:59 np0005603621 systemd: Stopped target Initrd File Systems.
Jan 31 01:24:59 np0005603621 systemd: Stopped target Initrd Root File System.
Jan 31 01:24:59 np0005603621 systemd: Reached target Local Integrity Protected Volumes.
Jan 31 01:24:59 np0005603621 systemd: Reached target Path Units.
Jan 31 01:24:59 np0005603621 systemd: Reached target rpc_pipefs.target.
Jan 31 01:24:59 np0005603621 systemd: Reached target Slice Units.
Jan 31 01:24:59 np0005603621 systemd: Reached target Swaps.
Jan 31 01:24:59 np0005603621 systemd: Reached target Local Verity Protected Volumes.
Jan 31 01:24:59 np0005603621 systemd: Listening on RPCbind Server Activation Socket.
Jan 31 01:24:59 np0005603621 systemd: Reached target RPC Port Mapper.
Jan 31 01:24:59 np0005603621 systemd: Listening on Process Core Dump Socket.
Jan 31 01:24:59 np0005603621 systemd: Listening on initctl Compatibility Named Pipe.
Jan 31 01:24:59 np0005603621 systemd: Listening on udev Control Socket.
Jan 31 01:24:59 np0005603621 systemd: Listening on udev Kernel Socket.
Jan 31 01:24:59 np0005603621 systemd: Mounting Huge Pages File System...
Jan 31 01:24:59 np0005603621 systemd: Mounting POSIX Message Queue File System...
Jan 31 01:24:59 np0005603621 systemd: Mounting Kernel Debug File System...
Jan 31 01:24:59 np0005603621 systemd: Mounting Kernel Trace File System...
Jan 31 01:24:59 np0005603621 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 01:24:59 np0005603621 systemd: Starting Create List of Static Device Nodes...
Jan 31 01:24:59 np0005603621 systemd: Starting Load Kernel Module configfs...
Jan 31 01:24:59 np0005603621 systemd: Starting Load Kernel Module drm...
Jan 31 01:24:59 np0005603621 systemd: Starting Load Kernel Module efi_pstore...
Jan 31 01:24:59 np0005603621 systemd: Starting Load Kernel Module fuse...
Jan 31 01:24:59 np0005603621 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 31 01:24:59 np0005603621 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd: Stopped File System Check on Root Device.
Jan 31 01:24:59 np0005603621 systemd: Stopped Journal Service.
Jan 31 01:24:59 np0005603621 kernel: fuse: init (API version 7.37)
Jan 31 01:24:59 np0005603621 systemd: Starting Journal Service...
Jan 31 01:24:59 np0005603621 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 31 01:24:59 np0005603621 systemd: Starting Generate network units from Kernel command line...
Jan 31 01:24:59 np0005603621 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 01:24:59 np0005603621 systemd: Starting Remount Root and Kernel File Systems...
Jan 31 01:24:59 np0005603621 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 31 01:24:59 np0005603621 systemd: Starting Apply Kernel Variables...
Jan 31 01:24:59 np0005603621 systemd: Starting Coldplug All udev Devices...
Jan 31 01:24:59 np0005603621 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 31 01:24:59 np0005603621 systemd: Mounted Huge Pages File System.
Jan 31 01:24:59 np0005603621 systemd: Mounted POSIX Message Queue File System.
Jan 31 01:24:59 np0005603621 systemd: Mounted Kernel Debug File System.
Jan 31 01:24:59 np0005603621 systemd: Mounted Kernel Trace File System.
Jan 31 01:24:59 np0005603621 systemd: Finished Create List of Static Device Nodes.
Jan 31 01:24:59 np0005603621 systemd-journald[674]: Journal started
Jan 31 01:24:59 np0005603621 systemd-journald[674]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 01:24:59 np0005603621 systemd[1]: Queued start job for default target Multi-User System.
Jan 31 01:24:59 np0005603621 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd: Started Journal Service.
Jan 31 01:24:59 np0005603621 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 01:24:59 np0005603621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 31 01:24:59 np0005603621 kernel: ACPI: bus type drm_connector registered
Jan 31 01:24:59 np0005603621 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Load Kernel Module fuse.
Jan 31 01:24:59 np0005603621 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Load Kernel Module drm.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Generate network units from Kernel command line.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Apply Kernel Variables.
Jan 31 01:24:59 np0005603621 systemd[1]: Mounting FUSE Control File System...
Jan 31 01:24:59 np0005603621 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Rebuild Hardware Database...
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 31 01:24:59 np0005603621 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Load/Save OS Random Seed...
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Create System Users...
Jan 31 01:24:59 np0005603621 systemd[1]: Mounted FUSE Control File System.
Jan 31 01:24:59 np0005603621 systemd-journald[674]: Runtime Journal (/run/log/journal/bf0bc0bb03de29b24cba1cc9599cf5d0) is 8.0M, max 153.6M, 145.6M free.
Jan 31 01:24:59 np0005603621 systemd-journald[674]: Received client request to flush runtime journal.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Coldplug All udev Devices.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Load/Save OS Random Seed.
Jan 31 01:24:59 np0005603621 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Create System Users.
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 31 01:24:59 np0005603621 systemd[1]: Reached target Preparation for Local File Systems.
Jan 31 01:24:59 np0005603621 systemd[1]: Reached target Local File Systems.
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 31 01:24:59 np0005603621 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 31 01:24:59 np0005603621 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 31 01:24:59 np0005603621 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Automatic Boot Loader Update...
Jan 31 01:24:59 np0005603621 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Create Volatile Files and Directories...
Jan 31 01:24:59 np0005603621 bootctl[691]: Couldn't find EFI system partition, skipping.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Automatic Boot Loader Update.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Create Volatile Files and Directories.
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Security Auditing Service...
Jan 31 01:24:59 np0005603621 systemd[1]: Starting RPC Bind...
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Rebuild Journal Catalog...
Jan 31 01:24:59 np0005603621 auditd[697]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 31 01:24:59 np0005603621 auditd[697]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Rebuild Journal Catalog.
Jan 31 01:24:59 np0005603621 systemd[1]: Started RPC Bind.
Jan 31 01:24:59 np0005603621 augenrules[702]: /sbin/augenrules: No change
Jan 31 01:24:59 np0005603621 augenrules[717]: No rules
Jan 31 01:24:59 np0005603621 augenrules[717]: enabled 1
Jan 31 01:24:59 np0005603621 augenrules[717]: failure 1
Jan 31 01:24:59 np0005603621 augenrules[717]: pid 697
Jan 31 01:24:59 np0005603621 augenrules[717]: rate_limit 0
Jan 31 01:24:59 np0005603621 augenrules[717]: backlog_limit 8192
Jan 31 01:24:59 np0005603621 augenrules[717]: lost 0
Jan 31 01:24:59 np0005603621 augenrules[717]: backlog 3
Jan 31 01:24:59 np0005603621 augenrules[717]: backlog_wait_time 60000
Jan 31 01:24:59 np0005603621 augenrules[717]: backlog_wait_time_actual 0
Jan 31 01:24:59 np0005603621 systemd[1]: Started Security Auditing Service.
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 31 01:24:59 np0005603621 systemd[1]: Finished Rebuild Hardware Database.
Jan 31 01:24:59 np0005603621 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 31 01:24:59 np0005603621 systemd-udevd[725]: Using default interface naming scheme 'rhel-9.0'.
Jan 31 01:25:00 np0005603621 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 31 01:25:00 np0005603621 systemd[1]: Starting Load Kernel Module configfs...
Jan 31 01:25:00 np0005603621 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 31 01:25:00 np0005603621 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 31 01:25:00 np0005603621 systemd[1]: Finished Load Kernel Module configfs.
Jan 31 01:25:00 np0005603621 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 31 01:25:00 np0005603621 systemd[1]: Starting Update is Completed...
Jan 31 01:25:00 np0005603621 systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:25:00 np0005603621 systemd[1]: Finished Update is Completed.
Jan 31 01:25:00 np0005603621 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 31 01:25:00 np0005603621 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 31 01:25:00 np0005603621 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 31 01:25:00 np0005603621 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 31 01:25:00 np0005603621 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 31 01:25:00 np0005603621 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 31 01:25:00 np0005603621 kernel: Console: switching to colour dummy device 80x25
Jan 31 01:25:00 np0005603621 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 31 01:25:00 np0005603621 kernel: [drm] features: -context_init
Jan 31 01:25:00 np0005603621 kernel: [drm] number of scanouts: 1
Jan 31 01:25:00 np0005603621 kernel: [drm] number of cap sets: 0
Jan 31 01:25:00 np0005603621 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 31 01:25:00 np0005603621 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 31 01:25:00 np0005603621 kernel: Console: switching to colour frame buffer device 128x48
Jan 31 01:25:00 np0005603621 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 31 01:25:00 np0005603621 kernel: kvm_amd: TSC scaling supported
Jan 31 01:25:00 np0005603621 kernel: kvm_amd: Nested Virtualization enabled
Jan 31 01:25:00 np0005603621 kernel: kvm_amd: Nested Paging enabled
Jan 31 01:25:00 np0005603621 kernel: kvm_amd: LBR virtualization supported
Jan 31 01:25:00 np0005603621 systemd[1]: Reached target System Initialization.
Jan 31 01:25:00 np0005603621 systemd[1]: Started dnf makecache --timer.
Jan 31 01:25:00 np0005603621 systemd[1]: Started Daily rotation of log files.
Jan 31 01:25:00 np0005603621 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 31 01:25:00 np0005603621 systemd[1]: Reached target Timer Units.
Jan 31 01:25:00 np0005603621 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 31 01:25:00 np0005603621 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 31 01:25:00 np0005603621 systemd[1]: Reached target Socket Units.
Jan 31 01:25:00 np0005603621 systemd[1]: Starting D-Bus System Message Bus...
Jan 31 01:25:00 np0005603621 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 01:25:00 np0005603621 systemd[1]: Started D-Bus System Message Bus.
Jan 31 01:25:00 np0005603621 systemd[1]: Reached target Basic System.
Jan 31 01:25:00 np0005603621 dbus-broker-lau[791]: Ready
Jan 31 01:25:00 np0005603621 systemd[1]: Starting NTP client/server...
Jan 31 01:25:00 np0005603621 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 31 01:25:00 np0005603621 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 31 01:25:00 np0005603621 systemd[1]: Starting IPv4 firewall with iptables...
Jan 31 01:25:00 np0005603621 systemd[1]: Started irqbalance daemon.
Jan 31 01:25:00 np0005603621 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 31 01:25:00 np0005603621 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 01:25:00 np0005603621 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 01:25:00 np0005603621 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 01:25:00 np0005603621 systemd[1]: Reached target sshd-keygen.target.
Jan 31 01:25:00 np0005603621 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 31 01:25:00 np0005603621 systemd[1]: Reached target User and Group Name Lookups.
Jan 31 01:25:00 np0005603621 systemd[1]: Starting User Login Management...
Jan 31 01:25:00 np0005603621 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 31 01:25:00 np0005603621 systemd-logind[818]: New seat seat0.
Jan 31 01:25:00 np0005603621 systemd-logind[818]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 01:25:00 np0005603621 systemd-logind[818]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 01:25:00 np0005603621 systemd[1]: Started User Login Management.
Jan 31 01:25:00 np0005603621 chronyd[827]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 01:25:00 np0005603621 chronyd[827]: Loaded 0 symmetric keys
Jan 31 01:25:00 np0005603621 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 31 01:25:00 np0005603621 systemd[1]: Started NTP client/server.
Jan 31 01:25:00 np0005603621 chronyd[827]: Using right/UTC timezone to obtain leap second data
Jan 31 01:25:00 np0005603621 chronyd[827]: Loaded seccomp filter (level 2)
Jan 31 01:25:00 np0005603621 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 31 01:25:00 np0005603621 iptables.init[812]: iptables: Applying firewall rules: [  OK  ]
Jan 31 01:25:00 np0005603621 systemd[1]: Finished IPv4 firewall with iptables.
Jan 31 01:25:01 np0005603621 cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sat, 31 Jan 2026 06:25:00 +0000. Up 6.95 seconds.
Jan 31 01:25:01 np0005603621 systemd[1]: run-cloud\x2dinit-tmp-tmps5qi4m9z.mount: Deactivated successfully.
Jan 31 01:25:01 np0005603621 systemd[1]: Starting Hostname Service...
Jan 31 01:25:01 np0005603621 systemd[1]: Started Hostname Service.
Jan 31 01:25:01 np0005603621 systemd-hostnamed[851]: Hostname set to <np0005603621.novalocal> (static)
Jan 31 01:25:01 np0005603621 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 31 01:25:01 np0005603621 systemd[1]: Reached target Preparation for Network.
Jan 31 01:25:01 np0005603621 systemd[1]: Starting Network Manager...
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5208] NetworkManager (version 1.54.3-2.el9) is starting... (boot:3db7ea38-6dd3-43e6-a29a-06c99dfe6817)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5215] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5398] manager[0x56310120f000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5442] hostname: hostname: using hostnamed
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5442] hostname: static hostname changed from (none) to "np0005603621.novalocal"
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5446] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5538] manager[0x56310120f000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5539] manager[0x56310120f000]: rfkill: WWAN hardware radio set enabled
Jan 31 01:25:01 np0005603621 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5621] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5621] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5622] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5622] manager: Networking is enabled by state file
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5626] settings: Loaded settings plugin: keyfile (internal)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5661] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5682] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5697] dhcp: init: Using DHCP client 'internal'
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5701] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5714] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5726] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5737] device (lo): Activation: starting connection 'lo' (73e17779-4974-4880-8fe6-9b8e22df0812)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5746] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5750] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5773] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5779] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5782] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5785] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5787] device (eth0): carrier: link connected
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5790] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5802] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5807] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:25:01 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5813] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5814] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5816] manager: NetworkManager state is now CONNECTING
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5818] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5829] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5832] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:25:01 np0005603621 systemd[1]: Started Network Manager.
Jan 31 01:25:01 np0005603621 systemd[1]: Reached target Network.
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5867] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5875] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.5896] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:25:01 np0005603621 systemd[1]: Starting Network Manager Wait Online...
Jan 31 01:25:01 np0005603621 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 31 01:25:01 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6063] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6066] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6067] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6073] device (lo): Activation: successful, device activated.
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6079] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6081] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6084] device (eth0): Activation: successful, device activated.
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6091] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 01:25:01 np0005603621 NetworkManager[855]: <info>  [1769840701.6094] manager: startup complete
Jan 31 01:25:01 np0005603621 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 31 01:25:01 np0005603621 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 31 01:25:01 np0005603621 systemd[1]: Reached target NFS client services.
Jan 31 01:25:01 np0005603621 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 31 01:25:01 np0005603621 systemd[1]: Reached target Remote File Systems.
Jan 31 01:25:01 np0005603621 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 31 01:25:01 np0005603621 systemd[1]: Finished Network Manager Wait Online.
Jan 31 01:25:01 np0005603621 systemd[1]: Starting Cloud-init: Network Stage...
Jan 31 01:25:01 np0005603621 cloud-init[915]: Cloud-init v. 24.4-8.el9 running 'init' at Sat, 31 Jan 2026 06:25:01 +0000. Up 7.88 seconds.
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |  eth0  | True |         38.102.83.98         | 255.255.255.0 | global | fa:16:3e:36:65:10 |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |  eth0  | True | fe80::f816:3eff:fe36:6510/64 |       .       |  link  | fa:16:3e:36:65:10 |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 31 01:25:01 np0005603621 cloud-init[915]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 31 01:25:02 np0005603621 cloud-init[915]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 31 01:25:03 np0005603621 cloud-init[915]: Generating public/private rsa key pair.
Jan 31 01:25:03 np0005603621 cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 31 01:25:03 np0005603621 cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 31 01:25:03 np0005603621 cloud-init[915]: The key fingerprint is:
Jan 31 01:25:03 np0005603621 cloud-init[915]: SHA256:dIEPdhf963AKCcNLGoGVI7Ik/o4+fUWJcrobDJnbEZk root@np0005603621.novalocal
Jan 31 01:25:03 np0005603621 cloud-init[915]: The key's randomart image is:
Jan 31 01:25:03 np0005603621 cloud-init[915]: +---[RSA 3072]----+
Jan 31 01:25:03 np0005603621 cloud-init[915]: |      o.... .o   |
Jan 31 01:25:03 np0005603621 cloud-init[915]: | . ooo ++ ... .  |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |. oEo o.*+..   . |
Jan 31 01:25:03 np0005603621 cloud-init[915]: | .oo.o = *.     .|
Jan 31 01:25:03 np0005603621 cloud-init[915]: | +..+ . S + .   .|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |  =o.  o . o . o |
Jan 31 01:25:03 np0005603621 cloud-init[915]: | .++. .     . =  |
Jan 31 01:25:03 np0005603621 cloud-init[915]: | o +..       . . |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |....o            |
Jan 31 01:25:03 np0005603621 cloud-init[915]: +----[SHA256]-----+
Jan 31 01:25:03 np0005603621 cloud-init[915]: Generating public/private ecdsa key pair.
Jan 31 01:25:03 np0005603621 cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 31 01:25:03 np0005603621 cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 31 01:25:03 np0005603621 cloud-init[915]: The key fingerprint is:
Jan 31 01:25:03 np0005603621 cloud-init[915]: SHA256:75GCoZKmb67hzD43A7RPJEJ//DWdM8hPQbBTa8IHSRs root@np0005603621.novalocal
Jan 31 01:25:03 np0005603621 cloud-init[915]: The key's randomart image is:
Jan 31 01:25:03 np0005603621 cloud-init[915]: +---[ECDSA 256]---+
Jan 31 01:25:03 np0005603621 cloud-init[915]: |        .E+o     |
Jan 31 01:25:03 np0005603621 cloud-init[915]: | .      ..*..    |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |. . .   .*o+o    |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |.o o o   ==*     |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |o + . o S + o    |
Jan 31 01:25:03 np0005603621 cloud-init[915]: | o o . + . o     |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |. B . . . +      |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |+=.*     o .     |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |+X= o     .      |
Jan 31 01:25:03 np0005603621 cloud-init[915]: +----[SHA256]-----+
Jan 31 01:25:03 np0005603621 cloud-init[915]: Generating public/private ed25519 key pair.
Jan 31 01:25:03 np0005603621 cloud-init[915]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 31 01:25:03 np0005603621 cloud-init[915]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 31 01:25:03 np0005603621 cloud-init[915]: The key fingerprint is:
Jan 31 01:25:03 np0005603621 cloud-init[915]: SHA256:nkHTnjnDTtxf+EsNojz6nfXfodfnH0NWRKrAZl9sRcE root@np0005603621.novalocal
Jan 31 01:25:03 np0005603621 cloud-init[915]: The key's randomart image is:
Jan 31 01:25:03 np0005603621 cloud-init[915]: +--[ED25519 256]--+
Jan 31 01:25:03 np0005603621 cloud-init[915]: |              .+*|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |         o   . E |
Jan 31 01:25:03 np0005603621 cloud-init[915]: |        o *   = .|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |       . B * + ..|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |        S X = oo.|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |       . * + oo+.|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |        o =   o=+|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |         . o oooO|
Jan 31 01:25:03 np0005603621 cloud-init[915]: |        ... o..oX|
Jan 31 01:25:03 np0005603621 cloud-init[915]: +----[SHA256]-----+
Jan 31 01:25:03 np0005603621 systemd[1]: Finished Cloud-init: Network Stage.
Jan 31 01:25:03 np0005603621 systemd[1]: Reached target Cloud-config availability.
Jan 31 01:25:03 np0005603621 systemd[1]: Reached target Network is Online.
Jan 31 01:25:03 np0005603621 systemd[1]: Starting Cloud-init: Config Stage...
Jan 31 01:25:03 np0005603621 systemd[1]: Starting Crash recovery kernel arming...
Jan 31 01:25:03 np0005603621 systemd[1]: Starting Notify NFS peers of a restart...
Jan 31 01:25:03 np0005603621 systemd[1]: Starting System Logging Service...
Jan 31 01:25:03 np0005603621 systemd[1]: Starting OpenSSH server daemon...
Jan 31 01:25:03 np0005603621 sm-notify[997]: Version 2.5.4 starting
Jan 31 01:25:03 np0005603621 systemd[1]: Starting Permit User Sessions...
Jan 31 01:25:03 np0005603621 systemd[1]: Started Notify NFS peers of a restart.
Jan 31 01:25:03 np0005603621 systemd[1]: Started OpenSSH server daemon.
Jan 31 01:25:03 np0005603621 systemd[1]: Finished Permit User Sessions.
Jan 31 01:25:03 np0005603621 systemd[1]: Started Command Scheduler.
Jan 31 01:25:03 np0005603621 systemd[1]: Started Getty on tty1.
Jan 31 01:25:03 np0005603621 systemd[1]: Started Serial Getty on ttyS0.
Jan 31 01:25:03 np0005603621 systemd[1]: Reached target Login Prompts.
Jan 31 01:25:03 np0005603621 rsyslogd[998]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="998" x-info="https://www.rsyslog.com"] start
Jan 31 01:25:03 np0005603621 rsyslogd[998]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 31 01:25:03 np0005603621 systemd[1]: Started System Logging Service.
Jan 31 01:25:03 np0005603621 systemd[1]: Reached target Multi-User System.
Jan 31 01:25:03 np0005603621 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 31 01:25:03 np0005603621 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 31 01:25:03 np0005603621 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 31 01:25:03 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 01:25:03 np0005603621 kdumpctl[1007]: kdump: No kdump initial ramdisk found.
Jan 31 01:25:03 np0005603621 kdumpctl[1007]: kdump: Rebuilding /boot/initramfs-5.14.0-665.el9.x86_64kdump.img
Jan 31 01:25:03 np0005603621 cloud-init[1144]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sat, 31 Jan 2026 06:25:03 +0000. Up 9.32 seconds.
Jan 31 01:25:03 np0005603621 systemd[1]: Finished Cloud-init: Config Stage.
Jan 31 01:25:03 np0005603621 systemd[1]: Starting Cloud-init: Final Stage...
Jan 31 01:25:03 np0005603621 dracut[1258]: dracut-057-102.git20250818.el9
Jan 31 01:25:03 np0005603621 dracut[1260]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/822f14ea-6e7e-41df-b0d8-fbe282d9ded8 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-665.el9.x86_64kdump.img 5.14.0-665.el9.x86_64
Jan 31 01:25:03 np0005603621 cloud-init[1319]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sat, 31 Jan 2026 06:25:03 +0000. Up 9.66 seconds.
Jan 31 01:25:03 np0005603621 cloud-init[1330]: #############################################################
Jan 31 01:25:03 np0005603621 cloud-init[1331]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 31 01:25:03 np0005603621 cloud-init[1333]: 256 SHA256:75GCoZKmb67hzD43A7RPJEJ//DWdM8hPQbBTa8IHSRs root@np0005603621.novalocal (ECDSA)
Jan 31 01:25:03 np0005603621 cloud-init[1335]: 256 SHA256:nkHTnjnDTtxf+EsNojz6nfXfodfnH0NWRKrAZl9sRcE root@np0005603621.novalocal (ED25519)
Jan 31 01:25:03 np0005603621 cloud-init[1337]: 3072 SHA256:dIEPdhf963AKCcNLGoGVI7Ik/o4+fUWJcrobDJnbEZk root@np0005603621.novalocal (RSA)
Jan 31 01:25:03 np0005603621 cloud-init[1338]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 31 01:25:03 np0005603621 cloud-init[1339]: #############################################################
Jan 31 01:25:03 np0005603621 cloud-init[1319]: Cloud-init v. 24.4-8.el9 finished at Sat, 31 Jan 2026 06:25:03 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.89 seconds
Jan 31 01:25:03 np0005603621 systemd[1]: Finished Cloud-init: Final Stage.
Jan 31 01:25:03 np0005603621 systemd[1]: Reached target Cloud-init target.
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 01:25:04 np0005603621 dracut[1260]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: memstrack is not available
Jan 31 01:25:05 np0005603621 dracut[1260]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 31 01:25:05 np0005603621 dracut[1260]: memstrack is not available
Jan 31 01:25:05 np0005603621 dracut[1260]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 31 01:25:05 np0005603621 dracut[1260]: *** Including module: systemd ***
Jan 31 01:25:05 np0005603621 dracut[1260]: *** Including module: fips ***
Jan 31 01:25:05 np0005603621 dracut[1260]: *** Including module: systemd-initrd ***
Jan 31 01:25:05 np0005603621 dracut[1260]: *** Including module: i18n ***
Jan 31 01:25:05 np0005603621 dracut[1260]: *** Including module: drm ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: prefixdevname ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: kernel-modules ***
Jan 31 01:25:06 np0005603621 kernel: block vda: the capability attribute has been deprecated.
Jan 31 01:25:06 np0005603621 chronyd[827]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Jan 31 01:25:06 np0005603621 chronyd[827]: System clock TAI offset set to 37 seconds
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: kernel-modules-extra ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: qemu ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: fstab-sys ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: rootfs-block ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: terminfo ***
Jan 31 01:25:06 np0005603621 dracut[1260]: *** Including module: udev-rules ***
Jan 31 01:25:07 np0005603621 dracut[1260]: Skipping udev rule: 91-permissions.rules
Jan 31 01:25:07 np0005603621 dracut[1260]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: virtiofs ***
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: dracut-systemd ***
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: usrmount ***
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: base ***
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: fs-lib ***
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: kdumpbase ***
Jan 31 01:25:07 np0005603621 dracut[1260]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 31 01:25:07 np0005603621 dracut[1260]:  microcode_ctl module: mangling fw_dir
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel" is ignored
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 31 01:25:07 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 31 01:25:08 np0005603621 dracut[1260]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Including module: openssl ***
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Including module: shutdown ***
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Including module: squash ***
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Including modules done ***
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Installing kernel module dependencies ***
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Installing kernel module dependencies done ***
Jan 31 01:25:08 np0005603621 dracut[1260]: *** Resolving executable dependencies ***
Jan 31 01:25:10 np0005603621 dracut[1260]: *** Resolving executable dependencies done ***
Jan 31 01:25:10 np0005603621 dracut[1260]: *** Generating early-microcode cpio image ***
Jan 31 01:25:10 np0005603621 dracut[1260]: *** Store current command line parameters ***
Jan 31 01:25:10 np0005603621 dracut[1260]: Stored kernel commandline:
Jan 31 01:25:10 np0005603621 dracut[1260]: No dracut internal kernel commandline stored in the initramfs
Jan 31 01:25:10 np0005603621 dracut[1260]: *** Install squash loader ***
Jan 31 01:25:10 np0005603621 dracut[1260]: *** Squashing the files inside the initramfs ***
Jan 31 01:25:11 np0005603621 irqbalance[813]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 31 01:25:11 np0005603621 irqbalance[813]: IRQ 25 affinity is now unmanaged
Jan 31 01:25:11 np0005603621 irqbalance[813]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 31 01:25:11 np0005603621 irqbalance[813]: IRQ 31 affinity is now unmanaged
Jan 31 01:25:11 np0005603621 irqbalance[813]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 31 01:25:11 np0005603621 irqbalance[813]: IRQ 28 affinity is now unmanaged
Jan 31 01:25:11 np0005603621 irqbalance[813]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 31 01:25:11 np0005603621 irqbalance[813]: IRQ 32 affinity is now unmanaged
Jan 31 01:25:11 np0005603621 irqbalance[813]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 31 01:25:11 np0005603621 irqbalance[813]: IRQ 30 affinity is now unmanaged
Jan 31 01:25:11 np0005603621 irqbalance[813]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 31 01:25:11 np0005603621 irqbalance[813]: IRQ 29 affinity is now unmanaged
Jan 31 01:25:11 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:25:11 np0005603621 dracut[1260]: *** Squashing the files inside the initramfs done ***
Jan 31 01:25:11 np0005603621 dracut[1260]: *** Creating image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' ***
Jan 31 01:25:11 np0005603621 dracut[1260]: *** Hardlinking files ***
Jan 31 01:25:11 np0005603621 dracut[1260]: *** Hardlinking files done ***
Jan 31 01:25:12 np0005603621 dracut[1260]: *** Creating initramfs image file '/boot/initramfs-5.14.0-665.el9.x86_64kdump.img' done ***
Jan 31 01:25:12 np0005603621 kdumpctl[1007]: kdump: kexec: loaded kdump kernel
Jan 31 01:25:12 np0005603621 kdumpctl[1007]: kdump: Starting kdump: [OK]
Jan 31 01:25:12 np0005603621 systemd[1]: Finished Crash recovery kernel arming.
Jan 31 01:25:12 np0005603621 systemd[1]: Startup finished in 1.777s (kernel) + 2.663s (initrd) + 14.328s (userspace) = 18.769s.
Jan 31 01:25:31 np0005603621 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:31:00 np0005603621 systemd[1]: Created slice User Slice of UID 1000.
Jan 31 01:31:00 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 31 01:31:00 np0005603621 systemd-logind[818]: New session 1 of user zuul.
Jan 31 01:31:00 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 31 01:31:00 np0005603621 systemd[1]: Starting User Manager for UID 1000...
Jan 31 01:31:01 np0005603621 systemd[4306]: Queued start job for default target Main User Target.
Jan 31 01:31:01 np0005603621 systemd[4306]: Created slice User Application Slice.
Jan 31 01:31:01 np0005603621 systemd[4306]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 01:31:01 np0005603621 systemd[4306]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 01:31:01 np0005603621 systemd[4306]: Reached target Paths.
Jan 31 01:31:01 np0005603621 systemd[4306]: Reached target Timers.
Jan 31 01:31:01 np0005603621 systemd[4306]: Starting D-Bus User Message Bus Socket...
Jan 31 01:31:01 np0005603621 systemd[4306]: Starting Create User's Volatile Files and Directories...
Jan 31 01:31:01 np0005603621 systemd[4306]: Finished Create User's Volatile Files and Directories.
Jan 31 01:31:01 np0005603621 systemd[4306]: Listening on D-Bus User Message Bus Socket.
Jan 31 01:31:01 np0005603621 systemd[4306]: Reached target Sockets.
Jan 31 01:31:01 np0005603621 systemd[4306]: Reached target Basic System.
Jan 31 01:31:01 np0005603621 systemd[4306]: Reached target Main User Target.
Jan 31 01:31:01 np0005603621 systemd[4306]: Startup finished in 137ms.
Jan 31 01:31:01 np0005603621 systemd[1]: Started User Manager for UID 1000.
Jan 31 01:31:01 np0005603621 systemd[1]: Started Session 1 of User zuul.
Jan 31 01:31:02 np0005603621 python3[4388]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:31:34 np0005603621 python3[4418]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:31:42 np0005603621 python3[4476]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:31:43 np0005603621 python3[4516]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 31 01:31:45 np0005603621 python3[4542]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0NROJyvixYj14yxc9a1mzd1FlH8bHxigBuCuSXZp+XBwK5CWQYNe1kWs8LnwK1EvgGycvb2uWsCqXynoIDepSR4X45xPMVj2xEV2M0gYJN2FioWZRuHFYKZQNIY+ZFpOMgic6vKkz3uR6hw5OogchCCdEPofRUiDvA6imrPii/QP8S3YnwQCYwkeq72uqj4sslD467c/NglKPLZEKdfcnC4ZLM8nrcRiZwRfWls2oF0OWdbFwIn6RiwJGvZAk12ezTFzNyNkHfkadH1PD5F7tLZVrxU1P73llDzfyU8ppwjlIEtvATWFb1y5VF8VkOvjaen+/DMoFYiLvR6MUyI4JAZ7JmXxmvhLxQHPwYFTbzdjdZRYeQvPWAwtH9LWW2cdBkvLA/vGY+PSqXhb3aAM/O6R0lcyTmGVNEMRpwYZYmdoB8Cr9m2jOxZ7Ffwbs94foCUrIVlc3dkcMCTaUTrXBAqnbUteQ/Ctgp2pFJOSEse2AQi52Xm8A87QOl1wYmN8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:31:46 np0005603621 python3[4566]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:46 np0005603621 python3[4665]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:31:47 np0005603621 python3[4736]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841106.509065-251-208946764759887/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=24510bce553d47bd8880c3ff7a9c0ec0_id_rsa follow=False checksum=3ef597b6e54d9d641aaa8554b0f170ef780d386a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:47 np0005603621 python3[4859]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:31:48 np0005603621 python3[4930]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841107.5015843-306-235287750275566/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=24510bce553d47bd8880c3ff7a9c0ec0_id_rsa.pub follow=False checksum=9e6accf0af0859cd95b66274f88d88182d33bf59 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:49 np0005603621 python3[4978]: ansible-ping Invoked with data=pong
Jan 31 01:31:50 np0005603621 python3[5002]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:31:52 np0005603621 python3[5060]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 31 01:31:53 np0005603621 python3[5092]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:53 np0005603621 python3[5116]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:53 np0005603621 python3[5140]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:54 np0005603621 python3[5164]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:54 np0005603621 python3[5188]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:54 np0005603621 python3[5212]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:56 np0005603621 python3[5239]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:57 np0005603621 python3[5317]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:31:58 np0005603621 python3[5390]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841117.1135032-31-201047411190882/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:31:59 np0005603621 python3[5438]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:31:59 np0005603621 python3[5462]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:31:59 np0005603621 python3[5486]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:31:59 np0005603621 python3[5510]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:00 np0005603621 python3[5534]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:00 np0005603621 python3[5558]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:00 np0005603621 python3[5582]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:01 np0005603621 python3[5606]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:01 np0005603621 python3[5630]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:01 np0005603621 python3[5654]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:01 np0005603621 python3[5678]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:02 np0005603621 python3[5702]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:02 np0005603621 python3[5726]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:02 np0005603621 python3[5750]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:02 np0005603621 python3[5774]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:03 np0005603621 python3[5798]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:03 np0005603621 python3[5822]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:03 np0005603621 python3[5846]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:03 np0005603621 python3[5870]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:04 np0005603621 python3[5894]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:04 np0005603621 python3[5918]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:04 np0005603621 python3[5942]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:05 np0005603621 python3[5966]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:05 np0005603621 python3[5990]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:05 np0005603621 python3[6014]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:06 np0005603621 python3[6038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:32:09 np0005603621 python3[6064]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 01:32:09 np0005603621 systemd[1]: Starting Time & Date Service...
Jan 31 01:32:09 np0005603621 systemd[1]: Started Time & Date Service.
Jan 31 01:32:09 np0005603621 systemd-timedated[6066]: Changed time zone to 'UTC' (UTC).
Jan 31 01:32:10 np0005603621 python3[6095]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:32:10 np0005603621 python3[6171]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:32:11 np0005603621 python3[6242]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769841130.6373765-251-263477051451501/source _original_basename=tmpue0ur_gs follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:32:11 np0005603621 python3[6342]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:32:11 np0005603621 python3[6413]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769841131.470198-301-137349941227042/source _original_basename=tmpbv2t56lf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:32:12 np0005603621 python3[6515]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:32:13 np0005603621 python3[6588]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769841132.5729833-381-31520133736532/source _original_basename=tmp6dmxwv5c follow=False checksum=bd525bf2100f6176f2da7b3ae03ee4707d4592f7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:32:13 np0005603621 python3[6636]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:32:14 np0005603621 python3[6662]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:32:14 np0005603621 python3[6742]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:32:15 np0005603621 python3[6815]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841134.6459372-451-158857969309670/source _original_basename=tmpdm0itjqs follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:32:15 np0005603621 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-b97d-7d19-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:32:16 np0005603621 python3[6894]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-b97d-7d19-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 31 01:32:18 np0005603621 python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:32:21 np0005603621 irqbalance[813]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 31 01:32:21 np0005603621 irqbalance[813]: IRQ 26 affinity is now unmanaged
Jan 31 01:32:39 np0005603621 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 01:32:51 np0005603621 python3[6952]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 31 01:33:33 np0005603621 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 31 01:33:33 np0005603621 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4562] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:33:33 np0005603621 systemd-udevd[6953]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4648] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4673] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4678] device (eth1): carrier: link connected
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4680] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4688] policy: auto-activating connection 'Wired connection 1' (0cf34e0b-aff8-31ae-afe7-3fdf1afa0940)
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4694] device (eth1): Activation: starting connection 'Wired connection 1' (0cf34e0b-aff8-31ae-afe7-3fdf1afa0940)
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4696] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4700] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4705] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:33:33 np0005603621 NetworkManager[855]: <info>  [1769841213.4709] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:33:33 np0005603621 systemd[4306]: Starting Mark boot as successful...
Jan 31 01:33:33 np0005603621 systemd[4306]: Finished Mark boot as successful.
Jan 31 01:33:34 np0005603621 python3[6981]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-922d-b924-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:33:44 np0005603621 python3[7061]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:33:44 np0005603621 python3[7134]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769841224.2414308-104-176579660942029/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=aa8d8e168fe37331ce997c0fe4ae56f9cc889510 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:33:45 np0005603621 python3[7184]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 01:33:45 np0005603621 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 01:33:45 np0005603621 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 01:33:45 np0005603621 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 01:33:45 np0005603621 systemd[1]: Stopping Network Manager...
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6872] caught SIGTERM, shutting down normally.
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6878] dhcp4 (eth0): canceled DHCP transaction
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6878] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6878] dhcp4 (eth0): state changed no lease
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6880] manager: NetworkManager state is now CONNECTING
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6978] dhcp4 (eth1): canceled DHCP transaction
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.6979] dhcp4 (eth1): state changed no lease
Jan 31 01:33:45 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:33:45 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:33:45 np0005603621 NetworkManager[855]: <info>  [1769841225.7804] exiting (success)
Jan 31 01:33:45 np0005603621 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 01:33:45 np0005603621 systemd[1]: Stopped Network Manager.
Jan 31 01:33:45 np0005603621 systemd[1]: NetworkManager.service: Consumed 4.117s CPU time, 10.2M memory peak.
Jan 31 01:33:45 np0005603621 systemd[1]: Starting Network Manager...
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8326] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3db7ea38-6dd3-43e6-a29a-06c99dfe6817)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8328] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8370] manager[0x562c289e9000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 01:33:45 np0005603621 systemd[1]: Starting Hostname Service...
Jan 31 01:33:45 np0005603621 systemd[1]: Started Hostname Service.
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8931] hostname: hostname: using hostnamed
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8932] hostname: static hostname changed from (none) to "np0005603621.novalocal"
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8935] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8939] manager[0x562c289e9000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8939] manager[0x562c289e9000]: rfkill: WWAN hardware radio set enabled
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8963] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8965] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8965] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8965] manager: Networking is enabled by state file
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8967] settings: Loaded settings plugin: keyfile (internal)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8971] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.8993] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9003] dhcp: init: Using DHCP client 'internal'
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9010] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9015] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9019] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9025] device (lo): Activation: starting connection 'lo' (73e17779-4974-4880-8fe6-9b8e22df0812)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9031] device (eth0): carrier: link connected
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9035] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9040] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9041] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9046] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9052] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9057] device (eth1): carrier: link connected
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9060] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9064] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0cf34e0b-aff8-31ae-afe7-3fdf1afa0940) (indicated)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9065] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9069] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9074] device (eth1): Activation: starting connection 'Wired connection 1' (0cf34e0b-aff8-31ae-afe7-3fdf1afa0940)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9079] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 01:33:45 np0005603621 systemd[1]: Started Network Manager.
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9083] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9085] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9087] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9089] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9092] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9094] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9095] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9097] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9101] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9104] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9111] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9113] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9130] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9132] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9136] device (lo): Activation: successful, device activated.
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9146] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Jan 31 01:33:45 np0005603621 NetworkManager[7201]: <info>  [1769841225.9152] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 01:33:45 np0005603621 systemd[1]: Starting Network Manager Wait Online...
Jan 31 01:33:46 np0005603621 NetworkManager[7201]: <info>  [1769841226.0633] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:33:46 np0005603621 NetworkManager[7201]: <info>  [1769841226.0691] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:33:46 np0005603621 NetworkManager[7201]: <info>  [1769841226.0694] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:33:46 np0005603621 NetworkManager[7201]: <info>  [1769841226.0698] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 01:33:46 np0005603621 NetworkManager[7201]: <info>  [1769841226.0702] device (eth0): Activation: successful, device activated.
Jan 31 01:33:46 np0005603621 NetworkManager[7201]: <info>  [1769841226.0710] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 01:33:46 np0005603621 python3[7262]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-922d-b924-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:33:56 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:34:15 np0005603621 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0401] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 01:34:31 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:34:31 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0624] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0626] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0633] device (eth1): Activation: successful, device activated.
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0640] manager: startup complete
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0642] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <warn>  [1769841271.0645] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0652] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 systemd[1]: Finished Network Manager Wait Online.
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0808] dhcp4 (eth1): canceled DHCP transaction
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0809] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0809] dhcp4 (eth1): state changed no lease
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0828] policy: auto-activating connection 'ci-private-network' (4e4e0540-911a-5302-a5db-7888a64bac35)
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0833] device (eth1): Activation: starting connection 'ci-private-network' (4e4e0540-911a-5302-a5db-7888a64bac35)
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0834] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0837] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0846] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0854] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0891] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0892] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 01:34:31 np0005603621 NetworkManager[7201]: <info>  [1769841271.0897] device (eth1): Activation: successful, device activated.
Jan 31 01:34:41 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:34:46 np0005603621 systemd-logind[818]: Session 1 logged out. Waiting for processes to exit.
Jan 31 01:35:55 np0005603621 systemd-logind[818]: New session 3 of user zuul.
Jan 31 01:35:55 np0005603621 systemd[1]: Started Session 3 of User zuul.
Jan 31 01:35:55 np0005603621 python3[7380]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:35:56 np0005603621 python3[7453]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769841355.671502-373-28128864032966/source _original_basename=tmpiegrg1xp follow=False checksum=be2f7c16edb43e88d00ebc0882f9b60a044e6a9d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:36:00 np0005603621 systemd[1]: session-3.scope: Deactivated successfully.
Jan 31 01:36:00 np0005603621 systemd-logind[818]: Session 3 logged out. Waiting for processes to exit.
Jan 31 01:36:00 np0005603621 systemd-logind[818]: Removed session 3.
Jan 31 01:36:47 np0005603621 systemd[4306]: Created slice User Background Tasks Slice.
Jan 31 01:36:47 np0005603621 systemd[4306]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 01:36:47 np0005603621 systemd[4306]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 01:40:37 np0005603621 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 31 01:40:37 np0005603621 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 31 01:40:37 np0005603621 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 31 01:40:37 np0005603621 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 31 01:46:42 np0005603621 systemd-logind[818]: New session 4 of user zuul.
Jan 31 01:46:42 np0005603621 systemd[1]: Started Session 4 of User zuul.
Jan 31 01:46:42 np0005603621 python3[7516]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-bf7f-c771-000000000cb6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:43 np0005603621 python3[7544]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:43 np0005603621 python3[7570]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:43 np0005603621 python3[7597]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:43 np0005603621 python3[7623]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:44 np0005603621 python3[7649]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:45 np0005603621 python3[7727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:46:45 np0005603621 python3[7800]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842004.910627-375-173703915266765/source _original_basename=tmpoi8xw5du follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:46:46 np0005603621 python3[7850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 01:46:46 np0005603621 systemd[1]: Reloading.
Jan 31 01:46:46 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:46:48 np0005603621 python3[7906]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 31 01:46:48 np0005603621 python3[7932]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:49 np0005603621 python3[7960]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:49 np0005603621 python3[7988]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:49 np0005603621 python3[8016]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:50 np0005603621 python3[8043]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-bf7f-c771-000000000cbd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:46:50 np0005603621 python3[8073]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 01:46:53 np0005603621 systemd[1]: session-4.scope: Deactivated successfully.
Jan 31 01:46:53 np0005603621 systemd[1]: session-4.scope: Consumed 3.615s CPU time.
Jan 31 01:46:53 np0005603621 systemd-logind[818]: Session 4 logged out. Waiting for processes to exit.
Jan 31 01:46:53 np0005603621 systemd-logind[818]: Removed session 4.
Jan 31 01:46:55 np0005603621 systemd-logind[818]: New session 5 of user zuul.
Jan 31 01:46:55 np0005603621 systemd[1]: Started Session 5 of User zuul.
Jan 31 01:46:56 np0005603621 python3[8107]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 01:47:03 np0005603621 setsebool[8150]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 31 01:47:03 np0005603621 setsebool[8150]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 31 01:47:17 np0005603621 kernel: SELinux:  Converting 385 SID table entries...
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:47:17 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:47:28 np0005603621 kernel: SELinux:  Converting 388 SID table entries...
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 01:47:28 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 01:47:48 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 01:47:48 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 01:47:48 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 01:47:48 np0005603621 systemd[1]: Reloading.
Jan 31 01:47:48 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 01:47:48 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 01:47:51 np0005603621 irqbalance[813]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 31 01:47:51 np0005603621 irqbalance[813]: IRQ 27 affinity is now unmanaged
Jan 31 01:48:06 np0005603621 python3[20223]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-c34d-28f4-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:48:06 np0005603621 kernel: evm: overlay not supported
Jan 31 01:48:06 np0005603621 systemd[4306]: Starting D-Bus User Message Bus...
Jan 31 01:48:06 np0005603621 dbus-broker-launch[20820]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 31 01:48:06 np0005603621 dbus-broker-launch[20820]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 31 01:48:06 np0005603621 systemd[4306]: Started D-Bus User Message Bus.
Jan 31 01:48:06 np0005603621 dbus-broker-lau[20820]: Ready
Jan 31 01:48:06 np0005603621 systemd[4306]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 31 01:48:06 np0005603621 systemd[4306]: Created slice Slice /user.
Jan 31 01:48:06 np0005603621 systemd[4306]: podman-20750.scope: unit configures an IP firewall, but not running as root.
Jan 31 01:48:06 np0005603621 systemd[4306]: (This warning is only shown for the first unit using IP firewalling.)
Jan 31 01:48:07 np0005603621 systemd[4306]: Started podman-20750.scope.
Jan 31 01:48:07 np0005603621 systemd[4306]: Started podman-pause-d63f6fad.scope.
Jan 31 01:48:08 np0005603621 python3[21348]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.2:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.2:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:48:08 np0005603621 python3[21348]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 31 01:48:08 np0005603621 systemd-logind[818]: Session 5 logged out. Waiting for processes to exit.
Jan 31 01:48:08 np0005603621 systemd[1]: session-5.scope: Deactivated successfully.
Jan 31 01:48:08 np0005603621 systemd[1]: session-5.scope: Consumed 48.513s CPU time.
Jan 31 01:48:08 np0005603621 systemd-logind[818]: Removed session 5.
Jan 31 01:48:23 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 01:48:23 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 01:48:23 np0005603621 systemd[1]: man-db-cache-update.service: Consumed 39.009s CPU time.
Jan 31 01:48:23 np0005603621 systemd[1]: run-r73639babb2e349a29b8a1708ff30174f.service: Deactivated successfully.
Jan 31 01:48:33 np0005603621 systemd-logind[818]: New session 6 of user zuul.
Jan 31 01:48:33 np0005603621 systemd[1]: Started Session 6 of User zuul.
Jan 31 01:48:33 np0005603621 python3[29689]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAYzMDYbycOT72ga9wDhD1NtUc7onT0cFXCjwfAnzaB2tvlINsgaQbDQ5ZwqYE9Er0Wi02qKQ4UqK2RbEye6MZA= zuul@np0005603620.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:48:34 np0005603621 python3[29715]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAYzMDYbycOT72ga9wDhD1NtUc7onT0cFXCjwfAnzaB2tvlINsgaQbDQ5ZwqYE9Er0Wi02qKQ4UqK2RbEye6MZA= zuul@np0005603620.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:48:35 np0005603621 python3[29741]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005603621.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 31 01:48:35 np0005603621 python3[29775]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAYzMDYbycOT72ga9wDhD1NtUc7onT0cFXCjwfAnzaB2tvlINsgaQbDQ5ZwqYE9Er0Wi02qKQ4UqK2RbEye6MZA= zuul@np0005603620.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 31 01:48:36 np0005603621 python3[29853]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:48:36 np0005603621 python3[29926]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769842116.0844045-167-260615408895449/source _original_basename=tmp8naj_50c follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:48:37 np0005603621 python3[29976]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 31 01:48:37 np0005603621 systemd[1]: Starting Hostname Service...
Jan 31 01:48:37 np0005603621 systemd[1]: Started Hostname Service.
Jan 31 01:48:37 np0005603621 systemd-hostnamed[29980]: Changed pretty hostname to 'compute-0'
Jan 31 01:48:37 np0005603621 systemd-hostnamed[29980]: Hostname set to <compute-0> (static)
Jan 31 01:48:37 np0005603621 NetworkManager[7201]: <info>  [1769842117.7911] hostname: static hostname changed from "np0005603621.novalocal" to "compute-0"
Jan 31 01:48:37 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 01:48:37 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 01:48:38 np0005603621 systemd[1]: session-6.scope: Deactivated successfully.
Jan 31 01:48:38 np0005603621 systemd[1]: session-6.scope: Consumed 1.958s CPU time.
Jan 31 01:48:38 np0005603621 systemd-logind[818]: Session 6 logged out. Waiting for processes to exit.
Jan 31 01:48:38 np0005603621 systemd-logind[818]: Removed session 6.
Jan 31 01:48:47 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 01:49:07 np0005603621 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 01:51:47 np0005603621 systemd[1]: Starting dnf makecache...
Jan 31 01:51:47 np0005603621 dnf[29998]: Failed determining last makecache time.
Jan 31 01:51:47 np0005603621 dnf[29998]: CentOS Stream 9 - BaseOS                         25 kB/s | 6.1 kB     00:00
Jan 31 01:51:48 np0005603621 dnf[29998]: CentOS Stream 9 - AppStream                      64 kB/s | 6.5 kB     00:00
Jan 31 01:51:48 np0005603621 dnf[29998]: CentOS Stream 9 - CRB                            58 kB/s | 6.0 kB     00:00
Jan 31 01:51:48 np0005603621 dnf[29998]: CentOS Stream 9 - Extras packages                69 kB/s | 7.3 kB     00:00
Jan 31 01:51:48 np0005603621 dnf[29998]: Metadata cache created.
Jan 31 01:51:48 np0005603621 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 01:51:48 np0005603621 systemd[1]: Finished dnf makecache.
Jan 31 01:53:20 np0005603621 systemd-logind[818]: New session 7 of user zuul.
Jan 31 01:53:20 np0005603621 systemd[1]: Started Session 7 of User zuul.
Jan 31 01:53:21 np0005603621 python3[30080]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 01:53:22 np0005603621 python3[30196]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:23 np0005603621 python3[30269]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=delorean.repo follow=False checksum=cc4ab4695da8ec58c451521a3dd2f41014af145d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:23 np0005603621 python3[30295]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:23 np0005603621 python3[30368]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:24 np0005603621 python3[30394]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:24 np0005603621 python3[30467]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:24 np0005603621 python3[30493]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:24 np0005603621 python3[30566]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:25 np0005603621 python3[30592]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:25 np0005603621 python3[30665]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:25 np0005603621 python3[30691]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:26 np0005603621 python3[30764]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:26 np0005603621 python3[30790]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 01:53:26 np0005603621 python3[30863]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769842402.548355-34073-210296218409161/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=362a603578148d54e8cd25942b88d7f471cc677a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 01:53:38 np0005603621 python3[30921]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 01:58:37 np0005603621 systemd[1]: session-7.scope: Deactivated successfully.
Jan 31 01:58:37 np0005603621 systemd[1]: session-7.scope: Consumed 4.352s CPU time.
Jan 31 01:58:37 np0005603621 systemd-logind[818]: Session 7 logged out. Waiting for processes to exit.
Jan 31 01:58:37 np0005603621 systemd-logind[818]: Removed session 7.
Jan 31 02:10:44 np0005603621 systemd-logind[818]: New session 8 of user zuul.
Jan 31 02:10:44 np0005603621 systemd[1]: Started Session 8 of User zuul.
Jan 31 02:10:45 np0005603621 python3.9[31101]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:10:46 np0005603621 python3.9[31282]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:10:55 np0005603621 systemd[1]: session-8.scope: Deactivated successfully.
Jan 31 02:10:55 np0005603621 systemd[1]: session-8.scope: Consumed 7.720s CPU time.
Jan 31 02:10:55 np0005603621 systemd-logind[818]: Session 8 logged out. Waiting for processes to exit.
Jan 31 02:10:55 np0005603621 systemd-logind[818]: Removed session 8.
Jan 31 02:11:15 np0005603621 systemd-logind[818]: New session 9 of user zuul.
Jan 31 02:11:15 np0005603621 systemd[1]: Started Session 9 of User zuul.
Jan 31 02:11:15 np0005603621 python3.9[31493]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 02:11:17 np0005603621 python3.9[31667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:18 np0005603621 python3.9[31820]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:11:20 np0005603621 python3.9[31973]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:11:21 np0005603621 python3.9[32125]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:11:21 np0005603621 python3.9[32277]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:11:22 np0005603621 python3.9[32400]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843481.3050547-177-109714210555010/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:11:23 np0005603621 python3.9[32552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:23 np0005603621 python3.9[32708]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:11:24 np0005603621 python3.9[32860]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:11:25 np0005603621 python3.9[33010]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:11:29 np0005603621 python3.9[33263]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:11:30 np0005603621 python3.9[33413]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:31 np0005603621 python3.9[33567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:11:32 np0005603621 python3.9[33725]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:11:33 np0005603621 python3.9[33809]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:12:16 np0005603621 systemd[1]: Reloading.
Jan 31 02:12:16 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:12:17 np0005603621 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 31 02:12:17 np0005603621 systemd[1]: Reloading.
Jan 31 02:12:17 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:12:17 np0005603621 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 31 02:12:17 np0005603621 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 31 02:12:17 np0005603621 systemd[1]: Reloading.
Jan 31 02:12:17 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:12:17 np0005603621 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 31 02:12:18 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:12:18 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:12:18 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:13:17 np0005603621 kernel: SELinux:  Converting 2727 SID table entries...
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:13:17 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:13:17 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 31 02:13:17 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:13:17 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:13:17 np0005603621 systemd[1]: Reloading.
Jan 31 02:13:17 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:13:18 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:13:19 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:13:19 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:13:19 np0005603621 systemd[1]: run-r2429f0e20157431a84f12f3230192fb9.service: Deactivated successfully.
Jan 31 02:13:42 np0005603621 python3.9[35328]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:13:45 np0005603621 python3.9[35609]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 02:13:46 np0005603621 python3.9[35761]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 02:13:49 np0005603621 python3.9[35915]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:13:50 np0005603621 python3.9[36067]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 02:13:52 np0005603621 python3.9[36219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:13:52 np0005603621 python3.9[36371]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:13:59 np0005603621 python3.9[36494]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843632.2824464-666-141138276393341/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:00 np0005603621 python3.9[36646]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:01 np0005603621 python3.9[36798]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:02 np0005603621 python3.9[36951]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:14:03 np0005603621 python3.9[37103]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 02:14:03 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:14:03 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:14:04 np0005603621 python3.9[37257]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:14:06 np0005603621 python3.9[37415]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:14:06 np0005603621 python3.9[37575]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 02:14:07 np0005603621 python3.9[37728]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:14:08 np0005603621 python3.9[37886]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 02:14:09 np0005603621 python3.9[38038]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:14:13 np0005603621 python3.9[38192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:14 np0005603621 python3.9[38344]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:14 np0005603621 python3.9[38467]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843653.6133726-1023-68969189897785/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:15 np0005603621 python3.9[38619]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:14:15 np0005603621 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:14:15 np0005603621 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 31 02:14:15 np0005603621 kernel: Bridge firewalling registered
Jan 31 02:14:15 np0005603621 systemd-modules-load[38623]: Inserted module 'br_netfilter'
Jan 31 02:14:15 np0005603621 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:14:16 np0005603621 python3.9[38778]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:14:17 np0005603621 python3.9[38901]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843656.0772076-1092-45466153547528/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:14:18 np0005603621 python3.9[39053]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:14:22 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:14:22 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:14:23 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:14:23 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:14:23 np0005603621 systemd[1]: Reloading.
Jan 31 02:14:23 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:14:23 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:14:25 np0005603621 python3.9[40895]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:26 np0005603621 python3.9[42468]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 02:14:26 np0005603621 python3.9[43127]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:14:27 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:14:27 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:14:27 np0005603621 systemd[1]: man-db-cache-update.service: Consumed 3.489s CPU time.
Jan 31 02:14:27 np0005603621 systemd[1]: run-r7cb1757e986042b9adeb744eda78fbf6.service: Deactivated successfully.
Jan 31 02:14:27 np0005603621 python3.9[43279]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:27 np0005603621 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 02:14:28 np0005603621 systemd[1]: Starting Authorization Manager...
Jan 31 02:14:28 np0005603621 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 02:14:28 np0005603621 polkitd[43497]: Started polkitd version 0.117
Jan 31 02:14:28 np0005603621 systemd[1]: Started Authorization Manager.
Jan 31 02:14:29 np0005603621 python3.9[43667]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:14:29 np0005603621 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 02:14:29 np0005603621 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 02:14:29 np0005603621 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 02:14:29 np0005603621 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 02:14:29 np0005603621 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 02:14:30 np0005603621 python3.9[43829]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 02:14:34 np0005603621 python3.9[43981]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:14:34 np0005603621 systemd[1]: Reloading.
Jan 31 02:14:34 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:14:35 np0005603621 python3.9[44169]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:14:35 np0005603621 systemd[1]: Reloading.
Jan 31 02:14:35 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:14:36 np0005603621 python3.9[44358]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:36 np0005603621 python3.9[44511]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:36 np0005603621 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 31 02:14:37 np0005603621 python3.9[44664]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:39 np0005603621 python3.9[44826]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:14:40 np0005603621 python3.9[44979]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:14:40 np0005603621 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 31 02:14:40 np0005603621 systemd[1]: Stopped Apply Kernel Variables.
Jan 31 02:14:40 np0005603621 systemd[1]: Stopping Apply Kernel Variables...
Jan 31 02:14:40 np0005603621 systemd[1]: Starting Apply Kernel Variables...
Jan 31 02:14:40 np0005603621 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 31 02:14:40 np0005603621 systemd[1]: Finished Apply Kernel Variables.
Jan 31 02:14:41 np0005603621 systemd[1]: session-9.scope: Deactivated successfully.
Jan 31 02:14:41 np0005603621 systemd[1]: session-9.scope: Consumed 2min 4.281s CPU time.
Jan 31 02:14:41 np0005603621 systemd-logind[818]: Session 9 logged out. Waiting for processes to exit.
Jan 31 02:14:41 np0005603621 systemd-logind[818]: Removed session 9.
Jan 31 02:14:46 np0005603621 systemd-logind[818]: New session 10 of user zuul.
Jan 31 02:14:46 np0005603621 systemd[1]: Started Session 10 of User zuul.
Jan 31 02:14:47 np0005603621 python3.9[45163]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:14:48 np0005603621 python3.9[45319]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 02:14:49 np0005603621 python3.9[45472]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:14:50 np0005603621 python3.9[45630]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:14:51 np0005603621 python3.9[45790]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:14:52 np0005603621 python3.9[45874]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:14:56 np0005603621 python3.9[46037]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:15:08 np0005603621 kernel: SELinux:  Converting 2739 SID table entries...
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:15:08 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:15:08 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 31 02:15:08 np0005603621 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 31 02:15:09 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:15:09 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:15:09 np0005603621 systemd[1]: Reloading.
Jan 31 02:15:09 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:15:09 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:15:09 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:15:10 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:15:10 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:15:10 np0005603621 systemd[1]: run-rf29b18d96553486ea86586017e152608.service: Deactivated successfully.
Jan 31 02:15:15 np0005603621 python3.9[47136]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:15:15 np0005603621 systemd[1]: Reloading.
Jan 31 02:15:15 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:15:15 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:15:15 np0005603621 systemd[1]: Starting Open vSwitch Database Unit...
Jan 31 02:15:15 np0005603621 chown[47177]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 31 02:15:15 np0005603621 ovs-ctl[47182]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 31 02:15:15 np0005603621 ovs-ctl[47182]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 31 02:15:15 np0005603621 ovs-ctl[47182]: Starting ovsdb-server [  OK  ]
Jan 31 02:15:15 np0005603621 ovs-vsctl[47232]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 31 02:15:15 np0005603621 ovs-vsctl[47248]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"59a8b96c-18d5-4426-968c-99837b56953c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 31 02:15:15 np0005603621 ovs-ctl[47182]: Configuring Open vSwitch system IDs [  OK  ]
Jan 31 02:15:15 np0005603621 ovs-ctl[47182]: Enabling remote OVSDB managers [  OK  ]
Jan 31 02:15:15 np0005603621 systemd[1]: Started Open vSwitch Database Unit.
Jan 31 02:15:15 np0005603621 ovs-vsctl[47258]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 02:15:15 np0005603621 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 31 02:15:15 np0005603621 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 31 02:15:15 np0005603621 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 31 02:15:16 np0005603621 kernel: openvswitch: Open vSwitch switching datapath
Jan 31 02:15:16 np0005603621 ovs-ctl[47303]: Inserting openvswitch module [  OK  ]
Jan 31 02:15:16 np0005603621 ovs-ctl[47272]: Starting ovs-vswitchd [  OK  ]
Jan 31 02:15:16 np0005603621 ovs-ctl[47272]: Enabling remote OVSDB managers [  OK  ]
Jan 31 02:15:16 np0005603621 ovs-vsctl[47320]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 31 02:15:16 np0005603621 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 31 02:15:16 np0005603621 systemd[1]: Starting Open vSwitch...
Jan 31 02:15:16 np0005603621 systemd[1]: Finished Open vSwitch.
Jan 31 02:15:17 np0005603621 python3.9[47472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:15:18 np0005603621 python3.9[47624]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 02:15:21 np0005603621 kernel: SELinux:  Converting 2753 SID table entries...
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:15:21 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:15:22 np0005603621 python3.9[47780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:15:23 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 31 02:15:23 np0005603621 python3.9[47938]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:15:25 np0005603621 python3.9[48091]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:15:27 np0005603621 python3.9[48378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 02:15:28 np0005603621 python3.9[48528]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:15:29 np0005603621 python3.9[48682]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:15:30 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:15:30 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:15:30 np0005603621 systemd[1]: Reloading.
Jan 31 02:15:30 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:15:30 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:15:30 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:15:31 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:15:31 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:15:31 np0005603621 systemd[1]: run-rbce0443fb99448beb87579a40205c716.service: Deactivated successfully.
Jan 31 02:15:32 np0005603621 python3.9[49000]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:15:32 np0005603621 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 31 02:15:32 np0005603621 systemd[1]: Stopped Network Manager Wait Online.
Jan 31 02:15:32 np0005603621 systemd[1]: Stopping Network Manager Wait Online...
Jan 31 02:15:32 np0005603621 systemd[1]: Stopping Network Manager...
Jan 31 02:15:32 np0005603621 NetworkManager[7201]: <info>  [1769843732.9931] caught SIGTERM, shutting down normally.
Jan 31 02:15:32 np0005603621 NetworkManager[7201]: <info>  [1769843732.9940] dhcp4 (eth0): canceled DHCP transaction
Jan 31 02:15:32 np0005603621 NetworkManager[7201]: <info>  [1769843732.9941] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:15:32 np0005603621 NetworkManager[7201]: <info>  [1769843732.9941] dhcp4 (eth0): state changed no lease
Jan 31 02:15:32 np0005603621 NetworkManager[7201]: <info>  [1769843732.9942] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 02:15:32 np0005603621 NetworkManager[7201]: <info>  [1769843732.9989] exiting (success)
Jan 31 02:15:33 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:15:33 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:15:33 np0005603621 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 31 02:15:33 np0005603621 systemd[1]: Stopped Network Manager.
Jan 31 02:15:33 np0005603621 systemd[1]: NetworkManager.service: Consumed 23.607s CPU time, 4.2M memory peak, read 0B from disk, written 31.0K to disk.
Jan 31 02:15:33 np0005603621 systemd[1]: Starting Network Manager...
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.0505] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:3db7ea38-6dd3-43e6-a29a-06c99dfe6817)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.0508] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.0556] manager[0x558ea6fb8000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 31 02:15:33 np0005603621 systemd[1]: Starting Hostname Service...
Jan 31 02:15:33 np0005603621 systemd[1]: Started Hostname Service.
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1414] hostname: hostname: using hostnamed
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1415] hostname: static hostname changed from (none) to "compute-0"
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1419] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1422] manager[0x558ea6fb8000]: rfkill: Wi-Fi hardware radio set enabled
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1422] manager[0x558ea6fb8000]: rfkill: WWAN hardware radio set enabled
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1438] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1445] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1445] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1446] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1446] manager: Networking is enabled by state file
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1448] settings: Loaded settings plugin: keyfile (internal)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1451] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1475] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1483] dhcp: init: Using DHCP client 'internal'
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1485] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1490] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1495] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1502] device (lo): Activation: starting connection 'lo' (73e17779-4974-4880-8fe6-9b8e22df0812)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1506] device (eth0): carrier: link connected
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1509] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1513] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1513] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1517] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1523] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1527] device (eth1): carrier: link connected
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1530] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1534] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (4e4e0540-911a-5302-a5db-7888a64bac35) (indicated)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1535] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1538] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1543] device (eth1): Activation: starting connection 'ci-private-network' (4e4e0540-911a-5302-a5db-7888a64bac35)
Jan 31 02:15:33 np0005603621 systemd[1]: Started Network Manager.
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1548] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1554] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1565] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1567] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1569] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1571] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1574] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1576] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1578] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1584] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1586] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1593] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1603] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1612] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1614] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1624] device (lo): Activation: successful, device activated.
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1632] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1639] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1917] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1923] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1928] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1930] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1932] device (eth1): Activation: successful, device activated.
Jan 31 02:15:33 np0005603621 systemd[1]: Starting Network Manager Wait Online...
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1966] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1970] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1974] manager: NetworkManager state is now CONNECTED_SITE
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1979] device (eth0): Activation: successful, device activated.
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.1990] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 31 02:15:33 np0005603621 NetworkManager[49013]: <info>  [1769843733.2018] manager: startup complete
Jan 31 02:15:33 np0005603621 systemd[1]: Finished Network Manager Wait Online.
Jan 31 02:15:33 np0005603621 python3.9[49226]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:15:37 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:15:37 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:15:37 np0005603621 systemd[1]: Reloading.
Jan 31 02:15:37 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:15:37 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:15:38 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:15:39 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:15:39 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:15:39 np0005603621 systemd[1]: run-ra733e8b442804053be756646125390ca.service: Deactivated successfully.
Jan 31 02:15:43 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:15:43 np0005603621 python3.9[49687]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:15:44 np0005603621 python3.9[49839]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:45 np0005603621 python3.9[49993]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:46 np0005603621 python3.9[50145]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:46 np0005603621 python3.9[50297]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:47 np0005603621 python3.9[50449]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:48 np0005603621 python3.9[50601]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:15:49 np0005603621 python3.9[50724]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843747.7925057-647-1851774289529/.source _original_basename=.bdahzv6m follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:50 np0005603621 python3.9[50876]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:51 np0005603621 python3.9[51028]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 31 02:15:52 np0005603621 python3.9[51180]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:15:55 np0005603621 python3.9[51607]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 31 02:15:56 np0005603621 ansible-async_wrapper.py[51782]: Invoked with j888998596134 300 /home/zuul/.ansible/tmp/ansible-tmp-1769843755.7058685-845-63967508397228/AnsiballZ_edpm_os_net_config.py _
Jan 31 02:15:56 np0005603621 ansible-async_wrapper.py[51785]: Starting module and watcher
Jan 31 02:15:56 np0005603621 ansible-async_wrapper.py[51785]: Start watching 51786 (300)
Jan 31 02:15:56 np0005603621 ansible-async_wrapper.py[51786]: Start module (51786)
Jan 31 02:15:56 np0005603621 ansible-async_wrapper.py[51782]: Return async_wrapper task started.
Jan 31 02:15:57 np0005603621 python3.9[51787]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 31 02:15:57 np0005603621 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 31 02:15:57 np0005603621 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 31 02:15:57 np0005603621 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 31 02:15:57 np0005603621 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 31 02:15:57 np0005603621 kernel: cfg80211: failed to load regulatory.db
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.7788] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.7811] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8221] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8223] audit: op="connection-add" uuid="261d8cf8-5eb4-4ae4-a4ef-0d81f32ec235" name="br-ex-br" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8237] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8238] audit: op="connection-add" uuid="f842dc5e-6539-47e4-bf39-24e3da6e4874" name="br-ex-port" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8247] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8248] audit: op="connection-add" uuid="35c4d896-6842-4160-9c07-eff1e5fd93de" name="eth1-port" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8257] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8258] audit: op="connection-add" uuid="88d8f515-56fe-42b7-850d-32c480cdb288" name="vlan20-port" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8266] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8267] audit: op="connection-add" uuid="10cc97a3-acc6-4a33-9c29-580c447777db" name="vlan21-port" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8277] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8278] audit: op="connection-add" uuid="088f949d-6c9f-4a22-8a38-3a193c2a224c" name="vlan22-port" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8287] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8288] audit: op="connection-add" uuid="4ea0ddfe-f8e8-44d3-90ec-bbfa8a71cf96" name="vlan23-port" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8304] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8317] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.8318] audit: op="connection-add" uuid="e3fe6dfa-0d5e-48ad-8838-01307afac07c" name="br-ex-if" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9068] audit: op="connection-update" uuid="4e4e0540-911a-5302-a5db-7888a64bac35" name="ci-private-network" args="ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,connection.slave-type,connection.port-type,connection.timestamp,connection.master,connection.controller,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ovs-interface.type" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9089] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9091] audit: op="connection-add" uuid="b2091f87-add5-4090-bff4-1a297c982d71" name="vlan20-if" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9106] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9107] audit: op="connection-add" uuid="e5f375ac-17c9-4c18-a008-a8010eeac61c" name="vlan21-if" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9122] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9123] audit: op="connection-add" uuid="86233da5-e1b6-4127-a9ae-eeccc8ce7c89" name="vlan22-if" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9138] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9140] audit: op="connection-add" uuid="2a3882df-9125-4ca8-8aea-f869b3d0031b" name="vlan23-if" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9152] audit: op="connection-delete" uuid="0cf34e0b-aff8-31ae-afe7-3fdf1afa0940" name="Wired connection 1" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9163] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9165] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9172] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9175] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (261d8cf8-5eb4-4ae4-a4ef-0d81f32ec235)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9176] audit: op="connection-activate" uuid="261d8cf8-5eb4-4ae4-a4ef-0d81f32ec235" name="br-ex-br" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9177] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9178] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9208] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9212] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f842dc5e-6539-47e4-bf39-24e3da6e4874)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9213] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9214] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9218] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9221] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (35c4d896-6842-4160-9c07-eff1e5fd93de)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9223] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9223] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9227] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9231] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (88d8f515-56fe-42b7-850d-32c480cdb288)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9232] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9233] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9238] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9241] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (10cc97a3-acc6-4a33-9c29-580c447777db)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9242] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9244] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9249] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9253] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (088f949d-6c9f-4a22-8a38-3a193c2a224c)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9255] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9255] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9261] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9265] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4ea0ddfe-f8e8-44d3-90ec-bbfa8a71cf96)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9266] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9270] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9272] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9278] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9278] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9281] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9286] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e3fe6dfa-0d5e-48ad-8838-01307afac07c)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9286] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9290] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9292] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9294] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9296] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9305] device (eth1): disconnecting for new activation request.
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9306] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9311] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9313] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9313] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9316] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9317] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9319] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9322] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b2091f87-add5-4090-bff4-1a297c982d71)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9322] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9324] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9326] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9327] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9330] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9330] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9333] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9336] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e5f375ac-17c9-4c18-a008-a8010eeac61c)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9337] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9340] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9342] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9344] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9346] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9347] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9349] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9352] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (86233da5-e1b6-4127-a9ae-eeccc8ce7c89)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9353] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9355] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9357] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9359] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9361] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <warn>  [1769843758.9361] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9363] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9366] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (2a3882df-9125-4ca8-8aea-f869b3d0031b)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9367] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9369] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9371] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9371] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9372] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9383] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9386] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9389] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9393] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9400] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9403] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9406] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9410] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9411] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9416] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9421] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9426] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9428] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9433] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9437] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9440] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9441] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9446] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9451] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9454] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9456] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9462] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9466] dhcp4 (eth0): canceled DHCP transaction
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9467] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9468] dhcp4 (eth0): state changed no lease
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9469] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9478] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51788 uid=0 result="fail" reason="Device is not activated"
Jan 31 02:15:58 np0005603621 kernel: ovs-system: entered promiscuous mode
Jan 31 02:15:58 np0005603621 systemd-udevd[51793]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:15:58 np0005603621 kernel: Timeout policy base is empty
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9820] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9833] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9840] device (eth1): disconnecting for new activation request.
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9841] audit: op="connection-activate" uuid="4e4e0540-911a-5302-a5db-7888a64bac35" name="ci-private-network" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9842] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9849] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9866] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9869] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51788 uid=0 result="success"
Jan 31 02:15:58 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:15:58 np0005603621 kernel: br-ex: entered promiscuous mode
Jan 31 02:15:58 np0005603621 NetworkManager[49013]: <info>  [1769843758.9988] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 31 02:15:59 np0005603621 kernel: vlan22: entered promiscuous mode
Jan 31 02:15:59 np0005603621 systemd-udevd[51792]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:15:59 np0005603621 kernel: vlan20: entered promiscuous mode
Jan 31 02:15:59 np0005603621 systemd-udevd[51794]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0086] device (eth1): Activation: starting connection 'ci-private-network' (4e4e0540-911a-5302-a5db-7888a64bac35)
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0091] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0103] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0105] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0114] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0117] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0128] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0138] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0140] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0142] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0143] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0144] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0146] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0151] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0158] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0161] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0164] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0167] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0170] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0174] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0177] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0180] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0182] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0185] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 kernel: vlan21: entered promiscuous mode
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0188] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0190] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 systemd-udevd[51886]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0197] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0200] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0205] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0219] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0228] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0231] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 kernel: vlan23: entered promiscuous mode
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0247] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0251] device (eth1): Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0258] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0274] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0281] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0286] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0290] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0300] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0312] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0324] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0325] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0331] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0335] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0341] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0344] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0350] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0354] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0359] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0606] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0621] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0636] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0639] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.0643] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 31 02:15:59 np0005603621 NetworkManager[49013]: <info>  [1769843759.7671] dhcp4 (eth0): state changed new lease, address=38.102.83.98
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.3074] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51788 uid=0 result="success"
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.4479] checkpoint[0x558ea6f8d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.4482] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51788 uid=0 result="success"
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.7509] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51788 uid=0 result="success"
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.7519] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51788 uid=0 result="success"
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.9623] audit: op="networking-control" arg="global-dns-configuration" pid=51788 uid=0 result="success"
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.9653] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.9688] audit: op="networking-control" arg="global-dns-configuration" pid=51788 uid=0 result="success"
Jan 31 02:16:00 np0005603621 NetworkManager[49013]: <info>  [1769843760.9735] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51788 uid=0 result="success"
Jan 31 02:16:01 np0005603621 python3.9[52148]: ansible-ansible.legacy.async_status Invoked with jid=j888998596134.51782 mode=status _async_dir=/root/.ansible_async
Jan 31 02:16:01 np0005603621 NetworkManager[49013]: <info>  [1769843761.1164] checkpoint[0x558ea6f8da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 31 02:16:01 np0005603621 NetworkManager[49013]: <info>  [1769843761.1168] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51788 uid=0 result="success"
Jan 31 02:16:01 np0005603621 ansible-async_wrapper.py[51786]: Module complete (51786)
Jan 31 02:16:01 np0005603621 ansible-async_wrapper.py[51785]: Done in kid B.
Jan 31 02:16:03 np0005603621 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 02:16:04 np0005603621 python3.9[52255]: ansible-ansible.legacy.async_status Invoked with jid=j888998596134.51782 mode=status _async_dir=/root/.ansible_async
Jan 31 02:16:05 np0005603621 python3.9[52354]: ansible-ansible.legacy.async_status Invoked with jid=j888998596134.51782 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 02:16:06 np0005603621 python3.9[52506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:07 np0005603621 python3.9[52629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843766.2485442-926-189923798925436/.source.returncode _original_basename=.ks9sbqks follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:10 np0005603621 python3.9[52782]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:10 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:16:11 np0005603621 python3.9[52907]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843770.2567708-974-143725941844628/.source.cfg _original_basename=.y9wf3e0n follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:12 np0005603621 python3.9[53059]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:16:12 np0005603621 systemd[1]: Reloading Network Manager...
Jan 31 02:16:12 np0005603621 NetworkManager[49013]: <info>  [1769843772.2361] audit: op="reload" arg="0" pid=53063 uid=0 result="success"
Jan 31 02:16:12 np0005603621 NetworkManager[49013]: <info>  [1769843772.2369] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 31 02:16:12 np0005603621 systemd[1]: Reloaded Network Manager.
Jan 31 02:16:12 np0005603621 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 31 02:16:12 np0005603621 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 31 02:16:12 np0005603621 systemd[1]: session-10.scope: Deactivated successfully.
Jan 31 02:16:12 np0005603621 systemd[1]: session-10.scope: Consumed 44.257s CPU time.
Jan 31 02:16:12 np0005603621 systemd-logind[818]: Session 10 logged out. Waiting for processes to exit.
Jan 31 02:16:12 np0005603621 systemd-logind[818]: Removed session 10.
Jan 31 02:16:18 np0005603621 systemd-logind[818]: New session 11 of user zuul.
Jan 31 02:16:18 np0005603621 systemd[1]: Started Session 11 of User zuul.
Jan 31 02:16:19 np0005603621 python3.9[53251]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:16:20 np0005603621 python3.9[53405]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:16:21 np0005603621 python3.9[53599]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:21 np0005603621 systemd[1]: session-11.scope: Deactivated successfully.
Jan 31 02:16:21 np0005603621 systemd[1]: session-11.scope: Consumed 1.824s CPU time.
Jan 31 02:16:21 np0005603621 systemd-logind[818]: Session 11 logged out. Waiting for processes to exit.
Jan 31 02:16:21 np0005603621 systemd-logind[818]: Removed session 11.
Jan 31 02:16:22 np0005603621 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 31 02:16:28 np0005603621 systemd-logind[818]: New session 12 of user zuul.
Jan 31 02:16:28 np0005603621 systemd[1]: Started Session 12 of User zuul.
Jan 31 02:16:29 np0005603621 python3.9[53780]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:16:30 np0005603621 python3.9[53935]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:16:31 np0005603621 python3.9[54091]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:16:32 np0005603621 python3.9[54175]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:16:34 np0005603621 python3.9[54328]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:16:35 np0005603621 python3.9[54523]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:36 np0005603621 python3.9[54675]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:36 np0005603621 podman[54676]: 2026-01-31 07:16:36.835980996 +0000 UTC m=+0.047922546 system refresh
Jan 31 02:16:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:16:37 np0005603621 python3.9[54838]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:38 np0005603621 python3.9[54961]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843797.3965082-196-34626483817912/.source.json follow=False _original_basename=podman_network_config.j2 checksum=dbfd71f6ced766214aa7b5eac0aca2fb063391d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:16:39 np0005603621 python3.9[55113]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:16:39 np0005603621 python3.9[55236]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843798.882928-241-37095856761632/.source.conf follow=False _original_basename=registries.conf.j2 checksum=51dca2f6e7d675b0597f23a4e044edd3f4faff03 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:40 np0005603621 python3.9[55388]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:41 np0005603621 python3.9[55540]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:41 np0005603621 python3.9[55692]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:42 np0005603621 python3.9[55844]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:16:43 np0005603621 python3.9[55996]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:16:46 np0005603621 python3.9[56149]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:16:47 np0005603621 python3.9[56303]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:16:48 np0005603621 python3.9[56455]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:16:48 np0005603621 python3.9[56607]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:16:49 np0005603621 python3.9[56760]: ansible-service_facts Invoked
Jan 31 02:16:49 np0005603621 network[56777]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:16:49 np0005603621 network[56778]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:16:49 np0005603621 network[56779]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:16:58 np0005603621 python3.9[57231]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:17:02 np0005603621 python3.9[57384]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 02:17:03 np0005603621 python3.9[57536]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:04 np0005603621 python3.9[57661]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843823.2431614-673-187648772669047/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:05 np0005603621 python3.9[57815]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:05 np0005603621 python3.9[57940]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843824.5366702-718-97017797372093/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:07 np0005603621 python3.9[58094]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:08 np0005603621 python3.9[58248]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:17:09 np0005603621 python3.9[58332]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:12 np0005603621 python3.9[58486]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:17:12 np0005603621 python3.9[58570]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:17:13 np0005603621 chronyd[827]: chronyd exiting
Jan 31 02:17:13 np0005603621 systemd[1]: Stopping NTP client/server...
Jan 31 02:17:13 np0005603621 systemd[1]: chronyd.service: Deactivated successfully.
Jan 31 02:17:13 np0005603621 systemd[1]: Stopped NTP client/server.
Jan 31 02:17:13 np0005603621 systemd[1]: Starting NTP client/server...
Jan 31 02:17:13 np0005603621 chronyd[58578]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 31 02:17:13 np0005603621 chronyd[58578]: Frequency -31.982 +/- 0.094 ppm read from /var/lib/chrony/drift
Jan 31 02:17:13 np0005603621 chronyd[58578]: Loaded seccomp filter (level 2)
Jan 31 02:17:13 np0005603621 systemd[1]: Started NTP client/server.
Jan 31 02:17:13 np0005603621 systemd[1]: session-12.scope: Deactivated successfully.
Jan 31 02:17:13 np0005603621 systemd[1]: session-12.scope: Consumed 22.310s CPU time.
Jan 31 02:17:13 np0005603621 systemd-logind[818]: Session 12 logged out. Waiting for processes to exit.
Jan 31 02:17:13 np0005603621 systemd-logind[818]: Removed session 12.
Jan 31 02:17:19 np0005603621 systemd-logind[818]: New session 13 of user zuul.
Jan 31 02:17:19 np0005603621 systemd[1]: Started Session 13 of User zuul.
Jan 31 02:17:20 np0005603621 python3.9[58759]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:21 np0005603621 python3.9[58911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:21 np0005603621 python3.9[59034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843840.4415638-62-267100933400655/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:22 np0005603621 systemd[1]: session-13.scope: Deactivated successfully.
Jan 31 02:17:22 np0005603621 systemd[1]: session-13.scope: Consumed 1.325s CPU time.
Jan 31 02:17:22 np0005603621 systemd-logind[818]: Session 13 logged out. Waiting for processes to exit.
Jan 31 02:17:22 np0005603621 systemd-logind[818]: Removed session 13.
Jan 31 02:17:27 np0005603621 systemd-logind[818]: New session 14 of user zuul.
Jan 31 02:17:27 np0005603621 systemd[1]: Started Session 14 of User zuul.
Jan 31 02:17:28 np0005603621 python3.9[59212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:17:29 np0005603621 python3.9[59368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:30 np0005603621 python3.9[59543]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:30 np0005603621 python3.9[59666]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769843849.6261425-83-187866697091386/.source.json _original_basename=.ks2um7t9 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:31 np0005603621 python3.9[59818]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:32 np0005603621 python3.9[59941]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843851.4039698-152-74352856105431/.source _original_basename=.0cf052fe follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:33 np0005603621 python3.9[60093]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:33 np0005603621 python3.9[60245]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:34 np0005603621 python3.9[60368]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843853.284513-224-59467233674374/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:34 np0005603621 python3.9[60520]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:35 np0005603621 python3.9[60643]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769843854.2564373-224-180815348507966/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:17:36 np0005603621 python3.9[60795]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:37 np0005603621 python3.9[60947]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:37 np0005603621 python3.9[61070]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843856.542261-335-51981742227932/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:38 np0005603621 python3.9[61222]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:38 np0005603621 python3.9[61345]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843857.718639-380-96088071511882/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:40 np0005603621 python3.9[61497]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:40 np0005603621 systemd[1]: Reloading.
Jan 31 02:17:40 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:40 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:40 np0005603621 systemd[1]: Reloading.
Jan 31 02:17:40 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:40 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:40 np0005603621 systemd[1]: Starting EDPM Container Shutdown...
Jan 31 02:17:40 np0005603621 systemd[1]: Finished EDPM Container Shutdown.
Jan 31 02:17:41 np0005603621 python3.9[61724]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:41 np0005603621 python3.9[61847]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843860.8321428-449-183974649581745/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:42 np0005603621 python3.9[61999]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:42 np0005603621 python3.9[62122]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843861.9546554-494-187539634375235/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:43 np0005603621 python3.9[62274]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:43 np0005603621 systemd[1]: Reloading.
Jan 31 02:17:43 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:43 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:43 np0005603621 systemd[1]: Reloading.
Jan 31 02:17:44 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:44 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:44 np0005603621 systemd[1]: Starting Create netns directory...
Jan 31 02:17:44 np0005603621 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:17:44 np0005603621 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:17:44 np0005603621 systemd[1]: Finished Create netns directory.
Jan 31 02:17:45 np0005603621 python3.9[62502]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:17:45 np0005603621 network[62519]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:17:45 np0005603621 network[62520]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:17:45 np0005603621 network[62521]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:17:49 np0005603621 python3.9[62783]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:49 np0005603621 systemd[1]: Reloading.
Jan 31 02:17:49 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:49 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:49 np0005603621 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 31 02:17:49 np0005603621 iptables.init[62823]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 31 02:17:49 np0005603621 iptables.init[62823]: iptables: Flushing firewall rules: [  OK  ]
Jan 31 02:17:49 np0005603621 systemd[1]: iptables.service: Deactivated successfully.
Jan 31 02:17:49 np0005603621 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 31 02:17:50 np0005603621 python3.9[63021]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:51 np0005603621 python3.9[63175]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:17:51 np0005603621 systemd[1]: Reloading.
Jan 31 02:17:51 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:17:51 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:17:51 np0005603621 systemd[1]: Starting Netfilter Tables...
Jan 31 02:17:51 np0005603621 systemd[1]: Finished Netfilter Tables.
Jan 31 02:17:53 np0005603621 python3.9[63367]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:17:54 np0005603621 python3.9[63520]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:54 np0005603621 python3.9[63645]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843873.9040735-701-168868464911176/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:55 np0005603621 python3.9[63798]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:17:55 np0005603621 systemd[1]: Reloading OpenSSH server daemon...
Jan 31 02:17:55 np0005603621 systemd[1]: Reloaded OpenSSH server daemon.
Jan 31 02:17:56 np0005603621 python3.9[63954]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:57 np0005603621 python3.9[64106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:17:57 np0005603621 python3.9[64229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843876.7517314-794-14823480584789/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:17:59 np0005603621 python3.9[64381]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 02:17:59 np0005603621 systemd[1]: Starting Time & Date Service...
Jan 31 02:17:59 np0005603621 systemd[1]: Started Time & Date Service.
Jan 31 02:17:59 np0005603621 python3.9[64537]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:00 np0005603621 python3.9[64689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:01 np0005603621 python3.9[64812]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843880.2613199-899-265536598584867/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:02 np0005603621 python3.9[64964]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:02 np0005603621 python3.9[65087]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769843881.9403248-944-269163667028861/.source.yaml _original_basename=.m8pklsaj follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:03 np0005603621 python3.9[65239]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:04 np0005603621 python3.9[65362]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843883.0410156-989-143634954781491/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:04 np0005603621 python3.9[65514]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:05 np0005603621 python3.9[65667]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
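The task above dumps the live ruleset as JSON (`nft -j list ruleset`), whose output is a top-level `"nftables"` array mixing a `metainfo` object with `table`/`chain`/`rule` objects. A minimal sketch of walking that schema, using a hypothetical shortened sample (the table and chain names here are illustrative, not taken from this host):

```python
import json

# Hypothetical, abbreviated `nft -j list ruleset` output.
sample = '''
{"nftables": [
  {"metainfo": {"version": "1.0.4", "json_schema_version": 1}},
  {"table": {"family": "inet", "name": "filter", "handle": 1}},
  {"chain": {"family": "inet", "table": "filter", "name": "EDPM_INPUT", "handle": 2}}
]}
'''

# Each array element is a one-key object whose key names the object type.
ruleset = json.loads(sample)["nftables"]
tables = [obj["table"]["name"] for obj in ruleset if "table" in obj]
chains = [obj["chain"]["name"] for obj in ruleset if "chain" in obj]
print(tables, chains)
```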
Jan 31 02:18:06 np0005603621 python3[65820]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:18:07 np0005603621 python3.9[65972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:07 np0005603621 python3.9[66095]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843886.3245513-1106-251535700382663/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:08 np0005603621 python3.9[66247]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:09 np0005603621 python3.9[66370]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843888.0206761-1151-229341224785165/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:10 np0005603621 python3.9[66522]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:10 np0005603621 python3.9[66645]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843889.6458542-1196-6504029566303/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:11 np0005603621 python3.9[66797]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:11 np0005603621 python3.9[66920]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843890.8466403-1241-143243407264496/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:12 np0005603621 python3.9[67072]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:18:13 np0005603621 python3.9[67195]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769843892.118556-1286-232342013818155/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:14 np0005603621 python3.9[67347]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:14 np0005603621 python3.9[67499]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:15 np0005603621 python3.9[67658]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
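In the blockinfile task above, the `#012` sequences are journald-escaped newlines, and `marker=# {mark} ANSIBLE MANAGED BLOCK` with `marker_begin=BEGIN`/`marker_end=END` frames the block. Decoded, the managed block written into `/etc/sysconfig/nftables.conf` (and validated with `nft -c -f %s`) is:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```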
Jan 31 02:18:16 np0005603621 python3.9[67811]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:17 np0005603621 python3.9[67963]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:18 np0005603621 python3.9[68115]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 02:18:18 np0005603621 python3.9[68268]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
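The two `ansible.posix.mount` tasks above use `state=mounted`, which both mounts the filesystem and persists it. Assuming the module's default fstab handling (`src=none`, `dump=0`, `passno=0`), the resulting `/etc/fstab` entries would look like:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```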
Jan 31 02:18:19 np0005603621 systemd[1]: session-14.scope: Deactivated successfully.
Jan 31 02:18:19 np0005603621 systemd[1]: session-14.scope: Consumed 30.753s CPU time.
Jan 31 02:18:19 np0005603621 systemd-logind[818]: Session 14 logged out. Waiting for processes to exit.
Jan 31 02:18:19 np0005603621 systemd-logind[818]: Removed session 14.
Jan 31 02:18:24 np0005603621 systemd-logind[818]: New session 15 of user zuul.
Jan 31 02:18:24 np0005603621 systemd[1]: Started Session 15 of User zuul.
Jan 31 02:18:25 np0005603621 python3.9[68449]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 02:18:26 np0005603621 python3.9[68601]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:18:27 np0005603621 python3.9[68753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:18:28 np0005603621 python3.9[68905]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSEo2WrFN8DnR2/d+p3YtsWos96nHz1MZInXN3md5cJXE0icMDwEWJuGIDUd5e0SA6Q7i33i/WIEmt/wGMoNhoTI+f3plB2NyAn5vyVQGTZv7m+tOLQI3/k50Kxnpu0c5gO509yln6RcLe4MutF0imS/fINCM+Nznh7oKbn6hELTDlxDz0JH8dNsZGmtVmgnhwIrglpxAg/WpeOWkCmuuXmysx1JcAhIK5016MzaM9cOtHAGzj5s0GE7nQoH4yG0Ak3zMU/DPKr91Xq/m9PCnGKautoHmHgrEG6u+1WubtakbBxlfmroKbvrIFL6KKQzY0SiTrBsH3nZRaFGCqE0ZEyHvJz8AO3quWg2oaXRJWN98f7k3l5dtVJIuwyJxVnv6fUGuLbGxOp4T6UDPqC7b2Eg17EtpUjy77F/+8yrX6NH+hXwcWBwHelRCDSiceGQTm1uexb8Xo1R1Wt9h24H2yRKPFrqzf1R9J2vipDouDo7RLefAiCXEJDdlewdKUM5c=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILwGOCpzCDE8uIHb4RBldbKfEvxhUdsBT4K7sPU4vZLU#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLS8teLqq0Lmt8g22OKhtEhLCXd5cBLM6W2oDJcWxQl8DloBMMFjgDlHt0rzjMKEL0SpxkPbH7sPV1zbWKKJI9M=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAgshePGnD7oc3Zg8kfD9lUGSfPfE1OzPUGBHE12jLoyHnXwKTxYFYSMTWRcYgdFu4HaP0ShO1gEQF+1nDXxrozH/m2qxK/YPC5cVYCPvscwRdlyUNPOV0rpiruVZptTQ1iibsmRwMbxliXD2t13CtsrNjy9iuLgtvvnkfUh0wZKcZ8Jglg6E4vRTBPgXo3fJCfPF9Iz7GE50DpWAU8OnoLNlOf54/tcd8CyOrmLF9RwHTgNtN9FXscdQ3/A8avCF0WPWNUmfLFc20yOtfrq/xxjJMLn4KOZu1D1yjK5BSJu2pv/j0NPrTFKgPKYWjiXPdttcyubkXNZP96jkK9dgTgsEGRKuM83QpDIu7823wv4/GtEi+IsJeyqCN+3VAJo9hDB9eES8qlX4jAg6Kxen1oNkL9M+tz7N0BSdnxbS3skWEw6MsHlsBLOw7KMYe8gq8JoqHLBKBFQZZbjwaK5kNTeu6l5zAYERpt8uAEZkplq2vV5+4EOh7RPncmKuH0Xs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF//s6MNfOt3MK/jBcrJ5VkyeSY5eg1jUHN32BLTGZtT#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEtIHGkRVmmqcsRoXLuIEWyuaX3BoKld3DircbfvRpdFLzOwbxRaZ6uUN5f7sBun3oAcQLdnixnG3R/YK8L7HpM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXc4C5rCDfOfKEuMVHI9SatZ+NRO9lp335K0yZ19CDCOGSUNO2lblpRlgxO3tw3S+UGGiC/7/HHeZBA2Zd+SUVMb7ytbl5c3+XuZIIQF6DyIIDSELf0FoE0NhuSjKFilPsxyxxGYgH+gVaTZkuGhDoljaywQBSPGZdDwejVKWPVuui5xe0X4T0WVfT5avLSpIL3WjJ9hmzEaR0dUqrbKvPUAXJPDqQOZbQZbpXDIi48NPUDFwByej1xHWHRQaPJ/M6AsyrZKP/hiF2xt0mCIk1FANldusq4OUs9r/0KTVrPRCpSrsSimKBtEMJVdxqxAasE7H07sSdwFcWNC21LtsH8+/LM0oofIZ3D0Lom0NoLaC+Ocy2vqbIhOPYJ6c7Q8J/p4NFiA/lD+bgyjOOnm3Ls4VaaHXUyknu259henkVzJ+iZuRNY8ki345nrzPLoLYyxVwRkSuONyYlRp36jjp0QIL9kXLFlJ2OTHvb9FUhlG7RnxzPeHZhsihSHJv1rgU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAT2MDVMbPz3xtbIO31qZj2gzOQiz4a8pTNWAmd0+CUW#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFU8ym/rLGJxMpEsk09j3JHOh1hW4Vrm23tIOjn4/YJIrK1UFRFiQLDm+yZuj1NhWfbg71SK8ZuZ2miEJ20BHno=#012 create=True mode=0644 path=/tmp/ansible.s156zzs8 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
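Each entry in the block above follows the standard known_hosts layout of `hostnames keytype base64-key`, with the hostname field holding comma-separated names and patterns (`compute-1*`). A small sketch of splitting such a line, reusing one of the ed25519 entries from this log:

```python
def parse_known_hosts_line(line: str):
    """Split a known_hosts entry into (hostname list, key type, key)."""
    # maxsplit=2 keeps the base64 key intact even if padding follows.
    hosts, key_type, key = line.split(None, 2)
    return hosts.split(","), key_type, key

hosts, key_type, key = parse_known_hosts_line(
    "compute-1.ctlplane.example.com,192.168.122.101,compute-1* "
    "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILwGOCpzCDE8uIHb4RBldbKfEvxhUdsBT4K7sPU4vZLU"
)
print(hosts, key_type)
```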
Jan 31 02:18:28 np0005603621 python3.9[69057]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.s156zzs8' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:29 np0005603621 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 02:18:29 np0005603621 python3.9[69213]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.s156zzs8 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:30 np0005603621 systemd[1]: session-15.scope: Deactivated successfully.
Jan 31 02:18:30 np0005603621 systemd[1]: session-15.scope: Consumed 3.089s CPU time.
Jan 31 02:18:30 np0005603621 systemd-logind[818]: Session 15 logged out. Waiting for processes to exit.
Jan 31 02:18:30 np0005603621 systemd-logind[818]: Removed session 15.
Jan 31 02:18:35 np0005603621 systemd-logind[818]: New session 16 of user zuul.
Jan 31 02:18:35 np0005603621 systemd[1]: Started Session 16 of User zuul.
Jan 31 02:18:36 np0005603621 python3.9[69391]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:18:37 np0005603621 python3.9[69547]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:18:38 np0005603621 python3.9[69701]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:18:38 np0005603621 python3.9[69854]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:39 np0005603621 python3.9[70007]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:18:40 np0005603621 python3.9[70161]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:41 np0005603621 python3.9[70316]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:18:41 np0005603621 systemd-logind[818]: Session 16 logged out. Waiting for processes to exit.
Jan 31 02:18:41 np0005603621 systemd[1]: session-16.scope: Deactivated successfully.
Jan 31 02:18:41 np0005603621 systemd[1]: session-16.scope: Consumed 3.981s CPU time.
Jan 31 02:18:41 np0005603621 systemd-logind[818]: Removed session 16.
Jan 31 02:18:47 np0005603621 systemd-logind[818]: New session 17 of user zuul.
Jan 31 02:18:47 np0005603621 systemd[1]: Started Session 17 of User zuul.
Jan 31 02:18:48 np0005603621 python3.9[70494]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:18:49 np0005603621 python3.9[70650]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:18:50 np0005603621 python3.9[70734]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:18:52 np0005603621 python3.9[70885]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:18:53 np0005603621 python3.9[71036]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:18:54 np0005603621 python3.9[71186]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:18:54 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:18:54 np0005603621 python3.9[71337]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:18:55 np0005603621 systemd[1]: session-17.scope: Deactivated successfully.
Jan 31 02:18:55 np0005603621 systemd[1]: session-17.scope: Consumed 5.401s CPU time.
Jan 31 02:18:55 np0005603621 systemd-logind[818]: Session 17 logged out. Waiting for processes to exit.
Jan 31 02:18:55 np0005603621 systemd-logind[818]: Removed session 17.
Jan 31 02:19:03 np0005603621 systemd-logind[818]: New session 18 of user zuul.
Jan 31 02:19:03 np0005603621 systemd[1]: Started Session 18 of User zuul.
Jan 31 02:19:09 np0005603621 python3[72103]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:19:11 np0005603621 python3[72198]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 02:19:12 np0005603621 python3[72225]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:19:13 np0005603621 python3[72251]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:19:13 np0005603621 kernel: loop: module loaded
Jan 31 02:19:13 np0005603621 kernel: loop3: detected capacity change from 0 to 14680064
Jan 31 02:19:13 np0005603621 python3[72286]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:19:13 np0005603621 lvm[72289]: PV /dev/loop3 not used.
Jan 31 02:19:13 np0005603621 lvm[72298]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:19:13 np0005603621 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 31 02:19:13 np0005603621 lvm[72300]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 31 02:19:13 np0005603621 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
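The kernel line "loop3: detected capacity change from 0 to 14680064" reports the loop device size in 512-byte sectors, and it matches the `dd ... seek=7G` sparse backing file exactly. A quick check of that arithmetic:

```python
GIB = 1024 ** 3
SECTOR_BYTES = 512

# dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
# creates a 7 GiB sparse file without writing any data.
sparse_file_bytes = 7 * GIB
sectors = sparse_file_bytes // SECTOR_BYTES
print(sectors)  # 14680064, the capacity the kernel reports for /dev/loop3
```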
Jan 31 02:19:14 np0005603621 python3[72378]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:19:14 np0005603621 python3[72451]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843953.8445115-37123-57188543037164/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:19:15 np0005603621 python3[72501]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:19:15 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:15 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:15 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:15 np0005603621 systemd[1]: Starting Ceph OSD losetup...
Jan 31 02:19:15 np0005603621 bash[72542]: /dev/loop3: [64513]:4355666 (/var/lib/ceph-osd-0.img)
Jan 31 02:19:15 np0005603621 systemd[1]: Finished Ceph OSD losetup.
Jan 31 02:19:15 np0005603621 lvm[72543]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:19:15 np0005603621 lvm[72543]: VG ceph_vg0 finished
Jan 31 02:19:17 np0005603621 python3[72567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:19:19 np0005603621 python3[72660]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 31 02:19:21 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:19:21 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:19:22 np0005603621 python3[72770]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:19:22 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:19:22 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:19:22 np0005603621 systemd[1]: run-r1fcc4a84743a4c64ab674b614b6e5462.service: Deactivated successfully.
Jan 31 02:19:22 np0005603621 python3[72799]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
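`cephadm ls --no-detail` (invoked above to check for pre-existing daemons) prints a JSON array of daemon summaries. A minimal sketch of consuming that output, using a hypothetical one-entry sample; the field names (`style`, `name`, `fsid`) follow cephadm's JSON shape, and on a host not yet bootstrapped the real output may simply be `[]`:

```shell
# Hypothetical sample of `cephadm ls --no-detail` output; the fsid matches
# the one passed to bootstrap later in this log.
sample='[{"style": "cephadm:v1", "name": "mon.np0005603621", "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"}]'

# Count daemons and pull the first daemon name. python3 stands in for jq,
# which may not be installed on a minimal host.
count=$(printf '%s' "$sample" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))')
first=$(printf '%s' "$sample" | python3 -c 'import json,sys; print(json.load(sys.stdin)[0]["name"])')
echo "$count $first"
```

An empty array (`[]`) is the signal the playbook uses to proceed with a fresh bootstrap rather than adopt existing daemons.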
Jan 31 02:19:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:22 np0005603621 chronyd[58578]: Selected source 23.133.168.245 (pool.ntp.org)
Jan 31 02:19:23 np0005603621 python3[72861]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:19:23 np0005603621 python3[72887]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:19:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:24 np0005603621 python3[72965]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:19:24 np0005603621 python3[73038]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843963.8637276-37314-211597801782009/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:19:25 np0005603621 python3[73140]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:19:25 np0005603621 python3[73213]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769843964.9199274-37332-186967825774153/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:19:25 np0005603621 python3[73263]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:19:26 np0005603621 python3[73291]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:19:26 np0005603621 python3[73319]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:19:26 np0005603621 python3[73345]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:19:27 np0005603621 python3[73371]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
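The single-line bootstrap invocation above is easier to audit wrapped one flag per line. All flags and values below are taken verbatim from the log; nothing is added:

```shell
# Reconstruction of the cephadm bootstrap call captured by Ansible above,
# wrapped for readability. Held in a variable here only so it can be
# inspected without actually running cephadm.
bootstrap_cmd="/usr/sbin/cephadm bootstrap \
  --skip-firewalld --skip-prepare-host \
  --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
  --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
  --ssh-user ceph-admin --allow-fqdn-hostname \
  --output-keyring /etc/ceph/ceph.client.admin.keyring \
  --output-config /etc/ceph/ceph.conf \
  --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 \
  --config /home/ceph-admin/assimilate_ceph.conf \
  --skip-monitoring-stack --skip-dashboard \
  --mon-ip 192.168.122.100"
echo "$bootstrap_cmd"
```

Passing `--fsid` explicitly (rather than letting cephadm generate one) plus `--config` with an assimilate file is the adoption-style pattern: the cluster identity is decided ahead of time by the playbook, and the subsequent container runs in this log (version probe, key generation) are cephadm acting on that fixed fsid.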
Jan 31 02:19:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:27 np0005603621 systemd-logind[818]: New session 19 of user ceph-admin.
Jan 31 02:19:27 np0005603621 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 02:19:27 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 02:19:27 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 02:19:27 np0005603621 systemd[1]: Starting User Manager for UID 42477...
Jan 31 02:19:27 np0005603621 systemd[73391]: Queued start job for default target Main User Target.
Jan 31 02:19:27 np0005603621 systemd[73391]: Created slice User Application Slice.
Jan 31 02:19:27 np0005603621 systemd[73391]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:19:27 np0005603621 systemd[73391]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:19:27 np0005603621 systemd[73391]: Reached target Paths.
Jan 31 02:19:27 np0005603621 systemd[73391]: Reached target Timers.
Jan 31 02:19:27 np0005603621 systemd[73391]: Starting D-Bus User Message Bus Socket...
Jan 31 02:19:27 np0005603621 systemd[73391]: Starting Create User's Volatile Files and Directories...
Jan 31 02:19:27 np0005603621 systemd[73391]: Finished Create User's Volatile Files and Directories.
Jan 31 02:19:27 np0005603621 systemd[73391]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:19:27 np0005603621 systemd[73391]: Reached target Sockets.
Jan 31 02:19:27 np0005603621 systemd[73391]: Reached target Basic System.
Jan 31 02:19:27 np0005603621 systemd[73391]: Reached target Main User Target.
Jan 31 02:19:27 np0005603621 systemd[73391]: Startup finished in 108ms.
Jan 31 02:19:27 np0005603621 systemd[1]: Started User Manager for UID 42477.
Jan 31 02:19:27 np0005603621 systemd[1]: Started Session 19 of User ceph-admin.
Jan 31 02:19:27 np0005603621 systemd[1]: session-19.scope: Deactivated successfully.
Jan 31 02:19:27 np0005603621 systemd-logind[818]: Session 19 logged out. Waiting for processes to exit.
Jan 31 02:19:27 np0005603621 systemd-logind[818]: Removed session 19.
Jan 31 02:19:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-compat3142717435-lower\x2dmapped.mount: Deactivated successfully.
Jan 31 02:19:37 np0005603621 systemd[1]: Stopping User Manager for UID 42477...
Jan 31 02:19:37 np0005603621 systemd[73391]: Activating special unit Exit the Session...
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped target Main User Target.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped target Basic System.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped target Paths.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped target Sockets.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped target Timers.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:19:37 np0005603621 systemd[73391]: Closed D-Bus User Message Bus Socket.
Jan 31 02:19:37 np0005603621 systemd[73391]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:19:37 np0005603621 systemd[73391]: Removed slice User Application Slice.
Jan 31 02:19:37 np0005603621 systemd[73391]: Reached target Shutdown.
Jan 31 02:19:37 np0005603621 systemd[73391]: Finished Exit the Session.
Jan 31 02:19:37 np0005603621 systemd[73391]: Reached target Exit the Session.
Jan 31 02:19:37 np0005603621 systemd[1]: user@42477.service: Deactivated successfully.
Jan 31 02:19:37 np0005603621 systemd[1]: Stopped User Manager for UID 42477.
Jan 31 02:19:37 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 31 02:19:37 np0005603621 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 31 02:19:37 np0005603621 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 31 02:19:37 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 31 02:19:37 np0005603621 systemd[1]: Removed slice User Slice of UID 42477.
Jan 31 02:19:41 np0005603621 podman[73444]: 2026-01-31 07:19:41.772333189 +0000 UTC m=+13.830406140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:41 np0005603621 podman[73506]: 2026-01-31 07:19:41.868580493 +0000 UTC m=+0.073903249 container create a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89 (image=quay.io/ceph/ceph:v18, name=cranky_mestorf, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:19:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2347172550-merged.mount: Deactivated successfully.
Jan 31 02:19:41 np0005603621 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 31 02:19:41 np0005603621 systemd[1]: Started libpod-conmon-a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89.scope.
Jan 31 02:19:41 np0005603621 podman[73506]: 2026-01-31 07:19:41.816106794 +0000 UTC m=+0.021429560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:41 np0005603621 podman[73506]: 2026-01-31 07:19:41.975632389 +0000 UTC m=+0.180955165 container init a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89 (image=quay.io/ceph/ceph:v18, name=cranky_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:19:41 np0005603621 podman[73506]: 2026-01-31 07:19:41.981402111 +0000 UTC m=+0.186724867 container start a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89 (image=quay.io/ceph/ceph:v18, name=cranky_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:19:41 np0005603621 podman[73506]: 2026-01-31 07:19:41.996642413 +0000 UTC m=+0.201965169 container attach a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89 (image=quay.io/ceph/ceph:v18, name=cranky_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:19:42 np0005603621 cranky_mestorf[73522]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73506]: 2026-01-31 07:19:42.267304973 +0000 UTC m=+0.472627729 container died a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89 (image=quay.io/ceph/ceph:v18, name=cranky_mestorf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:19:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0d0233054a44f65a62416b9a477db56a7126f0f3498cbb9493f5167ebbde755f-merged.mount: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73506]: 2026-01-31 07:19:42.355037977 +0000 UTC m=+0.560360733 container remove a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89 (image=quay.io/ceph/ceph:v18, name=cranky_mestorf, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-conmon-a0312b8ff381c734a0a2d1b3dfa43caf0859394c5d7e11e55f82056e0c1e7d89.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.405009037 +0000 UTC m=+0.035354569 container create 8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2 (image=quay.io/ceph/ceph:v18, name=crazy_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:42 np0005603621 systemd[1]: Started libpod-conmon-8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2.scope.
Jan 31 02:19:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.464797208 +0000 UTC m=+0.095142760 container init 8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2 (image=quay.io/ceph/ceph:v18, name=crazy_ptolemy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.470101006 +0000 UTC m=+0.100446538 container start 8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2 (image=quay.io/ceph/ceph:v18, name=crazy_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:42 np0005603621 crazy_ptolemy[73557]: 167 167
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 conmon[73557]: conmon 8047434a54cf614f5fe5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2.scope/container/memory.events
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.477670895 +0000 UTC m=+0.108016477 container attach 8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2 (image=quay.io/ceph/ceph:v18, name=crazy_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.478224202 +0000 UTC m=+0.108569754 container died 8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2 (image=quay.io/ceph/ceph:v18, name=crazy_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.389049672 +0000 UTC m=+0.019395224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:42 np0005603621 podman[73541]: 2026-01-31 07:19:42.573885198 +0000 UTC m=+0.204230760 container remove 8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2 (image=quay.io/ceph/ceph:v18, name=crazy_ptolemy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-conmon-8047434a54cf614f5fe54adadf8f5ac6c47e3613c1d2361c0d77b16e884580e2.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.644544663 +0000 UTC m=+0.054598719 container create 95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4 (image=quay.io/ceph/ceph:v18, name=mystifying_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 02:19:42 np0005603621 systemd[1]: Started libpod-conmon-95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4.scope.
Jan 31 02:19:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.608359698 +0000 UTC m=+0.018413734 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.731374829 +0000 UTC m=+0.141428855 container init 95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4 (image=quay.io/ceph/ceph:v18, name=mystifying_hopper, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.736299904 +0000 UTC m=+0.146353910 container start 95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4 (image=quay.io/ceph/ceph:v18, name=mystifying_hopper, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.741275301 +0000 UTC m=+0.151329307 container attach 95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4 (image=quay.io/ceph/ceph:v18, name=mystifying_hopper, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:19:42 np0005603621 mystifying_hopper[73591]: AQAOrX1pagz/LBAAib7tYatqUpFBNrG+qG+rfA==
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.757601077 +0000 UTC m=+0.167655113 container died 95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4 (image=quay.io/ceph/ceph:v18, name=mystifying_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:19:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eac1b992db097b47b64cc9dcb8dd7d168906d4de6e9081e6a460848020fe6de3-merged.mount: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73574]: 2026-01-31 07:19:42.814359283 +0000 UTC m=+0.224413289 container remove 95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4 (image=quay.io/ceph/ceph:v18, name=mystifying_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:19:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-conmon-95d3eacd1d66303b25e703f523ca96adb10f297947e5ad79a3f5e5aaac7175e4.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73611]: 2026-01-31 07:19:42.868624499 +0000 UTC m=+0.039755978 container create a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e (image=quay.io/ceph/ceph:v18, name=flamboyant_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:19:42 np0005603621 systemd[1]: Started libpod-conmon-a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e.scope.
Jan 31 02:19:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:42 np0005603621 podman[73611]: 2026-01-31 07:19:42.926091336 +0000 UTC m=+0.097222835 container init a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e (image=quay.io/ceph/ceph:v18, name=flamboyant_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:19:42 np0005603621 podman[73611]: 2026-01-31 07:19:42.929827564 +0000 UTC m=+0.100959043 container start a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e (image=quay.io/ceph/ceph:v18, name=flamboyant_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:19:42 np0005603621 podman[73611]: 2026-01-31 07:19:42.93948171 +0000 UTC m=+0.110613189 container attach a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e (image=quay.io/ceph/ceph:v18, name=flamboyant_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:19:42 np0005603621 flamboyant_banzai[73628]: AQAOrX1pAbFDOBAAZB+gxcwMb1eUw7cwUWcWLA==
Jan 31 02:19:42 np0005603621 systemd[1]: libpod-a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e.scope: Deactivated successfully.
Jan 31 02:19:42 np0005603621 podman[73611]: 2026-01-31 07:19:42.945892222 +0000 UTC m=+0.117023711 container died a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e (image=quay.io/ceph/ceph:v18, name=flamboyant_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:19:42 np0005603621 podman[73611]: 2026-01-31 07:19:42.849184344 +0000 UTC m=+0.020315843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:43 np0005603621 podman[73611]: 2026-01-31 07:19:43.009680229 +0000 UTC m=+0.180811748 container remove a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e (image=quay.io/ceph/ceph:v18, name=flamboyant_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:19:43 np0005603621 systemd[1]: libpod-conmon-a04600943dfbf4079ffb9a7be2a5d696dc0ec019be0af562c3c57dff98d7d17e.scope: Deactivated successfully.
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.060211537 +0000 UTC m=+0.037221447 container create cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072 (image=quay.io/ceph/ceph:v18, name=heuristic_mccarthy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:19:43 np0005603621 systemd[1]: Started libpod-conmon-cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072.scope.
Jan 31 02:19:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.042699764 +0000 UTC m=+0.019709694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.145439793 +0000 UTC m=+0.122449783 container init cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072 (image=quay.io/ceph/ceph:v18, name=heuristic_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.149615285 +0000 UTC m=+0.126625205 container start cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072 (image=quay.io/ceph/ceph:v18, name=heuristic_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.154521181 +0000 UTC m=+0.131531181 container attach cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072 (image=quay.io/ceph/ceph:v18, name=heuristic_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:19:43 np0005603621 heuristic_mccarthy[73665]: AQAPrX1pTMXYCRAAfly9FyiNB9iMkdpOBMJg5A==
Jan 31 02:19:43 np0005603621 systemd[1]: libpod-cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072.scope: Deactivated successfully.
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.168641847 +0000 UTC m=+0.145651737 container died cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072 (image=quay.io/ceph/ceph:v18, name=heuristic_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:19:43 np0005603621 podman[73648]: 2026-01-31 07:19:43.207658871 +0000 UTC m=+0.184668761 container remove cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072 (image=quay.io/ceph/ceph:v18, name=heuristic_mccarthy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:43 np0005603621 systemd[1]: libpod-conmon-cec62a182aad5dc2394b3e3966e5dd3e77953bd9c9eb194713642ebc4191f072.scope: Deactivated successfully.
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.286246486 +0000 UTC m=+0.060040570 container create ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a (image=quay.io/ceph/ceph:v18, name=fervent_mendel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:19:43 np0005603621 systemd[1]: Started libpod-conmon-ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a.scope.
Jan 31 02:19:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.24653728 +0000 UTC m=+0.020331414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2951ec85f712edf020c8b1a4febb71abb424a46ce84cc3a508a7e04896223e4a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.366251586 +0000 UTC m=+0.140045700 container init ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a (image=quay.io/ceph/ceph:v18, name=fervent_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.371306056 +0000 UTC m=+0.145100140 container start ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a (image=quay.io/ceph/ceph:v18, name=fervent_mendel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.382895092 +0000 UTC m=+0.156689176 container attach ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a (image=quay.io/ceph/ceph:v18, name=fervent_mendel, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:19:43 np0005603621 fervent_mendel[73701]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 31 02:19:43 np0005603621 fervent_mendel[73701]: setting min_mon_release = pacific
Jan 31 02:19:43 np0005603621 fervent_mendel[73701]: /usr/bin/monmaptool: set fsid to 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:43 np0005603621 fervent_mendel[73701]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 31 02:19:43 np0005603621 systemd[1]: libpod-ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a.scope: Deactivated successfully.
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.397537965 +0000 UTC m=+0.171332059 container died ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a (image=quay.io/ceph/ceph:v18, name=fervent_mendel, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:19:43 np0005603621 podman[73684]: 2026-01-31 07:19:43.642928775 +0000 UTC m=+0.416722869 container remove ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a (image=quay.io/ceph/ceph:v18, name=fervent_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:19:43 np0005603621 systemd[1]: libpod-conmon-ab2377b30e26aa8f82fd109bf0b409bbf605fade47a9896c500fa8196345dd6a.scope: Deactivated successfully.
Jan 31 02:19:43 np0005603621 podman[73722]: 2026-01-31 07:19:43.707249979 +0000 UTC m=+0.049215207 container create d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381 (image=quay.io/ceph/ceph:v18, name=bold_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:19:43 np0005603621 systemd[1]: Started libpod-conmon-d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381.scope.
Jan 31 02:19:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fada9ceb47a8c3d09a9e7a591ea06585d2616c3541ba1b4a026f4b78c78e198/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fada9ceb47a8c3d09a9e7a591ea06585d2616c3541ba1b4a026f4b78c78e198/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fada9ceb47a8c3d09a9e7a591ea06585d2616c3541ba1b4a026f4b78c78e198/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fada9ceb47a8c3d09a9e7a591ea06585d2616c3541ba1b4a026f4b78c78e198/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:43 np0005603621 podman[73722]: 2026-01-31 07:19:43.677190039 +0000 UTC m=+0.019155277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:43 np0005603621 podman[73722]: 2026-01-31 07:19:43.799402634 +0000 UTC m=+0.141367872 container init d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381 (image=quay.io/ceph/ceph:v18, name=bold_bell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:19:43 np0005603621 podman[73722]: 2026-01-31 07:19:43.803211054 +0000 UTC m=+0.145176282 container start d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381 (image=quay.io/ceph/ceph:v18, name=bold_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:19:43 np0005603621 podman[73722]: 2026-01-31 07:19:43.806796198 +0000 UTC m=+0.148761426 container attach d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381 (image=quay.io/ceph/ceph:v18, name=bold_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:43 np0005603621 systemd[1]: libpod-d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381.scope: Deactivated successfully.
Jan 31 02:19:43 np0005603621 podman[73722]: 2026-01-31 07:19:43.983362613 +0000 UTC m=+0.325327871 container died d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381 (image=quay.io/ceph/ceph:v18, name=bold_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:19:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3fada9ceb47a8c3d09a9e7a591ea06585d2616c3541ba1b4a026f4b78c78e198-merged.mount: Deactivated successfully.
Jan 31 02:19:44 np0005603621 podman[73722]: 2026-01-31 07:19:44.08009144 +0000 UTC m=+0.422056668 container remove d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381 (image=quay.io/ceph/ceph:v18, name=bold_bell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:19:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:44 np0005603621 systemd[1]: libpod-conmon-d8d211d55514ccb2d06a1d263a861e3b324e6564b6145fe90e4a8a653c436381.scope: Deactivated successfully.
Jan 31 02:19:44 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:44 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:44 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:44 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:44 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:44 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:44 np0005603621 systemd[1]: Reached target All Ceph clusters and services.
Jan 31 02:19:44 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:44 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:44 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:44 np0005603621 systemd[1]: Reached target Ceph cluster 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:19:44 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:44 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:44 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:44 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:45 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:45 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:45 np0005603621 systemd[1]: Created slice Slice /system/ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:19:45 np0005603621 systemd[1]: Reached target System Time Set.
Jan 31 02:19:45 np0005603621 systemd[1]: Reached target System Time Synchronized.
Jan 31 02:19:45 np0005603621 systemd[1]: Starting Ceph mon.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:19:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:45 np0005603621 podman[74019]: 2026-01-31 07:19:45.444945134 +0000 UTC m=+0.057901873 container create dcbbb81b18baa54d499c88f9f3db370dc03ab49d515451b75354865109b2e252 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b257f749df765190b30b3cb9191e0733fcef6629b29542141ccb3d80d9198d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b257f749df765190b30b3cb9191e0733fcef6629b29542141ccb3d80d9198d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b257f749df765190b30b3cb9191e0733fcef6629b29542141ccb3d80d9198d20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b257f749df765190b30b3cb9191e0733fcef6629b29542141ccb3d80d9198d20/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 podman[74019]: 2026-01-31 07:19:45.41321868 +0000 UTC m=+0.026175439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:45 np0005603621 podman[74019]: 2026-01-31 07:19:45.509924038 +0000 UTC m=+0.122880817 container init dcbbb81b18baa54d499c88f9f3db370dc03ab49d515451b75354865109b2e252 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:19:45 np0005603621 podman[74019]: 2026-01-31 07:19:45.51410566 +0000 UTC m=+0.127062409 container start dcbbb81b18baa54d499c88f9f3db370dc03ab49d515451b75354865109b2e252 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:19:45 np0005603621 bash[74019]: dcbbb81b18baa54d499c88f9f3db370dc03ab49d515451b75354865109b2e252
Jan 31 02:19:45 np0005603621 systemd[1]: Started Ceph mon.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: pidfile_write: ignore empty --pid-file
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: load: jerasure load: lrc 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Git sha 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: DB SUMMARY
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: DB Session ID:  G36NNBGTKDMA0NB9ENKK
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                                     Options.env: 0x56309bdcec40
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                                Options.info_log: 0x56309e490ec0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                                 Options.wal_dir: 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                    Options.write_buffer_manager: 0x56309e4a0b40
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                               Options.row_cache: None
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                              Options.wal_filter: None
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.wal_compression: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.max_background_jobs: 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Compression algorithms supported:
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kZSTD supported: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kXpressCompression supported: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kZlibCompression supported: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:           Options.merge_operator: 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:        Options.compaction_filter: None
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56309e490aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56309e4891f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.compression: NoCompression
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.num_levels: 7
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1c0121d-64f9-45ea-8a37-6ea14a60eed8
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843985568619, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843985574436, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "G36NNBGTKDMA0NB9ENKK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843985574611, "job": 1, "event": "recovery_finished"}
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56309e4b2e00
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: DB pointer 0x56309e5bc000
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56309e4891f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@-1(???) e0 preinit fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:45 np0005603621 podman[74041]: 2026-01-31 07:19:45.612565594 +0000 UTC m=+0.044129407 container create 9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65 (image=quay.io/ceph/ceph:v18, name=sad_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-31T07:19:43.848733Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,os=Linux}
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 31 02:19:45 np0005603621 systemd[1]: Started libpod-conmon-9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65.scope.
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).mds e1 new map
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 02:19:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:45 np0005603621 podman[74041]: 2026-01-31 07:19:45.592140598 +0000 UTC m=+0.023704451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca5bcd2ec494d013e9a08a30479545fc603671ff9a8a1ed4ae033f8df8969ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca5bcd2ec494d013e9a08a30479545fc603671ff9a8a1ed4ae033f8df8969ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ca5bcd2ec494d013e9a08a30479545fc603671ff9a8a1ed4ae033f8df8969ff/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mkfs 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 02:19:45 np0005603621 ceph-mon[74039]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 02:19:45 np0005603621 podman[74041]: 2026-01-31 07:19:45.750454324 +0000 UTC m=+0.182018167 container init 9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65 (image=quay.io/ceph/ceph:v18, name=sad_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:19:45 np0005603621 podman[74041]: 2026-01-31 07:19:45.756881838 +0000 UTC m=+0.188445661 container start 9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65 (image=quay.io/ceph/ceph:v18, name=sad_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:19:45 np0005603621 podman[74041]: 2026-01-31 07:19:45.764002403 +0000 UTC m=+0.195566256 container attach 9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65 (image=quay.io/ceph/ceph:v18, name=sad_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:19:46 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 02:19:46 np0005603621 ceph-mon[74039]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1855693831' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:  cluster:
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    id:     2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    health: HEALTH_OK
Jan 31 02:19:46 np0005603621 sad_bartik[74094]: 
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:  services:
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    mon: 1 daemons, quorum compute-0 (age 0.474994s)
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    mgr: no daemons active
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    osd: 0 osds: 0 up, 0 in
Jan 31 02:19:46 np0005603621 sad_bartik[74094]: 
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:  data:
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    pools:   0 pools, 0 pgs
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    objects: 0 objects, 0 B
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    usage:   0 B used, 0 B / 0 B avail
Jan 31 02:19:46 np0005603621 sad_bartik[74094]:    pgs:     
Jan 31 02:19:46 np0005603621 sad_bartik[74094]: 
Jan 31 02:19:46 np0005603621 systemd[1]: libpod-9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65.scope: Deactivated successfully.
Jan 31 02:19:46 np0005603621 podman[74120]: 2026-01-31 07:19:46.162503185 +0000 UTC m=+0.024552437 container died 9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65 (image=quay.io/ceph/ceph:v18, name=sad_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:19:46 np0005603621 podman[74120]: 2026-01-31 07:19:46.238594681 +0000 UTC m=+0.100643873 container remove 9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65 (image=quay.io/ceph/ceph:v18, name=sad_bartik, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:46 np0005603621 systemd[1]: libpod-conmon-9f08e02db4241cf9cee138dbaa2bb29928b100c4851c366e6b50016cee97eb65.scope: Deactivated successfully.
Jan 31 02:19:46 np0005603621 podman[74135]: 2026-01-31 07:19:46.308768351 +0000 UTC m=+0.045874662 container create cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:19:46 np0005603621 systemd[1]: Started libpod-conmon-cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9.scope.
Jan 31 02:19:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a5fd332a7ace0eadf8b4ea2bbd56e5d3fb9171db6e178c258b1ddfacc21f65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a5fd332a7ace0eadf8b4ea2bbd56e5d3fb9171db6e178c258b1ddfacc21f65/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a5fd332a7ace0eadf8b4ea2bbd56e5d3fb9171db6e178c258b1ddfacc21f65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a5fd332a7ace0eadf8b4ea2bbd56e5d3fb9171db6e178c258b1ddfacc21f65/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:46 np0005603621 podman[74135]: 2026-01-31 07:19:46.284862005 +0000 UTC m=+0.021968346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:46 np0005603621 podman[74135]: 2026-01-31 07:19:46.4212989 +0000 UTC m=+0.158405231 container init cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:19:46 np0005603621 podman[74135]: 2026-01-31 07:19:46.425706289 +0000 UTC m=+0.162812600 container start cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:19:46 np0005603621 podman[74135]: 2026-01-31 07:19:46.431064678 +0000 UTC m=+0.168170999 container attach cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:19:46 np0005603621 ceph-mon[74039]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 02:19:46 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 02:19:46 np0005603621 ceph-mon[74039]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1034640378' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:19:46 np0005603621 ceph-mon[74039]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1034640378' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 02:19:46 np0005603621 clever_maxwell[74151]: 
Jan 31 02:19:46 np0005603621 clever_maxwell[74151]: [global]
Jan 31 02:19:46 np0005603621 clever_maxwell[74151]: #011fsid = 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:46 np0005603621 clever_maxwell[74151]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 31 02:19:46 np0005603621 systemd[1]: libpod-cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9.scope: Deactivated successfully.
Jan 31 02:19:46 np0005603621 podman[74177]: 2026-01-31 07:19:46.930893775 +0000 UTC m=+0.027776129 container died cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c7a5fd332a7ace0eadf8b4ea2bbd56e5d3fb9171db6e178c258b1ddfacc21f65-merged.mount: Deactivated successfully.
Jan 31 02:19:47 np0005603621 podman[74177]: 2026-01-31 07:19:47.003180181 +0000 UTC m=+0.100062485 container remove cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:19:47 np0005603621 systemd[1]: libpod-conmon-cb4bad4923276500bb603323a322a8dfea9c5c711bf481350b8f9d4fe0d377b9.scope: Deactivated successfully.
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.081118556 +0000 UTC m=+0.058182601 container create d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950 (image=quay.io/ceph/ceph:v18, name=inspiring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:19:47 np0005603621 systemd[1]: Started libpod-conmon-d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950.scope.
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.046892973 +0000 UTC m=+0.023957108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b5b53f38355654a63a767cdf3a0a56da9bdf3c63684575cd8428d4a0e18d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b5b53f38355654a63a767cdf3a0a56da9bdf3c63684575cd8428d4a0e18d1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b5b53f38355654a63a767cdf3a0a56da9bdf3c63684575cd8428d4a0e18d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc1b5b53f38355654a63a767cdf3a0a56da9bdf3c63684575cd8428d4a0e18d1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.189322297 +0000 UTC m=+0.166386372 container init d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950 (image=quay.io/ceph/ceph:v18, name=inspiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.195297796 +0000 UTC m=+0.172361881 container start d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950 (image=quay.io/ceph/ceph:v18, name=inspiring_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.224690926 +0000 UTC m=+0.201755011 container attach d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950 (image=quay.io/ceph/ceph:v18, name=inspiring_hodgkin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109665304' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:19:47 np0005603621 systemd[1]: libpod-d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950.scope: Deactivated successfully.
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.63780842 +0000 UTC m=+0.614872475 container died d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950 (image=quay.io/ceph/ceph:v18, name=inspiring_hodgkin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:19:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cc1b5b53f38355654a63a767cdf3a0a56da9bdf3c63684575cd8428d4a0e18d1-merged.mount: Deactivated successfully.
Jan 31 02:19:47 np0005603621 podman[74192]: 2026-01-31 07:19:47.694678769 +0000 UTC m=+0.671742824 container remove d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950 (image=quay.io/ceph/ceph:v18, name=inspiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:19:47 np0005603621 systemd[1]: libpod-conmon-d12a1e7a2a19f54fa68670397932c76dcaf963a7d48ed67e7b657d3b1ce69950.scope: Deactivated successfully.
Jan 31 02:19:47 np0005603621 systemd[1]: Stopping Ceph mon.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: from='client.? 192.168.122.100:0/1034640378' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: from='client.? 192.168.122.100:0/1034640378' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: mon.compute-0@0(leader) e1 shutdown
Jan 31 02:19:47 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0[74035]: 2026-01-31T07:19:47.898+0000 7f269ce59640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 31 02:19:47 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0[74035]: 2026-01-31T07:19:47.898+0000 7f269ce59640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 02:19:47 np0005603621 ceph-mon[74039]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 02:19:48 np0005603621 podman[74273]: 2026-01-31 07:19:48.022856148 +0000 UTC m=+0.162737488 container died dcbbb81b18baa54d499c88f9f3db370dc03ab49d515451b75354865109b2e252 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:19:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b257f749df765190b30b3cb9191e0733fcef6629b29542141ccb3d80d9198d20-merged.mount: Deactivated successfully.
Jan 31 02:19:48 np0005603621 podman[74273]: 2026-01-31 07:19:48.0532817 +0000 UTC m=+0.193163010 container remove dcbbb81b18baa54d499c88f9f3db370dc03ab49d515451b75354865109b2e252 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:19:48 np0005603621 bash[74273]: ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0
Jan 31 02:19:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 31 02:19:48 np0005603621 systemd[1]: ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@mon.compute-0.service: Deactivated successfully.
Jan 31 02:19:48 np0005603621 systemd[1]: Stopped Ceph mon.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:19:48 np0005603621 systemd[1]: Starting Ceph mon.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:19:48 np0005603621 podman[74375]: 2026-01-31 07:19:48.339067518 +0000 UTC m=+0.037555179 container create 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd869002e8b8a3382f2de97b10877aa2b31a62c26e451dfbbaca94ae5ce6e9e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd869002e8b8a3382f2de97b10877aa2b31a62c26e451dfbbaca94ae5ce6e9e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd869002e8b8a3382f2de97b10877aa2b31a62c26e451dfbbaca94ae5ce6e9e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd869002e8b8a3382f2de97b10877aa2b31a62c26e451dfbbaca94ae5ce6e9e9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 podman[74375]: 2026-01-31 07:19:48.32270352 +0000 UTC m=+0.021191211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:48 np0005603621 podman[74375]: 2026-01-31 07:19:48.467691246 +0000 UTC m=+0.166178937 container init 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:19:48 np0005603621 podman[74375]: 2026-01-31 07:19:48.472346063 +0000 UTC m=+0.170833714 container start 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:19:48 np0005603621 bash[74375]: 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86
Jan 31 02:19:48 np0005603621 systemd[1]: Started Ceph mon.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: pidfile_write: ignore empty --pid-file
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: load: jerasure load: lrc 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Git sha 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: DB SUMMARY
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: DB Session ID:  H7FZQV5I20IRLGUCDO1Y
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                                     Options.env: 0x55f82a845c40
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                                Options.info_log: 0x55f82bbd3040
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                                 Options.wal_dir: 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                    Options.write_buffer_manager: 0x55f82bbe2b40
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                               Options.row_cache: None
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                              Options.wal_filter: None
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.wal_compression: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.max_background_jobs: 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.max_total_wal_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:       Options.compaction_readahead_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Compression algorithms supported:
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kZSTD supported: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kXpressCompression supported: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kBZip2Compression supported: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kLZ4Compression supported: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kZlibCompression supported: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: #011kSnappyCompression supported: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:           Options.merge_operator: 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:        Options.compaction_filter: None
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f82bbd2c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f82bbcb1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:        Options.write_buffer_size: 33554432
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:  Options.max_write_buffer_number: 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.compression: NoCompression
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.num_levels: 7
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b1c0121d-64f9-45ea-8a37-6ea14a60eed8
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843988504830, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843988509420, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843988, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769843988509519, "job": 1, "event": "recovery_finished"}
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f82bbf4e00
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: DB pointer 0x55f82bc7e000
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 4.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 4.50 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???) e1 preinit fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).mds e1 new map
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 31 02:19:48 np0005603621 podman[74398]: 2026-01-31 07:19:48.559148877 +0000 UTC m=+0.043347651 container create 0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50 (image=quay.io/ceph/ceph:v18, name=modest_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:19:48 np0005603621 ceph-mon[74394]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 31 02:19:48 np0005603621 systemd[1]: Started libpod-conmon-0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50.scope.
Jan 31 02:19:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007447180c029f2afaac8fcdb443ba9e68b1d0454cb5c8fd0aa80b1a517d7e36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007447180c029f2afaac8fcdb443ba9e68b1d0454cb5c8fd0aa80b1a517d7e36/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007447180c029f2afaac8fcdb443ba9e68b1d0454cb5c8fd0aa80b1a517d7e36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:48 np0005603621 podman[74398]: 2026-01-31 07:19:48.641474171 +0000 UTC m=+0.125673025 container init 0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50 (image=quay.io/ceph/ceph:v18, name=modest_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:19:48 np0005603621 podman[74398]: 2026-01-31 07:19:48.546847159 +0000 UTC m=+0.031045933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:48 np0005603621 podman[74398]: 2026-01-31 07:19:48.64997308 +0000 UTC m=+0.134171864 container start 0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50 (image=quay.io/ceph/ceph:v18, name=modest_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:19:48 np0005603621 podman[74398]: 2026-01-31 07:19:48.653936225 +0000 UTC m=+0.138135009 container attach 0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50 (image=quay.io/ceph/ceph:v18, name=modest_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:19:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 31 02:19:49 np0005603621 systemd[1]: libpod-0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50.scope: Deactivated successfully.
Jan 31 02:19:49 np0005603621 podman[74475]: 2026-01-31 07:19:49.112639291 +0000 UTC m=+0.025466296 container died 0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50 (image=quay.io/ceph/ceph:v18, name=modest_meitner, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:19:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-007447180c029f2afaac8fcdb443ba9e68b1d0454cb5c8fd0aa80b1a517d7e36-merged.mount: Deactivated successfully.
Jan 31 02:19:49 np0005603621 podman[74475]: 2026-01-31 07:19:49.168371264 +0000 UTC m=+0.081198249 container remove 0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50 (image=quay.io/ceph/ceph:v18, name=modest_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:19:49 np0005603621 systemd[1]: libpod-conmon-0ce3a318058ff0c3746ea1ead6eff9a262524e9834affbf5315d18bfb31e2d50.scope: Deactivated successfully.
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.221460783 +0000 UTC m=+0.034791452 container create d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb (image=quay.io/ceph/ceph:v18, name=sleepy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:49 np0005603621 systemd[1]: Started libpod-conmon-d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb.scope.
Jan 31 02:19:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/303ffaf4a014c6e71401b1e81915f9e0dcf1f2711011707ccf058ff0d874d5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/303ffaf4a014c6e71401b1e81915f9e0dcf1f2711011707ccf058ff0d874d5d6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/303ffaf4a014c6e71401b1e81915f9e0dcf1f2711011707ccf058ff0d874d5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.28369552 +0000 UTC m=+0.097026229 container init d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb (image=quay.io/ceph/ceph:v18, name=sleepy_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.288140851 +0000 UTC m=+0.101471510 container start d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb (image=quay.io/ceph/ceph:v18, name=sleepy_nash, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.290662141 +0000 UTC m=+0.103992840 container attach d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb (image=quay.io/ceph/ceph:v18, name=sleepy_nash, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.206478539 +0000 UTC m=+0.019809218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 31 02:19:49 np0005603621 systemd[1]: libpod-d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb.scope: Deactivated successfully.
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.670838174 +0000 UTC m=+0.484168843 container died d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb (image=quay.io/ceph/ceph:v18, name=sleepy_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:19:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-303ffaf4a014c6e71401b1e81915f9e0dcf1f2711011707ccf058ff0d874d5d6-merged.mount: Deactivated successfully.
Jan 31 02:19:49 np0005603621 podman[74490]: 2026-01-31 07:19:49.717565171 +0000 UTC m=+0.530895840 container remove d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb (image=quay.io/ceph/ceph:v18, name=sleepy_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:19:49 np0005603621 systemd[1]: libpod-conmon-d42f08d15e23cd63a40f39fd1dafe4c2ae39e9c1cc9afbdf92511a58a08bcabb.scope: Deactivated successfully.
Jan 31 02:19:49 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:49 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:49 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:50 np0005603621 systemd[1]: Reloading.
Jan 31 02:19:50 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:19:50 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:19:50 np0005603621 systemd[1]: Starting Ceph mgr.compute-0.ddmhwk for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:19:50 np0005603621 podman[74670]: 2026-01-31 07:19:50.40372759 +0000 UTC m=+0.034026437 container create 27d29d56922973df23bd6f1e58ddc4d731ae614392da047fc3969a22adc43583 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa5d5fd38d4a5d883a1897d5f6ba37531cdc343e7f1449bd4f729cb9c6aba5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa5d5fd38d4a5d883a1897d5f6ba37531cdc343e7f1449bd4f729cb9c6aba5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa5d5fd38d4a5d883a1897d5f6ba37531cdc343e7f1449bd4f729cb9c6aba5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa5d5fd38d4a5d883a1897d5f6ba37531cdc343e7f1449bd4f729cb9c6aba5a/merged/var/lib/ceph/mgr/ceph-compute-0.ddmhwk supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 podman[74670]: 2026-01-31 07:19:50.448708623 +0000 UTC m=+0.079007490 container init 27d29d56922973df23bd6f1e58ddc4d731ae614392da047fc3969a22adc43583 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:19:50 np0005603621 podman[74670]: 2026-01-31 07:19:50.455168768 +0000 UTC m=+0.085467615 container start 27d29d56922973df23bd6f1e58ddc4d731ae614392da047fc3969a22adc43583 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:19:50 np0005603621 bash[74670]: 27d29d56922973df23bd6f1e58ddc4d731ae614392da047fc3969a22adc43583
Jan 31 02:19:50 np0005603621 podman[74670]: 2026-01-31 07:19:50.387857349 +0000 UTC m=+0.018156216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:50 np0005603621 systemd[1]: Started Ceph mgr.compute-0.ddmhwk for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:19:50 np0005603621 ceph-mgr[74689]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:19:50 np0005603621 ceph-mgr[74689]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 02:19:50 np0005603621 ceph-mgr[74689]: pidfile_write: ignore empty --pid-file
Jan 31 02:19:50 np0005603621 podman[74690]: 2026-01-31 07:19:50.527593618 +0000 UTC m=+0.043239538 container create e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4 (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:50 np0005603621 systemd[1]: Started libpod-conmon-e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4.scope.
Jan 31 02:19:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0853572019eb13fbe7695ce53648db0f71155ab2eccb14bf7bfbf9745f8a2839/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0853572019eb13fbe7695ce53648db0f71155ab2eccb14bf7bfbf9745f8a2839/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'alerts'
Jan 31 02:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0853572019eb13fbe7695ce53648db0f71155ab2eccb14bf7bfbf9745f8a2839/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:50 np0005603621 podman[74690]: 2026-01-31 07:19:50.507371138 +0000 UTC m=+0.023017098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:50 np0005603621 podman[74690]: 2026-01-31 07:19:50.618591366 +0000 UTC m=+0.134237356 container init e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4 (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:19:50 np0005603621 podman[74690]: 2026-01-31 07:19:50.624576034 +0000 UTC m=+0.140221964 container start e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4 (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:19:50 np0005603621 podman[74690]: 2026-01-31 07:19:50.627966782 +0000 UTC m=+0.143612722 container attach e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4 (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:19:50 np0005603621 ceph-mgr[74689]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 02:19:50 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'balancer'
Jan 31 02:19:50 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:50.915+0000 7f5ef7b0f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 02:19:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:19:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076841884' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]: 
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]: {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "health": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "status": "HEALTH_OK",
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "checks": {},
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "mutes": []
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "election_epoch": 5,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "quorum": [
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        0
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    ],
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "quorum_names": [
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "compute-0"
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    ],
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "quorum_age": 2,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "monmap": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "epoch": 1,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "min_mon_release_name": "reef",
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_mons": 1
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "osdmap": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "epoch": 1,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_osds": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_up_osds": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "osd_up_since": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_in_osds": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "osd_in_since": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_remapped_pgs": 0
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "pgmap": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "pgs_by_state": [],
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_pgs": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_pools": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_objects": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "data_bytes": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "bytes_used": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "bytes_avail": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "bytes_total": 0
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "fsmap": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "epoch": 1,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "by_rank": [],
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "up:standby": 0
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "mgrmap": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "available": false,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "num_standbys": 0,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "modules": [
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:            "iostat",
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:            "nfs",
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:            "restful"
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        ],
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "services": {}
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "servicemap": {
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "epoch": 1,
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:        "services": {}
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    },
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]:    "progress_events": {}
Jan 31 02:19:51 np0005603621 xenodochial_mclaren[74730]: }
Jan 31 02:19:51 np0005603621 systemd[1]: libpod-e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4.scope: Deactivated successfully.
Jan 31 02:19:51 np0005603621 podman[74690]: 2026-01-31 07:19:51.037585496 +0000 UTC m=+0.553231416 container died e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4 (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:19:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0853572019eb13fbe7695ce53648db0f71155ab2eccb14bf7bfbf9745f8a2839-merged.mount: Deactivated successfully.
Jan 31 02:19:51 np0005603621 podman[74690]: 2026-01-31 07:19:51.090623043 +0000 UTC m=+0.606268963 container remove e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4 (image=quay.io/ceph/ceph:v18, name=xenodochial_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:19:51 np0005603621 systemd[1]: libpod-conmon-e3219c891c2f6432b0bd174f88b7ba202db5d4a4f938f5e6c17449bfa67aa0b4.scope: Deactivated successfully.
Jan 31 02:19:51 np0005603621 ceph-mgr[74689]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 02:19:51 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'cephadm'
Jan 31 02:19:51 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:51.182+0000 7f5ef7b0f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.173273345 +0000 UTC m=+0.061314160 container create 894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c (image=quay.io/ceph/ceph:v18, name=vibrant_rosalind, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:19:53 np0005603621 systemd[1]: Started libpod-conmon-894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c.scope.
Jan 31 02:19:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fac7cb9a52f298523d8fb662e1fb4ab053af9b822dcf89ace5a0b353b649a75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fac7cb9a52f298523d8fb662e1fb4ab053af9b822dcf89ace5a0b353b649a75/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fac7cb9a52f298523d8fb662e1fb4ab053af9b822dcf89ace5a0b353b649a75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.150134093 +0000 UTC m=+0.038174928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.240837532 +0000 UTC m=+0.128878367 container init 894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c (image=quay.io/ceph/ceph:v18, name=vibrant_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.244349363 +0000 UTC m=+0.132390178 container start 894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c (image=quay.io/ceph/ceph:v18, name=vibrant_rosalind, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.24709765 +0000 UTC m=+0.135138465 container attach 894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c (image=quay.io/ceph/ceph:v18, name=vibrant_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:53 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'crash'
Jan 31 02:19:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:19:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1775759827' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]: 
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]: {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "health": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "status": "HEALTH_OK",
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "checks": {},
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "mutes": []
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "election_epoch": 5,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "quorum": [
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        0
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    ],
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "quorum_names": [
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "compute-0"
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    ],
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "quorum_age": 5,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "monmap": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "epoch": 1,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "min_mon_release_name": "reef",
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_mons": 1
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "osdmap": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "epoch": 1,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_osds": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_up_osds": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "osd_up_since": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_in_osds": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "osd_in_since": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_remapped_pgs": 0
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "pgmap": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "pgs_by_state": [],
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_pgs": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_pools": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_objects": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "data_bytes": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "bytes_used": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "bytes_avail": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "bytes_total": 0
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "fsmap": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "epoch": 1,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "by_rank": [],
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "up:standby": 0
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "mgrmap": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "available": false,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "num_standbys": 0,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "modules": [
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:            "iostat",
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:            "nfs",
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:            "restful"
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        ],
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "services": {}
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "servicemap": {
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "epoch": 1,
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:        "services": {}
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    },
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]:    "progress_events": {}
Jan 31 02:19:53 np0005603621 vibrant_rosalind[74796]: }
Jan 31 02:19:53 np0005603621 systemd[1]: libpod-894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c.scope: Deactivated successfully.
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.648433142 +0000 UTC m=+0.536473957 container died 894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c (image=quay.io/ceph/ceph:v18, name=vibrant_rosalind, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6fac7cb9a52f298523d8fb662e1fb4ab053af9b822dcf89ace5a0b353b649a75-merged.mount: Deactivated successfully.
Jan 31 02:19:53 np0005603621 podman[74779]: 2026-01-31 07:19:53.6901212 +0000 UTC m=+0.578162015 container remove 894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c (image=quay.io/ceph/ceph:v18, name=vibrant_rosalind, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:53 np0005603621 systemd[1]: libpod-conmon-894fffe17e1b01291dd6cf44f7587971ceb39a9711032ea6194e70ac54159f2c.scope: Deactivated successfully.
Jan 31 02:19:53 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:53.820+0000 7f5ef7b0f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 02:19:53 np0005603621 ceph-mgr[74689]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 02:19:53 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'dashboard'
Jan 31 02:19:55 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'devicehealth'
Jan 31 02:19:55 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:55.589+0000 7f5ef7b0f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 02:19:55 np0005603621 ceph-mgr[74689]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 02:19:55 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 02:19:55 np0005603621 podman[74836]: 2026-01-31 07:19:55.737340722 +0000 UTC m=+0.024133934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:55 np0005603621 podman[74836]: 2026-01-31 07:19:55.888161481 +0000 UTC m=+0.174954693 container create 8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc (image=quay.io/ceph/ceph:v18, name=cool_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:19:55 np0005603621 systemd[1]: Started libpod-conmon-8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc.scope.
Jan 31 02:19:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a4655c7895910fc6d7ce423d9bf8f9a667c72e304ae5b2e41f4aa66f608112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a4655c7895910fc6d7ce423d9bf8f9a667c72e304ae5b2e41f4aa66f608112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a4655c7895910fc6d7ce423d9bf8f9a667c72e304ae5b2e41f4aa66f608112/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:55 np0005603621 podman[74836]: 2026-01-31 07:19:55.968312677 +0000 UTC m=+0.255105859 container init 8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc (image=quay.io/ceph/ceph:v18, name=cool_curran, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:19:55 np0005603621 podman[74836]: 2026-01-31 07:19:55.973459129 +0000 UTC m=+0.260252301 container start 8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc (image=quay.io/ceph/ceph:v18, name=cool_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:19:55 np0005603621 podman[74836]: 2026-01-31 07:19:55.977404683 +0000 UTC m=+0.264197885 container attach 8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc (image=quay.io/ceph/ceph:v18, name=cool_curran, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:19:56 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 02:19:56 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 02:19:56 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  from numpy import show_config as show_numpy_config
Jan 31 02:19:56 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:56.133+0000 7f5ef7b0f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'influx'
Jan 31 02:19:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:19:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611862250' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:19:56 np0005603621 cool_curran[74852]: 
Jan 31 02:19:56 np0005603621 cool_curran[74852]: {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "health": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "status": "HEALTH_OK",
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "checks": {},
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "mutes": []
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "election_epoch": 5,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "quorum": [
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        0
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    ],
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "quorum_names": [
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "compute-0"
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    ],
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "quorum_age": 7,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "monmap": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "epoch": 1,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "min_mon_release_name": "reef",
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_mons": 1
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "osdmap": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "epoch": 1,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_osds": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_up_osds": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "osd_up_since": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_in_osds": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "osd_in_since": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_remapped_pgs": 0
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "pgmap": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "pgs_by_state": [],
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_pgs": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_pools": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_objects": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "data_bytes": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "bytes_used": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "bytes_avail": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "bytes_total": 0
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "fsmap": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "epoch": 1,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "by_rank": [],
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "up:standby": 0
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "mgrmap": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "available": false,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "num_standbys": 0,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "modules": [
Jan 31 02:19:56 np0005603621 cool_curran[74852]:            "iostat",
Jan 31 02:19:56 np0005603621 cool_curran[74852]:            "nfs",
Jan 31 02:19:56 np0005603621 cool_curran[74852]:            "restful"
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        ],
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "services": {}
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "servicemap": {
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "epoch": 1,
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:19:56 np0005603621 cool_curran[74852]:        "services": {}
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    },
Jan 31 02:19:56 np0005603621 cool_curran[74852]:    "progress_events": {}
Jan 31 02:19:56 np0005603621 cool_curran[74852]: }
Jan 31 02:19:56 np0005603621 systemd[1]: libpod-8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc.scope: Deactivated successfully.
Jan 31 02:19:56 np0005603621 podman[74836]: 2026-01-31 07:19:56.369226004 +0000 UTC m=+0.656019176 container died 8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc (image=quay.io/ceph/ceph:v18, name=cool_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:56 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:56.384+0000 7f5ef7b0f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'insights'
Jan 31 02:19:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-07a4655c7895910fc6d7ce423d9bf8f9a667c72e304ae5b2e41f4aa66f608112-merged.mount: Deactivated successfully.
Jan 31 02:19:56 np0005603621 podman[74836]: 2026-01-31 07:19:56.408483876 +0000 UTC m=+0.695277048 container remove 8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc (image=quay.io/ceph/ceph:v18, name=cool_curran, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:19:56 np0005603621 systemd[1]: libpod-conmon-8fbe04a6edb21cda5fa369a0174e45909d5915824aa7d52600125a9b468332cc.scope: Deactivated successfully.
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'iostat'
Jan 31 02:19:56 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:19:56.886+0000 7f5ef7b0f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 02:19:56 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'k8sevents'
Jan 31 02:19:58 np0005603621 podman[74890]: 2026-01-31 07:19:58.466480739 +0000 UTC m=+0.041332558 container create c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5 (image=quay.io/ceph/ceph:v18, name=jovial_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:19:58 np0005603621 systemd[1]: Started libpod-conmon-c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5.scope.
Jan 31 02:19:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:19:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88397fabcccdeb65dd59eff619183639bdd973786bac4ec0a243067b501d2115/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88397fabcccdeb65dd59eff619183639bdd973786bac4ec0a243067b501d2115/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88397fabcccdeb65dd59eff619183639bdd973786bac4ec0a243067b501d2115/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:19:58 np0005603621 podman[74890]: 2026-01-31 07:19:58.448886303 +0000 UTC m=+0.023738122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:19:58 np0005603621 podman[74890]: 2026-01-31 07:19:58.554446761 +0000 UTC m=+0.129298590 container init c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5 (image=quay.io/ceph/ceph:v18, name=jovial_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 31 02:19:58 np0005603621 podman[74890]: 2026-01-31 07:19:58.561527044 +0000 UTC m=+0.136378863 container start c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5 (image=quay.io/ceph/ceph:v18, name=jovial_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:19:58 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'localpool'
Jan 31 02:19:58 np0005603621 podman[74890]: 2026-01-31 07:19:58.702008397 +0000 UTC m=+0.276860246 container attach c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5 (image=quay.io/ceph/ceph:v18, name=jovial_meitner, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:19:58 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 02:19:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:19:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213929047' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]: 
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]: {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "health": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "status": "HEALTH_OK",
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "checks": {},
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "mutes": []
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "election_epoch": 5,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "quorum": [
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        0
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    ],
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "quorum_names": [
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "compute-0"
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    ],
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "quorum_age": 10,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "monmap": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "epoch": 1,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "min_mon_release_name": "reef",
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_mons": 1
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "osdmap": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "epoch": 1,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_osds": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_up_osds": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "osd_up_since": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_in_osds": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "osd_in_since": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_remapped_pgs": 0
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "pgmap": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "pgs_by_state": [],
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_pgs": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_pools": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_objects": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "data_bytes": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "bytes_used": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "bytes_avail": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "bytes_total": 0
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "fsmap": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "epoch": 1,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "by_rank": [],
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "up:standby": 0
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "mgrmap": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "available": false,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "num_standbys": 0,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "modules": [
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:            "iostat",
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:            "nfs",
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:            "restful"
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        ],
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "services": {}
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "servicemap": {
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "epoch": 1,
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:        "services": {}
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    },
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]:    "progress_events": {}
Jan 31 02:19:58 np0005603621 jovial_meitner[74906]: }
Jan 31 02:19:58 np0005603621 systemd[1]: libpod-c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5.scope: Deactivated successfully.
Jan 31 02:19:59 np0005603621 podman[74932]: 2026-01-31 07:19:59.019097395 +0000 UTC m=+0.024222697 container died c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5 (image=quay.io/ceph/ceph:v18, name=jovial_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:19:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-88397fabcccdeb65dd59eff619183639bdd973786bac4ec0a243067b501d2115-merged.mount: Deactivated successfully.
Jan 31 02:19:59 np0005603621 podman[74932]: 2026-01-31 07:19:59.079577187 +0000 UTC m=+0.084702459 container remove c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5 (image=quay.io/ceph/ceph:v18, name=jovial_meitner, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:19:59 np0005603621 systemd[1]: libpod-conmon-c2e6abfd2cf0413e7dc04ef3f65d7fd35adb34c84124a809b8845bf77d9723a5.scope: Deactivated successfully.
Jan 31 02:19:59 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'mirroring'
Jan 31 02:19:59 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'nfs'
Jan 31 02:20:00 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:00.659+0000 7f5ef7b0f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 02:20:00 np0005603621 ceph-mgr[74689]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 02:20:00 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'orchestrator'
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.187387575 +0000 UTC m=+0.087819168 container create ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c (image=quay.io/ceph/ceph:v18, name=busy_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.119574821 +0000 UTC m=+0.020006444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:01 np0005603621 systemd[1]: Started libpod-conmon-ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c.scope.
Jan 31 02:20:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135f2f66c7b21189698a5c039ef70ea858d0166296a79960830f76380468db15/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135f2f66c7b21189698a5c039ef70ea858d0166296a79960830f76380468db15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/135f2f66c7b21189698a5c039ef70ea858d0166296a79960830f76380468db15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.266346962 +0000 UTC m=+0.166778575 container init ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c (image=quay.io/ceph/ceph:v18, name=busy_bhaskara, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.270985559 +0000 UTC m=+0.171417152 container start ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c (image=quay.io/ceph/ceph:v18, name=busy_bhaskara, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.282989388 +0000 UTC m=+0.183420981 container attach ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c (image=quay.io/ceph/ceph:v18, name=busy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:20:01 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:01.389+0000 7f5ef7b0f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:01 np0005603621 ceph-mgr[74689]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:01 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 02:20:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:20:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367691433' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]: 
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]: {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "health": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "status": "HEALTH_OK",
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "checks": {},
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "mutes": []
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "election_epoch": 5,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "quorum": [
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        0
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    ],
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "quorum_names": [
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "compute-0"
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    ],
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "quorum_age": 13,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "monmap": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "epoch": 1,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "min_mon_release_name": "reef",
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_mons": 1
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "osdmap": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "epoch": 1,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_osds": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_up_osds": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "osd_up_since": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_in_osds": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "osd_in_since": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_remapped_pgs": 0
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "pgmap": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "pgs_by_state": [],
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_pgs": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_pools": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_objects": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "data_bytes": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "bytes_used": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "bytes_avail": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "bytes_total": 0
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "fsmap": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "epoch": 1,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "by_rank": [],
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "up:standby": 0
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "mgrmap": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "available": false,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "num_standbys": 0,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "modules": [
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:            "iostat",
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:            "nfs",
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:            "restful"
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        ],
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "services": {}
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "servicemap": {
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "epoch": 1,
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:        "services": {}
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    },
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]:    "progress_events": {}
Jan 31 02:20:01 np0005603621 busy_bhaskara[74962]: }
Jan 31 02:20:01 np0005603621 systemd[1]: libpod-ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c.scope: Deactivated successfully.
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.65079522 +0000 UTC m=+0.551226813 container died ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c (image=quay.io/ceph/ceph:v18, name=busy_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:01 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:01.683+0000 7f5ef7b0f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 02:20:01 np0005603621 ceph-mgr[74689]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 02:20:01 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'osd_support'
Jan 31 02:20:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-135f2f66c7b21189698a5c039ef70ea858d0166296a79960830f76380468db15-merged.mount: Deactivated successfully.
Jan 31 02:20:01 np0005603621 podman[74946]: 2026-01-31 07:20:01.779895613 +0000 UTC m=+0.680327236 container remove ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c (image=quay.io/ceph/ceph:v18, name=busy_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:01 np0005603621 systemd[1]: libpod-conmon-ca12c1b372c00ff552390d18f4caf44d4eb8f374f2e3fb94e602b8fa5a396a8c.scope: Deactivated successfully.
Jan 31 02:20:01 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:01.940+0000 7f5ef7b0f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 02:20:01 np0005603621 ceph-mgr[74689]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 02:20:01 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 02:20:02 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:02.255+0000 7f5ef7b0f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 02:20:02 np0005603621 ceph-mgr[74689]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 02:20:02 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'progress'
Jan 31 02:20:02 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:02.520+0000 7f5ef7b0f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 02:20:02 np0005603621 ceph-mgr[74689]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 02:20:02 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'prometheus'
Jan 31 02:20:03 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:03.560+0000 7f5ef7b0f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 02:20:03 np0005603621 ceph-mgr[74689]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 02:20:03 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'rbd_support'
Jan 31 02:20:03 np0005603621 podman[75001]: 2026-01-31 07:20:03.847581482 +0000 UTC m=+0.046117059 container create 7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460 (image=quay.io/ceph/ceph:v18, name=crazy_grothendieck, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:03 np0005603621 systemd[1]: Started libpod-conmon-7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460.scope.
Jan 31 02:20:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7840cca9de7763f86a8f1b5795fc9a97a95a01731b8d7579b405ac09b111810/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7840cca9de7763f86a8f1b5795fc9a97a95a01731b8d7579b405ac09b111810/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7840cca9de7763f86a8f1b5795fc9a97a95a01731b8d7579b405ac09b111810/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:03 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:03.896+0000 7f5ef7b0f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 02:20:03 np0005603621 ceph-mgr[74689]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 02:20:03 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'restful'
Jan 31 02:20:03 np0005603621 podman[75001]: 2026-01-31 07:20:03.907522258 +0000 UTC m=+0.106057835 container init 7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460 (image=quay.io/ceph/ceph:v18, name=crazy_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:03 np0005603621 podman[75001]: 2026-01-31 07:20:03.911880795 +0000 UTC m=+0.110416372 container start 7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460 (image=quay.io/ceph/ceph:v18, name=crazy_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:20:03 np0005603621 podman[75001]: 2026-01-31 07:20:03.915932814 +0000 UTC m=+0.114468371 container attach 7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460 (image=quay.io/ceph/ceph:v18, name=crazy_grothendieck, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:03 np0005603621 podman[75001]: 2026-01-31 07:20:03.82947579 +0000 UTC m=+0.028011387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:20:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456462055' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]: 
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]: {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "health": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "status": "HEALTH_OK",
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "checks": {},
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "mutes": []
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "election_epoch": 5,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "quorum": [
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        0
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    ],
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "quorum_names": [
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "compute-0"
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    ],
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "quorum_age": 15,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "monmap": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "epoch": 1,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "min_mon_release_name": "reef",
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_mons": 1
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "osdmap": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "epoch": 1,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_osds": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_up_osds": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "osd_up_since": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_in_osds": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "osd_in_since": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_remapped_pgs": 0
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "pgmap": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "pgs_by_state": [],
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_pgs": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_pools": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_objects": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "data_bytes": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "bytes_used": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "bytes_avail": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "bytes_total": 0
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "fsmap": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "epoch": 1,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "by_rank": [],
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "up:standby": 0
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "mgrmap": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "available": false,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "num_standbys": 0,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "modules": [
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:            "iostat",
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:            "nfs",
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:            "restful"
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        ],
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "services": {}
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "servicemap": {
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "epoch": 1,
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:        "services": {}
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    },
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]:    "progress_events": {}
Jan 31 02:20:04 np0005603621 crazy_grothendieck[75019]: }
Jan 31 02:20:04 np0005603621 systemd[1]: libpod-7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460.scope: Deactivated successfully.
Jan 31 02:20:04 np0005603621 conmon[75019]: conmon 7caf2e8e66d29f5944de <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460.scope/container/memory.events
Jan 31 02:20:04 np0005603621 podman[75001]: 2026-01-31 07:20:04.280024458 +0000 UTC m=+0.478560045 container died 7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460 (image=quay.io/ceph/ceph:v18, name=crazy_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:20:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b7840cca9de7763f86a8f1b5795fc9a97a95a01731b8d7579b405ac09b111810-merged.mount: Deactivated successfully.
Jan 31 02:20:04 np0005603621 podman[75001]: 2026-01-31 07:20:04.313709513 +0000 UTC m=+0.512245080 container remove 7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460 (image=quay.io/ceph/ceph:v18, name=crazy_grothendieck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:20:04 np0005603621 systemd[1]: libpod-conmon-7caf2e8e66d29f5944deb9ddf34bb535797ac19eee93daa5da4565226e0a6460.scope: Deactivated successfully.
Jan 31 02:20:04 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'rgw'
Jan 31 02:20:05 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:05.381+0000 7f5ef7b0f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 02:20:05 np0005603621 ceph-mgr[74689]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 02:20:05 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'rook'
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.373206253 +0000 UTC m=+0.038588101 container create eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63 (image=quay.io/ceph/ceph:v18, name=beautiful_leakey, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:06 np0005603621 systemd[1]: Started libpod-conmon-eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63.scope.
Jan 31 02:20:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054eb5078c2676fb26544b12f5260a466206f002bec830b751e080af1d5efd52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054eb5078c2676fb26544b12f5260a466206f002bec830b751e080af1d5efd52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/054eb5078c2676fb26544b12f5260a466206f002bec830b751e080af1d5efd52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.437033351 +0000 UTC m=+0.102415209 container init eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63 (image=quay.io/ceph/ceph:v18, name=beautiful_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.443035221 +0000 UTC m=+0.108417069 container start eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63 (image=quay.io/ceph/ceph:v18, name=beautiful_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.448239505 +0000 UTC m=+0.113621373 container attach eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63 (image=quay.io/ceph/ceph:v18, name=beautiful_leakey, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.355887765 +0000 UTC m=+0.021269633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727396770' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]: 
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]: {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "health": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "status": "HEALTH_OK",
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "checks": {},
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "mutes": []
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "election_epoch": 5,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "quorum": [
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        0
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    ],
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "quorum_names": [
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "compute-0"
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    ],
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "quorum_age": 18,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "monmap": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "epoch": 1,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "min_mon_release_name": "reef",
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_mons": 1
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "osdmap": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "epoch": 1,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_osds": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_up_osds": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "osd_up_since": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_in_osds": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "osd_in_since": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_remapped_pgs": 0
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "pgmap": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "pgs_by_state": [],
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_pgs": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_pools": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_objects": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "data_bytes": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "bytes_used": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "bytes_avail": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "bytes_total": 0
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "fsmap": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "epoch": 1,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "by_rank": [],
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "up:standby": 0
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "mgrmap": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "available": false,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "num_standbys": 0,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "modules": [
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:            "iostat",
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:            "nfs",
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:            "restful"
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        ],
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "services": {}
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "servicemap": {
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "epoch": 1,
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:        "services": {}
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    },
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]:    "progress_events": {}
Jan 31 02:20:06 np0005603621 beautiful_leakey[75073]: }
Jan 31 02:20:06 np0005603621 systemd[1]: libpod-eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63.scope: Deactivated successfully.
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.820027483 +0000 UTC m=+0.485409331 container died eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63 (image=quay.io/ceph/ceph:v18, name=beautiful_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-054eb5078c2676fb26544b12f5260a466206f002bec830b751e080af1d5efd52-merged.mount: Deactivated successfully.
Jan 31 02:20:06 np0005603621 podman[75057]: 2026-01-31 07:20:06.922035002 +0000 UTC m=+0.587416850 container remove eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63 (image=quay.io/ceph/ceph:v18, name=beautiful_leakey, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:20:06 np0005603621 systemd[1]: libpod-conmon-eb6128b5319eb70b0d532c424fc2ee2576df351a4d79ab86679e8b287fe71e63.scope: Deactivated successfully.
Jan 31 02:20:07 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:07.578+0000 7f5ef7b0f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 02:20:07 np0005603621 ceph-mgr[74689]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 02:20:07 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'selftest'
Jan 31 02:20:07 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:07.864+0000 7f5ef7b0f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 02:20:07 np0005603621 ceph-mgr[74689]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 02:20:07 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'snap_schedule'
Jan 31 02:20:08 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:08.127+0000 7f5ef7b0f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'stats'
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'status'
Jan 31 02:20:08 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:08.657+0000 7f5ef7b0f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'telegraf'
Jan 31 02:20:08 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:08.900+0000 7f5ef7b0f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 02:20:08 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'telemetry'
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:08.966079141 +0000 UTC m=+0.024686942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:09.118968935 +0000 UTC m=+0.177576686 container create 87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe (image=quay.io/ceph/ceph:v18, name=infallible_rosalind, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:20:09 np0005603621 systemd[1]: Started libpod-conmon-87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe.scope.
Jan 31 02:20:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9a531c853914cc7f807db2c76519bc36ff22e4e2dbe61c5c246868ec18cba5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9a531c853914cc7f807db2c76519bc36ff22e4e2dbe61c5c246868ec18cba5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9a531c853914cc7f807db2c76519bc36ff22e4e2dbe61c5c246868ec18cba5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:09.190100745 +0000 UTC m=+0.248708516 container init 87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe (image=quay.io/ceph/ceph:v18, name=infallible_rosalind, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:09.195770765 +0000 UTC m=+0.254378516 container start 87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe (image=quay.io/ceph/ceph:v18, name=infallible_rosalind, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:09.200588137 +0000 UTC m=+0.259195908 container attach 87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe (image=quay.io/ceph/ceph:v18, name=infallible_rosalind, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:09 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:09.511+0000 7f5ef7b0f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 02:20:09 np0005603621 ceph-mgr[74689]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 02:20:09 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 02:20:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:20:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329978165' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]: 
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]: {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "health": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "status": "HEALTH_OK",
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "checks": {},
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "mutes": []
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "election_epoch": 5,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "quorum": [
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        0
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    ],
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "quorum_names": [
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "compute-0"
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    ],
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "quorum_age": 21,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "monmap": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "epoch": 1,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "min_mon_release_name": "reef",
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_mons": 1
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "osdmap": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "epoch": 1,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_osds": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_up_osds": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "osd_up_since": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_in_osds": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "osd_in_since": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_remapped_pgs": 0
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "pgmap": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "pgs_by_state": [],
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_pgs": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_pools": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_objects": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "data_bytes": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "bytes_used": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "bytes_avail": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "bytes_total": 0
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "fsmap": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "epoch": 1,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "by_rank": [],
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "up:standby": 0
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "mgrmap": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "available": false,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "num_standbys": 0,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "modules": [
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:            "iostat",
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:            "nfs",
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:            "restful"
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        ],
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "services": {}
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "servicemap": {
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "epoch": 1,
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:        "services": {}
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    },
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]:    "progress_events": {}
Jan 31 02:20:09 np0005603621 infallible_rosalind[75126]: }
Jan 31 02:20:09 np0005603621 systemd[1]: libpod-87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe.scope: Deactivated successfully.
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:09.573335015 +0000 UTC m=+0.631942766 container died 87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe (image=quay.io/ceph/ceph:v18, name=infallible_rosalind, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:20:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ba9a531c853914cc7f807db2c76519bc36ff22e4e2dbe61c5c246868ec18cba5-merged.mount: Deactivated successfully.
Jan 31 02:20:09 np0005603621 podman[75111]: 2026-01-31 07:20:09.616139219 +0000 UTC m=+0.674746970 container remove 87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe (image=quay.io/ceph/ceph:v18, name=infallible_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:20:09 np0005603621 systemd[1]: libpod-conmon-87c98b07ddbb13aa7214e55f65b2d7fc1c1f001129de40c37c694366334c89fe.scope: Deactivated successfully.
Jan 31 02:20:10 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:10.218+0000 7f5ef7b0f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:10 np0005603621 ceph-mgr[74689]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:10 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'volumes'
Jan 31 02:20:10 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:10.971+0000 7f5ef7b0f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 02:20:10 np0005603621 ceph-mgr[74689]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 02:20:10 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'zabbix'
Jan 31 02:20:11 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:11.220+0000 7f5ef7b0f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: ms_deliver_dispatch: unhandled message 0x55fece11cf20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ddmhwk
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr handle_mgr_map Activating!
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr handle_mgr_map I am now activating
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ddmhwk(active, starting, since 0.0110197s)
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ddmhwk", "id": "compute-0.ddmhwk"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ddmhwk", "id": "compute-0.ddmhwk"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: balancer
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Manager daemon compute-0.ddmhwk is now available
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: crash
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [balancer INFO root] Starting
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: devicehealth
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Starting
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:20:11
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [balancer INFO root] No pools available
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: iostat
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: nfs
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: orchestrator
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: pg_autoscaler
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: progress
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [progress INFO root] Loading...
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [progress INFO root] No stored events to load
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [progress INFO root] Loaded [] historic events
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] recovery thread starting
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] starting setup
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: rbd_support
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/mirror_snapshot_schedule"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/mirror_snapshot_schedule"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: restful
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [restful INFO root] server_addr: :: server_port: 8003
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [restful WARNING root] server not running: no certificate configured
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: status
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: telemetry
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] PerfHandler: starting
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TaskHandler: starting
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/trash_purge_schedule"} v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/trash_purge_schedule"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] setup complete
Jan 31 02:20:11 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: volumes
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: Activating manager daemon compute-0.ddmhwk
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: Manager daemon compute-0.ddmhwk is now available
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/mirror_snapshot_schedule"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/trash_purge_schedule"}]: dispatch
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:11 np0005603621 podman[75243]: 2026-01-31 07:20:11.668845303 +0000 UTC m=+0.035875245 container create 1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:20:11 np0005603621 systemd[1]: Started libpod-conmon-1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b.scope.
Jan 31 02:20:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48accc70dbd21780a4c9af731d333551d27df96e8ce9e1ccd722ed05ae8e8e9d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48accc70dbd21780a4c9af731d333551d27df96e8ce9e1ccd722ed05ae8e8e9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48accc70dbd21780a4c9af731d333551d27df96e8ce9e1ccd722ed05ae8e8e9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:11 np0005603621 podman[75243]: 2026-01-31 07:20:11.72850708 +0000 UTC m=+0.095537022 container init 1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b (image=quay.io/ceph/ceph:v18, name=trusting_burnell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:20:11 np0005603621 podman[75243]: 2026-01-31 07:20:11.732785316 +0000 UTC m=+0.099815238 container start 1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:20:11 np0005603621 podman[75243]: 2026-01-31 07:20:11.736795442 +0000 UTC m=+0.103825384 container attach 1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b (image=quay.io/ceph/ceph:v18, name=trusting_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:11 np0005603621 podman[75243]: 2026-01-31 07:20:11.653908021 +0000 UTC m=+0.020937963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:20:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158535688' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]: 
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]: {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "health": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "status": "HEALTH_OK",
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "checks": {},
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "mutes": []
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "election_epoch": 5,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "quorum": [
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        0
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    ],
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "quorum_names": [
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "compute-0"
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    ],
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "quorum_age": 23,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "monmap": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "epoch": 1,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "min_mon_release_name": "reef",
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_mons": 1
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "osdmap": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "epoch": 1,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_osds": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_up_osds": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "osd_up_since": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_in_osds": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "osd_in_since": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_remapped_pgs": 0
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "pgmap": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "pgs_by_state": [],
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_pgs": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_pools": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_objects": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "data_bytes": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "bytes_used": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "bytes_avail": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "bytes_total": 0
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "fsmap": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "epoch": 1,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "by_rank": [],
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "up:standby": 0
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "mgrmap": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "available": false,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "num_standbys": 0,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "modules": [
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:            "iostat",
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:            "nfs",
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:            "restful"
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        ],
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "services": {}
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "servicemap": {
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "epoch": 1,
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:        "services": {}
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    },
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]:    "progress_events": {}
Jan 31 02:20:12 np0005603621 trusting_burnell[75258]: }
Jan 31 02:20:12 np0005603621 systemd[1]: libpod-1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b.scope: Deactivated successfully.
Jan 31 02:20:12 np0005603621 podman[75243]: 2026-01-31 07:20:12.138418004 +0000 UTC m=+0.505447926 container died 1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-48accc70dbd21780a4c9af731d333551d27df96e8ce9e1ccd722ed05ae8e8e9d-merged.mount: Deactivated successfully.
Jan 31 02:20:12 np0005603621 podman[75243]: 2026-01-31 07:20:12.181792725 +0000 UTC m=+0.548822657 container remove 1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b (image=quay.io/ceph/ceph:v18, name=trusting_burnell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:12 np0005603621 systemd[1]: libpod-conmon-1ba09ae55e2e8e2d97e57531f6648c9babe2bed83e20b40f2539cf58a8c8837b.scope: Deactivated successfully.
Jan 31 02:20:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ddmhwk(active, since 1.02355s)
Jan 31 02:20:12 np0005603621 ceph-mon[74394]: from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:12 np0005603621 ceph-mon[74394]: from='mgr.14102 192.168.122.100:0/1233616869' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:13 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ddmhwk(active, since 2s)
Jan 31 02:20:14 np0005603621 podman[75295]: 2026-01-31 07:20:14.249700902 +0000 UTC m=+0.049083284 container create 12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f (image=quay.io/ceph/ceph:v18, name=gracious_agnesi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:14 np0005603621 systemd[1]: Started libpod-conmon-12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f.scope.
Jan 31 02:20:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085738cac30bb664f3954f0541c7b9444a5032fa99b7fd14b693a88e5be03887/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085738cac30bb664f3954f0541c7b9444a5032fa99b7fd14b693a88e5be03887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/085738cac30bb664f3954f0541c7b9444a5032fa99b7fd14b693a88e5be03887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:14 np0005603621 podman[75295]: 2026-01-31 07:20:14.227674195 +0000 UTC m=+0.027056557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:15 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:15 np0005603621 podman[75295]: 2026-01-31 07:20:15.925434646 +0000 UTC m=+1.724816998 container init 12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f (image=quay.io/ceph/ceph:v18, name=gracious_agnesi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:15 np0005603621 podman[75295]: 2026-01-31 07:20:15.929908877 +0000 UTC m=+1.729291219 container start 12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f (image=quay.io/ceph/ceph:v18, name=gracious_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:20:15 np0005603621 podman[75295]: 2026-01-31 07:20:15.936663801 +0000 UTC m=+1.736046163 container attach 12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f (image=quay.io/ceph/ceph:v18, name=gracious_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:20:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 31 02:20:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2844388151' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]: 
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]: {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "health": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "status": "HEALTH_OK",
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "checks": {},
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "mutes": []
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "election_epoch": 5,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "quorum": [
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        0
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    ],
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "quorum_names": [
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "compute-0"
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    ],
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "quorum_age": 27,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "monmap": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "epoch": 1,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "min_mon_release_name": "reef",
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_mons": 1
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "osdmap": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "epoch": 1,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_osds": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_up_osds": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "osd_up_since": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_in_osds": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "osd_in_since": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_remapped_pgs": 0
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "pgmap": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "pgs_by_state": [],
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_pgs": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_pools": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_objects": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "data_bytes": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "bytes_used": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "bytes_avail": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "bytes_total": 0
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "fsmap": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "epoch": 1,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "by_rank": [],
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "up:standby": 0
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "mgrmap": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "available": true,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "num_standbys": 0,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "modules": [
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:            "iostat",
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:            "nfs",
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:            "restful"
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        ],
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "services": {}
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "servicemap": {
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "epoch": 1,
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "modified": "2026-01-31T07:19:45.660125+0000",
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:        "services": {}
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    },
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]:    "progress_events": {}
Jan 31 02:20:16 np0005603621 gracious_agnesi[75311]: }
Jan 31 02:20:16 np0005603621 systemd[1]: libpod-12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f.scope: Deactivated successfully.
Jan 31 02:20:16 np0005603621 podman[75295]: 2026-01-31 07:20:16.506358697 +0000 UTC m=+2.305741049 container died 12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f (image=quay.io/ceph/ceph:v18, name=gracious_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:20:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-085738cac30bb664f3954f0541c7b9444a5032fa99b7fd14b693a88e5be03887-merged.mount: Deactivated successfully.
Jan 31 02:20:16 np0005603621 podman[75295]: 2026-01-31 07:20:16.555312095 +0000 UTC m=+2.354694437 container remove 12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f (image=quay.io/ceph/ceph:v18, name=gracious_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:20:16 np0005603621 systemd[1]: libpod-conmon-12e4b808b070d41d659578191af77261b8d48b80cd80e8ddbf997b118486097f.scope: Deactivated successfully.
Jan 31 02:20:16 np0005603621 podman[75353]: 2026-01-31 07:20:16.604398047 +0000 UTC m=+0.036122334 container create db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc (image=quay.io/ceph/ceph:v18, name=unruffled_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:20:16 np0005603621 systemd[1]: Started libpod-conmon-db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc.scope.
Jan 31 02:20:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4abb2668846fa2bff292d6f44231e3cf0c590b2a2b45879699cddea2b85101c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4abb2668846fa2bff292d6f44231e3cf0c590b2a2b45879699cddea2b85101c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4abb2668846fa2bff292d6f44231e3cf0c590b2a2b45879699cddea2b85101c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4abb2668846fa2bff292d6f44231e3cf0c590b2a2b45879699cddea2b85101c1/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:16 np0005603621 podman[75353]: 2026-01-31 07:20:16.585145418 +0000 UTC m=+0.016869735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:16 np0005603621 podman[75353]: 2026-01-31 07:20:16.692332258 +0000 UTC m=+0.124056645 container init db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc (image=quay.io/ceph/ceph:v18, name=unruffled_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:20:16 np0005603621 podman[75353]: 2026-01-31 07:20:16.698964718 +0000 UTC m=+0.130689015 container start db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc (image=quay.io/ceph/ceph:v18, name=unruffled_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:16 np0005603621 podman[75353]: 2026-01-31 07:20:16.702689035 +0000 UTC m=+0.134413412 container attach db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc (image=quay.io/ceph/ceph:v18, name=unruffled_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 02:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/502244135' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:20:17 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:17 np0005603621 systemd[1]: libpod-db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc.scope: Deactivated successfully.
Jan 31 02:20:17 np0005603621 podman[75353]: 2026-01-31 07:20:17.245086919 +0000 UTC m=+0.676811216 container died db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc (image=quay.io/ceph/ceph:v18, name=unruffled_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:20:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4abb2668846fa2bff292d6f44231e3cf0c590b2a2b45879699cddea2b85101c1-merged.mount: Deactivated successfully.
Jan 31 02:20:17 np0005603621 podman[75353]: 2026-01-31 07:20:17.280449957 +0000 UTC m=+0.712174244 container remove db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc (image=quay.io/ceph/ceph:v18, name=unruffled_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:20:17 np0005603621 systemd[1]: libpod-conmon-db6646175faf801f4c06c907a3fd6ed45f388205c0818e342b5661eead8c9cbc.scope: Deactivated successfully.
Jan 31 02:20:17 np0005603621 podman[75407]: 2026-01-31 07:20:17.322398313 +0000 UTC m=+0.030311689 container create 37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88 (image=quay.io/ceph/ceph:v18, name=epic_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:17 np0005603621 systemd[1]: Started libpod-conmon-37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88.scope.
Jan 31 02:20:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec10fe4bf99ed2618e9b96aef73cfdf2c4f61b494b7dcc11fa5048807a18b8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec10fe4bf99ed2618e9b96aef73cfdf2c4f61b494b7dcc11fa5048807a18b8d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec10fe4bf99ed2618e9b96aef73cfdf2c4f61b494b7dcc11fa5048807a18b8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:17 np0005603621 podman[75407]: 2026-01-31 07:20:17.378035242 +0000 UTC m=+0.085948638 container init 37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88 (image=quay.io/ceph/ceph:v18, name=epic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:20:17 np0005603621 podman[75407]: 2026-01-31 07:20:17.381193332 +0000 UTC m=+0.089106708 container start 37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88 (image=quay.io/ceph/ceph:v18, name=epic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:20:17 np0005603621 podman[75407]: 2026-01-31 07:20:17.383844846 +0000 UTC m=+0.091758222 container attach 37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88 (image=quay.io/ceph/ceph:v18, name=epic_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:17 np0005603621 podman[75407]: 2026-01-31 07:20:17.308035469 +0000 UTC m=+0.015948865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:17 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/502244135' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 31 02:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1605070001' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 31 02:20:18 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1605070001' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 31 02:20:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1605070001' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  1: '-n'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  2: 'mgr.compute-0.ddmhwk'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  3: '-f'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  4: '--setuser'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  5: 'ceph'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  6: '--setgroup'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  7: 'ceph'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  8: '--default-log-to-file=false'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  9: '--default-log-to-journald=true'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr respawn  exe_path /proc/self/exe
Jan 31 02:20:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ddmhwk(active, since 7s)
Jan 31 02:20:18 np0005603621 systemd[1]: libpod-37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88.scope: Deactivated successfully.
Jan 31 02:20:18 np0005603621 podman[75407]: 2026-01-31 07:20:18.581879653 +0000 UTC m=+1.289793029 container died 37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88 (image=quay.io/ceph/ceph:v18, name=epic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:20:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aec10fe4bf99ed2618e9b96aef73cfdf2c4f61b494b7dcc11fa5048807a18b8d-merged.mount: Deactivated successfully.
Jan 31 02:20:18 np0005603621 podman[75407]: 2026-01-31 07:20:18.615597129 +0000 UTC m=+1.323510505 container remove 37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88 (image=quay.io/ceph/ceph:v18, name=epic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:18 np0005603621 systemd[1]: libpod-conmon-37c8b61d4d86ab5c5c4acb810c9ed380081dbd90e1e042ef704efd232da50c88.scope: Deactivated successfully.
Jan 31 02:20:18 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: ignoring --setuser ceph since I am not root
Jan 31 02:20:18 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: ignoring --setgroup ceph since I am not root
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: pidfile_write: ignore empty --pid-file
Jan 31 02:20:18 np0005603621 podman[75463]: 2026-01-31 07:20:18.728070946 +0000 UTC m=+0.096995228 container create 2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f (image=quay.io/ceph/ceph:v18, name=sharp_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:18 np0005603621 podman[75463]: 2026-01-31 07:20:18.658981142 +0000 UTC m=+0.027905444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:18 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'alerts'
Jan 31 02:20:18 np0005603621 systemd[1]: Started libpod-conmon-2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f.scope.
Jan 31 02:20:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6d036987ec644112af9421ca2907917744c5971bbe5c27e334cbf0107fd4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6d036987ec644112af9421ca2907917744c5971bbe5c27e334cbf0107fd4f9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6d036987ec644112af9421ca2907917744c5971bbe5c27e334cbf0107fd4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:18 np0005603621 podman[75463]: 2026-01-31 07:20:18.840193682 +0000 UTC m=+0.209117944 container init 2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f (image=quay.io/ceph/ceph:v18, name=sharp_lumiere, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:20:18 np0005603621 podman[75463]: 2026-01-31 07:20:18.844689214 +0000 UTC m=+0.213613476 container start 2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f (image=quay.io/ceph/ceph:v18, name=sharp_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:20:18 np0005603621 podman[75463]: 2026-01-31 07:20:18.895629645 +0000 UTC m=+0.264553917 container attach 2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f (image=quay.io/ceph/ceph:v18, name=sharp_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:19 np0005603621 ceph-mgr[74689]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 02:20:19 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'balancer'
Jan 31 02:20:19 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:19.063+0000 7f6498413140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 31 02:20:19 np0005603621 ceph-mgr[74689]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 02:20:19 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'cephadm'
Jan 31 02:20:19 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:19.335+0000 7f6498413140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 31 02:20:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 02:20:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757137967' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 02:20:19 np0005603621 sharp_lumiere[75503]: {
Jan 31 02:20:19 np0005603621 sharp_lumiere[75503]:    "epoch": 5,
Jan 31 02:20:19 np0005603621 sharp_lumiere[75503]:    "available": true,
Jan 31 02:20:19 np0005603621 sharp_lumiere[75503]:    "active_name": "compute-0.ddmhwk",
Jan 31 02:20:19 np0005603621 sharp_lumiere[75503]:    "num_standby": 0
Jan 31 02:20:19 np0005603621 sharp_lumiere[75503]: }
Jan 31 02:20:19 np0005603621 systemd[1]: libpod-2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f.scope: Deactivated successfully.
Jan 31 02:20:19 np0005603621 podman[75463]: 2026-01-31 07:20:19.420212615 +0000 UTC m=+0.789136877 container died 2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f (image=quay.io/ceph/ceph:v18, name=sharp_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:20:19 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1605070001' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 31 02:20:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-af6d036987ec644112af9421ca2907917744c5971bbe5c27e334cbf0107fd4f9-merged.mount: Deactivated successfully.
Jan 31 02:20:19 np0005603621 podman[75463]: 2026-01-31 07:20:19.8219709 +0000 UTC m=+1.190895162 container remove 2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f (image=quay.io/ceph/ceph:v18, name=sharp_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:20:19 np0005603621 podman[75541]: 2026-01-31 07:20:19.956393221 +0000 UTC m=+0.117116585 container create 8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db (image=quay.io/ceph/ceph:v18, name=nostalgic_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:20:19 np0005603621 podman[75541]: 2026-01-31 07:20:19.861435208 +0000 UTC m=+0.022158592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:20 np0005603621 systemd[1]: Started libpod-conmon-8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db.scope.
Jan 31 02:20:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf1aedd4a922f15d00b40658a86fffb7981a12ac8321e9fe64051402dfd5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf1aedd4a922f15d00b40658a86fffb7981a12ac8321e9fe64051402dfd5c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010bf1aedd4a922f15d00b40658a86fffb7981a12ac8321e9fe64051402dfd5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:20 np0005603621 podman[75541]: 2026-01-31 07:20:20.075799097 +0000 UTC m=+0.236522481 container init 8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db (image=quay.io/ceph/ceph:v18, name=nostalgic_wu, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:20 np0005603621 podman[75541]: 2026-01-31 07:20:20.079497464 +0000 UTC m=+0.240220828 container start 8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db (image=quay.io/ceph/ceph:v18, name=nostalgic_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:20:20 np0005603621 podman[75541]: 2026-01-31 07:20:20.115012558 +0000 UTC m=+0.275735982 container attach 8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db (image=quay.io/ceph/ceph:v18, name=nostalgic_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:20 np0005603621 systemd[1]: libpod-conmon-2c9f0c26d320d11b401f44c9c975b69dfc95497d3103519048900413ad64219f.scope: Deactivated successfully.
Jan 31 02:20:21 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'crash'
Jan 31 02:20:21 np0005603621 ceph-mgr[74689]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 02:20:21 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'dashboard'
Jan 31 02:20:21 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:21.586+0000 7f6498413140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 31 02:20:23 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'devicehealth'
Jan 31 02:20:23 np0005603621 ceph-mgr[74689]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 02:20:23 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'diskprediction_local'
Jan 31 02:20:23 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:23.259+0000 7f6498413140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 31 02:20:23 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 31 02:20:23 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 31 02:20:23 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  from numpy import show_config as show_numpy_config
Jan 31 02:20:23 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:23.796+0000 7f6498413140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 02:20:23 np0005603621 ceph-mgr[74689]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 31 02:20:23 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'influx'
Jan 31 02:20:24 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:24.060+0000 7f6498413140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 02:20:24 np0005603621 ceph-mgr[74689]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 31 02:20:24 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'insights'
Jan 31 02:20:24 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'iostat'
Jan 31 02:20:24 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:24.568+0000 7f6498413140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 02:20:24 np0005603621 ceph-mgr[74689]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 31 02:20:24 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'k8sevents'
Jan 31 02:20:26 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'localpool'
Jan 31 02:20:26 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'mds_autoscaler'
Jan 31 02:20:27 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'mirroring'
Jan 31 02:20:27 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'nfs'
Jan 31 02:20:28 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:28.314+0000 7f6498413140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 02:20:28 np0005603621 ceph-mgr[74689]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 31 02:20:28 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'orchestrator'
Jan 31 02:20:29 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:29.002+0000 7f6498413140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'osd_perf_query'
Jan 31 02:20:29 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:29.256+0000 7f6498413140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'osd_support'
Jan 31 02:20:29 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:29.482+0000 7f6498413140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'pg_autoscaler'
Jan 31 02:20:29 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:29.761+0000 7f6498413140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 31 02:20:29 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'progress'
Jan 31 02:20:30 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:30.005+0000 7f6498413140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 02:20:30 np0005603621 ceph-mgr[74689]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 31 02:20:30 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'prometheus'
Jan 31 02:20:30 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:30.983+0000 7f6498413140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 02:20:30 np0005603621 ceph-mgr[74689]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 31 02:20:30 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'rbd_support'
Jan 31 02:20:31 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:31.262+0000 7f6498413140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 02:20:31 np0005603621 ceph-mgr[74689]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 31 02:20:31 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'restful'
Jan 31 02:20:32 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'rgw'
Jan 31 02:20:32 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:32.794+0000 7f6498413140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 02:20:32 np0005603621 ceph-mgr[74689]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 31 02:20:32 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'rook'
Jan 31 02:20:34 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:34.757+0000 7f6498413140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 02:20:34 np0005603621 ceph-mgr[74689]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 31 02:20:34 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'selftest'
Jan 31 02:20:35 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:35.008+0000 7f6498413140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'snap_schedule'
Jan 31 02:20:35 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:35.270+0000 7f6498413140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'stats'
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'status'
Jan 31 02:20:35 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:35.806+0000 7f6498413140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 31 02:20:35 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'telegraf'
Jan 31 02:20:36 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:36.049+0000 7f6498413140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 02:20:36 np0005603621 ceph-mgr[74689]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 31 02:20:36 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'telemetry'
Jan 31 02:20:36 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:36.668+0000 7f6498413140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 02:20:36 np0005603621 ceph-mgr[74689]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 31 02:20:36 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'test_orchestrator'
Jan 31 02:20:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:37.339+0000 7f6498413140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:37 np0005603621 ceph-mgr[74689]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 31 02:20:37 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'volumes'
Jan 31 02:20:38 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:38.049+0000 7f6498413140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr[py] Loading python module 'zabbix'
Jan 31 02:20:38 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:20:38.326+0000 7f6498413140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ddmhwk restarted
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ddmhwk
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: ms_deliver_dispatch: unhandled message 0x5592e51d0420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr handle_mgr_map Activating!
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr handle_mgr_map I am now activating
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ddmhwk(active, starting, since 0.0141572s)
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ddmhwk", "id": "compute-0.ddmhwk"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ddmhwk", "id": "compute-0.ddmhwk"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e1 all = 1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: balancer
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Starting
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Manager daemon compute-0.ddmhwk is now available
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:20:38
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] No pools available
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: cephadm
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: crash
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: devicehealth
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Starting
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: iostat
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: Active manager daemon compute-0.ddmhwk restarted
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: Activating manager daemon compute-0.ddmhwk
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: Manager daemon compute-0.ddmhwk is now available
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: nfs
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: orchestrator
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: pg_autoscaler
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: progress
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [progress INFO root] Loading...
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [progress INFO root] No stored events to load
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [progress INFO root] Loaded [] historic events
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [progress INFO root] Loaded OSDMap, ready.
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] recovery thread starting
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] starting setup
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: rbd_support
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: restful
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [restful INFO root] server_addr: :: server_port: 8003
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/mirror_snapshot_schedule"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/mirror_snapshot_schedule"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [restful WARNING root] server not running: no certificate configured
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: status
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: telemetry
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] PerfHandler: starting
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TaskHandler: starting
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/trash_purge_schedule"} v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/trash_purge_schedule"}]: dispatch
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] setup complete
Jan 31 02:20:38 np0005603621 ceph-mgr[74689]: mgr load Constructed class from module: volumes
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926261 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 31 02:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ddmhwk(active, since 1.09149s)
Jan 31 02:20:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 31 02:20:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 31 02:20:39 np0005603621 nostalgic_wu[75558]: {
Jan 31 02:20:39 np0005603621 nostalgic_wu[75558]:    "mgrmap_epoch": 7,
Jan 31 02:20:39 np0005603621 nostalgic_wu[75558]:    "initialized": true
Jan 31 02:20:39 np0005603621 nostalgic_wu[75558]: }
Jan 31 02:20:39 np0005603621 systemd[1]: libpod-8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db.scope: Deactivated successfully.
Jan 31 02:20:39 np0005603621 podman[75541]: 2026-01-31 07:20:39.447351508 +0000 UTC m=+19.608074902 container died 8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db (image=quay.io/ceph/ceph:v18, name=nostalgic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:20:39 np0005603621 ceph-mon[74394]: Found migration_current of "None". Setting to last migration.
Jan 31 02:20:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/mirror_snapshot_schedule"}]: dispatch
Jan 31 02:20:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ddmhwk/trash_purge_schedule"}]: dispatch
Jan 31 02:20:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-010bf1aedd4a922f15d00b40658a86fffb7981a12ac8321e9fe64051402dfd5c-merged.mount: Deactivated successfully.
Jan 31 02:20:39 np0005603621 podman[75541]: 2026-01-31 07:20:39.62741789 +0000 UTC m=+19.788141254 container remove 8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db (image=quay.io/ceph/ceph:v18, name=nostalgic_wu, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:39 np0005603621 systemd[1]: libpod-conmon-8247f6b61ef94ee67ba1b82358c15f04bb9c3ebdff6e244204c4151c1343b4db.scope: Deactivated successfully.
Jan 31 02:20:39 np0005603621 podman[75717]: 2026-01-31 07:20:39.690730868 +0000 UTC m=+0.048612165 container create 36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec (image=quay.io/ceph/ceph:v18, name=great_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:20:39 np0005603621 systemd[1]: Started libpod-conmon-36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec.scope.
Jan 31 02:20:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94a540ede4fbe4f5ec23075590d160973e0b5d769bc247e93fc70c190ea5a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94a540ede4fbe4f5ec23075590d160973e0b5d769bc247e93fc70c190ea5a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac94a540ede4fbe4f5ec23075590d160973e0b5d769bc247e93fc70c190ea5a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:39 np0005603621 podman[75717]: 2026-01-31 07:20:39.667499044 +0000 UTC m=+0.025380421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:39 np0005603621 podman[75717]: 2026-01-31 07:20:39.772127096 +0000 UTC m=+0.130008413 container init 36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec (image=quay.io/ceph/ceph:v18, name=great_dewdney, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:20:39 np0005603621 podman[75717]: 2026-01-31 07:20:39.776978569 +0000 UTC m=+0.134859856 container start 36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec (image=quay.io/ceph/ceph:v18, name=great_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:20:39 np0005603621 podman[75717]: 2026-01-31 07:20:39.780692196 +0000 UTC m=+0.138573513 container attach 36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec (image=quay.io/ceph/ceph:v18, name=great_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:40 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 31 02:20:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 02:20:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 02:20:40 np0005603621 systemd[1]: libpod-36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec.scope: Deactivated successfully.
Jan 31 02:20:40 np0005603621 podman[75717]: 2026-01-31 07:20:40.322606857 +0000 UTC m=+0.680488154 container died 36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec (image=quay.io/ceph/ceph:v18, name=great_dewdney, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ac94a540ede4fbe4f5ec23075590d160973e0b5d769bc247e93fc70c190ea5a1-merged.mount: Deactivated successfully.
Jan 31 02:20:40 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:40 np0005603621 podman[75717]: 2026-01-31 07:20:40.361423802 +0000 UTC m=+0.719305089 container remove 36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec (image=quay.io/ceph/ceph:v18, name=great_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:20:40 np0005603621 systemd[1]: libpod-conmon-36070bb1b6670fd053dde3411a29f7be4a1c8125dba962258bcacd36daa0ffec.scope: Deactivated successfully.
Jan 31 02:20:40 np0005603621 podman[75771]: 2026-01-31 07:20:40.410832731 +0000 UTC m=+0.031903398 container create bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8 (image=quay.io/ceph/ceph:v18, name=gifted_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:20:40 np0005603621 systemd[1]: Started libpod-conmon-bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8.scope.
Jan 31 02:20:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992b447b608300abe3a3e9f43d43a4d4df8128ab54436ad19a148beea0b67861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992b447b608300abe3a3e9f43d43a4d4df8128ab54436ad19a148beea0b67861/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992b447b608300abe3a3e9f43d43a4d4df8128ab54436ad19a148beea0b67861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:40 np0005603621 podman[75771]: 2026-01-31 07:20:40.479438935 +0000 UTC m=+0.100509632 container init bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8 (image=quay.io/ceph/ceph:v18, name=gifted_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:20:40 np0005603621 podman[75771]: 2026-01-31 07:20:40.483536925 +0000 UTC m=+0.104607592 container start bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8 (image=quay.io/ceph/ceph:v18, name=gifted_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:20:40 np0005603621 podman[75771]: 2026-01-31 07:20:40.396421586 +0000 UTC m=+0.017492273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:40 np0005603621 podman[75771]: 2026-01-31 07:20:40.494552763 +0000 UTC m=+0.115623470 container attach bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8 (image=quay.io/ceph/ceph:v18, name=gifted_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:20:40 np0005603621 ceph-mgr[74689]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:20:40] ENGINE Bus STARTING
Jan 31 02:20:40 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:20:40] ENGINE Bus STARTING
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Set ssh ssh_user
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:20:41] ENGINE Serving on https://192.168.122.100:7150
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:20:41] ENGINE Serving on https://192.168.122.100:7150
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:20:41] ENGINE Client ('192.168.122.100', 38984) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:20:41] ENGINE Client ('192.168.122.100', 38984) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Set ssh ssh_config
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 31 02:20:41 np0005603621 gifted_cerf[75787]: ssh user set to ceph-admin. sudo will be used
Jan 31 02:20:41 np0005603621 systemd[1]: libpod-bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8.scope: Deactivated successfully.
Jan 31 02:20:41 np0005603621 podman[75771]: 2026-01-31 07:20:41.033502989 +0000 UTC m=+0.654573666 container died bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8 (image=quay.io/ceph/ceph:v18, name=gifted_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-992b447b608300abe3a3e9f43d43a4d4df8128ab54436ad19a148beea0b67861-merged.mount: Deactivated successfully.
Jan 31 02:20:41 np0005603621 podman[75771]: 2026-01-31 07:20:41.069020371 +0000 UTC m=+0.690091028 container remove bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8 (image=quay.io/ceph/ceph:v18, name=gifted_cerf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:20:41 np0005603621 systemd[1]: libpod-conmon-bc30a173e4e9c008e027012e06e52ac3a45696c3316f0ab6435123ceaf1e82d8.scope: Deactivated successfully.
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:20:41] ENGINE Serving on http://192.168.122.100:8765
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:20:41] ENGINE Serving on http://192.168.122.100:8765
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO cherrypy.error] [31/Jan/2026:07:20:41] ENGINE Bus STARTED
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : [31/Jan/2026:07:20:41] ENGINE Bus STARTED
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.127151245 +0000 UTC m=+0.042469402 container create 38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32 (image=quay.io/ceph/ceph:v18, name=optimistic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 31 02:20:41 np0005603621 systemd[1]: Started libpod-conmon-38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32.scope.
Jan 31 02:20:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f5bfb3fb175404bad7adeb1f25e8699a318b4a11c23d861b7095795c6b9885/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f5bfb3fb175404bad7adeb1f25e8699a318b4a11c23d861b7095795c6b9885/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f5bfb3fb175404bad7adeb1f25e8699a318b4a11c23d861b7095795c6b9885/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f5bfb3fb175404bad7adeb1f25e8699a318b4a11c23d861b7095795c6b9885/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13f5bfb3fb175404bad7adeb1f25e8699a318b4a11c23d861b7095795c6b9885/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.107449483 +0000 UTC m=+0.022767660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.208643486 +0000 UTC m=+0.123961653 container init 38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32 (image=quay.io/ceph/ceph:v18, name=optimistic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.216563906 +0000 UTC m=+0.131882053 container start 38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32 (image=quay.io/ceph/ceph:v18, name=optimistic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.22144251 +0000 UTC m=+0.136760657 container attach 38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32 (image=quay.io/ceph/ceph:v18, name=optimistic_mclaren, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ddmhwk(active, since 2s)
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 31 02:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Set ssh private key
Jan 31 02:20:41 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 31 02:20:41 np0005603621 systemd[1]: libpod-38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32.scope: Deactivated successfully.
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.73892116 +0000 UTC m=+0.654239307 container died 38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32 (image=quay.io/ceph/ceph:v18, name=optimistic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:20:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-13f5bfb3fb175404bad7adeb1f25e8699a318b4a11c23d861b7095795c6b9885-merged.mount: Deactivated successfully.
Jan 31 02:20:41 np0005603621 podman[75849]: 2026-01-31 07:20:41.777721254 +0000 UTC m=+0.693039401 container remove 38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32 (image=quay.io/ceph/ceph:v18, name=optimistic_mclaren, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:41 np0005603621 systemd[1]: libpod-conmon-38b8718a69661279796620b3f199e37ce0c1d7aff65589596ced78b77231be32.scope: Deactivated successfully.
Jan 31 02:20:41 np0005603621 podman[75904]: 2026-01-31 07:20:41.820770402 +0000 UTC m=+0.029496781 container create 8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc (image=quay.io/ceph/ceph:v18, name=vigilant_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:41 np0005603621 systemd[1]: Started libpod-conmon-8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc.scope.
Jan 31 02:20:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cad63bd8d1370768d02738373f19d55e909f655b111843b4121602c75b1336/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cad63bd8d1370768d02738373f19d55e909f655b111843b4121602c75b1336/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cad63bd8d1370768d02738373f19d55e909f655b111843b4121602c75b1336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cad63bd8d1370768d02738373f19d55e909f655b111843b4121602c75b1336/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50cad63bd8d1370768d02738373f19d55e909f655b111843b4121602c75b1336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:41 np0005603621 podman[75904]: 2026-01-31 07:20:41.884291757 +0000 UTC m=+0.093018156 container init 8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:20:41 np0005603621 podman[75904]: 2026-01-31 07:20:41.893993843 +0000 UTC m=+0.102720222 container start 8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:20:41 np0005603621 podman[75904]: 2026-01-31 07:20:41.89960962 +0000 UTC m=+0.108336029 container attach 8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc (image=quay.io/ceph/ceph:v18, name=vigilant_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:20:41 np0005603621 podman[75904]: 2026-01-31 07:20:41.807264376 +0000 UTC m=+0.015990775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:42 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:42 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 31 02:20:42 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 31 02:20:42 np0005603621 systemd[1]: libpod-8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc.scope: Deactivated successfully.
Jan 31 02:20:42 np0005603621 podman[75904]: 2026-01-31 07:20:42.452293491 +0000 UTC m=+0.661019880 container died 8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc (image=quay.io/ceph/ceph:v18, name=vigilant_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:20:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-50cad63bd8d1370768d02738373f19d55e909f655b111843b4121602c75b1336-merged.mount: Deactivated successfully.
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: [31/Jan/2026:07:20:40] ENGINE Bus STARTING
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: Set ssh ssh_user
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: [31/Jan/2026:07:20:41] ENGINE Serving on https://192.168.122.100:7150
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: [31/Jan/2026:07:20:41] ENGINE Client ('192.168.122.100', 38984) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: Set ssh ssh_config
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: ssh user set to ceph-admin. sudo will be used
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: [31/Jan/2026:07:20:41] ENGINE Serving on http://192.168.122.100:8765
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: [31/Jan/2026:07:20:41] ENGINE Bus STARTED
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:42 np0005603621 podman[75904]: 2026-01-31 07:20:42.488867075 +0000 UTC m=+0.697593454 container remove 8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc (image=quay.io/ceph/ceph:v18, name=vigilant_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:42 np0005603621 systemd[1]: libpod-conmon-8db1973ad17cad64ed07c1bc1d901aeedc05654894f9692ae8e890f8735d2edc.scope: Deactivated successfully.
Jan 31 02:20:42 np0005603621 podman[75957]: 2026-01-31 07:20:42.530370775 +0000 UTC m=+0.031181475 container create ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d (image=quay.io/ceph/ceph:v18, name=blissful_moser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:20:42 np0005603621 systemd[1]: Started libpod-conmon-ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d.scope.
Jan 31 02:20:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e26b105e83da046d1d319de2e0153d2db1d7bfd93832bb60111d528032af96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e26b105e83da046d1d319de2e0153d2db1d7bfd93832bb60111d528032af96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7e26b105e83da046d1d319de2e0153d2db1d7bfd93832bb60111d528032af96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:42 np0005603621 podman[75957]: 2026-01-31 07:20:42.57935275 +0000 UTC m=+0.080163450 container init ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d (image=quay.io/ceph/ceph:v18, name=blissful_moser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:42 np0005603621 podman[75957]: 2026-01-31 07:20:42.582480989 +0000 UTC m=+0.083291689 container start ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d (image=quay.io/ceph/ceph:v18, name=blissful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:20:42 np0005603621 podman[75957]: 2026-01-31 07:20:42.585220125 +0000 UTC m=+0.086030855 container attach ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d (image=quay.io/ceph/ceph:v18, name=blissful_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:20:42 np0005603621 podman[75957]: 2026-01-31 07:20:42.516022492 +0000 UTC m=+0.016833212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:43 np0005603621 blissful_moser[75973]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZBxXX/on/9c3MGM7a0swk6er0M52IlLkS1IRlFJTQVM950KdPIWitwk3oBs+texZ1/U6sk2b/MyYQzWssP/5q6gE1HhV1VyZEm+Dhuu91BYl0BSltZKJUwHJsGgeEdPBY6tDPGOgge0kbx/rgCMjpvFHIDWKsmpXuf37YgRupNMu+EipIN3eXBT51yYnici1eBh3SWrtJQdQ+cgbNh9sDX7k9B1/9zLrBuH3e/qiavNnqd8GYRNFCyTlnvnQmBA0TYQYmT1gx2U+yS1ioL7j/p1LkYMEU88rRQQvlFTyHhGDpJxwfcmu9W8KcmWm6juNIkRYTFy/GhVDWj9UJxid5Tg1hSN1jlOvtWOiF3uIhzKH/FQLtuppTnFN/+3xnMBtz1LFgbfM/RMdWo5O9+Twfamr/L2svn261BdZ4bLen8/8i75/l+W04hQeyJokBIcr+7/FxBC9x4Zdic0Pt3nJKQC1TcKqivcvThBc+RM2LhEROjg8SO2VDIGN8EU0uUIU= zuul@controller
Jan 31 02:20:43 np0005603621 systemd[1]: libpod-ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d.scope: Deactivated successfully.
Jan 31 02:20:43 np0005603621 conmon[75973]: conmon ec529c66ff3b49ed4b34 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d.scope/container/memory.events
Jan 31 02:20:43 np0005603621 podman[75957]: 2026-01-31 07:20:43.060978288 +0000 UTC m=+0.561788988 container died ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d (image=quay.io/ceph/ceph:v18, name=blissful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d7e26b105e83da046d1d319de2e0153d2db1d7bfd93832bb60111d528032af96-merged.mount: Deactivated successfully.
Jan 31 02:20:43 np0005603621 podman[75957]: 2026-01-31 07:20:43.106635379 +0000 UTC m=+0.607446079 container remove ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d (image=quay.io/ceph/ceph:v18, name=blissful_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:20:43 np0005603621 systemd[1]: libpod-conmon-ec529c66ff3b49ed4b34edab478087189401c1cf7b9d2f51bff5f25fdd6a827d.scope: Deactivated successfully.
Jan 31 02:20:43 np0005603621 podman[76014]: 2026-01-31 07:20:43.151673281 +0000 UTC m=+0.032944951 container create 876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a (image=quay.io/ceph/ceph:v18, name=priceless_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:20:43 np0005603621 systemd[1]: Started libpod-conmon-876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a.scope.
Jan 31 02:20:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b814d51933eee94cd579c40cd26e092f48a8c855a5e0a1dcb8af16f83af80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b814d51933eee94cd579c40cd26e092f48a8c855a5e0a1dcb8af16f83af80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6b814d51933eee94cd579c40cd26e092f48a8c855a5e0a1dcb8af16f83af80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:43 np0005603621 podman[76014]: 2026-01-31 07:20:43.211796597 +0000 UTC m=+0.093068297 container init 876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a (image=quay.io/ceph/ceph:v18, name=priceless_kepler, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:20:43 np0005603621 podman[76014]: 2026-01-31 07:20:43.217805027 +0000 UTC m=+0.099076717 container start 876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a (image=quay.io/ceph/ceph:v18, name=priceless_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:43 np0005603621 podman[76014]: 2026-01-31 07:20:43.221965279 +0000 UTC m=+0.103236969 container attach 876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a (image=quay.io/ceph/ceph:v18, name=priceless_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:43 np0005603621 podman[76014]: 2026-01-31 07:20:43.136200992 +0000 UTC m=+0.017472682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:43 np0005603621 ceph-mon[74394]: Set ssh ssh_identity_key
Jan 31 02:20:43 np0005603621 ceph-mon[74394]: Set ssh private key
Jan 31 02:20:43 np0005603621 ceph-mon[74394]: Set ssh ssh_identity_pub
Jan 31 02:20:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053093 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:43 np0005603621 systemd[1]: Created slice User Slice of UID 42477.
Jan 31 02:20:43 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 31 02:20:43 np0005603621 systemd-logind[818]: New session 21 of user ceph-admin.
Jan 31 02:20:43 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 31 02:20:43 np0005603621 systemd[1]: Starting User Manager for UID 42477...
Jan 31 02:20:44 np0005603621 systemd[76060]: Queued start job for default target Main User Target.
Jan 31 02:20:44 np0005603621 systemd[76060]: Created slice User Application Slice.
Jan 31 02:20:44 np0005603621 systemd[76060]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:20:44 np0005603621 systemd[76060]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:20:44 np0005603621 systemd[76060]: Reached target Paths.
Jan 31 02:20:44 np0005603621 systemd[76060]: Reached target Timers.
Jan 31 02:20:44 np0005603621 systemd[76060]: Starting D-Bus User Message Bus Socket...
Jan 31 02:20:44 np0005603621 systemd[76060]: Starting Create User's Volatile Files and Directories...
Jan 31 02:20:44 np0005603621 systemd[76060]: Finished Create User's Volatile Files and Directories.
Jan 31 02:20:44 np0005603621 systemd[76060]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:20:44 np0005603621 systemd[76060]: Reached target Sockets.
Jan 31 02:20:44 np0005603621 systemd[76060]: Reached target Basic System.
Jan 31 02:20:44 np0005603621 systemd[76060]: Reached target Main User Target.
Jan 31 02:20:44 np0005603621 systemd[76060]: Startup finished in 127ms.
Jan 31 02:20:44 np0005603621 systemd-logind[818]: New session 23 of user ceph-admin.
Jan 31 02:20:44 np0005603621 systemd[1]: Started User Manager for UID 42477.
Jan 31 02:20:44 np0005603621 systemd[1]: Started Session 21 of User ceph-admin.
Jan 31 02:20:44 np0005603621 systemd[1]: Started Session 23 of User ceph-admin.
Jan 31 02:20:44 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:44 np0005603621 systemd-logind[818]: New session 24 of user ceph-admin.
Jan 31 02:20:44 np0005603621 systemd[1]: Started Session 24 of User ceph-admin.
Jan 31 02:20:44 np0005603621 systemd-logind[818]: New session 25 of user ceph-admin.
Jan 31 02:20:44 np0005603621 systemd[1]: Started Session 25 of User ceph-admin.
Jan 31 02:20:45 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 31 02:20:45 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 31 02:20:45 np0005603621 systemd-logind[818]: New session 26 of user ceph-admin.
Jan 31 02:20:45 np0005603621 systemd[1]: Started Session 26 of User ceph-admin.
Jan 31 02:20:45 np0005603621 systemd-logind[818]: New session 27 of user ceph-admin.
Jan 31 02:20:45 np0005603621 systemd[1]: Started Session 27 of User ceph-admin.
Jan 31 02:20:45 np0005603621 systemd-logind[818]: New session 28 of user ceph-admin.
Jan 31 02:20:45 np0005603621 systemd[1]: Started Session 28 of User ceph-admin.
Jan 31 02:20:46 np0005603621 systemd-logind[818]: New session 29 of user ceph-admin.
Jan 31 02:20:46 np0005603621 systemd[1]: Started Session 29 of User ceph-admin.
Jan 31 02:20:46 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:46 np0005603621 ceph-mon[74394]: Deploying cephadm binary to compute-0
Jan 31 02:20:46 np0005603621 systemd-logind[818]: New session 30 of user ceph-admin.
Jan 31 02:20:46 np0005603621 systemd[1]: Started Session 30 of User ceph-admin.
Jan 31 02:20:46 np0005603621 systemd-logind[818]: New session 31 of user ceph-admin.
Jan 31 02:20:46 np0005603621 systemd[1]: Started Session 31 of User ceph-admin.
Jan 31 02:20:47 np0005603621 systemd-logind[818]: New session 32 of user ceph-admin.
Jan 31 02:20:47 np0005603621 systemd[1]: Started Session 32 of User ceph-admin.
Jan 31 02:20:47 np0005603621 systemd-logind[818]: New session 33 of user ceph-admin.
Jan 31 02:20:47 np0005603621 systemd[1]: Started Session 33 of User ceph-admin.
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:48 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Added host compute-0
Jan 31 02:20:48 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 31 02:20:48 np0005603621 priceless_kepler[76030]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 02:20:48 np0005603621 systemd[1]: libpod-876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a.scope: Deactivated successfully.
Jan 31 02:20:48 np0005603621 podman[76014]: 2026-01-31 07:20:48.259344617 +0000 UTC m=+5.140616297 container died 876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a (image=quay.io/ceph/ceph:v18, name=priceless_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:20:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7f6b814d51933eee94cd579c40cd26e092f48a8c855a5e0a1dcb8af16f83af80-merged.mount: Deactivated successfully.
Jan 31 02:20:48 np0005603621 podman[76014]: 2026-01-31 07:20:48.297643046 +0000 UTC m=+5.178914716 container remove 876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a (image=quay.io/ceph/ceph:v18, name=priceless_kepler, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:20:48 np0005603621 systemd[1]: libpod-conmon-876fbc34fd05bf7fc1bf181663cfd3d42256c54c9cad002b501033e54976f29a.scope: Deactivated successfully.
Jan 31 02:20:48 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:48 np0005603621 podman[76710]: 2026-01-31 07:20:48.358593929 +0000 UTC m=+0.042556564 container create 86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26 (image=quay.io/ceph/ceph:v18, name=flamboyant_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:20:48 np0005603621 systemd[1]: Started libpod-conmon-86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26.scope.
Jan 31 02:20:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92579b8531e8867b1a6eec287e7288795f65890650c511a8a54fb1785ee1fe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92579b8531e8867b1a6eec287e7288795f65890650c511a8a54fb1785ee1fe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f92579b8531e8867b1a6eec287e7288795f65890650c511a8a54fb1785ee1fe5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:48 np0005603621 podman[76710]: 2026-01-31 07:20:48.339779796 +0000 UTC m=+0.023742451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:48 np0005603621 podman[76710]: 2026-01-31 07:20:48.444782509 +0000 UTC m=+0.128745174 container init 86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26 (image=quay.io/ceph/ceph:v18, name=flamboyant_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:48 np0005603621 podman[76710]: 2026-01-31 07:20:48.450091056 +0000 UTC m=+0.134053691 container start 86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26 (image=quay.io/ceph/ceph:v18, name=flamboyant_haibt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:20:48 np0005603621 podman[76710]: 2026-01-31 07:20:48.453627168 +0000 UTC m=+0.137589823 container attach 86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26 (image=quay.io/ceph/ceph:v18, name=flamboyant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:48 np0005603621 podman[76821]: 2026-01-31 07:20:48.659386892 +0000 UTC m=+0.031746203 container create c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d (image=quay.io/ceph/ceph:v18, name=condescending_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:20:48 np0005603621 systemd[1]: Started libpod-conmon-c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d.scope.
Jan 31 02:20:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:48 np0005603621 podman[76821]: 2026-01-31 07:20:48.710829064 +0000 UTC m=+0.083188385 container init c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d (image=quay.io/ceph/ceph:v18, name=condescending_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:48 np0005603621 podman[76821]: 2026-01-31 07:20:48.714273203 +0000 UTC m=+0.086632524 container start c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d (image=quay.io/ceph/ceph:v18, name=condescending_clarke, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 02:20:48 np0005603621 podman[76821]: 2026-01-31 07:20:48.728909225 +0000 UTC m=+0.101268546 container attach c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d (image=quay.io/ceph/ceph:v18, name=condescending_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:20:48 np0005603621 podman[76821]: 2026-01-31 07:20:48.644727069 +0000 UTC m=+0.017086410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:48 np0005603621 condescending_clarke[76838]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 31 02:20:48 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:48 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 31 02:20:48 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 31 02:20:48 np0005603621 podman[76821]: 2026-01-31 07:20:48.985966837 +0000 UTC m=+0.358326158 container died c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d (image=quay.io/ceph/ceph:v18, name=condescending_clarke, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:48 np0005603621 systemd[1]: libpod-c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d.scope: Deactivated successfully.
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 02:20:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:48 np0005603621 flamboyant_haibt[76766]: Scheduled mon update...
Jan 31 02:20:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-76360088024f0c85bd1f436b9811846a8c392569b9b623c489b3067a04cf7f1d-merged.mount: Deactivated successfully.
Jan 31 02:20:49 np0005603621 systemd[1]: libpod-86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26.scope: Deactivated successfully.
Jan 31 02:20:49 np0005603621 podman[76710]: 2026-01-31 07:20:49.015020424 +0000 UTC m=+0.698983069 container died 86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26 (image=quay.io/ceph/ceph:v18, name=flamboyant_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:20:49 np0005603621 podman[76821]: 2026-01-31 07:20:49.036903884 +0000 UTC m=+0.409263205 container remove c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d (image=quay.io/ceph/ceph:v18, name=condescending_clarke, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:49 np0005603621 systemd[1]: libpod-conmon-c9d9607297cf834f314d6d9c9eb05956b64583d23ccc2d7c4176214cd3d7650d.scope: Deactivated successfully.
Jan 31 02:20:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f92579b8531e8867b1a6eec287e7288795f65890650c511a8a54fb1785ee1fe5-merged.mount: Deactivated successfully.
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:49 np0005603621 podman[76710]: 2026-01-31 07:20:49.078372283 +0000 UTC m=+0.762334928 container remove 86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26 (image=quay.io/ceph/ceph:v18, name=flamboyant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:49 np0005603621 systemd[1]: libpod-conmon-86db86408e2b45996155bb299ae97ec0b07fad01537c5d5b862fe07df7ef9d26.scope: Deactivated successfully.
Jan 31 02:20:49 np0005603621 podman[76892]: 2026-01-31 07:20:49.134607377 +0000 UTC m=+0.040400766 container create 77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43 (image=quay.io/ceph/ceph:v18, name=sleepy_curran, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:49 np0005603621 systemd[1]: Started libpod-conmon-77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43.scope.
Jan 31 02:20:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc9e2e88807a111e70ec68da2b02f0853bd22c10a2fe76cd3871e8aaac3dfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc9e2e88807a111e70ec68da2b02f0853bd22c10a2fe76cd3871e8aaac3dfd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45dc9e2e88807a111e70ec68da2b02f0853bd22c10a2fe76cd3871e8aaac3dfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:49 np0005603621 podman[76892]: 2026-01-31 07:20:49.187019131 +0000 UTC m=+0.092812510 container init 77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43 (image=quay.io/ceph/ceph:v18, name=sleepy_curran, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:49 np0005603621 podman[76892]: 2026-01-31 07:20:49.190848952 +0000 UTC m=+0.096642301 container start 77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43 (image=quay.io/ceph/ceph:v18, name=sleepy_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:20:49 np0005603621 podman[76892]: 2026-01-31 07:20:49.194266159 +0000 UTC m=+0.100059508 container attach 77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43 (image=quay.io/ceph/ceph:v18, name=sleepy_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:49 np0005603621 podman[76892]: 2026-01-31 07:20:49.116851317 +0000 UTC m=+0.022644726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: Added host compute-0
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:20:49 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:49 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 31 02:20:49 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 31 02:20:50 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 02:20:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:50 np0005603621 ceph-mon[74394]: Saving service mon spec with placement count:5
Jan 31 02:20:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:50 np0005603621 sleepy_curran[76957]: Scheduled mgr update...
Jan 31 02:20:50 np0005603621 systemd[1]: libpod-77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43.scope: Deactivated successfully.
Jan 31 02:20:50 np0005603621 podman[76892]: 2026-01-31 07:20:50.902549266 +0000 UTC m=+1.808342615 container died 77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43 (image=quay.io/ceph/ceph:v18, name=sleepy_curran, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-45dc9e2e88807a111e70ec68da2b02f0853bd22c10a2fe76cd3871e8aaac3dfd-merged.mount: Deactivated successfully.
Jan 31 02:20:51 np0005603621 podman[76892]: 2026-01-31 07:20:51.393015123 +0000 UTC m=+2.298808472 container remove 77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43 (image=quay.io/ceph/ceph:v18, name=sleepy_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:51 np0005603621 podman[77180]: 2026-01-31 07:20:51.466019967 +0000 UTC m=+0.061468041 container create ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e (image=quay.io/ceph/ceph:v18, name=blissful_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:20:51 np0005603621 podman[77180]: 2026-01-31 07:20:51.425674104 +0000 UTC m=+0.021122158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:51 np0005603621 systemd[1]: Started libpod-conmon-ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e.scope.
Jan 31 02:20:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4821563a4ed995a736d253446f608b304bbae6fd134ebab7190ad87883354b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4821563a4ed995a736d253446f608b304bbae6fd134ebab7190ad87883354b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf4821563a4ed995a736d253446f608b304bbae6fd134ebab7190ad87883354b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:52 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:53 np0005603621 podman[77180]: 2026-01-31 07:20:53.115080255 +0000 UTC m=+1.710528309 container init ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e (image=quay.io/ceph/ceph:v18, name=blissful_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:20:53 np0005603621 podman[77180]: 2026-01-31 07:20:53.12349168 +0000 UTC m=+1.718939714 container start ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e (image=quay.io/ceph/ceph:v18, name=blissful_jepsen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:20:53 np0005603621 ceph-mon[74394]: Saving service mgr spec with placement count:2
Jan 31 02:20:53 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:53 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:53 np0005603621 podman[77180]: 2026-01-31 07:20:53.218639753 +0000 UTC m=+1.814087817 container attach ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e (image=quay.io/ceph/ceph:v18, name=blissful_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:20:53 np0005603621 systemd[1]: libpod-conmon-77da2d879000bbdb9be27da04e8c1ca2826141f2f7ec79af908767b563de6c43.scope: Deactivated successfully.
Jan 31 02:20:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:53 np0005603621 podman[77261]: 2026-01-31 07:20:53.591946983 +0000 UTC m=+0.217231616 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:53 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:53 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service crash spec with placement *
Jan 31 02:20:53 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 31 02:20:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:20:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:53 np0005603621 blissful_jepsen[77211]: Scheduled crash update...
Jan 31 02:20:53 np0005603621 systemd[1]: libpod-ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e.scope: Deactivated successfully.
Jan 31 02:20:53 np0005603621 conmon[77211]: conmon ce7efe5d7a861f00ae9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e.scope/container/memory.events
Jan 31 02:20:53 np0005603621 podman[77180]: 2026-01-31 07:20:53.876635526 +0000 UTC m=+2.472083560 container died ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e (image=quay.io/ceph/ceph:v18, name=blissful_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bf4821563a4ed995a736d253446f608b304bbae6fd134ebab7190ad87883354b-merged.mount: Deactivated successfully.
Jan 31 02:20:54 np0005603621 ceph-mon[74394]: Saving service crash spec with placement *
Jan 31 02:20:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:54 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:54 np0005603621 podman[77180]: 2026-01-31 07:20:54.387905619 +0000 UTC m=+2.983353653 container remove ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e (image=quay.io/ceph/ceph:v18, name=blissful_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 31 02:20:54 np0005603621 systemd[1]: libpod-conmon-ce7efe5d7a861f00ae9bb133dcc918e1fe2c6020965d4e067f4797f1c5d83b5e.scope: Deactivated successfully.
Jan 31 02:20:54 np0005603621 podman[77261]: 2026-01-31 07:20:54.412258749 +0000 UTC m=+1.037543352 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:20:54 np0005603621 podman[77329]: 2026-01-31 07:20:54.750786691 +0000 UTC m=+0.347955171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:54 np0005603621 podman[77329]: 2026-01-31 07:20:54.865917174 +0000 UTC m=+0.463085644 container create ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde (image=quay.io/ceph/ceph:v18, name=clever_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:20:55 np0005603621 systemd[1]: Started libpod-conmon-ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde.scope.
Jan 31 02:20:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c356d1a95d5207013f4120e322dc4ecbe158872d592cc1bfcfe9d228d8a1625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c356d1a95d5207013f4120e322dc4ecbe158872d592cc1bfcfe9d228d8a1625/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c356d1a95d5207013f4120e322dc4ecbe158872d592cc1bfcfe9d228d8a1625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:55 np0005603621 podman[77329]: 2026-01-31 07:20:55.17318446 +0000 UTC m=+0.770352990 container init ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde (image=quay.io/ceph/ceph:v18, name=clever_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:20:55 np0005603621 podman[77329]: 2026-01-31 07:20:55.179286683 +0000 UTC m=+0.776455143 container start ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde (image=quay.io/ceph/ceph:v18, name=clever_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:55 np0005603621 podman[77329]: 2026-01-31 07:20:55.292020299 +0000 UTC m=+0.889188779 container attach ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde (image=quay.io/ceph/ceph:v18, name=clever_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:20:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:20:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 31 02:20:55 np0005603621 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77500 (sysctl)
Jan 31 02:20:56 np0005603621 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 31 02:20:56 np0005603621 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 31 02:20:56 np0005603621 ceph-mgr[74689]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 31 02:20:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3400265353' entity='client.admin' 
Jan 31 02:20:56 np0005603621 systemd[1]: libpod-ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde.scope: Deactivated successfully.
Jan 31 02:20:56 np0005603621 podman[77623]: 2026-01-31 07:20:56.638872421 +0000 UTC m=+0.039211439 container died ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde (image=quay.io/ceph/ceph:v18, name=clever_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:20:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:57 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3400265353' entity='client.admin' 
Jan 31 02:20:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6c356d1a95d5207013f4120e322dc4ecbe158872d592cc1bfcfe9d228d8a1625-merged.mount: Deactivated successfully.
Jan 31 02:20:57 np0005603621 podman[77623]: 2026-01-31 07:20:57.724379285 +0000 UTC m=+1.124718283 container remove ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde (image=quay.io/ceph/ceph:v18, name=clever_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:20:57 np0005603621 systemd[1]: libpod-conmon-ff5a1ec113a187555946cbca8eaa98b5437cf022a2e492b9006f8fe589373fde.scope: Deactivated successfully.
Jan 31 02:20:57 np0005603621 podman[77650]: 2026-01-31 07:20:57.84591745 +0000 UTC m=+0.104110477 container create eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:20:57 np0005603621 podman[77650]: 2026-01-31 07:20:57.760265947 +0000 UTC m=+0.018458994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:57 np0005603621 systemd[1]: Started libpod-conmon-eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727.scope.
Jan 31 02:20:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b83daffa26e5e2289efbb017725674195bc0129b24f05508ff2c1cb36cb5d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b83daffa26e5e2289efbb017725674195bc0129b24f05508ff2c1cb36cb5d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b83daffa26e5e2289efbb017725674195bc0129b24f05508ff2c1cb36cb5d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:20:58 np0005603621 podman[77650]: 2026-01-31 07:20:58.102972631 +0000 UTC m=+0.361165698 container init eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:20:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:58 np0005603621 podman[77650]: 2026-01-31 07:20:58.108388582 +0000 UTC m=+0.366581609 container start eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:20:58 np0005603621 podman[77650]: 2026-01-31 07:20:58.201158159 +0000 UTC m=+0.459351286 container attach eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:20:58 np0005603621 ceph-mgr[74689]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 31 02:20:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:20:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 02:20:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:20:58 np0005603621 podman[77837]: 2026-01-31 07:20:58.562610445 +0000 UTC m=+0.057590878 container create 22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:20:58 np0005603621 podman[77837]: 2026-01-31 07:20:58.521474808 +0000 UTC m=+0.016455271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:20:58 np0005603621 systemd[1]: Started libpod-conmon-22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c.scope.
Jan 31 02:20:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:58 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:20:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 31 02:20:58 np0005603621 podman[77837]: 2026-01-31 07:20:58.7234083 +0000 UTC m=+0.218388763 container init 22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:20:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:58 np0005603621 podman[77837]: 2026-01-31 07:20:58.72756447 +0000 UTC m=+0.222544903 container start 22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:58 np0005603621 gracious_jang[77854]: 167 167
Jan 31 02:20:58 np0005603621 systemd[1]: libpod-22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c.scope: Deactivated successfully.
Jan 31 02:20:58 np0005603621 conmon[77854]: conmon 22612b8ee4441b0e4b39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c.scope/container/memory.events
Jan 31 02:20:58 np0005603621 systemd[1]: libpod-eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727.scope: Deactivated successfully.
Jan 31 02:20:58 np0005603621 podman[77837]: 2026-01-31 07:20:58.77695122 +0000 UTC m=+0.271931663 container attach 22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:20:58 np0005603621 podman[77837]: 2026-01-31 07:20:58.777429024 +0000 UTC m=+0.272409457 container died 22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:20:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-08dc6a62ab0d0279c84bc119af4a69aa17876767718c65ac9bc4325cf1a2059b-merged.mount: Deactivated successfully.
Jan 31 02:20:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:59 np0005603621 ceph-mon[74394]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 31 02:20:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:20:59 np0005603621 podman[77837]: 2026-01-31 07:20:59.247515997 +0000 UTC m=+0.742496440 container remove 22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:20:59 np0005603621 systemd[1]: libpod-conmon-22612b8ee4441b0e4b39d4440881c07297b067a381989e20023db76049d5714c.scope: Deactivated successfully.
Jan 31 02:20:59 np0005603621 podman[77650]: 2026-01-31 07:20:59.279821567 +0000 UTC m=+1.538014594 container died eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:20:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-93b83daffa26e5e2289efbb017725674195bc0129b24f05508ff2c1cb36cb5d5-merged.mount: Deactivated successfully.
Jan 31 02:20:59 np0005603621 podman[77870]: 2026-01-31 07:20:59.601633302 +0000 UTC m=+0.840774532 container remove eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727 (image=quay.io/ceph/ceph:v18, name=intelligent_engelbart, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:20:59 np0005603621 systemd[1]: libpod-conmon-eb55239b442b92ffc9c8f502a7bcf0397e4ce661d49ff75da40ea8447b827727.scope: Deactivated successfully.
Jan 31 02:20:59 np0005603621 podman[77890]: 2026-01-31 07:20:59.644691731 +0000 UTC m=+0.022838061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:20:59 np0005603621 podman[77890]: 2026-01-31 07:20:59.81198413 +0000 UTC m=+0.190130490 container create aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9 (image=quay.io/ceph/ceph:v18, name=jolly_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:20:59 np0005603621 systemd[1]: Started libpod-conmon-aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9.scope.
Jan 31 02:20:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332a91392e0ccfc4d193c96f0cbc0db6a429a27725d4dade3d2f4a25a8e12690/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332a91392e0ccfc4d193c96f0cbc0db6a429a27725d4dade3d2f4a25a8e12690/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332a91392e0ccfc4d193c96f0cbc0db6a429a27725d4dade3d2f4a25a8e12690/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:00 np0005603621 podman[77890]: 2026-01-31 07:21:00.020861711 +0000 UTC m=+0.399008051 container init aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9 (image=quay.io/ceph/ceph:v18, name=jolly_black, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:00 np0005603621 podman[77890]: 2026-01-31 07:21:00.026167819 +0000 UTC m=+0.404314139 container start aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9 (image=quay.io/ceph/ceph:v18, name=jolly_black, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:00 np0005603621 podman[77890]: 2026-01-31 07:21:00.063766726 +0000 UTC m=+0.441913046 container attach aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9 (image=quay.io/ceph/ceph:v18, name=jolly_black, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:21:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:00 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:21:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:00 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Added label _admin to host compute-0
Jan 31 02:21:00 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 31 02:21:00 np0005603621 jolly_black[77907]: Added label _admin to host compute-0
Jan 31 02:21:00 np0005603621 systemd[1]: libpod-aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9.scope: Deactivated successfully.
Jan 31 02:21:00 np0005603621 podman[77890]: 2026-01-31 07:21:00.770943761 +0000 UTC m=+1.149090101 container died aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9 (image=quay.io/ceph/ceph:v18, name=jolly_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-332a91392e0ccfc4d193c96f0cbc0db6a429a27725d4dade3d2f4a25a8e12690-merged.mount: Deactivated successfully.
Jan 31 02:21:01 np0005603621 podman[77890]: 2026-01-31 07:21:01.098488627 +0000 UTC m=+1.476634947 container remove aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9 (image=quay.io/ceph/ceph:v18, name=jolly_black, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:01 np0005603621 systemd[1]: libpod-conmon-aa7df79651107fb1b5def8b1bcf051e54ecb28613ec8bc5b05ac2f315ede06a9.scope: Deactivated successfully.
Jan 31 02:21:01 np0005603621 podman[77947]: 2026-01-31 07:21:01.136171187 +0000 UTC m=+0.020436697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:01 np0005603621 podman[77947]: 2026-01-31 07:21:01.239084124 +0000 UTC m=+0.123349624 container create 80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b (image=quay.io/ceph/ceph:v18, name=xenodochial_ellis, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:21:01 np0005603621 systemd[1]: Started libpod-conmon-80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b.scope.
Jan 31 02:21:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd508ec3791f4b1a50abad9d82fcebf24b7a96a5bc15a092a10cd4a043d4c787/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd508ec3791f4b1a50abad9d82fcebf24b7a96a5bc15a092a10cd4a043d4c787/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd508ec3791f4b1a50abad9d82fcebf24b7a96a5bc15a092a10cd4a043d4c787/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:01 np0005603621 podman[77947]: 2026-01-31 07:21:01.413308202 +0000 UTC m=+0.297573672 container init 80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b (image=quay.io/ceph/ceph:v18, name=xenodochial_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:21:01 np0005603621 podman[77947]: 2026-01-31 07:21:01.418424213 +0000 UTC m=+0.302689683 container start 80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b (image=quay.io/ceph/ceph:v18, name=xenodochial_ellis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:21:01 np0005603621 podman[77947]: 2026-01-31 07:21:01.443196405 +0000 UTC m=+0.327461865 container attach 80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b (image=quay.io/ceph/ceph:v18, name=xenodochial_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:01 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 31 02:21:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/829004969' entity='client.admin' 
Jan 31 02:21:02 np0005603621 systemd[1]: libpod-80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b.scope: Deactivated successfully.
Jan 31 02:21:02 np0005603621 podman[77947]: 2026-01-31 07:21:02.006938525 +0000 UTC m=+0.891203995 container died 80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b (image=quay.io/ceph/ceph:v18, name=xenodochial_ellis, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bd508ec3791f4b1a50abad9d82fcebf24b7a96a5bc15a092a10cd4a043d4c787-merged.mount: Deactivated successfully.
Jan 31 02:21:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:02 np0005603621 podman[77947]: 2026-01-31 07:21:02.351658143 +0000 UTC m=+1.235923603 container remove 80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b (image=quay.io/ceph/ceph:v18, name=xenodochial_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:02 np0005603621 systemd[1]: libpod-conmon-80ee67ecbfaac7b8fa94ac6d10a67e7f8c1376c41ba7e4dafbd2f4a895ea427b.scope: Deactivated successfully.
Jan 31 02:21:02 np0005603621 podman[78001]: 2026-01-31 07:21:02.384134977 +0000 UTC m=+0.019597769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:02 np0005603621 podman[78001]: 2026-01-31 07:21:02.496924147 +0000 UTC m=+0.132386909 container create 998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8 (image=quay.io/ceph/ceph:v18, name=quizzical_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:21:02 np0005603621 ceph-mon[74394]: Added label _admin to host compute-0
Jan 31 02:21:02 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/829004969' entity='client.admin' 
Jan 31 02:21:02 np0005603621 systemd[1]: Started libpod-conmon-998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8.scope.
Jan 31 02:21:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2a47af11348a47cf41d5595980b4b89ca7d652e8971e4749f3f2d4a90001fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2a47af11348a47cf41d5595980b4b89ca7d652e8971e4749f3f2d4a90001fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2a47af11348a47cf41d5595980b4b89ca7d652e8971e4749f3f2d4a90001fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:02 np0005603621 podman[78001]: 2026-01-31 07:21:02.614802497 +0000 UTC m=+0.250265269 container init 998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8 (image=quay.io/ceph/ceph:v18, name=quizzical_ride, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:02 np0005603621 podman[78001]: 2026-01-31 07:21:02.619130353 +0000 UTC m=+0.254593115 container start 998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8 (image=quay.io/ceph/ceph:v18, name=quizzical_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:21:02 np0005603621 podman[78001]: 2026-01-31 07:21:02.627200577 +0000 UTC m=+0.262663349 container attach 998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8 (image=quay.io/ceph/ceph:v18, name=quizzical_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 31 02:21:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4014796272' entity='client.admin' 
Jan 31 02:21:03 np0005603621 quizzical_ride[78017]: set mgr/dashboard/cluster/status
Jan 31 02:21:03 np0005603621 systemd[1]: libpod-998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8.scope: Deactivated successfully.
Jan 31 02:21:03 np0005603621 podman[78001]: 2026-01-31 07:21:03.268480794 +0000 UTC m=+0.903943556 container died 998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8 (image=quay.io/ceph/ceph:v18, name=quizzical_ride, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:21:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:03 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/4014796272' entity='client.admin' 
Jan 31 02:21:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5b2a47af11348a47cf41d5595980b4b89ca7d652e8971e4749f3f2d4a90001fb-merged.mount: Deactivated successfully.
Jan 31 02:21:03 np0005603621 podman[78001]: 2026-01-31 07:21:03.948228284 +0000 UTC m=+1.583691046 container remove 998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8 (image=quay.io/ceph/ceph:v18, name=quizzical_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:21:03 np0005603621 systemd[1]: libpod-conmon-998dfaeca62e42b68e383f5bd842ab343bb600c820c8da22ed085064cbfb50b8.scope: Deactivated successfully.
Jan 31 02:21:04 np0005603621 podman[78063]: 2026-01-31 07:21:04.10272777 +0000 UTC m=+0.046268201 container create 302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:04 np0005603621 systemd[1]: Started libpod-conmon-302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7.scope.
Jan 31 02:21:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f0c287391b7f158d58705a340c0481b52ad17b3d4d5558ec2216a3c655ff9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f0c287391b7f158d58705a340c0481b52ad17b3d4d5558ec2216a3c655ff9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f0c287391b7f158d58705a340c0481b52ad17b3d4d5558ec2216a3c655ff9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d7f0c287391b7f158d58705a340c0481b52ad17b3d4d5558ec2216a3c655ff9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:04 np0005603621 podman[78063]: 2026-01-31 07:21:04.075038656 +0000 UTC m=+0.018579107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:04 np0005603621 podman[78063]: 2026-01-31 07:21:04.192214113 +0000 UTC m=+0.135754564 container init 302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:04 np0005603621 podman[78063]: 2026-01-31 07:21:04.196820509 +0000 UTC m=+0.140360940 container start 302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:21:04 np0005603621 podman[78063]: 2026-01-31 07:21:04.252872448 +0000 UTC m=+0.196412879 container attach 302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:21:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:04 np0005603621 python3[78109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:04 np0005603621 podman[78110]: 2026-01-31 07:21:04.536594771 +0000 UTC m=+0.028999486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:04 np0005603621 podman[78110]: 2026-01-31 07:21:04.695468504 +0000 UTC m=+0.187873179 container create a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231 (image=quay.io/ceph/ceph:v18, name=loving_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:21:04 np0005603621 systemd[1]: Started libpod-conmon-a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231.scope.
Jan 31 02:21:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a402e10f858f6a81b45350712b0a29d59343a484769c7d1f3b2995b6baec3c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a402e10f858f6a81b45350712b0a29d59343a484769c7d1f3b2995b6baec3c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:04 np0005603621 podman[78110]: 2026-01-31 07:21:04.906905866 +0000 UTC m=+0.399310581 container init a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231 (image=quay.io/ceph/ceph:v18, name=loving_lalande, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:21:04 np0005603621 podman[78110]: 2026-01-31 07:21:04.91619654 +0000 UTC m=+0.408601215 container start a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231 (image=quay.io/ceph/ceph:v18, name=loving_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:21:04 np0005603621 podman[78110]: 2026-01-31 07:21:04.968352005 +0000 UTC m=+0.460756680 container attach a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231 (image=quay.io/ceph/ceph:v18, name=loving_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:21:05 np0005603621 practical_mendel[78079]: [
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:    {
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "available": false,
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "ceph_device": false,
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "lsm_data": {},
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "lvs": [],
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "path": "/dev/sr0",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "rejected_reasons": [
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "Insufficient space (<5GB)",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "Has a FileSystem"
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        ],
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        "sys_api": {
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "actuators": null,
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "device_nodes": "sr0",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "devname": "sr0",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "human_readable_size": "482.00 KB",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "id_bus": "ata",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "model": "QEMU DVD-ROM",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "nr_requests": "2",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "parent": "/dev/sr0",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "partitions": {},
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "path": "/dev/sr0",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "removable": "1",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "rev": "2.5+",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "ro": "0",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "rotational": "1",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "sas_address": "",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "sas_device_handle": "",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "scheduler_mode": "mq-deadline",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "sectors": 0,
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "sectorsize": "2048",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "size": 493568.0,
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "support_discard": "2048",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "type": "disk",
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:            "vendor": "QEMU"
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:        }
Jan 31 02:21:05 np0005603621 practical_mendel[78079]:    }
Jan 31 02:21:05 np0005603621 practical_mendel[78079]: ]
Jan 31 02:21:05 np0005603621 systemd[1]: libpod-302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7.scope: Deactivated successfully.
Jan 31 02:21:05 np0005603621 systemd[1]: libpod-302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7.scope: Consumed 1.058s CPU time.
Jan 31 02:21:05 np0005603621 podman[78063]: 2026-01-31 07:21:05.305014129 +0000 UTC m=+1.248554560 container died 302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendel, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2276968259' entity='client.admin' 
Jan 31 02:21:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5d7f0c287391b7f158d58705a340c0481b52ad17b3d4d5558ec2216a3c655ff9-merged.mount: Deactivated successfully.
Jan 31 02:21:05 np0005603621 systemd[1]: libpod-a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231.scope: Deactivated successfully.
Jan 31 02:21:05 np0005603621 podman[78063]: 2026-01-31 07:21:05.652832305 +0000 UTC m=+1.596372736 container remove 302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:05 np0005603621 podman[78110]: 2026-01-31 07:21:05.659058701 +0000 UTC m=+1.151463376 container died a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231 (image=quay.io/ceph/ceph:v18, name=loving_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-18a402e10f858f6a81b45350712b0a29d59343a484769c7d1f3b2995b6baec3c-merged.mount: Deactivated successfully.
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:05 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:21:05 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:21:05 np0005603621 podman[78110]: 2026-01-31 07:21:05.906468119 +0000 UTC m=+1.398872794 container remove a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231 (image=quay.io/ceph/ceph:v18, name=loving_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:21:05 np0005603621 systemd[1]: libpod-conmon-a7c09cfd81c7b882e0cd6fe726e103cae4d897fd4dd8c70b00ff05e4ecec2231.scope: Deactivated successfully.
Jan 31 02:21:05 np0005603621 systemd[1]: libpod-conmon-302b3c163f0ab258c31b9fc7f433ca620ddb64895d8d3030bb77012439da6cb7.scope: Deactivated successfully.
Jan 31 02:21:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/2276968259' entity='client.admin' 
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:21:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:06 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:21:06 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:21:06 np0005603621 ansible-async_wrapper.py[79822]: Invoked with j503163952789 30 /home/zuul/.ansible/tmp/ansible-tmp-1769844066.2570648-37380-70508052480913/AnsiballZ_command.py _
Jan 31 02:21:06 np0005603621 ansible-async_wrapper.py[79911]: Starting module and watcher
Jan 31 02:21:06 np0005603621 ansible-async_wrapper.py[79911]: Start watching 79914 (30)
Jan 31 02:21:06 np0005603621 ansible-async_wrapper.py[79914]: Start module (79914)
Jan 31 02:21:06 np0005603621 ansible-async_wrapper.py[79822]: Return async_wrapper task started.
Jan 31 02:21:06 np0005603621 python3[79919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:06 np0005603621 podman[80003]: 2026-01-31 07:21:06.98842219 +0000 UTC m=+0.047786988 container create c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5 (image=quay.io/ceph/ceph:v18, name=cool_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:07 np0005603621 systemd[1]: Started libpod-conmon-c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5.scope.
Jan 31 02:21:07 np0005603621 podman[80003]: 2026-01-31 07:21:06.965944541 +0000 UTC m=+0.025309349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afeabc91a997876d1ee8d994b75ffcaa07b229950662891b78f981ef95a30272/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afeabc91a997876d1ee8d994b75ffcaa07b229950662891b78f981ef95a30272/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:07 np0005603621 podman[80003]: 2026-01-31 07:21:07.145208488 +0000 UTC m=+0.204573296 container init c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5 (image=quay.io/ceph/ceph:v18, name=cool_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:07 np0005603621 podman[80003]: 2026-01-31 07:21:07.150181435 +0000 UTC m=+0.209546213 container start c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5 (image=quay.io/ceph/ceph:v18, name=cool_dijkstra, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:07 np0005603621 podman[80003]: 2026-01-31 07:21:07.158324622 +0000 UTC m=+0.217689400 container attach c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5 (image=quay.io/ceph/ceph:v18, name=cool_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 31 02:21:07 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:21:07 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:21:07 np0005603621 ceph-mon[74394]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:21:07 np0005603621 ceph-mon[74394]: Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:21:07 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 02:21:07 np0005603621 cool_dijkstra[80091]: 
Jan 31 02:21:07 np0005603621 cool_dijkstra[80091]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 02:21:07 np0005603621 systemd[1]: libpod-c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5.scope: Deactivated successfully.
Jan 31 02:21:07 np0005603621 podman[80003]: 2026-01-31 07:21:07.726102839 +0000 UTC m=+0.785467627 container died c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5 (image=quay.io/ceph/ceph:v18, name=cool_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:21:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-afeabc91a997876d1ee8d994b75ffcaa07b229950662891b78f981ef95a30272-merged.mount: Deactivated successfully.
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:21:08 np0005603621 podman[80003]: 2026-01-31 07:21:08.202322297 +0000 UTC m=+1.261687085 container remove c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5 (image=quay.io/ceph/ceph:v18, name=cool_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:21:08 np0005603621 ansible-async_wrapper.py[79914]: Module complete (79914)
Jan 31 02:21:08 np0005603621 systemd[1]: libpod-conmon-c834532a2de6aa52c5b452a0d8a102272a2acc23c2fa1536f1882c271b640aa5.scope: Deactivated successfully.
Jan 31 02:21:08 np0005603621 python3[80727]: ansible-ansible.legacy.async_status Invoked with jid=j503163952789.79822 mode=status _async_dir=/root/.ansible_async
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:21:08 np0005603621 python3[80946]: ansible-ansible.legacy.async_status Invoked with jid=j503163952789.79822 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:08 np0005603621 python3[81247]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:09 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 87b1155c-9d71-4282-93ad-cd67a26d217c (Updating crash deployment (+1 -> 1))
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:09 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 31 02:21:09 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 31 02:21:09 np0005603621 python3[81355]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:09 np0005603621 podman[81383]: 2026-01-31 07:21:09.412484345 +0000 UTC m=+0.056535296 container create 722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6 (image=quay.io/ceph/ceph:v18, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:21:09 np0005603621 podman[81383]: 2026-01-31 07:21:09.373116692 +0000 UTC m=+0.017167663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:09 np0005603621 systemd[1]: Started libpod-conmon-722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6.scope.
Jan 31 02:21:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4a65190cd6059528d5161bd85b155c951436be6b73a96369a67d64c8ce3e00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4a65190cd6059528d5161bd85b155c951436be6b73a96369a67d64c8ce3e00/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4a65190cd6059528d5161bd85b155c951436be6b73a96369a67d64c8ce3e00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:21:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:21:10 np0005603621 podman[81383]: 2026-01-31 07:21:10.055098572 +0000 UTC m=+0.699149533 container init 722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6 (image=quay.io/ceph/ceph:v18, name=cool_shaw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:21:10 np0005603621 podman[81383]: 2026-01-31 07:21:10.059696367 +0000 UTC m=+0.703747308 container start 722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6 (image=quay.io/ceph/ceph:v18, name=cool_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:10 np0005603621 podman[81383]: 2026-01-31 07:21:10.244332174 +0000 UTC m=+0.888383155 container attach 722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6 (image=quay.io/ceph/ceph:v18, name=cool_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:21:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:10 np0005603621 podman[81436]: 2026-01-31 07:21:10.314077075 +0000 UTC m=+0.024474014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:10 np0005603621 podman[81436]: 2026-01-31 07:21:10.544418433 +0000 UTC m=+0.254815332 container create ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:10 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 02:21:10 np0005603621 cool_shaw[81431]: 
Jan 31 02:21:10 np0005603621 cool_shaw[81431]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 02:21:10 np0005603621 systemd[1]: libpod-722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6.scope: Deactivated successfully.
Jan 31 02:21:10 np0005603621 podman[81383]: 2026-01-31 07:21:10.721851493 +0000 UTC m=+1.365902424 container died 722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6 (image=quay.io/ceph/ceph:v18, name=cool_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:21:10 np0005603621 systemd[1]: Started libpod-conmon-ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0.scope.
Jan 31 02:21:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:11 np0005603621 ceph-mon[74394]: Deploying daemon crash.compute-0 on compute-0
Jan 31 02:21:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aa4a65190cd6059528d5161bd85b155c951436be6b73a96369a67d64c8ce3e00-merged.mount: Deactivated successfully.
Jan 31 02:21:11 np0005603621 podman[81471]: 2026-01-31 07:21:11.704424539 +0000 UTC m=+1.004847180 container remove 722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6 (image=quay.io/ceph/ceph:v18, name=cool_shaw, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:21:11 np0005603621 systemd[1]: libpod-conmon-722b9d23b214544141d68a0d5c838cf38b5d52f5b9fc836e58575242d5a3cce6.scope: Deactivated successfully.
Jan 31 02:21:11 np0005603621 ansible-async_wrapper.py[79911]: Done in kid B.
Jan 31 02:21:11 np0005603621 podman[81436]: 2026-01-31 07:21:11.851383786 +0000 UTC m=+1.561780685 container init ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:11 np0005603621 podman[81436]: 2026-01-31 07:21:11.857139058 +0000 UTC m=+1.567535957 container start ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:21:11 np0005603621 vibrant_matsumoto[81486]: 167 167
Jan 31 02:21:11 np0005603621 systemd[1]: libpod-ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0.scope: Deactivated successfully.
Jan 31 02:21:11 np0005603621 podman[81436]: 2026-01-31 07:21:11.915814739 +0000 UTC m=+1.626211648 container attach ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_matsumoto, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:11 np0005603621 podman[81436]: 2026-01-31 07:21:11.916342766 +0000 UTC m=+1.626739675 container died ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-748cd260929feb8d4437516b58f5558199fb16ffcc956e1667658ee15f7990a9-merged.mount: Deactivated successfully.
Jan 31 02:21:12 np0005603621 python3[81531]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:12 np0005603621 podman[81436]: 2026-01-31 07:21:12.258004856 +0000 UTC m=+1.968401755 container remove ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:21:12 np0005603621 systemd[1]: libpod-conmon-ce63e767d8296f724596495d53ca472d3c378b0c98cc70b8911029bb7a061bc0.scope: Deactivated successfully.
Jan 31 02:21:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:12 np0005603621 podman[81534]: 2026-01-31 07:21:12.269695766 +0000 UTC m=+0.134165464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:12 np0005603621 podman[81534]: 2026-01-31 07:21:12.496544125 +0000 UTC m=+0.361013793 container create 0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242 (image=quay.io/ceph/ceph:v18, name=inspiring_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:21:12 np0005603621 systemd[1]: Started libpod-conmon-0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242.scope.
Jan 31 02:21:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5110b5585ef94305753f2e47d4a71a8fc7eb88989b9ff9e4c0ae829f806b0d1b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5110b5585ef94305753f2e47d4a71a8fc7eb88989b9ff9e4c0ae829f806b0d1b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5110b5585ef94305753f2e47d4a71a8fc7eb88989b9ff9e4c0ae829f806b0d1b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:12 np0005603621 podman[81534]: 2026-01-31 07:21:12.757682545 +0000 UTC m=+0.622152243 container init 0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242 (image=quay.io/ceph/ceph:v18, name=inspiring_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:21:12 np0005603621 podman[81534]: 2026-01-31 07:21:12.763127017 +0000 UTC m=+0.627596685 container start 0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242 (image=quay.io/ceph/ceph:v18, name=inspiring_driscoll, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:21:12 np0005603621 systemd[1]: Reloading.
Jan 31 02:21:12 np0005603621 podman[81534]: 2026-01-31 07:21:12.891446896 +0000 UTC m=+0.755916564 container attach 0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242 (image=quay.io/ceph/ceph:v18, name=inspiring_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:21:12 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:21:12 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:21:13 np0005603621 systemd[1]: Reloading.
Jan 31 02:21:13 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:21:13 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:21:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 31 02:21:13 np0005603621 systemd[1]: Starting Ceph crash.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:21:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1449528989' entity='client.admin' 
Jan 31 02:21:13 np0005603621 systemd[1]: libpod-0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242.scope: Deactivated successfully.
Jan 31 02:21:13 np0005603621 podman[81534]: 2026-01-31 07:21:13.380129817 +0000 UTC m=+1.244599485 container died 0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242 (image=quay.io/ceph/ceph:v18, name=inspiring_driscoll, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:21:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5110b5585ef94305753f2e47d4a71a8fc7eb88989b9ff9e4c0ae829f806b0d1b-merged.mount: Deactivated successfully.
Jan 31 02:21:13 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1449528989' entity='client.admin' 
Jan 31 02:21:13 np0005603621 podman[81534]: 2026-01-31 07:21:13.927205771 +0000 UTC m=+1.791675479 container remove 0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242 (image=quay.io/ceph/ceph:v18, name=inspiring_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:13 np0005603621 systemd[1]: libpod-conmon-0e7f76b3698ca9a36adffd91db9c008a1b4c607b91cafa8fc713409edd78b242.scope: Deactivated successfully.
Jan 31 02:21:14 np0005603621 podman[81712]: 2026-01-31 07:21:14.064554485 +0000 UTC m=+0.021991265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:14 np0005603621 podman[81712]: 2026-01-31 07:21:14.164676174 +0000 UTC m=+0.122112934 container create 287b42eb8058430f66d49fad6a858b93e85e13d224b0307dc6db988fdd517c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:14 np0005603621 python3[81749]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8542ac82b5c85be570ed379312009f278265d22f19e187c58c78e043bbcbb587/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8542ac82b5c85be570ed379312009f278265d22f19e187c58c78e043bbcbb587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8542ac82b5c85be570ed379312009f278265d22f19e187c58c78e043bbcbb587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8542ac82b5c85be570ed379312009f278265d22f19e187c58c78e043bbcbb587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 podman[81750]: 2026-01-31 07:21:14.318077515 +0000 UTC m=+0.070708032 container create 21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba (image=quay.io/ceph/ceph:v18, name=cranky_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:21:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:14 np0005603621 systemd[1]: Started libpod-conmon-21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba.scope.
Jan 31 02:21:14 np0005603621 podman[81750]: 2026-01-31 07:21:14.273350873 +0000 UTC m=+0.025981370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b1beaae696d9efe23df2aba123a7cdd08874bc1659a7f9f3e5ecd20958f2be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b1beaae696d9efe23df2aba123a7cdd08874bc1659a7f9f3e5ecd20958f2be/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b1beaae696d9efe23df2aba123a7cdd08874bc1659a7f9f3e5ecd20958f2be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:14 np0005603621 podman[81712]: 2026-01-31 07:21:14.376055565 +0000 UTC m=+0.333492415 container init 287b42eb8058430f66d49fad6a858b93e85e13d224b0307dc6db988fdd517c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:21:14 np0005603621 podman[81712]: 2026-01-31 07:21:14.382801337 +0000 UTC m=+0.340238137 container start 287b42eb8058430f66d49fad6a858b93e85e13d224b0307dc6db988fdd517c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:21:14 np0005603621 bash[81712]: 287b42eb8058430f66d49fad6a858b93e85e13d224b0307dc6db988fdd517c44
Jan 31 02:21:14 np0005603621 systemd[1]: Started Ceph crash.compute-0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:21:14 np0005603621 podman[81750]: 2026-01-31 07:21:14.394176456 +0000 UTC m=+0.146806953 container init 21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba (image=quay.io/ceph/ceph:v18, name=cranky_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:21:14 np0005603621 podman[81750]: 2026-01-31 07:21:14.399287387 +0000 UTC m=+0.151917874 container start 21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba (image=quay.io/ceph/ceph:v18, name=cranky_lewin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:14 np0005603621 podman[81750]: 2026-01-31 07:21:14.404241054 +0000 UTC m=+0.156871551 container attach 21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba (image=quay.io/ceph/ceph:v18, name=cranky_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:14 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 87b1155c-9d71-4282-93ad-cd67a26d217c (Updating crash deployment (+1 -> 1))
Jan 31 02:21:14 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 87b1155c-9d71-4282-93ad-cd67a26d217c (Updating crash deployment (+1 -> 1)) in 6 seconds
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 063d9236-dad2-4c5b-91b8-67a2b0e7c2b3 does not exist
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5fd7aa5a-025b-4cdf-87b6-763d2c5032bf does not exist
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 31 02:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: 2026-01-31T07:21:14.790+0000 7f699aeb0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: 2026-01-31T07:21:14.790+0000 7f699aeb0640 -1 AuthRegistry(0x7f6994066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: 2026-01-31T07:21:14.791+0000 7f699aeb0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: 2026-01-31T07:21:14.791+0000 7f699aeb0640 -1 AuthRegistry(0x7f699aeaf000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: 2026-01-31T07:21:14.792+0000 7f6998c25640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: 2026-01-31T07:21:14.792+0000 7f699aeb0640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 31 02:21:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/498822225' entity='client.admin' 
Jan 31 02:21:15 np0005603621 systemd[1]: libpod-21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba.scope: Deactivated successfully.
Jan 31 02:21:15 np0005603621 podman[81750]: 2026-01-31 07:21:15.202088331 +0000 UTC m=+0.954718818 container died 21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba (image=quay.io/ceph/ceph:v18, name=cranky_lewin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:21:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c4b1beaae696d9efe23df2aba123a7cdd08874bc1659a7f9f3e5ecd20958f2be-merged.mount: Deactivated successfully.
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:15 np0005603621 podman[81750]: 2026-01-31 07:21:15.52940076 +0000 UTC m=+1.282031247 container remove 21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba (image=quay.io/ceph/ceph:v18, name=cranky_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:15 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/498822225' entity='client.admin' 
Jan 31 02:21:15 np0005603621 systemd[1]: libpod-conmon-21d80d2b04196924f0e945437e8333bb418d30efea906571a5a1d2a26ef9a3ba.scope: Deactivated successfully.
Jan 31 02:21:15 np0005603621 podman[82042]: 2026-01-31 07:21:15.802657982 +0000 UTC m=+0.480425681 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:21:15 np0005603621 podman[82042]: 2026-01-31 07:21:15.891679041 +0000 UTC m=+0.569446710 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:21:15 np0005603621 python3[82081]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:16 np0005603621 podman[82098]: 2026-01-31 07:21:16.017656346 +0000 UTC m=+0.095871686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:16 np0005603621 podman[82098]: 2026-01-31 07:21:16.154517615 +0000 UTC m=+0.232732925 container create f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f (image=quay.io/ceph/ceph:v18, name=friendly_easley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:16 np0005603621 systemd[1]: Started libpod-conmon-f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f.scope.
Jan 31 02:21:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528b6ed6c91b96afc4359b8bd73c2c00190ac0b8c025a95733084423c5ed17eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528b6ed6c91b96afc4359b8bd73c2c00190ac0b8c025a95733084423c5ed17eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528b6ed6c91b96afc4359b8bd73c2c00190ac0b8c025a95733084423c5ed17eb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:16 np0005603621 podman[82098]: 2026-01-31 07:21:16.232465125 +0000 UTC m=+0.310680455 container init f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f (image=quay.io/ceph/ceph:v18, name=friendly_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:21:16 np0005603621 podman[82098]: 2026-01-31 07:21:16.237621668 +0000 UTC m=+0.315836978 container start f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f (image=quay.io/ceph/ceph:v18, name=friendly_easley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:21:16 np0005603621 podman[82098]: 2026-01-31 07:21:16.242027296 +0000 UTC m=+0.320242626 container attach f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f (image=quay.io/ceph/ceph:v18, name=friendly_easley, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d701e1f9-7ea8-4043-b1ef-3214e6474c61 does not exist
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d43c9536-fa1e-46ab-87c6-207261a50b39 does not exist
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4e0d53ba-a57b-4104-a6c4-ad0ebd5961a5 does not exist
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2717394701' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:21:16 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/2717394701' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.406530963 +0000 UTC m=+0.032391672 container create 2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:17 np0005603621 systemd[1]: Started libpod-conmon-2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81.scope.
Jan 31 02:21:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.464699469 +0000 UTC m=+0.090560198 container init 2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.471233615 +0000 UTC m=+0.097094324 container start 2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:17 np0005603621 suspicious_knuth[82358]: 167 167
Jan 31 02:21:17 np0005603621 systemd[1]: libpod-2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81.scope: Deactivated successfully.
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.474686984 +0000 UTC m=+0.100547743 container attach 2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.475290754 +0000 UTC m=+0.101151473 container died 2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.391815829 +0000 UTC m=+0.017676558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fd2e8d44c4493305b4b1707ecf0548b8c965160a723371b0523b375cf191914a-merged.mount: Deactivated successfully.
Jan 31 02:21:17 np0005603621 podman[82341]: 2026-01-31 07:21:17.522317938 +0000 UTC m=+0.148178657 container remove 2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_knuth, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:21:17 np0005603621 systemd[1]: libpod-conmon-2968b61578446645c0df79b6f0a3ec36a25b0d727c8ff96fdb0acb54a951bb81.scope: Deactivated successfully.
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:17 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ddmhwk (unknown last config time)...
Jan 31 02:21:17 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ddmhwk (unknown last config time)...
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ddmhwk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ddmhwk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:17 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ddmhwk on compute-0
Jan 31 02:21:17 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ddmhwk on compute-0
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2717394701' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 31 02:21:17 np0005603621 friendly_easley[82133]: set require_min_compat_client to mimic
Jan 31 02:21:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 31 02:21:17 np0005603621 systemd[1]: libpod-f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f.scope: Deactivated successfully.
Jan 31 02:21:17 np0005603621 podman[82098]: 2026-01-31 07:21:17.839512616 +0000 UTC m=+1.917727956 container died f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f (image=quay.io/ceph/ceph:v18, name=friendly_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:21:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-528b6ed6c91b96afc4359b8bd73c2c00190ac0b8c025a95733084423c5ed17eb-merged.mount: Deactivated successfully.
Jan 31 02:21:17 np0005603621 podman[82098]: 2026-01-31 07:21:17.875774631 +0000 UTC m=+1.953989941 container remove f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f (image=quay.io/ceph/ceph:v18, name=friendly_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:21:17 np0005603621 systemd[1]: libpod-conmon-f7fa77dca28e33124ddcd111c7cdeba2b48b8dd91c1043912a9a37c8008e330f.scope: Deactivated successfully.
Jan 31 02:21:17 np0005603621 podman[82509]: 2026-01-31 07:21:17.973168825 +0000 UTC m=+0.033928652 container create 6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:18 np0005603621 systemd[1]: Started libpod-conmon-6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115.scope.
Jan 31 02:21:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:18 np0005603621 podman[82509]: 2026-01-31 07:21:18.018617459 +0000 UTC m=+0.079377296 container init 6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:18 np0005603621 podman[82509]: 2026-01-31 07:21:18.023774872 +0000 UTC m=+0.084534699 container start 6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:21:18 np0005603621 elastic_dirac[82526]: 167 167
Jan 31 02:21:18 np0005603621 systemd[1]: libpod-6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115.scope: Deactivated successfully.
Jan 31 02:21:18 np0005603621 conmon[82526]: conmon 6e6c9a85b4b6aad7f650 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115.scope/container/memory.events
Jan 31 02:21:18 np0005603621 podman[82509]: 2026-01-31 07:21:18.026811357 +0000 UTC m=+0.087571214 container attach 6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:21:18 np0005603621 podman[82509]: 2026-01-31 07:21:18.027516949 +0000 UTC m=+0.088276776 container died 6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:21:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aab8c016b07ac45b66627455b548f21a1685854c503b1dc1e429cfd873159696-merged.mount: Deactivated successfully.
Jan 31 02:21:18 np0005603621 podman[82509]: 2026-01-31 07:21:17.957497829 +0000 UTC m=+0.018257686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:18 np0005603621 podman[82509]: 2026-01-31 07:21:18.06269493 +0000 UTC m=+0.123454797 container remove 6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dirac, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:18 np0005603621 systemd[1]: libpod-conmon-6e6c9a85b4b6aad7f65093ce7bd2381d068e3fcac22e6be29e93bb82c8ba5115.scope: Deactivated successfully.
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cda4f45d-247c-4d05-8a94-ff2aecb992c4 does not exist
Jan 31 02:21:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3c7108cb-8116-4aa3-8c2b-c7bcca1a584d does not exist
Jan 31 02:21:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 622ab5ac-ecc3-410a-86c4-41c3ac37fb7b does not exist
Jan 31 02:21:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:18 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 1 completed events
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 python3[82619]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:18 np0005603621 podman[82620]: 2026-01-31 07:21:18.526186365 +0000 UTC m=+0.053931073 container create ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c (image=quay.io/ceph/ceph:v18, name=cranky_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:18 np0005603621 systemd[1]: Started libpod-conmon-ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c.scope.
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ece02ee16896ca6bbf80643a3856348d874979fc246e1a10714372c5f3efeec/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ece02ee16896ca6bbf80643a3856348d874979fc246e1a10714372c5f3efeec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ece02ee16896ca6bbf80643a3856348d874979fc246e1a10714372c5f3efeec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: Reconfiguring mgr.compute-0.ddmhwk (unknown last config time)...
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ddmhwk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: Reconfiguring daemon mgr.compute-0.ddmhwk on compute-0
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/2717394701' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:18 np0005603621 podman[82620]: 2026-01-31 07:21:18.602211214 +0000 UTC m=+0.129955952 container init ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c (image=quay.io/ceph/ceph:v18, name=cranky_albattani, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:21:18 np0005603621 podman[82620]: 2026-01-31 07:21:18.510389287 +0000 UTC m=+0.038134025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:18 np0005603621 podman[82620]: 2026-01-31 07:21:18.606929263 +0000 UTC m=+0.134673971 container start ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c (image=quay.io/ceph/ceph:v18, name=cranky_albattani, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:18 np0005603621 podman[82620]: 2026-01-31 07:21:18.611224629 +0000 UTC m=+0.138969357 container attach ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c (image=quay.io/ceph/ceph:v18, name=cranky_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:21:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Added host compute-0
Jan 31 02:21:19 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4521b7b9-79b7-4943-b7bc-95a66c678e58 does not exist
Jan 31 02:21:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fddd3bc9-d5b4-42f7-a81a-915efb1c51a8 does not exist
Jan 31 02:21:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9ef767d2-fb99-4efd-a9c7-91585846f954 does not exist
Jan 31 02:21:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:20 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 31 02:21:20 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 31 02:21:21 np0005603621 ceph-mon[74394]: Added host compute-0
Jan 31 02:21:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:22 np0005603621 ceph-mon[74394]: Deploying cephadm binary to compute-1
Jan 31 02:21:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:24 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Added host compute-1
Jan 31 02:21:24 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 31 02:21:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:25 np0005603621 ceph-mon[74394]: Added host compute-1
Jan 31 02:21:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:25 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 31 02:21:25 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 31 02:21:26 np0005603621 ceph-mon[74394]: Deploying cephadm binary to compute-2
Jan 31 02:21:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Added host compute-2
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:29 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 cranky_albattani[82635]: Added host 'compute-0' with addr '192.168.122.100'
Jan 31 02:21:29 np0005603621 cranky_albattani[82635]: Added host 'compute-1' with addr '192.168.122.101'
Jan 31 02:21:29 np0005603621 cranky_albattani[82635]: Added host 'compute-2' with addr '192.168.122.102'
Jan 31 02:21:29 np0005603621 cranky_albattani[82635]: Scheduled mon update...
Jan 31 02:21:29 np0005603621 cranky_albattani[82635]: Scheduled mgr update...
Jan 31 02:21:29 np0005603621 cranky_albattani[82635]: Scheduled osd.default_drive_group update...
Jan 31 02:21:29 np0005603621 systemd[1]: libpod-ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c.scope: Deactivated successfully.
Jan 31 02:21:29 np0005603621 podman[82620]: 2026-01-31 07:21:29.118808926 +0000 UTC m=+10.646553654 container died ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c (image=quay.io/ceph/ceph:v18, name=cranky_albattani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:21:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9ece02ee16896ca6bbf80643a3856348d874979fc246e1a10714372c5f3efeec-merged.mount: Deactivated successfully.
Jan 31 02:21:29 np0005603621 podman[82620]: 2026-01-31 07:21:29.281267412 +0000 UTC m=+10.809012120 container remove ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c (image=quay.io/ceph/ceph:v18, name=cranky_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:21:29 np0005603621 systemd[1]: libpod-conmon-ee1856b2a45b4aed59f5538d08cbeb417da7d826e6da0befcf294e07635bc55c.scope: Deactivated successfully.
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:29 np0005603621 python3[82869]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:21:29 np0005603621 podman[82871]: 2026-01-31 07:21:29.886139378 +0000 UTC m=+0.046226059 container create 9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1 (image=quay.io/ceph/ceph:v18, name=adoring_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:29 np0005603621 systemd[1]: Started libpod-conmon-9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1.scope.
Jan 31 02:21:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80269474d6b27a127715c4e9abb29053147729bbf70d8f5d953658d33ce518b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80269474d6b27a127715c4e9abb29053147729bbf70d8f5d953658d33ce518b6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80269474d6b27a127715c4e9abb29053147729bbf70d8f5d953658d33ce518b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:29 np0005603621 podman[82871]: 2026-01-31 07:21:29.949493618 +0000 UTC m=+0.109580329 container init 9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1 (image=quay.io/ceph/ceph:v18, name=adoring_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:29 np0005603621 podman[82871]: 2026-01-31 07:21:29.954814276 +0000 UTC m=+0.114900997 container start 9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1 (image=quay.io/ceph/ceph:v18, name=adoring_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:29 np0005603621 podman[82871]: 2026-01-31 07:21:29.958549864 +0000 UTC m=+0.118636555 container attach 9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1 (image=quay.io/ceph/ceph:v18, name=adoring_hermann, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:29 np0005603621 podman[82871]: 2026-01-31 07:21:29.866385756 +0000 UTC m=+0.026472457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:21:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: Added host compute-2
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 02:21:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464336814' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 02:21:30 np0005603621 adoring_hermann[82887]: 
Jan 31 02:21:30 np0005603621 adoring_hermann[82887]: {"fsid":"2f5ab832-5f2e-5a84-bd93-cf8bab960ee2","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":102,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-31T07:19:45.660125+0000","services":{}},"progress_events":{}}
Jan 31 02:21:30 np0005603621 systemd[1]: libpod-9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1.scope: Deactivated successfully.
Jan 31 02:21:30 np0005603621 podman[82912]: 2026-01-31 07:21:30.593271353 +0000 UTC m=+0.021943603 container died 9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1 (image=quay.io/ceph/ceph:v18, name=adoring_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:21:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-80269474d6b27a127715c4e9abb29053147729bbf70d8f5d953658d33ce518b6-merged.mount: Deactivated successfully.
Jan 31 02:21:30 np0005603621 podman[82912]: 2026-01-31 07:21:30.632988096 +0000 UTC m=+0.061660316 container remove 9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1 (image=quay.io/ceph/ceph:v18, name=adoring_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:21:30 np0005603621 systemd[1]: libpod-conmon-9ad59c00291b73af0509622346f731e5c15a9b88daeab5f71dddc6dae2cd0da1.scope: Deactivated successfully.
Jan 31 02:21:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:21:38
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] No pools available
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:21:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:43 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:21:43 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:43 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:21:43 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:21:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:44 np0005603621 ceph-mon[74394]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:21:44 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:21:44 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:21:45 np0005603621 ceph-mon[74394]: Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:21:45 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:21:45 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 2db94b3a-734e-4f01-ba6c-f6ea510973f4 (Updating crash deployment (+1 -> 2))
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:21:46.462+0000 7f6427e27640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: service_name: mon
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: placement:
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  hosts:
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  - compute-0
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  - compute-1
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  - compute-2
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:21:46.464+0000 7f6427e27640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: service_name: mgr
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: placement:
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  hosts:
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  - compute-0
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  - compute-1
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]:  - compute-2
Jan 31 02:21:46 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 31 02:21:46 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:21:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:21:47 np0005603621 ceph-mon[74394]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 31 02:21:47 np0005603621 ceph-mon[74394]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 31 02:21:47 np0005603621 ceph-mon[74394]: Deploying daemon crash.compute-1 on compute-1
Jan 31 02:21:47 np0005603621 ceph-mon[74394]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 31 02:21:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 2db94b3a-734e-4f01-ba6c-f6ea510973f4 (Updating crash deployment (+1 -> 2))
Jan 31 02:21:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 2db94b3a-734e-4f01-ba6c-f6ea510973f4 (Updating crash deployment (+1 -> 2)) in 2 seconds
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.188546493 +0000 UTC m=+0.059851299 container create 0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:49 np0005603621 systemd[1]: Started libpod-conmon-0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc.scope.
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.15242177 +0000 UTC m=+0.023726576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.277697325 +0000 UTC m=+0.149002141 container init 0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shamir, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.281501001 +0000 UTC m=+0.152805807 container start 0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:49 np0005603621 thirsty_shamir[83084]: 167 167
Jan 31 02:21:49 np0005603621 systemd[1]: libpod-0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc.scope: Deactivated successfully.
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.284261775 +0000 UTC m=+0.155566591 container attach 0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.285508213 +0000 UTC m=+0.156813029 container died 0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:21:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5b6b7fa43330ab201ebaaa6d34db6872001617a3e4bf4ddfb626263689d25ce7-merged.mount: Deactivated successfully.
Jan 31 02:21:49 np0005603621 podman[83067]: 2026-01-31 07:21:49.318382537 +0000 UTC m=+0.189687343 container remove 0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:21:49 np0005603621 systemd[1]: libpod-conmon-0e33a4380faf2c586b5413c8e1f32df5d4aa1f810f7595562c69c87e7da931fc.scope: Deactivated successfully.
Jan 31 02:21:49 np0005603621 podman[83108]: 2026-01-31 07:21:49.442460486 +0000 UTC m=+0.038211468 container create 680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:21:49 np0005603621 systemd[1]: Started libpod-conmon-680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d.scope.
Jan 31 02:21:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a570c25af264a124b562b59635b0bee8bf3134852bd9cc17a321a3304fb7998f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a570c25af264a124b562b59635b0bee8bf3134852bd9cc17a321a3304fb7998f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a570c25af264a124b562b59635b0bee8bf3134852bd9cc17a321a3304fb7998f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a570c25af264a124b562b59635b0bee8bf3134852bd9cc17a321a3304fb7998f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a570c25af264a124b562b59635b0bee8bf3134852bd9cc17a321a3304fb7998f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:49 np0005603621 podman[83108]: 2026-01-31 07:21:49.504894943 +0000 UTC m=+0.100645905 container init 680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:49 np0005603621 podman[83108]: 2026-01-31 07:21:49.518202599 +0000 UTC m=+0.113953551 container start 680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:21:49 np0005603621 podman[83108]: 2026-01-31 07:21:49.421144615 +0000 UTC m=+0.016895587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:49 np0005603621 podman[83108]: 2026-01-31 07:21:49.521983134 +0000 UTC m=+0.117734096 container attach 680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:21:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:21:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: --> relative data size: 1.0
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 69ce1ba1-37ea-44ee-8e02-ae107b60d956
Jan 31 02:21:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956"} v 0) v1
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2942024132' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956"}]: dispatch
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2942024132' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956"}]': finished
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:21:50 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c7b96aaa-43a0-4c7e-ac49-508c01d627b5"} v 0) v1
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3660948089' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c7b96aaa-43a0-4c7e-ac49-508c01d627b5"}]: dispatch
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3660948089' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c7b96aaa-43a0-4c7e-ac49-508c01d627b5"}]': finished
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:21:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:21:50 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:21:50 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 31 02:21:50 np0005603621 lvm[83171]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:21:50 np0005603621 lvm[83171]: VG ceph_vg0 finished
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:50 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255234612' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 02:21:51 np0005603621 gallant_davinci[83124]: stderr: got monmap epoch 1
Jan 31 02:21:51 np0005603621 gallant_davinci[83124]: --> Creating keyring file for osd.0
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2886912492' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 31 02:21:51 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 31 02:21:51 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 31 02:21:51 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 69ce1ba1-37ea-44ee-8e02-ae107b60d956 --setuser ceph --setgroup ceph
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/2942024132' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956"}]: dispatch
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/2942024132' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956"}]': finished
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.101:0/3660948089' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c7b96aaa-43a0-4c7e-ac49-508c01d627b5"}]: dispatch
Jan 31 02:21:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.101:0/3660948089' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c7b96aaa-43a0-4c7e-ac49-508c01d627b5"}]': finished
Jan 31 02:21:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:52 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 02:21:52 np0005603621 ceph-mon[74394]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 31 02:21:53 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 2 completed events
Jan 31 02:21:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:21:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:53 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: stderr: 2026-01-31T07:21:51.473+0000 7f2959482740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: stderr: 2026-01-31T07:21:51.473+0000 7f2959482740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: stderr: 2026-01-31T07:21:51.473+0000 7f2959482740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: stderr: 2026-01-31T07:21:51.473+0000 7f2959482740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 02:21:53 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 31 02:21:54 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:54 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:54 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:21:54 np0005603621 gallant_davinci[83124]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 02:21:54 np0005603621 gallant_davinci[83124]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 31 02:21:54 np0005603621 gallant_davinci[83124]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 31 02:21:54 np0005603621 systemd[1]: libpod-680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d.scope: Deactivated successfully.
Jan 31 02:21:54 np0005603621 systemd[1]: libpod-680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d.scope: Consumed 2.398s CPU time.
Jan 31 02:21:54 np0005603621 podman[84082]: 2026-01-31 07:21:54.163347755 +0000 UTC m=+0.028443959 container died 680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:21:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a570c25af264a124b562b59635b0bee8bf3134852bd9cc17a321a3304fb7998f-merged.mount: Deactivated successfully.
Jan 31 02:21:54 np0005603621 podman[84082]: 2026-01-31 07:21:54.21100076 +0000 UTC m=+0.076096954 container remove 680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:54 np0005603621 systemd[1]: libpod-conmon-680da17a6c55a73c57fc843c02b932c77c592fada1399e51048c8419065bff6d.scope: Deactivated successfully.
Jan 31 02:21:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.73797205 +0000 UTC m=+0.035974169 container create 782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:21:54 np0005603621 systemd[1]: Started libpod-conmon-782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981.scope.
Jan 31 02:21:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.721986693 +0000 UTC m=+0.019988832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.821968105 +0000 UTC m=+0.119970244 container init 782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.829199727 +0000 UTC m=+0.127201836 container start 782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.833225929 +0000 UTC m=+0.131228268 container attach 782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:21:54 np0005603621 affectionate_chandrasekhar[84251]: 167 167
Jan 31 02:21:54 np0005603621 systemd[1]: libpod-782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981.scope: Deactivated successfully.
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.836166759 +0000 UTC m=+0.134168888 container died 782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-30db729835c71ba85c76cdfa5adad1095993b2920afd0f588ca0b026a0eb1516-merged.mount: Deactivated successfully.
Jan 31 02:21:54 np0005603621 podman[84235]: 2026-01-31 07:21:54.873718136 +0000 UTC m=+0.171720255 container remove 782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:21:54 np0005603621 systemd[1]: libpod-conmon-782507f1800d79f7a19164c4f95feeb196acc036e595be31d63aa27407653981.scope: Deactivated successfully.
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:55.003863169 +0000 UTC m=+0.051157592 container create 9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:21:55 np0005603621 systemd[1]: Started libpod-conmon-9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c.scope.
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:54.979467935 +0000 UTC m=+0.026762418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccfc07db9e93d85b659d11538cfdb62ab56a5b25f6e3f0630e15dbf4b31cecd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccfc07db9e93d85b659d11538cfdb62ab56a5b25f6e3f0630e15dbf4b31cecd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccfc07db9e93d85b659d11538cfdb62ab56a5b25f6e3f0630e15dbf4b31cecd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ccfc07db9e93d85b659d11538cfdb62ab56a5b25f6e3f0630e15dbf4b31cecd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:55.100088288 +0000 UTC m=+0.147382761 container init 9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:55.107122173 +0000 UTC m=+0.154416616 container start 9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_dubinsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:55.111886568 +0000 UTC m=+0.159181051 container attach 9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:55 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 31 02:21:55 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]: {
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:    "0": [
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:        {
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "devices": [
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "/dev/loop3"
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            ],
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "lv_name": "ceph_lv0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "lv_size": "7511998464",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "name": "ceph_lv0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "tags": {
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.cluster_name": "ceph",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.crush_device_class": "",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.encrypted": "0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.osd_id": "0",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.type": "block",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:                "ceph.vdo": "0"
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            },
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "type": "block",
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:            "vg_name": "ceph_vg0"
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:        }
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]:    ]
Jan 31 02:21:55 np0005603621 upbeat_dubinsky[84290]: }
Jan 31 02:21:55 np0005603621 systemd[1]: libpod-9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c.scope: Deactivated successfully.
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:55.845453187 +0000 UTC m=+0.892747590 container died 9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:21:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4ccfc07db9e93d85b659d11538cfdb62ab56a5b25f6e3f0630e15dbf4b31cecd-merged.mount: Deactivated successfully.
Jan 31 02:21:55 np0005603621 podman[84274]: 2026-01-31 07:21:55.890647747 +0000 UTC m=+0.937942150 container remove 9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_dubinsky, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:21:55 np0005603621 systemd[1]: libpod-conmon-9bd50c7aedf7c39b649e66f4efd2e5d3e6b6c85b14702ef6a396534949887c2c.scope: Deactivated successfully.
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:21:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:21:55 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 31 02:21:55 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.434109592 +0000 UTC m=+0.034386682 container create 3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:21:56 np0005603621 systemd[1]: Started libpod-conmon-3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae.scope.
Jan 31 02:21:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.496773854 +0000 UTC m=+0.097050954 container init 3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_brahmagupta, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.501513689 +0000 UTC m=+0.101790769 container start 3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_brahmagupta, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.504287575 +0000 UTC m=+0.104564655 container attach 3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_brahmagupta, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:21:56 np0005603621 systemd[1]: libpod-3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae.scope: Deactivated successfully.
Jan 31 02:21:56 np0005603621 dreamy_brahmagupta[84469]: 167 167
Jan 31 02:21:56 np0005603621 conmon[84469]: conmon 3e9d1f76400d7f871636 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae.scope/container/memory.events
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.505697997 +0000 UTC m=+0.105975077 container died 3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.419113983 +0000 UTC m=+0.019391083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bf02c8a2c403ca0f9744f45f2c7246b1bcf0067c3130103d1c60da85956841cd-merged.mount: Deactivated successfully.
Jan 31 02:21:56 np0005603621 podman[84453]: 2026-01-31 07:21:56.539270152 +0000 UTC m=+0.139547232 container remove 3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:56 np0005603621 systemd[1]: libpod-conmon-3e9d1f76400d7f871636c6d03561de9186d1271f0c5d8f1015beba93d5acebae.scope: Deactivated successfully.
Jan 31 02:21:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 02:21:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 02:21:56 np0005603621 podman[84501]: 2026-01-31 07:21:56.719196607 +0000 UTC m=+0.033576817 container create 4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:21:56 np0005603621 systemd[1]: Started libpod-conmon-4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32.scope.
Jan 31 02:21:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9217054875d1b967a1cf09b8edef24b8fcb1dfc902f38ef737b2c6b359b16c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9217054875d1b967a1cf09b8edef24b8fcb1dfc902f38ef737b2c6b359b16c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9217054875d1b967a1cf09b8edef24b8fcb1dfc902f38ef737b2c6b359b16c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9217054875d1b967a1cf09b8edef24b8fcb1dfc902f38ef737b2c6b359b16c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9217054875d1b967a1cf09b8edef24b8fcb1dfc902f38ef737b2c6b359b16c0/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:56 np0005603621 podman[84501]: 2026-01-31 07:21:56.79202093 +0000 UTC m=+0.106401170 container init 4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:21:56 np0005603621 podman[84501]: 2026-01-31 07:21:56.801309003 +0000 UTC m=+0.115689213 container start 4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:21:56 np0005603621 podman[84501]: 2026-01-31 07:21:56.706583371 +0000 UTC m=+0.020963601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:56 np0005603621 podman[84501]: 2026-01-31 07:21:56.804390357 +0000 UTC m=+0.118770597 container attach 4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:21:57 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test[84518]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 31 02:21:57 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test[84518]:                            [--no-systemd] [--no-tmpfs]
Jan 31 02:21:57 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test[84518]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 31 02:21:57 np0005603621 systemd[1]: libpod-4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32.scope: Deactivated successfully.
Jan 31 02:21:57 np0005603621 podman[84501]: 2026-01-31 07:21:57.443044468 +0000 UTC m=+0.757424678 container died 4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:21:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b9217054875d1b967a1cf09b8edef24b8fcb1dfc902f38ef737b2c6b359b16c0-merged.mount: Deactivated successfully.
Jan 31 02:21:57 np0005603621 podman[84501]: 2026-01-31 07:21:57.505669691 +0000 UTC m=+0.820049901 container remove 4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate-test, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:21:57 np0005603621 systemd[1]: libpod-conmon-4b502c55bc1fc421ff389dd079ed0681ce3c2817a3cce5ffa6cb16cde857fe32.scope: Deactivated successfully.
Jan 31 02:21:57 np0005603621 systemd[1]: Reloading.
Jan 31 02:21:57 np0005603621 ceph-mon[74394]: Deploying daemon osd.1 on compute-1
Jan 31 02:21:57 np0005603621 ceph-mon[74394]: Deploying daemon osd.0 on compute-0
Jan 31 02:21:57 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:21:57 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:21:57 np0005603621 systemd[1]: Reloading.
Jan 31 02:21:58 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:21:58 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:21:58 np0005603621 systemd[1]: Starting Ceph osd.0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:21:58 np0005603621 podman[84677]: 2026-01-31 07:21:58.355971604 +0000 UTC m=+0.054483634 container create cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:21:58 np0005603621 podman[84677]: 2026-01-31 07:21:58.324515403 +0000 UTC m=+0.023027483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:21:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4576cfd8b09e8faaf99b5d7cfab1d7c0cb9fa161fc15301649450b47c03b54a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4576cfd8b09e8faaf99b5d7cfab1d7c0cb9fa161fc15301649450b47c03b54a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4576cfd8b09e8faaf99b5d7cfab1d7c0cb9fa161fc15301649450b47c03b54a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4576cfd8b09e8faaf99b5d7cfab1d7c0cb9fa161fc15301649450b47c03b54a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4576cfd8b09e8faaf99b5d7cfab1d7c0cb9fa161fc15301649450b47c03b54a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:21:58 np0005603621 podman[84677]: 2026-01-31 07:21:58.520802427 +0000 UTC m=+0.219314457 container init cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:21:58 np0005603621 podman[84677]: 2026-01-31 07:21:58.526425109 +0000 UTC m=+0.224937099 container start cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:21:58 np0005603621 podman[84677]: 2026-01-31 07:21:58.572000161 +0000 UTC m=+0.270512181 container attach cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:21:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 02:21:59 np0005603621 bash[84677]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:21:59 np0005603621 bash[84677]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:21:59 np0005603621 bash[84677]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:21:59 np0005603621 bash[84677]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:59 np0005603621 bash[84677]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 02:21:59 np0005603621 bash[84677]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 31 02:21:59 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate[84693]: --> ceph-volume raw activate successful for osd ID: 0
Jan 31 02:21:59 np0005603621 bash[84677]: --> ceph-volume raw activate successful for osd ID: 0
Jan 31 02:21:59 np0005603621 systemd[1]: libpod-cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4.scope: Deactivated successfully.
Jan 31 02:21:59 np0005603621 podman[84677]: 2026-01-31 07:21:59.442990956 +0000 UTC m=+1.141502956 container died cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 31 02:21:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b4576cfd8b09e8faaf99b5d7cfab1d7c0cb9fa161fc15301649450b47c03b54a-merged.mount: Deactivated successfully.
Jan 31 02:21:59 np0005603621 podman[84677]: 2026-01-31 07:21:59.49457281 +0000 UTC m=+1.193084820 container remove cfad1f35ba2e4f9ab25107c1df18574dcc7682e908244a6ab23b5a83a6f0ffd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0-activate, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:59 np0005603621 podman[84861]: 2026-01-31 07:21:59.668636265 +0000 UTC m=+0.037005591 container create af2e3815ded07052013af44e961eb30d6aebcbabf404f7e8579600fe4cb398c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:21:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8c8448a93ae2267cb316f4191314782982849de7758266ed44098b2c37ed00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8c8448a93ae2267cb316f4191314782982849de7758266ed44098b2c37ed00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8c8448a93ae2267cb316f4191314782982849de7758266ed44098b2c37ed00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8c8448a93ae2267cb316f4191314782982849de7758266ed44098b2c37ed00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b8c8448a93ae2267cb316f4191314782982849de7758266ed44098b2c37ed00/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 31 02:21:59 np0005603621 podman[84861]: 2026-01-31 07:21:59.728094381 +0000 UTC m=+0.096463727 container init af2e3815ded07052013af44e961eb30d6aebcbabf404f7e8579600fe4cb398c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:21:59 np0005603621 podman[84861]: 2026-01-31 07:21:59.733106384 +0000 UTC m=+0.101475720 container start af2e3815ded07052013af44e961eb30d6aebcbabf404f7e8579600fe4cb398c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:21:59 np0005603621 bash[84861]: af2e3815ded07052013af44e961eb30d6aebcbabf404f7e8579600fe4cb398c1
Jan 31 02:21:59 np0005603621 podman[84861]: 2026-01-31 07:21:59.655244927 +0000 UTC m=+0.023614273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:21:59 np0005603621 systemd[1]: Started Ceph osd.0 for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: pidfile_write: ignore empty --pid-file
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e13c43800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e13c43800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e13c43800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e13c43800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e14a85800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e14a85800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e14a85800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e14a85800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 02:21:59 np0005603621 ceph-osd[84880]: bdev(0x558e14a85800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:21:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e13c43800 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.277137086 +0000 UTC m=+0.080104628 container create 760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: load: jerasure load: lrc 
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.215397711 +0000 UTC m=+0.018365233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:00 np0005603621 systemd[1]: Started libpod-conmon-760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6.scope.
Jan 31 02:22:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.365246216 +0000 UTC m=+0.168213738 container init 760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.371099675 +0000 UTC m=+0.174067197 container start 760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:00 np0005603621 vibrant_elgamal[85055]: 167 167
Jan 31 02:22:00 np0005603621 systemd[1]: libpod-760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6.scope: Deactivated successfully.
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.375901762 +0000 UTC m=+0.178869344 container attach 760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.376221591 +0000 UTC m=+0.179189123 container died 760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-330b1e84b0397817107e0373a9c5d188b12a66a03196b83c7b5e81f1e45499eb-merged.mount: Deactivated successfully.
Jan 31 02:22:00 np0005603621 podman[85033]: 2026-01-31 07:22:00.413263442 +0000 UTC m=+0.216230944 container remove 760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:22:00 np0005603621 systemd[1]: libpod-conmon-760dd842a56112457602a2b19d7273d1814e75961b322693af35f40152a7c3a6.scope: Deactivated successfully.
Jan 31 02:22:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:22:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:00 np0005603621 podman[85078]: 2026-01-31 07:22:00.55106058 +0000 UTC m=+0.038920469 container create 6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 02:22:00 np0005603621 systemd[1]: Started libpod-conmon-6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121.scope.
Jan 31 02:22:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fafb7cefa40b0fbf70c65ce5a2cd14141fc4e6561f90709b2fa0ff61db05d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fafb7cefa40b0fbf70c65ce5a2cd14141fc4e6561f90709b2fa0ff61db05d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fafb7cefa40b0fbf70c65ce5a2cd14141fc4e6561f90709b2fa0ff61db05d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76fafb7cefa40b0fbf70c65ce5a2cd14141fc4e6561f90709b2fa0ff61db05d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:00 np0005603621 podman[85078]: 2026-01-31 07:22:00.622501631 +0000 UTC m=+0.110361570 container init 6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:00 np0005603621 podman[85078]: 2026-01-31 07:22:00.532997798 +0000 UTC m=+0.020857697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:00 np0005603621 podman[85078]: 2026-01-31 07:22:00.628836024 +0000 UTC m=+0.116695913 container start 6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:00 np0005603621 podman[85078]: 2026-01-31 07:22:00.633410155 +0000 UTC m=+0.121270024 container attach 6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b06c00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluefs mount
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluefs mount shared_bdev_used = 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Git sha 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: DB SUMMARY
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: DB Session ID:  2PMMSYOD5GKI7PNRNT2I
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                                     Options.env: 0x558e14ad7c70
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                                Options.info_log: 0x558e13cc0ba0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.write_buffer_manager: 0x558e14be0460
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.row_cache: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                              Options.wal_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.wal_compression: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_background_jobs: 4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Compression algorithms supported:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kZSTD supported: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kXpressCompression supported: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kZlibCompression supported: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc0600)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc05c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc05c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e13cc05c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b6c8d98f-eb86-4dea-9971-46e64504d700
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844120870613, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844120870866, "job": 1, "event": "recovery_finished"}
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: freelist init
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: freelist _read_cfg
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bluefs umount
Jan 31 02:22:00 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) close
Jan 31 02:22:00 np0005603621 python3[85129]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:00 np0005603621 podman[85325]: 2026-01-31 07:22:00.949546228 +0000 UTC m=+0.037056173 container create bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e (image=quay.io/ceph/ceph:v18, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:22:00 np0005603621 systemd[1]: Started libpod-conmon-bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e.scope.
Jan 31 02:22:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7cdacbdb6913cb938e36037ba44a54b1db7caa0b4c3d295b082e896a97eeac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7cdacbdb6913cb938e36037ba44a54b1db7caa0b4c3d295b082e896a97eeac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b7cdacbdb6913cb938e36037ba44a54b1db7caa0b4c3d295b082e896a97eeac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:01 np0005603621 podman[85325]: 2026-01-31 07:22:01.002308679 +0000 UTC m=+0.089818674 container init bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e (image=quay.io/ceph/ceph:v18, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:01 np0005603621 podman[85325]: 2026-01-31 07:22:01.006704512 +0000 UTC m=+0.094214467 container start bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e (image=quay.io/ceph/ceph:v18, name=lucid_heyrovsky, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:01 np0005603621 podman[85325]: 2026-01-31 07:22:01.011915651 +0000 UTC m=+0.099425626 container attach bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e (image=quay.io/ceph/ceph:v18, name=lucid_heyrovsky, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:22:01 np0005603621 podman[85325]: 2026-01-31 07:22:00.935294072 +0000 UTC m=+0.022804027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bdev(0x558e14b07400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluefs mount
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluefs mount shared_bdev_used = 4718592
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: RocksDB version: 7.9.2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Git sha 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: DB SUMMARY
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: DB Session ID:  2PMMSYOD5GKI7PNRNT2J
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: CURRENT file:  CURRENT
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: IDENTITY file:  IDENTITY
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                         Options.error_if_exists: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.create_if_missing: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                         Options.paranoid_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                                     Options.env: 0x558e13d02c40
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                                Options.info_log: 0x558e13cc0d80
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_file_opening_threads: 16
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                              Options.statistics: (nil)
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.use_fsync: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.max_log_file_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                         Options.allow_fallocate: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.use_direct_reads: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.create_missing_column_families: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                              Options.db_log_dir: 
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                                 Options.wal_dir: db.wal
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.advise_random_on_open: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.write_buffer_manager: 0x558e14be08c0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                            Options.rate_limiter: (nil)
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.unordered_write: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.row_cache: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                              Options.wal_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.allow_ingest_behind: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.two_write_queues: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.manual_wal_flush: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.wal_compression: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.atomic_flush: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.log_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.allow_data_in_errors: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.db_host_id: __hostname__
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_background_jobs: 4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_background_compactions: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_subcompactions: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.max_open_files: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.bytes_per_sync: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.max_background_flushes: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Compression algorithms supported:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kZSTD supported: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kXpressCompression supported: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kBZip2Compression supported: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kLZ4Compression supported: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kZlibCompression supported: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: 	kSnappyCompression supported: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558e13cb6dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:           Options.merge_operator: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.compaction_filter_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.sst_partitioner_factory: None
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558e14ad3380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x558e13cb6dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.write_buffer_size: 16777216
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.max_write_buffer_number: 64
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.compression: LZ4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.num_levels: 7
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.level: 32767
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.compression_opts.strategy: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                  Options.compression_opts.enabled: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.arena_block_size: 1048576
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.disable_auto_compactions: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.inplace_update_support: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.bloom_locality: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                    Options.max_successive_merges: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.paranoid_file_checks: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.force_consistency_checks: 1
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.report_bg_io_stats: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                               Options.ttl: 2592000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                       Options.enable_blob_files: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                           Options.min_blob_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                          Options.blob_file_size: 268435456
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb:                Options.blob_file_starting_level: 0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b6c8d98f-eb86-4dea-9971-46e64504d700
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844121157148, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844121161438, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844121, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b6c8d98f-eb86-4dea-9971-46e64504d700", "db_session_id": "2PMMSYOD5GKI7PNRNT2J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844121165293, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844121, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b6c8d98f-eb86-4dea-9971-46e64504d700", "db_session_id": "2PMMSYOD5GKI7PNRNT2J", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844121172413, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844121, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b6c8d98f-eb86-4dea-9971-46e64504d700", "db_session_id": "2PMMSYOD5GKI7PNRNT2J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844121175818, "job": 1, "event": "recovery_finished"}
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558e14a4c700
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: DB pointer 0x558e14bc9a00
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 6.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: _get_class not permitted to load lua
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: _get_class not permitted to load sdk
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: _get_class not permitted to load test_remote_reads
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 load_pgs
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 load_pgs opened 0 pgs
Jan 31 02:22:01 np0005603621 ceph-osd[84880]: osd.0 0 log_to_monitors true
Jan 31 02:22:01 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0[84876]: 2026-01-31T07:22:01.200+0000 7f3438d51740 -1 osd.0 0 log_to_monitors true
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 02:22:01 np0005603621 silly_golick[85099]: {
Jan 31 02:22:01 np0005603621 silly_golick[85099]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:22:01 np0005603621 silly_golick[85099]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:22:01 np0005603621 silly_golick[85099]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:22:01 np0005603621 silly_golick[85099]:        "osd_id": 0,
Jan 31 02:22:01 np0005603621 silly_golick[85099]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:22:01 np0005603621 silly_golick[85099]:        "type": "bluestore"
Jan 31 02:22:01 np0005603621 silly_golick[85099]:    }
Jan 31 02:22:01 np0005603621 silly_golick[85099]: }
Jan 31 02:22:01 np0005603621 systemd[1]: libpod-6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121.scope: Deactivated successfully.
Jan 31 02:22:01 np0005603621 podman[85078]: 2026-01-31 07:22:01.426682966 +0000 UTC m=+0.914542895 container died 6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:22:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-76fafb7cefa40b0fbf70c65ce5a2cd14141fc4e6561f90709b2fa0ff61db05d8-merged.mount: Deactivated successfully.
Jan 31 02:22:01 np0005603621 podman[85078]: 2026-01-31 07:22:01.477359304 +0000 UTC m=+0.965219153 container remove 6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:01 np0005603621 systemd[1]: libpod-conmon-6fd04d2d03dc8b9ba347258da6c1035161929c1640112756563427c6b0d43121.scope: Deactivated successfully.
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304009482' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 02:22:01 np0005603621 lucid_heyrovsky[85341]: 
Jan 31 02:22:01 np0005603621 lucid_heyrovsky[85341]: {"fsid":"2f5ab832-5f2e-5a84-bd93-cf8bab960ee2","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":133,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769844110,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T07:21:40.354455+0000","services":{}},"progress_events":{}}
Jan 31 02:22:01 np0005603621 systemd[1]: libpod-bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e.scope: Deactivated successfully.
Jan 31 02:22:01 np0005603621 conmon[85341]: conmon bc9569a0f2f201b7c283 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e.scope/container/memory.events
Jan 31 02:22:01 np0005603621 podman[85325]: 2026-01-31 07:22:01.598562685 +0000 UTC m=+0.686072640 container died bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e (image=quay.io/ceph/ceph:v18, name=lucid_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:22:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7b7cdacbdb6913cb938e36037ba44a54b1db7caa0b4c3d295b082e896a97eeac-merged.mount: Deactivated successfully.
Jan 31 02:22:01 np0005603621 podman[85325]: 2026-01-31 07:22:01.639097992 +0000 UTC m=+0.726607937 container remove bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e (image=quay.io/ceph/ceph:v18, name=lucid_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:22:01 np0005603621 systemd[1]: libpod-conmon-bc9569a0f2f201b7c283157333bdacc43b8e08b1d5e705734f66eaa50cb38b1e.scope: Deactivated successfully.
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:01 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:01 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 31 02:22:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0 done with init, starting boot process
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0 start_boot
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 31 02:22:02 np0005603621 ceph-osd[84880]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:02 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:02 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1347694087; not ready for session (expect reconnect)
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:02 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:03 np0005603621 podman[85840]: 2026-01-31 07:22:03.495662681 +0000 UTC m=+0.078328522 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: from='osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:03 np0005603621 podman[85861]: 2026-01-31 07:22:03.630995594 +0000 UTC m=+0.048829793 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:03 np0005603621 podman[85840]: 2026-01-31 07:22:03.641405191 +0000 UTC m=+0.224071062 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:22:03 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1347694087; not ready for session (expect reconnect)
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:03 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:03 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/785741871; not ready for session (expect reconnect)
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:03 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: from='osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:04 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1347694087; not ready for session (expect reconnect)
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:04 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:04 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/785741871; not ready for session (expect reconnect)
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:04 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.058544103 +0000 UTC m=+0.054053222 container create 706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:05 np0005603621 systemd[1]: Started libpod-conmon-706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76.scope.
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.023607496 +0000 UTC m=+0.019116625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.165822979 +0000 UTC m=+0.161332118 container init 706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.171291056 +0000 UTC m=+0.166800165 container start 706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:22:05 np0005603621 vigorous_maxwell[86213]: 167 167
Jan 31 02:22:05 np0005603621 systemd[1]: libpod-706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76.scope: Deactivated successfully.
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.180988241 +0000 UTC m=+0.176497350 container attach 706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.182123796 +0000 UTC m=+0.177632935 container died 706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:22:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6751ebf623ab63a70dac5defee1ddada1db6af22a8ae2a45ad24d3595643cd46-merged.mount: Deactivated successfully.
Jan 31 02:22:05 np0005603621 podman[86197]: 2026-01-31 07:22:05.345208236 +0000 UTC m=+0.340717345 container remove 706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:22:05 np0005603621 systemd[1]: libpod-conmon-706b951854870ff9564a48212dbec8a6b8c1740844edcc2a8d779207e5345f76.scope: Deactivated successfully.
Jan 31 02:22:05 np0005603621 podman[86238]: 2026-01-31 07:22:05.509983798 +0000 UTC m=+0.072678181 container create 4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:22:05 np0005603621 podman[86238]: 2026-01-31 07:22:05.46095394 +0000 UTC m=+0.023648353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:05 np0005603621 systemd[1]: Started libpod-conmon-4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72.scope.
Jan 31 02:22:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecf704c70c93bd9b9d25bee4c6d264d10cda19e203202f0b6e397cb8a050ba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecf704c70c93bd9b9d25bee4c6d264d10cda19e203202f0b6e397cb8a050ba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecf704c70c93bd9b9d25bee4c6d264d10cda19e203202f0b6e397cb8a050ba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecf704c70c93bd9b9d25bee4c6d264d10cda19e203202f0b6e397cb8a050ba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:05 np0005603621 podman[86238]: 2026-01-31 07:22:05.651173368 +0000 UTC m=+0.213867791 container init 4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:22:05 np0005603621 podman[86238]: 2026-01-31 07:22:05.657102959 +0000 UTC m=+0.219797342 container start 4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:05 np0005603621 podman[86238]: 2026-01-31 07:22:05.680845455 +0000 UTC m=+0.243539888 container attach 4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:05 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1347694087; not ready for session (expect reconnect)
Jan 31 02:22:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:05 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 31 02:22:05 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/785741871; not ready for session (expect reconnect)
Jan 31 02:22:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:05 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.579 iops: 8596.349 elapsed_sec: 0.349
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [WRN] : OSD bench result of 8596.349487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 0 waiting for initial osdmap
Jan 31 02:22:06 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0[84876]: 2026-01-31T07:22:06.445+0000 7f3434cd1640 -1 osd.0 0 waiting for initial osdmap
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 set_numa_affinity not setting numa affinity
Jan 31 02:22:06 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-osd-0[84876]: 2026-01-31T07:22:06.476+0000 7f34302f9640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]: [
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:    {
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "available": false,
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "ceph_device": false,
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "lsm_data": {},
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "lvs": [],
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "path": "/dev/sr0",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "rejected_reasons": [
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "Insufficient space (<5GB)",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "Has a FileSystem"
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        ],
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        "sys_api": {
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "actuators": null,
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "device_nodes": "sr0",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "devname": "sr0",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "human_readable_size": "482.00 KB",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "id_bus": "ata",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "model": "QEMU DVD-ROM",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "nr_requests": "2",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "parent": "/dev/sr0",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "partitions": {},
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "path": "/dev/sr0",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "removable": "1",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "rev": "2.5+",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "ro": "0",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "rotational": "1",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "sas_address": "",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "sas_device_handle": "",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "scheduler_mode": "mq-deadline",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "sectors": 0,
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "sectorsize": "2048",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "size": 493568.0,
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "support_discard": "2048",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "type": "disk",
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:            "vendor": "QEMU"
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:        }
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]:    }
Jan 31 02:22:06 np0005603621 wonderful_moore[86254]: ]
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-osd[84880]: osd.0 9 state: booting -> active
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087] boot
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 systemd[1]: libpod-4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72.scope: Deactivated successfully.
Jan 31 02:22:06 np0005603621 systemd[1]: libpod-4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72.scope: Consumed 1.003s CPU time.
Jan 31 02:22:06 np0005603621 podman[86238]: 2026-01-31 07:22:06.704840332 +0000 UTC m=+1.267534715 container died 4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aecf704c70c93bd9b9d25bee4c6d264d10cda19e203202f0b6e397cb8a050ba2-merged.mount: Deactivated successfully.
Jan 31 02:22:06 np0005603621 podman[86238]: 2026-01-31 07:22:06.747270277 +0000 UTC m=+1.309964650 container remove 4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:06 np0005603621 systemd[1]: libpod-conmon-4d2cb57f742dba3d5d103f713ee08ea060e3e5d6e5dfecd7b78dc95ac923ae72.scope: Deactivated successfully.
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/785741871; not ready for session (expect reconnect)
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:06 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: OSD bench result of 8596.349487 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: osd.0 [v2:192.168.122.100:6802/1347694087,v1:192.168.122.100:6803/1347694087] boot
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: Unable to set osd_memory_target on compute-0 to 134197657: error parsing value: Value '134197657' is below minimum 939524096
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:07 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:07 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/785741871; not ready for session (expect reconnect)
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:07 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] creating mgr pool
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:22:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: OSD bench result of 6163.600447 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871] boot
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 31 02:22:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: osd.1 [v2:192.168.122.101:6800/785741871,v1:192.168.122.101:6801/785741871] boot
Jan 31 02:22:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 31 02:22:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 13 GiB / 14 GiB avail
Jan 31 02:22:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 31 02:22:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 31 02:22:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 31 02:22:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 31 02:22:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] creating main.db for devicehealth
Jan 31 02:22:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 02:22:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 02:22:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 02:22:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 02:22:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 02:22:11 np0005603621 ceph-mon[74394]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 31 02:22:11 np0005603621 ceph-mon[74394]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 31 02:22:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 31 02:22:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ddmhwk(active, since 94s)
Jan 31 02:22:13 np0005603621 ceph-osd[84880]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 31 02:22:13 np0005603621 ceph-osd[84880]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 31 02:22:13 np0005603621 ceph-osd[84880]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 31 02:22:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:22:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:22:20 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:22:20 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:22:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:20 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:20 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:22:21 np0005603621 ceph-mon[74394]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:22:21 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:22:21 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:22:22 np0005603621 ceph-mon[74394]: Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:22 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:22:22 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:23 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 53f4b449-3521-444d-ba55-650e5d18d706 (Updating mon deployment (+2 -> 3))
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:23 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 31 02:22:23 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.client.admin.keyring
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:24 np0005603621 ceph-mon[74394]: Deploying daemon mon.compute-2 on compute-2
Jan 31 02:22:24 np0005603621 ceph-mon[74394]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 31 02:22:24 np0005603621 ceph-mon[74394]: Cluster is now healthy
Jan 31 02:22:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:25 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 31 02:22:25 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 31 02:22:25 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3222857715; not ready for session (expect reconnect)
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:25 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:25 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 31 02:22:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:22:26 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3222857715; not ready for session (expect reconnect)
Jan 31 02:22:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:26 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 02:22:27 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:27 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 02:22:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:27 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3222857715; not ready for session (expect reconnect)
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:27 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 02:22:28 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:28 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 02:22:28 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3222857715; not ready for session (expect reconnect)
Jan 31 02:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:28 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 02:22:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 02:22:29 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:29 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 02:22:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:29 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3222857715; not ready for session (expect reconnect)
Jan 31 02:22:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:29 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3222857715; not ready for session (expect reconnect)
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ddmhwk(active, since 112s)
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 53f4b449-3521-444d-ba55-650e5d18d706 (Updating mon deployment (+2 -> 3))
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 53f4b449-3521-444d-ba55-650e5d18d706 (Updating mon deployment (+2 -> 3)) in 7 seconds
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev ce3baafd-f94b-4ee7-8b9c-9bc5c9e2bc0b (Updating mgr deployment (+2 -> 3))
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.cdjvtw on compute-2
Jan 31 02:22:30 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.cdjvtw on compute-2
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: Deploying daemon mon.compute-1 on compute-1
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0 calling monitor election
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-2 calling monitor election
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 31 02:22:31 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:31 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:22:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:22:31 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 02:22:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:31 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:22:31.578+0000 7f6435e43640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 31 02:22:31 np0005603621 ceph-mgr[74689]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 31 02:22:31 np0005603621 python3[87354]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:31 np0005603621 podman[87356]: 2026-01-31 07:22:31.935443044 +0000 UTC m=+0.041916411 container create 30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:31 np0005603621 systemd[1]: Started libpod-conmon-30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d.scope.
Jan 31 02:22:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7107641ea742ea27ffa16156488d2d6981a914cdc97248144e636d5976d3ecac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7107641ea742ea27ffa16156488d2d6981a914cdc97248144e636d5976d3ecac/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7107641ea742ea27ffa16156488d2d6981a914cdc97248144e636d5976d3ecac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:32 np0005603621 podman[87356]: 2026-01-31 07:22:31.914857685 +0000 UTC m=+0.021331102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:32 np0005603621 podman[87356]: 2026-01-31 07:22:32.017668504 +0000 UTC m=+0.124141891 container init 30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:32 np0005603621 podman[87356]: 2026-01-31 07:22:32.023318647 +0000 UTC m=+0.129792034 container start 30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:32 np0005603621 podman[87356]: 2026-01-31 07:22:32.028463744 +0000 UTC m=+0.134937161 container attach 30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:22:32 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:32 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:33 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:33 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 02:22:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:33 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 3 completed events
Jan 31 02:22:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:22:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:34 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:34 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:35 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:35 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 31 02:22:36 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:36 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ddmhwk(active, since 117s)
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.gxjgok", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gxjgok", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0 calling monitor election
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-2 calling monitor election
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-1 calling monitor election
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gxjgok", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:36 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.gxjgok on compute-1
Jan 31 02:22:36 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.gxjgok on compute-1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101359698' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 02:22:37 np0005603621 youthful_pasteur[87373]: 
Jan 31 02:22:37 np0005603621 youthful_pasteur[87373]: {"fsid":"2f5ab832-5f2e-5a84-bd93-cf8bab960ee2","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":0,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1769844128,"num_in_osds":2,"osd_in_since":1769844110,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475201536,"bytes_avail":14548795392,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-31T07:21:40.354455+0000","services":{}},"progress_events":{"ce3baafd-f94b-4ee7-8b9c-9bc5c9e2bc0b":{"message":"Updating mgr deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 31 02:22:37 np0005603621 systemd[1]: libpod-30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d.scope: Deactivated successfully.
Jan 31 02:22:37 np0005603621 podman[87356]: 2026-01-31 07:22:37.132214064 +0000 UTC m=+5.238687431 container died 30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7107641ea742ea27ffa16156488d2d6981a914cdc97248144e636d5976d3ecac-merged.mount: Deactivated successfully.
Jan 31 02:22:37 np0005603621 podman[87356]: 2026-01-31 07:22:37.180877262 +0000 UTC m=+5.287350629 container remove 30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d (image=quay.io/ceph/ceph:v18, name=youthful_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:37 np0005603621 systemd[1]: libpod-conmon-30fa8e947d5e1fb108688e68f4a1a45b86961c357c5e315e9fd065996bf39e4d.scope: Deactivated successfully.
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1732313368; not ready for session (expect reconnect)
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gxjgok", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.gxjgok", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: Deploying daemon mgr.compute-1.gxjgok on compute-1
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:37 np0005603621 python3[87435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:37 np0005603621 podman[87436]: 2026-01-31 07:22:37.705006422 +0000 UTC m=+0.044309355 container create 1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef (image=quay.io/ceph/ceph:v18, name=relaxed_swirles, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev ce3baafd-f94b-4ee7-8b9c-9bc5c9e2bc0b (Updating mgr deployment (+2 -> 3))
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event ce3baafd-f94b-4ee7-8b9c-9bc5c9e2bc0b (Updating mgr deployment (+2 -> 3)) in 7 seconds
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 31 02:22:37 np0005603621 systemd[1]: Started libpod-conmon-1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef.scope.
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 759ea8cc-09cb-4993-90f7-392ac73230d1 (Updating crash deployment (+1 -> 3))
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 31 02:22:37 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 31 02:22:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d53178a5a62c7bb8c63c59c41ccc2ac93b5f3b806e6a9790e8b00584336e3e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d53178a5a62c7bb8c63c59c41ccc2ac93b5f3b806e6a9790e8b00584336e3e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:37 np0005603621 podman[87436]: 2026-01-31 07:22:37.778406821 +0000 UTC m=+0.117709794 container init 1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef (image=quay.io/ceph/ceph:v18, name=relaxed_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:22:37 np0005603621 podman[87436]: 2026-01-31 07:22:37.684179929 +0000 UTC m=+0.023482892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:37 np0005603621 podman[87436]: 2026-01-31 07:22:37.783284374 +0000 UTC m=+0.122587297 container start 1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef (image=quay.io/ceph/ceph:v18, name=relaxed_swirles, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:37 np0005603621 podman[87436]: 2026-01-31 07:22:37.787655679 +0000 UTC m=+0.126958612 container attach 1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef (image=quay.io/ceph/ceph:v18, name=relaxed_swirles, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:38 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T07:22:38.226+0000 7f6435e43640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:22:38
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr']
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1597819399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 759ea8cc-09cb-4993-90f7-392ac73230d1 (Updating crash deployment (+1 -> 3))
Jan 31 02:22:39 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 759ea8cc-09cb-4993-90f7-392ac73230d1 (Updating crash deployment (+1 -> 3)) in 1 seconds
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: Deploying daemon crash.compute-2 on compute-2
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1597819399' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1597819399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 31 02:22:39 np0005603621 relaxed_swirles[87451]: pool 'vms' created
Jan 31 02:22:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 31 02:22:39 np0005603621 systemd[1]: libpod-1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef.scope: Deactivated successfully.
Jan 31 02:22:39 np0005603621 conmon[87451]: conmon 1e5ba248608bd8c02cf5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef.scope/container/memory.events
Jan 31 02:22:39 np0005603621 podman[87436]: 2026-01-31 07:22:39.405212007 +0000 UTC m=+1.744514940 container died 1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef (image=quay.io/ceph/ceph:v18, name=relaxed_swirles, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-07d53178a5a62c7bb8c63c59c41ccc2ac93b5f3b806e6a9790e8b00584336e3e-merged.mount: Deactivated successfully.
Jan 31 02:22:39 np0005603621 podman[87436]: 2026-01-31 07:22:39.444209044 +0000 UTC m=+1.783511977 container remove 1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef (image=quay.io/ceph/ceph:v18, name=relaxed_swirles, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:39 np0005603621 systemd[1]: libpod-conmon-1e5ba248608bd8c02cf5baabc1e4206a45f3e856d55ebe9e14788a128dc3afef.scope: Deactivated successfully.
Jan 31 02:22:39 np0005603621 python3[87616]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:39 np0005603621 podman[87655]: 2026-01-31 07:22:39.757118329 +0000 UTC m=+0.040194888 container create 03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251 (image=quay.io/ceph/ceph:v18, name=inspiring_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.773593817 +0000 UTC m=+0.044657256 container create 71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:39 np0005603621 systemd[1]: Started libpod-conmon-03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251.scope.
Jan 31 02:22:39 np0005603621 systemd[1]: Started libpod-conmon-71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18.scope.
Jan 31 02:22:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b291fcfe4d7cedb96b0678acaae6ff4e21487b154fce4f0b6661aacf4cc51b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b291fcfe4d7cedb96b0678acaae6ff4e21487b154fce4f0b6661aacf4cc51b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:39 np0005603621 podman[87655]: 2026-01-31 07:22:39.736904857 +0000 UTC m=+0.019981436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:39 np0005603621 podman[87655]: 2026-01-31 07:22:39.832583369 +0000 UTC m=+0.115659938 container init 03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251 (image=quay.io/ceph/ceph:v18, name=inspiring_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.83563315 +0000 UTC m=+0.106696599 container init 71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gauss, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:39 np0005603621 podman[87655]: 2026-01-31 07:22:39.840089308 +0000 UTC m=+0.123165867 container start 03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251 (image=quay.io/ceph/ceph:v18, name=inspiring_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.840241083 +0000 UTC m=+0.111304522 container start 71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gauss, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:22:39 np0005603621 podman[87655]: 2026-01-31 07:22:39.843468841 +0000 UTC m=+0.126545400 container attach 03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251 (image=quay.io/ceph/ceph:v18, name=inspiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:22:39 np0005603621 practical_gauss[87690]: 167 167
Jan 31 02:22:39 np0005603621 systemd[1]: libpod-71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18.scope: Deactivated successfully.
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.847641939 +0000 UTC m=+0.118705378 container attach 71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.848109345 +0000 UTC m=+0.119172784 container died 71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gauss, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.751221493 +0000 UTC m=+0.022284952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4d1669f3f6f594b2193d0e1c73b8aea67c711dcfbdc7d5bec1c6457567027703-merged.mount: Deactivated successfully.
Jan 31 02:22:39 np0005603621 podman[87661]: 2026-01-31 07:22:39.885313982 +0000 UTC m=+0.156377421 container remove 71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:39 np0005603621 systemd[1]: libpod-conmon-71d9f3baa5a066e1b857b017d91223b76dd06c47a6700bfc6d969fe7afff7d18.scope: Deactivated successfully.
Jan 31 02:22:40 np0005603621 podman[87718]: 2026-01-31 07:22:40.055021755 +0000 UTC m=+0.085029138 container create 10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:40 np0005603621 podman[87718]: 2026-01-31 07:22:39.988491353 +0000 UTC m=+0.018498766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:40 np0005603621 systemd[1]: Started libpod-conmon-10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8.scope.
Jan 31 02:22:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bebc881f40612560b44eddc3a49fa0cb99cbee9b82e5627c694e82a6e08a8d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bebc881f40612560b44eddc3a49fa0cb99cbee9b82e5627c694e82a6e08a8d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bebc881f40612560b44eddc3a49fa0cb99cbee9b82e5627c694e82a6e08a8d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bebc881f40612560b44eddc3a49fa0cb99cbee9b82e5627c694e82a6e08a8d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bebc881f40612560b44eddc3a49fa0cb99cbee9b82e5627c694e82a6e08a8d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:40 np0005603621 podman[87718]: 2026-01-31 07:22:40.142140272 +0000 UTC m=+0.172147665 container init 10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamarr, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:40 np0005603621 podman[87718]: 2026-01-31 07:22:40.149532968 +0000 UTC m=+0.179540371 container start 10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamarr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:22:40 np0005603621 podman[87718]: 2026-01-31 07:22:40.16523947 +0000 UTC m=+0.195246873 container attach 10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamarr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:22:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 31 02:22:40 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1597819399' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 02:22:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3898076589' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 31 02:22:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 31 02:22:40 np0005603621 elated_lamarr[87735]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:22:40 np0005603621 elated_lamarr[87735]: --> relative data size: 1.0
Jan 31 02:22:40 np0005603621 elated_lamarr[87735]: --> All data devices are unavailable
Jan 31 02:22:41 np0005603621 systemd[1]: libpod-10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8.scope: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87718]: 2026-01-31 07:22:41.018314527 +0000 UTC m=+1.048321910 container died 10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamarr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0bebc881f40612560b44eddc3a49fa0cb99cbee9b82e5627c694e82a6e08a8d4-merged.mount: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87718]: 2026-01-31 07:22:41.057811421 +0000 UTC m=+1.087818804 container remove 10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:22:41 np0005603621 systemd[1]: libpod-conmon-10a5e183de18b960367469f7e43680ca9f7daadde9724654750813f94ab2f8c8.scope: Deactivated successfully.
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "d561c1d2-064b-46a8-af35-64503a234a3c"} v 0) v1
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d561c1d2-064b-46a8-af35-64503a234a3c"}]: dispatch
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3898076589' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d561c1d2-064b-46a8-af35-64503a234a3c"}]': finished
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 31 02:22:41 np0005603621 inspiring_kirch[87688]: pool 'volumes' created
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:41 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:41 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:22:41 np0005603621 systemd[1]: libpod-03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251.scope: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87655]: 2026-01-31 07:22:41.241512609 +0000 UTC m=+1.524589178 container died 03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251 (image=quay.io/ceph/ceph:v18, name=inspiring_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83b291fcfe4d7cedb96b0678acaae6ff4e21487b154fce4f0b6661aacf4cc51b-merged.mount: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87655]: 2026-01-31 07:22:41.281691785 +0000 UTC m=+1.564768334 container remove 03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251 (image=quay.io/ceph/ceph:v18, name=inspiring_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:41 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 5 completed events
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:22:41 np0005603621 systemd[1]: libpod-conmon-03db8d232a46a0c487ee1d1dd76d555759d3d0fac11e799f0d6546310038c251.scope: Deactivated successfully.
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v71: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3898076589' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.102:0/935814870' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d561c1d2-064b-46a8-af35-64503a234a3c"}]: dispatch
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d561c1d2-064b-46a8-af35-64503a234a3c"}]: dispatch
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3898076589' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d561c1d2-064b-46a8-af35-64503a234a3c"}]': finished
Jan 31 02:22:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.565617817 +0000 UTC m=+0.038702818 container create ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:41 np0005603621 python3[87920]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:41 np0005603621 systemd[1]: Started libpod-conmon-ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116.scope.
Jan 31 02:22:41 np0005603621 podman[87973]: 2026-01-31 07:22:41.614067507 +0000 UTC m=+0.036741461 container create 522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e (image=quay.io/ceph/ceph:v18, name=kind_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:22:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.627010418 +0000 UTC m=+0.100095429 container init ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_saha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:41 np0005603621 systemd[1]: Started libpod-conmon-522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e.scope.
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.634539919 +0000 UTC m=+0.107624930 container start ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:22:41 np0005603621 admiring_saha[87988]: 167 167
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.637476816 +0000 UTC m=+0.110561827 container attach ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:22:41 np0005603621 systemd[1]: libpod-ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116.scope: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.638809171 +0000 UTC m=+0.111894172 container died ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ccef1e13e0e68c30169b6b186aec041f36c3d749dbae2ae5b45d475de646e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ccef1e13e0e68c30169b6b186aec041f36c3d749dbae2ae5b45d475de646e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.550574966 +0000 UTC m=+0.023660027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0bc90f9f986814c06ce626188caadb5e929b58511901aeb16cd33bff9953c101-merged.mount: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87973]: 2026-01-31 07:22:41.668486787 +0000 UTC m=+0.091160761 container init 522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e (image=quay.io/ceph/ceph:v18, name=kind_brahmagupta, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:41 np0005603621 podman[87959]: 2026-01-31 07:22:41.671040062 +0000 UTC m=+0.144125073 container remove ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_saha, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:22:41 np0005603621 podman[87973]: 2026-01-31 07:22:41.672757219 +0000 UTC m=+0.095431283 container start 522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e (image=quay.io/ceph/ceph:v18, name=kind_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:22:41 np0005603621 podman[87973]: 2026-01-31 07:22:41.675919284 +0000 UTC m=+0.098593248 container attach 522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e (image=quay.io/ceph/ceph:v18, name=kind_brahmagupta, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:22:41 np0005603621 systemd[1]: libpod-conmon-ee3ea35ff291987c3883e54317020ecb0026c8a538b2c808c59bdc7f6226c116.scope: Deactivated successfully.
Jan 31 02:22:41 np0005603621 podman[87973]: 2026-01-31 07:22:41.595050065 +0000 UTC m=+0.017724069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:41 np0005603621 podman[88018]: 2026-01-31 07:22:41.779716836 +0000 UTC m=+0.037829199 container create c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:41 np0005603621 systemd[1]: Started libpod-conmon-c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e.scope.
Jan 31 02:22:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9626ddd8d4324cf9d73de9e0d71c29210c84c4b91c8882ca6ab3285fb460ae80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9626ddd8d4324cf9d73de9e0d71c29210c84c4b91c8882ca6ab3285fb460ae80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9626ddd8d4324cf9d73de9e0d71c29210c84c4b91c8882ca6ab3285fb460ae80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9626ddd8d4324cf9d73de9e0d71c29210c84c4b91c8882ca6ab3285fb460ae80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:41 np0005603621 podman[88018]: 2026-01-31 07:22:41.84659683 +0000 UTC m=+0.104709193 container init c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:41 np0005603621 podman[88018]: 2026-01-31 07:22:41.851035328 +0000 UTC m=+0.109147681 container start c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_yalow, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:22:41 np0005603621 podman[88018]: 2026-01-31 07:22:41.854130451 +0000 UTC m=+0.112242824 container attach c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_yalow, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:41 np0005603621 podman[88018]: 2026-01-31 07:22:41.762579666 +0000 UTC m=+0.020692059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3326848097' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3326848097' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 31 02:22:42 np0005603621 kind_brahmagupta[87994]: pool 'backups' created
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:42 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:42 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:22:42 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:22:42 np0005603621 systemd[1]: libpod-522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e.scope: Deactivated successfully.
Jan 31 02:22:42 np0005603621 podman[87973]: 2026-01-31 07:22:42.252352502 +0000 UTC m=+0.675026466 container died 522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e (image=quay.io/ceph/ceph:v18, name=kind_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:22:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-25ccef1e13e0e68c30169b6b186aec041f36c3d749dbae2ae5b45d475de646e7-merged.mount: Deactivated successfully.
Jan 31 02:22:42 np0005603621 podman[87973]: 2026-01-31 07:22:42.292810518 +0000 UTC m=+0.715484462 container remove 522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e (image=quay.io/ceph/ceph:v18, name=kind_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:22:42 np0005603621 systemd[1]: libpod-conmon-522e5727b1ceb7a2c138669246688a9d87e04f14f99230c0853111fe829cd45e.scope: Deactivated successfully.
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3326848097' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3326848097' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:42 np0005603621 ceph-mon[74394]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:22:42 np0005603621 python3[88099]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]: {
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:    "0": [
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:        {
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "devices": [
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "/dev/loop3"
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            ],
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "lv_name": "ceph_lv0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "lv_size": "7511998464",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "name": "ceph_lv0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "tags": {
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.cluster_name": "ceph",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.crush_device_class": "",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.encrypted": "0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.osd_id": "0",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.type": "block",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:                "ceph.vdo": "0"
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            },
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "type": "block",
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:            "vg_name": "ceph_vg0"
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:        }
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]:    ]
Jan 31 02:22:42 np0005603621 suspicious_yalow[88034]: }
Jan 31 02:22:42 np0005603621 podman[88103]: 2026-01-31 07:22:42.627940132 +0000 UTC m=+0.042871266 container create 0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41 (image=quay.io/ceph/ceph:v18, name=eager_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:22:42 np0005603621 systemd[1]: libpod-c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e.scope: Deactivated successfully.
Jan 31 02:22:42 np0005603621 podman[88018]: 2026-01-31 07:22:42.640972826 +0000 UTC m=+0.899085189 container died c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_yalow, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:22:42 np0005603621 systemd[1]: Started libpod-conmon-0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41.scope.
Jan 31 02:22:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9626ddd8d4324cf9d73de9e0d71c29210c84c4b91c8882ca6ab3285fb460ae80-merged.mount: Deactivated successfully.
Jan 31 02:22:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:42 np0005603621 podman[88018]: 2026-01-31 07:22:42.689993765 +0000 UTC m=+0.948106128 container remove c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:22:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d28e003e669efb5c5308e1d47be4e686c3275a30fdd627ce6e4712297d4f4fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d28e003e669efb5c5308e1d47be4e686c3275a30fdd627ce6e4712297d4f4fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:42 np0005603621 systemd[1]: libpod-conmon-c41f5e2cacf450956e65d335f98e57b66441fb9e3670856f65d699e04b669f4e.scope: Deactivated successfully.
Jan 31 02:22:42 np0005603621 podman[88103]: 2026-01-31 07:22:42.609881182 +0000 UTC m=+0.024812226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:42 np0005603621 podman[88103]: 2026-01-31 07:22:42.708216652 +0000 UTC m=+0.123147686 container init 0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41 (image=quay.io/ceph/ceph:v18, name=eager_mahavira, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:42 np0005603621 podman[88103]: 2026-01-31 07:22:42.712611447 +0000 UTC m=+0.127542461 container start 0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41 (image=quay.io/ceph/ceph:v18, name=eager_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:42 np0005603621 podman[88103]: 2026-01-31 07:22:42.717123467 +0000 UTC m=+0.132054501 container attach 0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41 (image=quay.io/ceph/ceph:v18, name=eager_mahavira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.13703692 +0000 UTC m=+0.030224216 container create 78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:43 np0005603621 systemd[1]: Started libpod-conmon-78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06.scope.
Jan 31 02:22:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.194306155 +0000 UTC m=+0.087493271 container init 78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.19986983 +0000 UTC m=+0.093056916 container start 78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:22:43 np0005603621 practical_rhodes[88309]: 167 167
Jan 31 02:22:43 np0005603621 systemd[1]: libpod-78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06.scope: Deactivated successfully.
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.206873902 +0000 UTC m=+0.100061018 container attach 78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.207555535 +0000 UTC m=+0.100742621 container died 78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.124265796 +0000 UTC m=+0.017452892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 31 02:22:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-67928474c0881cd728e73980b0677a8cde060755cce96e0d647b542693335907-merged.mount: Deactivated successfully.
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:43 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:22:43 np0005603621 podman[88287]: 2026-01-31 07:22:43.254184597 +0000 UTC m=+0.147371673 container remove 78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:22:43 np0005603621 systemd[1]: libpod-conmon-78034b5f28d1c0495af0ff8c1926f94ba3894ebbc32e22491c8b8b6b8e38bd06.scope: Deactivated successfully.
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3012310796' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v74: 4 pgs: 1 creating+peering, 2 unknown, 1 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:43 np0005603621 podman[88336]: 2026-01-31 07:22:43.360680058 +0000 UTC m=+0.032555494 container create e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:43 np0005603621 systemd[1]: Started libpod-conmon-e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b.scope.
Jan 31 02:22:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9110f11da4793c2798b1fb793271d479694e4ab7707631c4eb96f9df1c30f76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9110f11da4793c2798b1fb793271d479694e4ab7707631c4eb96f9df1c30f76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9110f11da4793c2798b1fb793271d479694e4ab7707631c4eb96f9df1c30f76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9110f11da4793c2798b1fb793271d479694e4ab7707631c4eb96f9df1c30f76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:43 np0005603621 podman[88336]: 2026-01-31 07:22:43.438451234 +0000 UTC m=+0.110326660 container init e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:22:43 np0005603621 podman[88336]: 2026-01-31 07:22:43.346228697 +0000 UTC m=+0.018104133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:43 np0005603621 podman[88336]: 2026-01-31 07:22:43.444167644 +0000 UTC m=+0.116043060 container start e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:22:43 np0005603621 podman[88336]: 2026-01-31 07:22:43.446854563 +0000 UTC m=+0.118729969 container attach e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:22:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]: {
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:        "osd_id": 0,
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:        "type": "bluestore"
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]:    }
Jan 31 02:22:44 np0005603621 vigilant_chatterjee[88353]: }
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 31 02:22:44 np0005603621 systemd[1]: libpod-e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b.scope: Deactivated successfully.
Jan 31 02:22:44 np0005603621 podman[88336]: 2026-01-31 07:22:44.250245118 +0000 UTC m=+0.922120594 container died e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3012310796' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3012310796' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 31 02:22:44 np0005603621 eager_mahavira[88132]: pool 'images' created
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:44 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f9110f11da4793c2798b1fb793271d479694e4ab7707631c4eb96f9df1c30f76-merged.mount: Deactivated successfully.
Jan 31 02:22:44 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:22:44 np0005603621 systemd[76060]: Starting Mark boot as successful...
Jan 31 02:22:44 np0005603621 systemd[76060]: Finished Mark boot as successful.
Jan 31 02:22:44 np0005603621 systemd[1]: libpod-0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41.scope: Deactivated successfully.
Jan 31 02:22:44 np0005603621 conmon[88132]: conmon 0c924dabb8980433710d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41.scope/container/memory.events
Jan 31 02:22:44 np0005603621 podman[88336]: 2026-01-31 07:22:44.299375521 +0000 UTC m=+0.971250937 container remove e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chatterjee, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:22:44 np0005603621 podman[88103]: 2026-01-31 07:22:44.300866332 +0000 UTC m=+1.715797346 container died 0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41 (image=quay.io/ceph/ceph:v18, name=eager_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:44 np0005603621 systemd[1]: libpod-conmon-e12a4d6f79f5b16599304c929813ea6a9de70f43a95df5c2a72deca4b0f38a5b.scope: Deactivated successfully.
Jan 31 02:22:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4d28e003e669efb5c5308e1d47be4e686c3275a30fdd627ce6e4712297d4f4fa-merged.mount: Deactivated successfully.
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:22:44 np0005603621 podman[88103]: 2026-01-31 07:22:44.336976962 +0000 UTC m=+1.751907976 container remove 0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41 (image=quay.io/ceph/ceph:v18, name=eager_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:44 np0005603621 systemd[1]: libpod-conmon-0c924dabb8980433710db084e869cd45929a4dccdad5911d33c64b843fac6f41.scope: Deactivated successfully.
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:22:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:44 np0005603621 python3[88426]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:44 np0005603621 podman[88427]: 2026-01-31 07:22:44.640463383 +0000 UTC m=+0.032098518 container create 460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:44 np0005603621 systemd[1]: Started libpod-conmon-460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d.scope.
Jan 31 02:22:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b475fb4202014aa285827e4fdbc8abb1a356aff9f144f968ed53c6a8038343/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0b475fb4202014aa285827e4fdbc8abb1a356aff9f144f968ed53c6a8038343/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:44 np0005603621 podman[88427]: 2026-01-31 07:22:44.720488745 +0000 UTC m=+0.112123890 container init 460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:22:44 np0005603621 podman[88427]: 2026-01-31 07:22:44.627381219 +0000 UTC m=+0.019016354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:44 np0005603621 podman[88427]: 2026-01-31 07:22:44.725811532 +0000 UTC m=+0.117446657 container start 460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:44 np0005603621 podman[88427]: 2026-01-31 07:22:44.728484981 +0000 UTC m=+0.120120106 container attach 460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1249872425' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3012310796' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1249872425' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v76: 5 pgs: 2 active+clean, 1 creating+peering, 2 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1249872425' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Jan 31 02:22:45 np0005603621 loving_proskuriakova[88443]: pool 'cephfs.cephfs.meta' created
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:45 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:22:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:22:45 np0005603621 systemd[1]: libpod-460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d.scope: Deactivated successfully.
Jan 31 02:22:45 np0005603621 podman[88427]: 2026-01-31 07:22:45.371663678 +0000 UTC m=+0.763298803 container died 460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f0b475fb4202014aa285827e4fdbc8abb1a356aff9f144f968ed53c6a8038343-merged.mount: Deactivated successfully.
Jan 31 02:22:45 np0005603621 podman[88427]: 2026-01-31 07:22:45.407151209 +0000 UTC m=+0.798786334 container remove 460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d (image=quay.io/ceph/ceph:v18, name=loving_proskuriakova, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:22:45 np0005603621 systemd[1]: libpod-conmon-460c81aa42744f1ba0a72fdd6bc3a0f417acd1a66221dee32b76447c6ab9492d.scope: Deactivated successfully.
Jan 31 02:22:45 np0005603621 python3[88508]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:45 np0005603621 podman[88509]: 2026-01-31 07:22:45.71725239 +0000 UTC m=+0.038969207 container create 4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df (image=quay.io/ceph/ceph:v18, name=gallant_lichterman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:22:45 np0005603621 systemd[1]: Started libpod-conmon-4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df.scope.
Jan 31 02:22:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b68ab44fc4abc8f80d2928b15f1e6bb141c2a8ecb170f626096b88f30fcb5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2b68ab44fc4abc8f80d2928b15f1e6bb141c2a8ecb170f626096b88f30fcb5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:45 np0005603621 podman[88509]: 2026-01-31 07:22:45.771878376 +0000 UTC m=+0.093595213 container init 4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df (image=quay.io/ceph/ceph:v18, name=gallant_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:45 np0005603621 podman[88509]: 2026-01-31 07:22:45.776613504 +0000 UTC m=+0.098330321 container start 4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df (image=quay.io/ceph/ceph:v18, name=gallant_lichterman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:22:45 np0005603621 podman[88509]: 2026-01-31 07:22:45.779963975 +0000 UTC m=+0.101680792 container attach 4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df (image=quay.io/ceph/ceph:v18, name=gallant_lichterman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:22:45 np0005603621 podman[88509]: 2026-01-31 07:22:45.702683946 +0000 UTC m=+0.024400783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:45 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 31 02:22:45 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/839230673' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/839230673' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Jan 31 02:22:46 np0005603621 gallant_lichterman[88524]: pool 'cephfs.cephfs.data' created
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1249872425' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/839230673' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:46 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:46 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:22:46 np0005603621 systemd[1]: libpod-4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df.scope: Deactivated successfully.
Jan 31 02:22:46 np0005603621 podman[88509]: 2026-01-31 07:22:46.45102817 +0000 UTC m=+0.772744997 container died 4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df (image=quay.io/ceph/ceph:v18, name=gallant_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:22:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c2b68ab44fc4abc8f80d2928b15f1e6bb141c2a8ecb170f626096b88f30fcb5f-merged.mount: Deactivated successfully.
Jan 31 02:22:46 np0005603621 podman[88509]: 2026-01-31 07:22:46.486756378 +0000 UTC m=+0.808473205 container remove 4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df (image=quay.io/ceph/ceph:v18, name=gallant_lichterman, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:22:46 np0005603621 systemd[1]: libpod-conmon-4ab987445604448df85d0877f2408f871802416bb92c164ce2e25e3cd60ba0df.scope: Deactivated successfully.
Jan 31 02:22:46 np0005603621 python3[88589]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:46 np0005603621 podman[88590]: 2026-01-31 07:22:46.868554354 +0000 UTC m=+0.055394774 container create 63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae (image=quay.io/ceph/ceph:v18, name=charming_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:22:46 np0005603621 systemd[1]: Started libpod-conmon-63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae.scope.
Jan 31 02:22:46 np0005603621 podman[88590]: 2026-01-31 07:22:46.832917308 +0000 UTC m=+0.019757728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82be03b48d241ab9d47a69110be30787d1b0011f64056ee03686ea5e5aaa5c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82be03b48d241ab9d47a69110be30787d1b0011f64056ee03686ea5e5aaa5c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:46 np0005603621 podman[88590]: 2026-01-31 07:22:46.960950647 +0000 UTC m=+0.147791097 container init 63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae (image=quay.io/ceph/ceph:v18, name=charming_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:46 np0005603621 podman[88590]: 2026-01-31 07:22:46.965012331 +0000 UTC m=+0.151852751 container start 63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae (image=quay.io/ceph/ceph:v18, name=charming_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 02:22:46 np0005603621 podman[88590]: 2026-01-31 07:22:46.978171528 +0000 UTC m=+0.165011948 container attach 63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae (image=quay.io/ceph/ceph:v18, name=charming_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:47 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: Deploying daemon osd.2 on compute-2
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/839230673' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 31 02:22:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1320339162' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1320339162' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1320339162' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Jan 31 02:22:48 np0005603621 charming_galileo[88605]: enabled application 'rbd' on pool 'vms'
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:48 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:48 np0005603621 systemd[1]: libpod-63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae.scope: Deactivated successfully.
Jan 31 02:22:48 np0005603621 podman[88630]: 2026-01-31 07:22:48.816978715 +0000 UTC m=+0.020831385 container died 63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae (image=quay.io/ceph/ceph:v18, name=charming_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:22:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b82be03b48d241ab9d47a69110be30787d1b0011f64056ee03686ea5e5aaa5c2-merged.mount: Deactivated successfully.
Jan 31 02:22:48 np0005603621 podman[88630]: 2026-01-31 07:22:48.851256254 +0000 UTC m=+0.055108954 container remove 63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae (image=quay.io/ceph/ceph:v18, name=charming_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:22:48 np0005603621 systemd[1]: libpod-conmon-63f06dfd454c92530bc5f9ab1cbfe01856e964244ebf6680e7e7461529ddc2ae.scope: Deactivated successfully.
Jan 31 02:22:49 np0005603621 python3[88670]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:49 np0005603621 podman[88671]: 2026-01-31 07:22:49.223059937 +0000 UTC m=+0.047777579 container create 959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:22:49 np0005603621 systemd[1]: Started libpod-conmon-959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801.scope.
Jan 31 02:22:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d19bb5885ebbff56335c4ccb0a80262ca8366965bf8f77b09d2153aeac618fb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d19bb5885ebbff56335c4ccb0a80262ca8366965bf8f77b09d2153aeac618fb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:49 np0005603621 podman[88671]: 2026-01-31 07:22:49.298544758 +0000 UTC m=+0.123262410 container init 959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:49 np0005603621 podman[88671]: 2026-01-31 07:22:49.205605357 +0000 UTC m=+0.030323089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:49 np0005603621 podman[88671]: 2026-01-31 07:22:49.303271605 +0000 UTC m=+0.127989247 container start 959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:22:49 np0005603621 podman[88671]: 2026-01-31 07:22:49.306351827 +0000 UTC m=+0.131069469 container attach 959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:22:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v82: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 31 02:22:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/218093747' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 02:22:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/218093747' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Jan 31 02:22:50 np0005603621 stoic_fermat[88686]: enabled application 'rbd' on pool 'volumes'
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1320339162' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:50 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:50 np0005603621 systemd[1]: libpod-959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801.scope: Deactivated successfully.
Jan 31 02:22:50 np0005603621 podman[88671]: 2026-01-31 07:22:50.116789496 +0000 UTC m=+0.941507158 container died 959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5d19bb5885ebbff56335c4ccb0a80262ca8366965bf8f77b09d2153aeac618fb-merged.mount: Deactivated successfully.
Jan 31 02:22:50 np0005603621 podman[88671]: 2026-01-31 07:22:50.156980133 +0000 UTC m=+0.981697765 container remove 959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801 (image=quay.io/ceph/ceph:v18, name=stoic_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:22:50 np0005603621 systemd[1]: libpod-conmon-959bd3d10eff1551d796e7c52d879cfbd540fe0bb8b2d10ef82f08efb245e801.scope: Deactivated successfully.
Jan 31 02:22:50 np0005603621 python3[88751]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:50 np0005603621 podman[88752]: 2026-01-31 07:22:50.443131408 +0000 UTC m=+0.040122175 container create d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96 (image=quay.io/ceph/ceph:v18, name=brave_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:22:50 np0005603621 systemd[1]: Started libpod-conmon-d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96.scope.
Jan 31 02:22:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6647dd5fdad3d104e6d2d3136206ffd5aee5e9577baf8475b2d4d507f8fbd74e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6647dd5fdad3d104e6d2d3136206ffd5aee5e9577baf8475b2d4d507f8fbd74e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:50 np0005603621 podman[88752]: 2026-01-31 07:22:50.514269203 +0000 UTC m=+0.111259990 container init d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96 (image=quay.io/ceph/ceph:v18, name=brave_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:22:50 np0005603621 podman[88752]: 2026-01-31 07:22:50.422268814 +0000 UTC m=+0.019259641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:50 np0005603621 podman[88752]: 2026-01-31 07:22:50.51867639 +0000 UTC m=+0.115667217 container start d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96 (image=quay.io/ceph/ceph:v18, name=brave_sutherland, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:50 np0005603621 podman[88752]: 2026-01-31 07:22:50.522433625 +0000 UTC m=+0.119424412 container attach d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96 (image=quay.io/ceph/ceph:v18, name=brave_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1045044254' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/218093747' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/218093747' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1045044254' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1045044254' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Jan 31 02:22:51 np0005603621 brave_sutherland[88768]: enabled application 'rbd' on pool 'backups'
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:51 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 31 02:22:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 02:22:51 np0005603621 systemd[1]: libpod-d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96.scope: Deactivated successfully.
Jan 31 02:22:51 np0005603621 podman[88752]: 2026-01-31 07:22:51.156624464 +0000 UTC m=+0.753615251 container died d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96 (image=quay.io/ceph/ceph:v18, name=brave_sutherland, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6647dd5fdad3d104e6d2d3136206ffd5aee5e9577baf8475b2d4d507f8fbd74e-merged.mount: Deactivated successfully.
Jan 31 02:22:51 np0005603621 podman[88752]: 2026-01-31 07:22:51.197163451 +0000 UTC m=+0.794154268 container remove d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96 (image=quay.io/ceph/ceph:v18, name=brave_sutherland, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:51 np0005603621 systemd[1]: libpod-conmon-d04c6139d8b7c8087a7d1cab65cd79247551d6423e17bff8187a860c67be9d96.scope: Deactivated successfully.
Jan 31 02:22:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v85: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:51 np0005603621 python3[88832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:51 np0005603621 podman[88833]: 2026-01-31 07:22:51.514765603 +0000 UTC m=+0.036189714 container create c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868 (image=quay.io/ceph/ceph:v18, name=clever_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:22:51 np0005603621 systemd[1]: Started libpod-conmon-c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868.scope.
Jan 31 02:22:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbbb31a2bfec3517c441a541ace547e470b9f83c24199ed8a89f7a05cbe3598/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbbb31a2bfec3517c441a541ace547e470b9f83c24199ed8a89f7a05cbe3598/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:51 np0005603621 podman[88833]: 2026-01-31 07:22:51.572988339 +0000 UTC m=+0.094412430 container init c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868 (image=quay.io/ceph/ceph:v18, name=clever_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 31 02:22:51 np0005603621 podman[88833]: 2026-01-31 07:22:51.577771198 +0000 UTC m=+0.099195279 container start c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868 (image=quay.io/ceph/ceph:v18, name=clever_hoover, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:51 np0005603621 podman[88833]: 2026-01-31 07:22:51.580965574 +0000 UTC m=+0.102389665 container attach c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868 (image=quay.io/ceph/ceph:v18, name=clever_hoover, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:22:51 np0005603621 podman[88833]: 2026-01-31 07:22:51.49871088 +0000 UTC m=+0.020134971 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1692381800' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: from='osd.2 [v2:192.168.122.102:6800/1205784752,v1:192.168.122.102:6801/1205784752]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1045044254' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1692381800' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1692381800' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Jan 31 02:22:52 np0005603621 clever_hoover[88849]: enabled application 'rbd' on pool 'images'
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:52 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:52 np0005603621 systemd[1]: libpod-c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868.scope: Deactivated successfully.
Jan 31 02:22:52 np0005603621 podman[88833]: 2026-01-31 07:22:52.355889802 +0000 UTC m=+0.877313883 container died c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868 (image=quay.io/ceph/ceph:v18, name=clever_hoover, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:22:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fbbbb31a2bfec3517c441a541ace547e470b9f83c24199ed8a89f7a05cbe3598-merged.mount: Deactivated successfully.
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e26 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Jan 31 02:22:52 np0005603621 podman[88833]: 2026-01-31 07:22:52.393441401 +0000 UTC m=+0.914865482 container remove c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868 (image=quay.io/ceph/ceph:v18, name=clever_hoover, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:52 np0005603621 systemd[1]: libpod-conmon-c3f8a953156acad68951a4801039635aeff3e1342525c8154c6568704706d868.scope: Deactivated successfully.
Jan 31 02:22:52 np0005603621 python3[88961]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:52 np0005603621 podman[88962]: 2026-01-31 07:22:52.728001927 +0000 UTC m=+0.052049792 container create 63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72 (image=quay.io/ceph/ceph:v18, name=suspicious_maxwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:52 np0005603621 systemd[1]: Started libpod-conmon-63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72.scope.
Jan 31 02:22:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7efbddc792d9628fdab5f55a134aad3a93250b246a70f7d0ff7557aa49b1f06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7efbddc792d9628fdab5f55a134aad3a93250b246a70f7d0ff7557aa49b1f06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:52 np0005603621 podman[88962]: 2026-01-31 07:22:52.700905625 +0000 UTC m=+0.024953490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:52 np0005603621 podman[88962]: 2026-01-31 07:22:52.81293065 +0000 UTC m=+0.136978555 container init 63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72 (image=quay.io/ceph/ceph:v18, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:22:52 np0005603621 podman[88962]: 2026-01-31 07:22:52.820286266 +0000 UTC m=+0.144334131 container start 63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72 (image=quay.io/ceph/ceph:v18, name=suspicious_maxwell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:52 np0005603621 podman[88962]: 2026-01-31 07:22:52.824077861 +0000 UTC m=+0.148125696 container attach 63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72 (image=quay.io/ceph/ceph:v18, name=suspicious_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921316751' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 02:22:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v87: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/1692381800' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='osd.2 [v2:192.168.122.102:6800/1205784752,v1:192.168.122.102:6801/1205784752]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3921316751' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3921316751' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Jan 31 02:22:53 np0005603621 suspicious_maxwell[88977]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:53 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=27 pruub=12.824289322s) [] r=-1 lpr=27 pi=[16,27)/1 crt=0'0 mlcod 0'0 active pruub 65.049346924s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:22:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 27 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=15.943499565s) [] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 active pruub 68.168617249s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:22:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=27 pruub=12.824289322s) [] r=-1 lpr=27 pi=[16,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.049346924s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:22:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 27 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=27 pruub=15.943499565s) [] r=-1 lpr=27 pi=[19,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.168617249s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:22:53 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1205784752; not ready for session (expect reconnect)
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:53 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:53 np0005603621 systemd[1]: libpod-63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72.scope: Deactivated successfully.
Jan 31 02:22:53 np0005603621 podman[88962]: 2026-01-31 07:22:53.431597203 +0000 UTC m=+0.755645048 container died 63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72 (image=quay.io/ceph/ceph:v18, name=suspicious_maxwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:22:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f7efbddc792d9628fdab5f55a134aad3a93250b246a70f7d0ff7557aa49b1f06-merged.mount: Deactivated successfully.
Jan 31 02:22:53 np0005603621 podman[88962]: 2026-01-31 07:22:53.487623236 +0000 UTC m=+0.811671081 container remove 63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72 (image=quay.io/ceph/ceph:v18, name=suspicious_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:22:53 np0005603621 systemd[1]: libpod-conmon-63a947d5881a2e90c6abbe521a3023392c0f81c64cd44fc5789e719a5ea67f72.scope: Deactivated successfully.
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:53 np0005603621 python3[89039]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:53 np0005603621 podman[89040]: 2026-01-31 07:22:53.806096216 +0000 UTC m=+0.044526282 container create 8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5 (image=quay.io/ceph/ceph:v18, name=nervous_torvalds, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:53 np0005603621 systemd[1]: Started libpod-conmon-8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5.scope.
Jan 31 02:22:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16053b3983391449e83ac30b3f2f9e6ad766a750e512c4cb3c070f2148d4bdc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16053b3983391449e83ac30b3f2f9e6ad766a750e512c4cb3c070f2148d4bdc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:53 np0005603621 podman[89040]: 2026-01-31 07:22:53.881162833 +0000 UTC m=+0.119592929 container init 8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5 (image=quay.io/ceph/ceph:v18, name=nervous_torvalds, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:53 np0005603621 podman[89040]: 2026-01-31 07:22:53.785368637 +0000 UTC m=+0.023798763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:53 np0005603621 podman[89040]: 2026-01-31 07:22:53.884996349 +0000 UTC m=+0.123426425 container start 8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5 (image=quay.io/ceph/ceph:v18, name=nervous_torvalds, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:53 np0005603621 podman[89040]: 2026-01-31 07:22:53.88951283 +0000 UTC m=+0.127942906 container attach 8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5 (image=quay.io/ceph/ceph:v18, name=nervous_torvalds, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3921316751' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:54 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1205784752; not ready for session (expect reconnect)
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 31 02:22:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4251382841' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 02:22:54 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v89: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 31 02:22:55 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1205784752; not ready for session (expect reconnect)
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:55 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4251382841' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 31 02:22:55 np0005603621 nervous_torvalds[89056]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 31 02:22:55 np0005603621 systemd[1]: libpod-8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5.scope: Deactivated successfully.
Jan 31 02:22:55 np0005603621 podman[89040]: 2026-01-31 07:22:55.457789389 +0000 UTC m=+1.696219485 container died 8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5 (image=quay.io/ceph/ceph:v18, name=nervous_torvalds, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:55 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:55 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/4251382841' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 31 02:22:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-16053b3983391449e83ac30b3f2f9e6ad766a750e512c4cb3c070f2148d4bdc5-merged.mount: Deactivated successfully.
Jan 31 02:22:55 np0005603621 podman[89040]: 2026-01-31 07:22:55.558399605 +0000 UTC m=+1.796829681 container remove 8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5 (image=quay.io/ceph/ceph:v18, name=nervous_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:55 np0005603621 systemd[1]: libpod-conmon-8bbe575567fe82ae7ca5e4d4ef5ed64d9ce7266facc6e79a4a07be77d9efa5a5.scope: Deactivated successfully.
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.cdjvtw started
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mgr.compute-2.cdjvtw 192.168.122.102:0/3231195751; not ready for session (expect reconnect)
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1205784752; not ready for session (expect reconnect)
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 31 02:22:56 np0005603621 python3[89224]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/4251382841' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1205784752,v1:192.168.122.102:6801/1205784752] boot
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 31 02:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 31 02:22:56 np0005603621 python3[89462]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844176.2089202-37495-17330865342014/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:56 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29 pruub=9.126909256s) [2] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.049346924s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:22:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=29 pruub=12.246171951s) [2] r=-1 lpr=29 pi=[19,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.168617249s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:22:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=29 pruub=12.246129990s) [2] r=-1 lpr=29 pi=[19,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.168617249s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:22:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=29 pruub=9.126859665s) [2] r=-1 lpr=29 pi=[16,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 65.049346924s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.ddmhwk(active, since 2m), standbys: compute-2.cdjvtw
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.cdjvtw", "id": "compute-2.cdjvtw"} v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.cdjvtw", "id": "compute-2.cdjvtw"}]: dispatch
Jan 31 02:22:57 np0005603621 python3[89872]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v92: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: OSD bench result of 8242.819324 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: Updating compute-0:/etc/ceph/ceph.conf
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: Updating compute-1:/etc/ceph/ceph.conf
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: Updating compute-2:/etc/ceph/ceph.conf
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: osd.2 [v2:192.168.122.102:6800/1205784752,v1:192.168.122.102:6801/1205784752] boot
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 31 02:22:57 np0005603621 python3[90112]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844177.0230136-37509-416450724126/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b391745dade55bf1c05b80aba8a332fa2b0a8ade backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.gxjgok started
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mgr.compute-1.gxjgok 192.168.122.101:0/3760284386; not ready for session (expect reconnect)
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b0339701-678c-44d1-8090-473b1d02cb72 does not exist
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9645872d-e7d6-4f86-ac63-33719e2777e1 does not exist
Jan 31 02:22:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3483dc06-4a79-4f54-8590-d98464c90582 does not exist
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:22:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 02:22:58 np0005603621 python3[90412]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:58 np0005603621 podman[90457]: 2026-01-31 07:22:58.354381899 +0000 UTC m=+0.033099271 container create 6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc (image=quay.io/ceph/ceph:v18, name=determined_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.365359024 +0000 UTC m=+0.043091104 container create 4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:22:58 np0005603621 systemd[1]: Started libpod-conmon-6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc.scope.
Jan 31 02:22:58 np0005603621 systemd[1]: Started libpod-conmon-4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f.scope.
Jan 31 02:22:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c1edd96eacdd7d1e23e95f8767cb8aa9c27c57d657728682ba870d1abecba4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c1edd96eacdd7d1e23e95f8767cb8aa9c27c57d657728682ba870d1abecba4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98c1edd96eacdd7d1e23e95f8767cb8aa9c27c57d657728682ba870d1abecba4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.422250016 +0000 UTC m=+0.099982126 container init 4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:22:58 np0005603621 podman[90457]: 2026-01-31 07:22:58.42599239 +0000 UTC m=+0.104709772 container init 6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc (image=quay.io/ceph/ceph:v18, name=determined_bassi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.429116924 +0000 UTC m=+0.106849004 container start 4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:22:58 np0005603621 sharp_panini[90488]: 167 167
Jan 31 02:22:58 np0005603621 systemd[1]: libpod-4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f.scope: Deactivated successfully.
Jan 31 02:22:58 np0005603621 podman[90457]: 2026-01-31 07:22:58.432066452 +0000 UTC m=+0.110783834 container start 6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc (image=quay.io/ceph/ceph:v18, name=determined_bassi, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.431721941 +0000 UTC m=+0.109454121 container attach 4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:22:58 np0005603621 podman[90457]: 2026-01-31 07:22:58.43531567 +0000 UTC m=+0.114033072 container attach 6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc (image=quay.io/ceph/ceph:v18, name=determined_bassi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.435781046 +0000 UTC m=+0.113513126 container died 4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:22:58 np0005603621 podman[90457]: 2026-01-31 07:22:58.339959119 +0000 UTC m=+0.018676531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.350576662 +0000 UTC m=+0.028308752 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-66435aa091ec64c871da91ab95fdd4d7e9973a0ceb41f379ab2d8763d4636c8d-merged.mount: Deactivated successfully.
Jan 31 02:22:58 np0005603621 podman[90455]: 2026-01-31 07:22:58.466620491 +0000 UTC m=+0.144352571 container remove 4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:22:58 np0005603621 systemd[1]: libpod-conmon-4ed48f91b6b06edb2afecc403765aacd25b65abdaf7b5a8a354a78f97223270f.scope: Deactivated successfully.
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: Updating compute-0:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: Updating compute-2:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: Updating compute-1:/var/lib/ceph/2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/config/ceph.conf
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: Cluster is now healthy
Jan 31 02:22:58 np0005603621 podman[90515]: 2026-01-31 07:22:58.58414692 +0000 UTC m=+0.037332213 container create ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:22:58 np0005603621 systemd[1]: Started libpod-conmon-ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c.scope.
Jan 31 02:22:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6397393ccdf2e09ed4ee74826040ee5b988d559f318258bb0083222c3894184c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6397393ccdf2e09ed4ee74826040ee5b988d559f318258bb0083222c3894184c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6397393ccdf2e09ed4ee74826040ee5b988d559f318258bb0083222c3894184c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6397393ccdf2e09ed4ee74826040ee5b988d559f318258bb0083222c3894184c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6397393ccdf2e09ed4ee74826040ee5b988d559f318258bb0083222c3894184c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:58 np0005603621 podman[90515]: 2026-01-31 07:22:58.652633216 +0000 UTC m=+0.105818519 container init ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:58 np0005603621 podman[90515]: 2026-01-31 07:22:58.658938336 +0000 UTC m=+0.112123629 container start ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:22:58 np0005603621 podman[90515]: 2026-01-31 07:22:58.570862937 +0000 UTC m=+0.024048250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:22:58 np0005603621 podman[90515]: 2026-01-31 07:22:58.670371827 +0000 UTC m=+0.123557120 container attach ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:58 np0005603621 ceph-mgr[74689]: mgr.server handle_open ignoring open from mgr.compute-1.gxjgok 192.168.122.101:0/3760284386; not ready for session (expect reconnect)
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ddmhwk(active, since 2m), standbys: compute-2.cdjvtw, compute-1.gxjgok
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.gxjgok", "id": "compute-1.gxjgok"} v 0) v1
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.gxjgok", "id": "compute-1.gxjgok"}]: dispatch
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:22:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 02:22:58 np0005603621 determined_bassi[90486]: 
Jan 31 02:22:58 np0005603621 determined_bassi[90486]: [global]
Jan 31 02:22:58 np0005603621 determined_bassi[90486]: #011fsid = 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2
Jan 31 02:22:58 np0005603621 determined_bassi[90486]: #011mon_host = 192.168.122.100
Jan 31 02:22:58 np0005603621 systemd[1]: libpod-6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc.scope: Deactivated successfully.
Jan 31 02:22:58 np0005603621 podman[90457]: 2026-01-31 07:22:58.998950123 +0000 UTC m=+0.677667535 container died 6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc (image=quay.io/ceph/ceph:v18, name=determined_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-98c1edd96eacdd7d1e23e95f8767cb8aa9c27c57d657728682ba870d1abecba4-merged.mount: Deactivated successfully.
Jan 31 02:22:59 np0005603621 podman[90457]: 2026-01-31 07:22:59.034211005 +0000 UTC m=+0.712928387 container remove 6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc (image=quay.io/ceph/ceph:v18, name=determined_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:22:59 np0005603621 systemd[1]: libpod-conmon-6c062df97bbf9c84b91a84ec2d09ffffa1997b4bc91713d7aea6be5e1bcd67bc.scope: Deactivated successfully.
Jan 31 02:22:59 np0005603621 python3[90595]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:22:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v94: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:22:59 np0005603621 podman[90596]: 2026-01-31 07:22:59.379615181 +0000 UTC m=+0.039059390 container create 797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249 (image=quay.io/ceph/ceph:v18, name=epic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:22:59 np0005603621 systemd[1]: Started libpod-conmon-797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249.scope.
Jan 31 02:22:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:22:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe7c359df1b01400bf29a6097ec9532b5d1394d1c6061fc4bad38409f780c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe7c359df1b01400bf29a6097ec9532b5d1394d1c6061fc4bad38409f780c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cbe7c359df1b01400bf29a6097ec9532b5d1394d1c6061fc4bad38409f780c5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:22:59 np0005603621 podman[90596]: 2026-01-31 07:22:59.453595501 +0000 UTC m=+0.113039720 container init 797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249 (image=quay.io/ceph/ceph:v18, name=epic_beaver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:22:59 np0005603621 podman[90596]: 2026-01-31 07:22:59.361301872 +0000 UTC m=+0.020746121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:22:59 np0005603621 podman[90596]: 2026-01-31 07:22:59.459138235 +0000 UTC m=+0.118582444 container start 797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249 (image=quay.io/ceph/ceph:v18, name=epic_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:22:59 np0005603621 podman[90596]: 2026-01-31 07:22:59.463584463 +0000 UTC m=+0.123028712 container attach 797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249 (image=quay.io/ceph/ceph:v18, name=epic_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:22:59 np0005603621 stupefied_joliot[90531]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:22:59 np0005603621 stupefied_joliot[90531]: --> relative data size: 1.0
Jan 31 02:22:59 np0005603621 stupefied_joliot[90531]: --> All data devices are unavailable
Jan 31 02:22:59 np0005603621 systemd[1]: libpod-ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c.scope: Deactivated successfully.
Jan 31 02:22:59 np0005603621 podman[90515]: 2026-01-31 07:22:59.604932433 +0000 UTC m=+1.058117736 container died ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:22:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6397393ccdf2e09ed4ee74826040ee5b988d559f318258bb0083222c3894184c-merged.mount: Deactivated successfully.
Jan 31 02:22:59 np0005603621 podman[90515]: 2026-01-31 07:22:59.658108091 +0000 UTC m=+1.111293374 container remove ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:22:59 np0005603621 systemd[1]: libpod-conmon-ab802cfe20ba29d46784ea1e7f049e0929cf4d0236f861752413eb644853f20c.scope: Deactivated successfully.
Jan 31 02:22:59 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3074196547' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:22:59 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 31 02:22:59 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 31 02:23:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 31 02:23:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3569125940' entity='client.admin' 
Jan 31 02:23:00 np0005603621 epic_beaver[90611]: set ssl_option
Jan 31 02:23:00 np0005603621 systemd[1]: libpod-797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249.scope: Deactivated successfully.
Jan 31 02:23:00 np0005603621 podman[90596]: 2026-01-31 07:23:00.109237093 +0000 UTC m=+0.768681302 container died 797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249 (image=quay.io/ceph/ceph:v18, name=epic_beaver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:23:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5cbe7c359df1b01400bf29a6097ec9532b5d1394d1c6061fc4bad38409f780c5-merged.mount: Deactivated successfully.
Jan 31 02:23:00 np0005603621 podman[90596]: 2026-01-31 07:23:00.154393004 +0000 UTC m=+0.813837193 container remove 797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249 (image=quay.io/ceph/ceph:v18, name=epic_beaver, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:23:00 np0005603621 systemd[1]: libpod-conmon-797de2d78cf1ee577033df08a5b4b8e06b71fc70aec2fcbfd50eaab59403a249.scope: Deactivated successfully.
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.161318014 +0000 UTC m=+0.055676252 container create 96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:23:00 np0005603621 systemd[1]: Started libpod-conmon-96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33.scope.
Jan 31 02:23:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.221545097 +0000 UTC m=+0.115903365 container init 96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.227374621 +0000 UTC m=+0.121732859 container start 96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:00 np0005603621 clever_easley[90823]: 167 167
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.230482615 +0000 UTC m=+0.124840883 container attach 96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.134385189 +0000 UTC m=+0.028743447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:00 np0005603621 systemd[1]: libpod-96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33.scope: Deactivated successfully.
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.231662753 +0000 UTC m=+0.126020991 container died 96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5cb4b28b4a9e9afc29ca99bf13fc8865f17a37702e1fb22c4717a15fcbaccf7b-merged.mount: Deactivated successfully.
Jan 31 02:23:00 np0005603621 podman[90796]: 2026-01-31 07:23:00.267584898 +0000 UTC m=+0.161943136 container remove 96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:00 np0005603621 systemd[1]: libpod-conmon-96cc671acca7e0c9cf62a702a777d495adb36a770784595c6e6af87dde9bdc33.scope: Deactivated successfully.
Jan 31 02:23:00 np0005603621 podman[90873]: 2026-01-31 07:23:00.394859511 +0000 UTC m=+0.048326049 container create 1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:00 np0005603621 python3[90867]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:00 np0005603621 systemd[1]: Started libpod-conmon-1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2.scope.
Jan 31 02:23:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe0261f153754b9cc76a9629a230cc8a93fdf7758ac1b7104e22eaafd09b202/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 podman[90873]: 2026-01-31 07:23:00.373231912 +0000 UTC m=+0.026698480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe0261f153754b9cc76a9629a230cc8a93fdf7758ac1b7104e22eaafd09b202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe0261f153754b9cc76a9629a230cc8a93fdf7758ac1b7104e22eaafd09b202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe0261f153754b9cc76a9629a230cc8a93fdf7758ac1b7104e22eaafd09b202/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 podman[90873]: 2026-01-31 07:23:00.484982008 +0000 UTC m=+0.138448566 container init 1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:00 np0005603621 podman[90889]: 2026-01-31 07:23:00.487711368 +0000 UTC m=+0.040652973 container create 28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:23:00 np0005603621 podman[90873]: 2026-01-31 07:23:00.491343568 +0000 UTC m=+0.144810136 container start 1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:00 np0005603621 podman[90873]: 2026-01-31 07:23:00.495712244 +0000 UTC m=+0.149178822 container attach 1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:23:00 np0005603621 systemd[1]: Started libpod-conmon-28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b.scope.
Jan 31 02:23:00 np0005603621 podman[90889]: 2026-01-31 07:23:00.464373122 +0000 UTC m=+0.017314767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b549a8518454245102e35771df9c220effae2839eea688e7252b76e81f0f5a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b549a8518454245102e35771df9c220effae2839eea688e7252b76e81f0f5a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3b549a8518454245102e35771df9c220effae2839eea688e7252b76e81f0f5a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:00 np0005603621 podman[90889]: 2026-01-31 07:23:00.594578382 +0000 UTC m=+0.147520057 container init 28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:00 np0005603621 podman[90889]: 2026-01-31 07:23:00.601775061 +0000 UTC m=+0.154716646 container start 28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:23:00 np0005603621 podman[90889]: 2026-01-31 07:23:00.609846899 +0000 UTC m=+0.162788484 container attach 28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:01 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3569125940' entity='client.admin' 
Jan 31 02:23:01 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:23:01 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:01 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 02:23:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:01 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 31 02:23:01 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 31 02:23:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 02:23:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:01 np0005603621 strange_mccarthy[90909]: Scheduled rgw.rgw update...
Jan 31 02:23:01 np0005603621 strange_mccarthy[90909]: Scheduled ingress.rgw.default update...
Jan 31 02:23:01 np0005603621 systemd[1]: libpod-28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b.scope: Deactivated successfully.
Jan 31 02:23:01 np0005603621 podman[90889]: 2026-01-31 07:23:01.172455567 +0000 UTC m=+0.725397182 container died 28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:23:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e3b549a8518454245102e35771df9c220effae2839eea688e7252b76e81f0f5a-merged.mount: Deactivated successfully.
Jan 31 02:23:01 np0005603621 podman[90889]: 2026-01-31 07:23:01.206764979 +0000 UTC m=+0.759706564 container remove 28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b (image=quay.io/ceph/ceph:v18, name=strange_mccarthy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:01 np0005603621 systemd[1]: libpod-conmon-28e36bee37de2362a805c239e4c0e27e001d7e705404e827b9cd61506a55221b.scope: Deactivated successfully.
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]: {
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:    "0": [
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:        {
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "devices": [
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "/dev/loop3"
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            ],
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "lv_name": "ceph_lv0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "lv_size": "7511998464",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "name": "ceph_lv0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "tags": {
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.cluster_name": "ceph",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.crush_device_class": "",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.encrypted": "0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.osd_id": "0",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.type": "block",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:                "ceph.vdo": "0"
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            },
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "type": "block",
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:            "vg_name": "ceph_vg0"
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:        }
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]:    ]
Jan 31 02:23:01 np0005603621 compassionate_perlman[90890]: }
Jan 31 02:23:01 np0005603621 systemd[1]: libpod-1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2.scope: Deactivated successfully.
Jan 31 02:23:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v95: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:01 np0005603621 podman[90949]: 2026-01-31 07:23:01.348490951 +0000 UTC m=+0.021227816 container died 1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:23:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9fe0261f153754b9cc76a9629a230cc8a93fdf7758ac1b7104e22eaafd09b202-merged.mount: Deactivated successfully.
Jan 31 02:23:01 np0005603621 podman[90949]: 2026-01-31 07:23:01.39807288 +0000 UTC m=+0.070809735 container remove 1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:23:01 np0005603621 systemd[1]: libpod-conmon-1b45f8daa77efec64962bbd45724eeccbb679ee25af62f5df9c2960a0102ebc2.scope: Deactivated successfully.
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.864829111 +0000 UTC m=+0.031423276 container create 8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noether, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:01 np0005603621 systemd[1]: Started libpod-conmon-8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984.scope.
Jan 31 02:23:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.93304707 +0000 UTC m=+0.099641255 container init 8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noether, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.938377917 +0000 UTC m=+0.104972112 container start 8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noether, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:01 np0005603621 crazy_noether[91121]: 167 167
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.942331428 +0000 UTC m=+0.108925633 container attach 8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noether, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:01 np0005603621 systemd[1]: libpod-8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984.scope: Deactivated successfully.
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.942874216 +0000 UTC m=+0.109468391 container died 8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noether, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.85035266 +0000 UTC m=+0.016946845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ea7e56203d377e926fb3e2778223000da9a503558c7cf3bde558de9c11ed49df-merged.mount: Deactivated successfully.
Jan 31 02:23:01 np0005603621 podman[91104]: 2026-01-31 07:23:01.981555992 +0000 UTC m=+0.148150157 container remove 8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_noether, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:23:01 np0005603621 systemd[1]: libpod-conmon-8e1f2401ccb73a9a5f2e2a738fdc0af8a72429a3b1bb42c78047d85c6b3d6984.scope: Deactivated successfully.
Jan 31 02:23:02 np0005603621 podman[91150]: 2026-01-31 07:23:02.09726513 +0000 UTC m=+0.035830692 container create 97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:23:02 np0005603621 systemd[1]: Started libpod-conmon-97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5.scope.
Jan 31 02:23:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f738afd8ead76ef6e09ff3a394d40a7e12e231556a9e0626d950eff7a2df190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f738afd8ead76ef6e09ff3a394d40a7e12e231556a9e0626d950eff7a2df190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f738afd8ead76ef6e09ff3a394d40a7e12e231556a9e0626d950eff7a2df190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f738afd8ead76ef6e09ff3a394d40a7e12e231556a9e0626d950eff7a2df190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:02 np0005603621 ceph-mon[74394]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:02 np0005603621 ceph-mon[74394]: Saving service ingress.rgw.default spec with placement count:2
Jan 31 02:23:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:02 np0005603621 podman[91150]: 2026-01-31 07:23:02.160905936 +0000 UTC m=+0.099471518 container init 97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:23:02 np0005603621 podman[91150]: 2026-01-31 07:23:02.166496812 +0000 UTC m=+0.105062374 container start 97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:23:02 np0005603621 podman[91150]: 2026-01-31 07:23:02.171326743 +0000 UTC m=+0.109892305 container attach 97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:02 np0005603621 podman[91150]: 2026-01-31 07:23:02.080559635 +0000 UTC m=+0.019125227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:02 np0005603621 python3[91241]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:23:02 np0005603621 python3[91312]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844182.081304-37550-35941675353230/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]: {
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:        "osd_id": 0,
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:        "type": "bluestore"
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]:    }
Jan 31 02:23:02 np0005603621 laughing_driscoll[91213]: }
Jan 31 02:23:03 np0005603621 systemd[1]: libpod-97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5.scope: Deactivated successfully.
Jan 31 02:23:03 np0005603621 podman[91150]: 2026-01-31 07:23:03.011007764 +0000 UTC m=+0.949573326 container died 97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9f738afd8ead76ef6e09ff3a394d40a7e12e231556a9e0626d950eff7a2df190-merged.mount: Deactivated successfully.
Jan 31 02:23:03 np0005603621 podman[91150]: 2026-01-31 07:23:03.059132795 +0000 UTC m=+0.997698357 container remove 97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:23:03 np0005603621 systemd[1]: libpod-conmon-97b688d6dd4d2a03f05d8b66b2cb952bc26e2a18f79c91af31cc98c78d1330c5.scope: Deactivated successfully.
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:03 np0005603621 python3[91378]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.169087081 +0000 UTC m=+0.039106831 container create 7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de (image=quay.io/ceph/ceph:v18, name=festive_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:23:03 np0005603621 systemd[1]: Started libpod-conmon-7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de.scope.
Jan 31 02:23:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86c863f025bd2d8136cf3151fa1f555c0c088083367c53223127bea6e064bb2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86c863f025bd2d8136cf3151fa1f555c0c088083367c53223127bea6e064bb2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d86c863f025bd2d8136cf3151fa1f555c0c088083367c53223127bea6e064bb2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.151109493 +0000 UTC m=+0.021129253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.248959887 +0000 UTC m=+0.118979627 container init 7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de (image=quay.io/ceph/ceph:v18, name=festive_greider, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.255452543 +0000 UTC m=+0.125472273 container start 7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de (image=quay.io/ceph/ceph:v18, name=festive_greider, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.258269127 +0000 UTC m=+0.128288867 container attach 7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de (image=quay.io/ceph/ceph:v18, name=festive_greider, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v96: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 02:23:03 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0[74390]: 2026-01-31T07:23:03.854+0000 7f2c55c1c640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e2 new map
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:03.855587+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 02:23:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:03 np0005603621 ceph-mgr[74689]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 31 02:23:03 np0005603621 podman[91596]: 2026-01-31 07:23:03.894296536 +0000 UTC m=+0.039895337 container create db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:23:03 np0005603621 systemd[1]: libpod-7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de.scope: Deactivated successfully.
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.918577573 +0000 UTC m=+0.788597303 container died 7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de (image=quay.io/ceph/ceph:v18, name=festive_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:03 np0005603621 systemd[1]: Started libpod-conmon-db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0.scope.
Jan 31 02:23:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d86c863f025bd2d8136cf3151fa1f555c0c088083367c53223127bea6e064bb2-merged.mount: Deactivated successfully.
Jan 31 02:23:03 np0005603621 podman[91596]: 2026-01-31 07:23:03.952830363 +0000 UTC m=+0.098429194 container init db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:03 np0005603621 podman[91596]: 2026-01-31 07:23:03.956619669 +0000 UTC m=+0.102218470 container start db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:23:03 np0005603621 priceless_clarke[91620]: 167 167
Jan 31 02:23:03 np0005603621 systemd[1]: libpod-db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0.scope: Deactivated successfully.
Jan 31 02:23:03 np0005603621 podman[91596]: 2026-01-31 07:23:03.971269866 +0000 UTC m=+0.116868687 container attach db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:23:03 np0005603621 podman[91596]: 2026-01-31 07:23:03.971893026 +0000 UTC m=+0.117491847 container died db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:23:03 np0005603621 podman[91596]: 2026-01-31 07:23:03.879152782 +0000 UTC m=+0.024751593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:03 np0005603621 podman[91394]: 2026-01-31 07:23:03.988714276 +0000 UTC m=+0.858734006 container remove 7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de (image=quay.io/ceph/ceph:v18, name=festive_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:23:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3bbeaf6c6bdbbe722075982e4adccf76a6c7e8d6e11cb7a6ee06ba1e31525acf-merged.mount: Deactivated successfully.
Jan 31 02:23:04 np0005603621 systemd[1]: libpod-conmon-7c81e2ec6ad8303459afd1c72e41a290102cdc87989df9cc02fa49f63d2924de.scope: Deactivated successfully.
Jan 31 02:23:04 np0005603621 podman[91596]: 2026-01-31 07:23:04.01316995 +0000 UTC m=+0.158768751 container remove db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:04 np0005603621 systemd[1]: libpod-conmon-db23938024d0b543dec9301c067e2562526ea2fd2f14d48b4fb501a0a983c8d0.scope: Deactivated successfully.
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ddmhwk (monmap changed)...
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ddmhwk (monmap changed)...
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ddmhwk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ddmhwk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ddmhwk on compute-0
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ddmhwk on compute-0
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ddmhwk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:23:04 np0005603621 python3[91698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:04 np0005603621 podman[91774]: 2026-01-31 07:23:04.358620586 +0000 UTC m=+0.049116013 container create 9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6 (image=quay.io/ceph/ceph:v18, name=boring_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:23:04 np0005603621 systemd[1]: Started libpod-conmon-9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6.scope.
Jan 31 02:23:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b05b301fb1ba967c86c786e9d673867c16837625fe8f598d7c8481f045289ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b05b301fb1ba967c86c786e9d673867c16837625fe8f598d7c8481f045289ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b05b301fb1ba967c86c786e9d673867c16837625fe8f598d7c8481f045289ef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:04 np0005603621 podman[91774]: 2026-01-31 07:23:04.423837295 +0000 UTC m=+0.114332732 container init 9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6 (image=quay.io/ceph/ceph:v18, name=boring_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:04 np0005603621 podman[91774]: 2026-01-31 07:23:04.428607124 +0000 UTC m=+0.119102541 container start 9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6 (image=quay.io/ceph/ceph:v18, name=boring_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:04 np0005603621 podman[91774]: 2026-01-31 07:23:04.334698051 +0000 UTC m=+0.025193498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:04 np0005603621 podman[91774]: 2026-01-31 07:23:04.432014797 +0000 UTC m=+0.122510234 container attach 9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6 (image=quay.io/ceph/ceph:v18, name=boring_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.524812773 +0000 UTC m=+0.041799781 container create dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:04 np0005603621 systemd[1]: Started libpod-conmon-dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944.scope.
Jan 31 02:23:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.574312689 +0000 UTC m=+0.091299697 container init dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.577595888 +0000 UTC m=+0.094582896 container start dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bell, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:23:04 np0005603621 cranky_bell[91825]: 167 167
Jan 31 02:23:04 np0005603621 systemd[1]: libpod-dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944.scope: Deactivated successfully.
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.582088847 +0000 UTC m=+0.099075855 container attach dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.582373987 +0000 UTC m=+0.099360995 container died dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bell, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:23:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b787576d136966d395b2124b83db2a1e0827967f450d4ae0780018083a91b990-merged.mount: Deactivated successfully.
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.507111984 +0000 UTC m=+0.024099022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:04 np0005603621 podman[91809]: 2026-01-31 07:23:04.617897868 +0000 UTC m=+0.134884876 container remove dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bell, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:23:04 np0005603621 systemd[1]: libpod-conmon-dc17696f1278b7b293917fafb9ee44e7f174419a5fa72164cb2c56ad2e3ab944.scope: Deactivated successfully.
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14325 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:04 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 boring_mclaren[91790]: Scheduled mds.cephfs update...
Jan 31 02:23:05 np0005603621 systemd[1]: libpod-9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6.scope: Deactivated successfully.
Jan 31 02:23:05 np0005603621 podman[91774]: 2026-01-31 07:23:05.024785588 +0000 UTC m=+0.715281005 container died 9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6 (image=quay.io/ceph/ceph:v18, name=boring_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:23:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4b05b301fb1ba967c86c786e9d673867c16837625fe8f598d7c8481f045289ef-merged.mount: Deactivated successfully.
Jan 31 02:23:05 np0005603621 podman[91774]: 2026-01-31 07:23:05.068671607 +0000 UTC m=+0.759167024 container remove 9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6 (image=quay.io/ceph/ceph:v18, name=boring_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:05 np0005603621 systemd[1]: libpod-conmon-9bac44d2a77d97114d4edfdb889229828a58732214d66bead79bc3fef5a727e6.scope: Deactivated successfully.
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.119376783 +0000 UTC m=+0.034716155 container create 18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:05 np0005603621 systemd[1]: Started libpod-conmon-18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6.scope.
Jan 31 02:23:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.167458252 +0000 UTC m=+0.082797644 container init 18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.17187253 +0000 UTC m=+0.087211922 container start 18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:23:05 np0005603621 priceless_dewdney[92008]: 167 167
Jan 31 02:23:05 np0005603621 systemd[1]: libpod-18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6.scope: Deactivated successfully.
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.175482299 +0000 UTC m=+0.090821691 container attach 18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.176203803 +0000 UTC m=+0.091543175 container died 18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.106402142 +0000 UTC m=+0.021741534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:05 np0005603621 podman[91991]: 2026-01-31 07:23:05.208975293 +0000 UTC m=+0.124314665 container remove 18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:05 np0005603621 systemd[1]: libpod-conmon-18037d0d089b4fb78dd51c7b0fc632d87e21499a8ea776d2f466a7c20bb243a6.scope: Deactivated successfully.
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v98: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-400ff11d071ca8f9d7bb164f6435a3eafbc0817ee32ed34527e2d41eb5a90040-merged.mount: Deactivated successfully.
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: Reconfiguring mgr.compute-0.ddmhwk (monmap changed)...
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: Reconfiguring daemon mgr.compute-0.ddmhwk on compute-0
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.697313092 +0000 UTC m=+0.039883038 container create c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_solomon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:05 np0005603621 systemd[1]: Started libpod-conmon-c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92.scope.
Jan 31 02:23:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.763880465 +0000 UTC m=+0.106450421 container init c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_solomon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.770754484 +0000 UTC m=+0.113324420 container start c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:05 np0005603621 nice_solomon[92158]: 167 167
Jan 31 02:23:05 np0005603621 systemd[1]: libpod-c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92.scope: Deactivated successfully.
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.774592371 +0000 UTC m=+0.117162357 container attach c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_solomon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.67923438 +0000 UTC m=+0.021804356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.775113249 +0000 UTC m=+0.117683205 container died c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:23:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6abc4106d73aa0cd520a403f49fd3b26c9249c8dca59bd06c4c267b181aa41a9-merged.mount: Deactivated successfully.
Jan 31 02:23:05 np0005603621 podman[92143]: 2026-01-31 07:23:05.815219153 +0000 UTC m=+0.157789089 container remove c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_solomon, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:23:05 np0005603621 systemd[1]: libpod-conmon-c181aa0e7e87692694cce310fa60a0088efc1c004e9ff18fe223bee1cfc06d92.scope: Deactivated successfully.
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 02:23:05 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 02:23:06 np0005603621 python3[92264]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 31 02:23:06 np0005603621 python3[92337]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844185.9008105-37598-37699340750226/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=35152db97829fbbc30ac5e5c6e1f42921e77a1a7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:06 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 31 02:23:06 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:06 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 31 02:23:06 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: Reconfiguring osd.0 (monmap changed)...
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: Reconfiguring daemon osd.0 on compute-0
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 31 02:23:07 np0005603621 python3[92387]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.068976963 +0000 UTC m=+0.034577401 container create 3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248 (image=quay.io/ceph/ceph:v18, name=flamboyant_spence, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:07 np0005603621 systemd[1]: Started libpod-conmon-3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248.scope.
Jan 31 02:23:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9b32dbd89a240703ffc6b4b928653f34c9f62181cd995849bb3af932fe12c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c9b32dbd89a240703ffc6b4b928653f34c9f62181cd995849bb3af932fe12c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.054660667 +0000 UTC m=+0.020261115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.154035402 +0000 UTC m=+0.119635850 container init 3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248 (image=quay.io/ceph/ceph:v18, name=flamboyant_spence, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.158281453 +0000 UTC m=+0.123881891 container start 3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248 (image=quay.io/ceph/ceph:v18, name=flamboyant_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.161914104 +0000 UTC m=+0.127514652 container attach 3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248 (image=quay.io/ceph/ceph:v18, name=flamboyant_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v99: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: Reconfiguring osd.1 (monmap changed)...
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: Reconfiguring daemon osd.1 on compute-1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/720950555' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/720950555' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 02:23:07 np0005603621 systemd[1]: libpod-3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248.scope: Deactivated successfully.
Jan 31 02:23:07 np0005603621 conmon[92403]: conmon 3137b533c1a99cbc5396 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248.scope/container/memory.events
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.811922279 +0000 UTC m=+0.777522717 container died 3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248 (image=quay.io/ceph/ceph:v18, name=flamboyant_spence, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8c9b32dbd89a240703ffc6b4b928653f34c9f62181cd995849bb3af932fe12c3-merged.mount: Deactivated successfully.
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:07 np0005603621 podman[92388]: 2026-01-31 07:23:07.864640601 +0000 UTC m=+0.830241039 container remove 3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248 (image=quay.io/ceph/ceph:v18, name=flamboyant_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 02:23:07 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 02:23:07 np0005603621 systemd[1]: libpod-conmon-3137b533c1a99cbc53960bd34fd30088221723c80b9811339ccf0950cd4cc248.scope: Deactivated successfully.
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.cdjvtw (monmap changed)...
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.cdjvtw (monmap changed)...
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.cdjvtw on compute-2
Jan 31 02:23:08 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.cdjvtw on compute-2
Jan 31 02:23:08 np0005603621 python3[92465]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:08 np0005603621 podman[92467]: 2026-01-31 07:23:08.631920756 +0000 UTC m=+0.051557276 container create 70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67 (image=quay.io/ceph/ceph:v18, name=recursing_easley, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:08 np0005603621 systemd[1]: Started libpod-conmon-70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67.scope.
Jan 31 02:23:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b97752fec6f2c117ef45bddff5d5a41f73f30b0dd891ce2d81a5fe3be2fff8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b97752fec6f2c117ef45bddff5d5a41f73f30b0dd891ce2d81a5fe3be2fff8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:08 np0005603621 podman[92467]: 2026-01-31 07:23:08.698407656 +0000 UTC m=+0.118044206 container init 70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67 (image=quay.io/ceph/ceph:v18, name=recursing_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:23:08 np0005603621 podman[92467]: 2026-01-31 07:23:08.610156511 +0000 UTC m=+0.029793081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:08 np0005603621 podman[92467]: 2026-01-31 07:23:08.704159898 +0000 UTC m=+0.123796418 container start 70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67 (image=quay.io/ceph/ceph:v18, name=recursing_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 31 02:23:08 np0005603621 podman[92467]: 2026-01-31 07:23:08.708323935 +0000 UTC m=+0.127960485 container attach 70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67 (image=quay.io/ceph/ceph:v18, name=recursing_easley, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/720950555' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/720950555' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: Reconfiguring mgr.compute-2.cdjvtw (monmap changed)...
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cdjvtw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: Reconfiguring daemon mgr.compute-2.cdjvtw on compute-2
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950126233' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 02:23:09 np0005603621 recursing_easley[92483]: 
Jan 31 02:23:09 np0005603621 recursing_easley[92483]: {"fsid":"2f5ab832-5f2e-5a84-bd93-cf8bab960ee2","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":33,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1769844176,"num_in_osds":3,"osd_in_since":1769844161,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":503238656,"bytes_avail":22032756736,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-31T07:23:07.340397+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.gxjgok":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.cdjvtw":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 31 02:23:09 np0005603621 systemd[1]: libpod-70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67.scope: Deactivated successfully.
Jan 31 02:23:09 np0005603621 podman[92467]: 2026-01-31 07:23:09.412483031 +0000 UTC m=+0.832119551 container died 70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67 (image=quay.io/ceph/ceph:v18, name=recursing_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-31b97752fec6f2c117ef45bddff5d5a41f73f30b0dd891ce2d81a5fe3be2fff8-merged.mount: Deactivated successfully.
Jan 31 02:23:09 np0005603621 podman[92467]: 2026-01-31 07:23:09.464269683 +0000 UTC m=+0.883906193 container remove 70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67 (image=quay.io/ceph/ceph:v18, name=recursing_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:09 np0005603621 systemd[1]: libpod-conmon-70d701683800dc24e28144aee7552495f911ad857627cda41548d8adf88a5a67.scope: Deactivated successfully.
Jan 31 02:23:09 np0005603621 python3[92673]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:09 np0005603621 podman[92674]: 2026-01-31 07:23:09.802389797 +0000 UTC m=+0.044617235 container create 332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 systemd[1]: Started libpod-conmon-332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9.scope.
Jan 31 02:23:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a3a56159ef39722b130cdcb9176ad5214f854ac689d352eee9e468f9c246a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72a3a56159ef39722b130cdcb9176ad5214f854ac689d352eee9e468f9c246a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:09 np0005603621 podman[92674]: 2026-01-31 07:23:09.785735003 +0000 UTC m=+0.027962461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:09 np0005603621 podman[92674]: 2026-01-31 07:23:09.88366358 +0000 UTC m=+0.125891028 container init 332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:23:09 np0005603621 podman[92674]: 2026-01-31 07:23:09.88849085 +0000 UTC m=+0.130718298 container start 332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:09 np0005603621 podman[92674]: 2026-01-31 07:23:09.891567992 +0000 UTC m=+0.133795460 container attach 332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b632e4fb-8070-4963-b59b-415db1451e97 does not exist
Jan 31 02:23:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c2d97f1c-618c-45d4-af44-46c5b00e7665 does not exist
Jan 31 02:23:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4bc7b624-0df2-473f-937d-4e322a907eda does not exist
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:23:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243653670' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:23:10 np0005603621 clever_maxwell[92689]: 
Jan 31 02:23:10 np0005603621 clever_maxwell[92689]: {"epoch":3,"fsid":"2f5ab832-5f2e-5a84-bd93-cf8bab960ee2","modified":"2026-01-31T07:22:31.221989Z","created":"2026-01-31T07:19:43.394621Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 31 02:23:10 np0005603621 clever_maxwell[92689]: dumped monmap epoch 3
Jan 31 02:23:10 np0005603621 systemd[1]: libpod-332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9.scope: Deactivated successfully.
Jan 31 02:23:10 np0005603621 podman[92674]: 2026-01-31 07:23:10.539675613 +0000 UTC m=+0.781903051 container died 332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:23:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-72a3a56159ef39722b130cdcb9176ad5214f854ac689d352eee9e468f9c246a1-merged.mount: Deactivated successfully.
Jan 31 02:23:10 np0005603621 podman[92674]: 2026-01-31 07:23:10.580219492 +0000 UTC m=+0.822446920 container remove 332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9 (image=quay.io/ceph/ceph:v18, name=clever_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:23:10 np0005603621 systemd[1]: libpod-conmon-332be8be6a26ddf8e4341eb3214fa0511bd535d64694a5e3156963b4b10711f9.scope: Deactivated successfully.
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:11.001623804 +0000 UTC m=+0.036536106 container create 0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:23:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:23:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:23:11 np0005603621 systemd[1]: Started libpod-conmon-0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676.scope.
Jan 31 02:23:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:11.061206856 +0000 UTC m=+0.096119188 container init 0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wright, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:11.067814136 +0000 UTC m=+0.102726438 container start 0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:11 np0005603621 hardcore_wright[92886]: 167 167
Jan 31 02:23:11 np0005603621 systemd[1]: libpod-0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676.scope: Deactivated successfully.
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:11.072125139 +0000 UTC m=+0.107037461 container attach 0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:23:11 np0005603621 conmon[92886]: conmon 0a099384edba57710589 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676.scope/container/memory.events
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:11.07333998 +0000 UTC m=+0.108252292 container died 0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:10.985505899 +0000 UTC m=+0.020418221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-76281867f35d23efcb0bba65fb87be21fb94db0ba0ff8bc07e2260e52ffcc838-merged.mount: Deactivated successfully.
Jan 31 02:23:11 np0005603621 podman[92869]: 2026-01-31 07:23:11.111921692 +0000 UTC m=+0.146833994 container remove 0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:23:11 np0005603621 systemd[1]: libpod-conmon-0a099384edba577105895fa9db0172550710bc5692e06089f93ba3fc5455e676.scope: Deactivated successfully.
Jan 31 02:23:11 np0005603621 python3[92928]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:11 np0005603621 podman[92936]: 2026-01-31 07:23:11.25081817 +0000 UTC m=+0.045774502 container create 0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:11 np0005603621 systemd[1]: Started libpod-conmon-0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e.scope.
Jan 31 02:23:11 np0005603621 podman[92950]: 2026-01-31 07:23:11.31305983 +0000 UTC m=+0.053897423 container create 81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871 (image=quay.io/ceph/ceph:v18, name=eager_jemison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:23:11 np0005603621 podman[92936]: 2026-01-31 07:23:11.232801752 +0000 UTC m=+0.027758094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:11 np0005603621 systemd[1]: Started libpod-conmon-81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871.scope.
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0353b7cb5bedfe1e08460154967b6762a3bceaeb7f4a0d5f520bf74acda0d619/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0353b7cb5bedfe1e08460154967b6762a3bceaeb7f4a0d5f520bf74acda0d619/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0353b7cb5bedfe1e08460154967b6762a3bceaeb7f4a0d5f520bf74acda0d619/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0353b7cb5bedfe1e08460154967b6762a3bceaeb7f4a0d5f520bf74acda0d619/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0353b7cb5bedfe1e08460154967b6762a3bceaeb7f4a0d5f520bf74acda0d619/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v101: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:11 np0005603621 podman[92936]: 2026-01-31 07:23:11.356722752 +0000 UTC m=+0.151679114 container init 0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 31 02:23:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2537e309431907df0c4d689a4778b6ad6243f210830eccb4f633a689375fcc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d2537e309431907df0c4d689a4778b6ad6243f210830eccb4f633a689375fcc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:11 np0005603621 podman[92936]: 2026-01-31 07:23:11.364585854 +0000 UTC m=+0.159542186 container start 0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:11 np0005603621 podman[92936]: 2026-01-31 07:23:11.368353939 +0000 UTC m=+0.163310271 container attach 0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:11 np0005603621 podman[92950]: 2026-01-31 07:23:11.374802114 +0000 UTC m=+0.115639737 container init 81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871 (image=quay.io/ceph/ceph:v18, name=eager_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:11 np0005603621 podman[92950]: 2026-01-31 07:23:11.384217367 +0000 UTC m=+0.125054980 container start 81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871 (image=quay.io/ceph/ceph:v18, name=eager_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:23:11 np0005603621 podman[92950]: 2026-01-31 07:23:11.291961319 +0000 UTC m=+0.032798942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:11 np0005603621 podman[92950]: 2026-01-31 07:23:11.38822676 +0000 UTC m=+0.129064353 container attach 81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871 (image=quay.io/ceph/ceph:v18, name=eager_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 31 02:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/992138592' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 02:23:12 np0005603621 eager_jemison[92971]: [client.openstack]
Jan 31 02:23:12 np0005603621 eager_jemison[92971]: 	key = AQD2rH1pAAAAABAAAwMm6zxHUiRLXw6m6+H9Ow==
Jan 31 02:23:12 np0005603621 eager_jemison[92971]: 	caps mgr = "allow *"
Jan 31 02:23:12 np0005603621 eager_jemison[92971]: 	caps mon = "profile rbd"
Jan 31 02:23:12 np0005603621 eager_jemison[92971]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 31 02:23:12 np0005603621 systemd[1]: libpod-81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871.scope: Deactivated successfully.
Jan 31 02:23:12 np0005603621 podman[92950]: 2026-01-31 07:23:12.069580097 +0000 UTC m=+0.810417730 container died 81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871 (image=quay.io/ceph/ceph:v18, name=eager_jemison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2d2537e309431907df0c4d689a4778b6ad6243f210830eccb4f633a689375fcc-merged.mount: Deactivated successfully.
Jan 31 02:23:12 np0005603621 podman[92950]: 2026-01-31 07:23:12.118088619 +0000 UTC m=+0.858926212 container remove 81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871 (image=quay.io/ceph/ceph:v18, name=eager_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:12 np0005603621 systemd[1]: libpod-conmon-81ca32e074973a543700fe897e07865c454e9dcc56c94d62fe021898194e2871.scope: Deactivated successfully.
Jan 31 02:23:12 np0005603621 naughty_gates[92966]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:23:12 np0005603621 naughty_gates[92966]: --> relative data size: 1.0
Jan 31 02:23:12 np0005603621 naughty_gates[92966]: --> All data devices are unavailable
Jan 31 02:23:12 np0005603621 systemd[1]: libpod-0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e.scope: Deactivated successfully.
Jan 31 02:23:12 np0005603621 conmon[92966]: conmon 0720fef85bb3842712c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e.scope/container/memory.events
Jan 31 02:23:12 np0005603621 podman[92936]: 2026-01-31 07:23:12.209853972 +0000 UTC m=+1.004810304 container died 0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:23:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0353b7cb5bedfe1e08460154967b6762a3bceaeb7f4a0d5f520bf74acda0d619-merged.mount: Deactivated successfully.
Jan 31 02:23:12 np0005603621 podman[92936]: 2026-01-31 07:23:12.253221033 +0000 UTC m=+1.048177365 container remove 0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_gates, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:23:12 np0005603621 systemd[1]: libpod-conmon-0720fef85bb3842712c6faf8bf936c18f866f1f5e08b51e6c38b2e506823089e.scope: Deactivated successfully.
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.811474047 +0000 UTC m=+0.038163181 container create 564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:23:12 np0005603621 systemd[1]: Started libpod-conmon-564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082.scope.
Jan 31 02:23:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.795332281 +0000 UTC m=+0.022021435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.892732709 +0000 UTC m=+0.119421893 container init 564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.90268972 +0000 UTC m=+0.129378874 container start 564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.90752581 +0000 UTC m=+0.134214964 container attach 564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:23:12 np0005603621 sweet_keldysh[93192]: 167 167
Jan 31 02:23:12 np0005603621 systemd[1]: libpod-564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082.scope: Deactivated successfully.
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.909356072 +0000 UTC m=+0.136045216 container died 564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:23:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-211acf70faf37dff400921d2a917ad6801673e21f28660374622eddd652d0546-merged.mount: Deactivated successfully.
Jan 31 02:23:12 np0005603621 podman[93176]: 2026-01-31 07:23:12.943480677 +0000 UTC m=+0.170169801 container remove 564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:12 np0005603621 systemd[1]: libpod-conmon-564573b28ba82d0d9e44d55be9e562d99c77e0dcf62b58ab6b3aa423d2221082.scope: Deactivated successfully.
Jan 31 02:23:13 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/992138592' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 31 02:23:13 np0005603621 podman[93214]: 2026-01-31 07:23:13.1104779 +0000 UTC m=+0.060464262 container create 50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:23:13 np0005603621 systemd[1]: Started libpod-conmon-50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6.scope.
Jan 31 02:23:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ad63ea0e901e8501e5fa6f891ee9ecc00cfd3e328ab5abdef4f3d7e5183bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ad63ea0e901e8501e5fa6f891ee9ecc00cfd3e328ab5abdef4f3d7e5183bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ad63ea0e901e8501e5fa6f891ee9ecc00cfd3e328ab5abdef4f3d7e5183bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975ad63ea0e901e8501e5fa6f891ee9ecc00cfd3e328ab5abdef4f3d7e5183bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:13 np0005603621 podman[93214]: 2026-01-31 07:23:13.092658897 +0000 UTC m=+0.042645309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:13 np0005603621 podman[93214]: 2026-01-31 07:23:13.207495515 +0000 UTC m=+0.157481897 container init 50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:23:13 np0005603621 podman[93214]: 2026-01-31 07:23:13.213165814 +0000 UTC m=+0.163152176 container start 50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:13 np0005603621 podman[93214]: 2026-01-31 07:23:13.218909035 +0000 UTC m=+0.168895417 container attach 50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:23:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v102: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:13 np0005603621 ansible-async_wrapper.py[93384]: Invoked with j721619293593 30 /home/zuul/.ansible/tmp/ansible-tmp-1769844193.1057785-37670-159079180591893/AnsiballZ_command.py _
Jan 31 02:23:13 np0005603621 ansible-async_wrapper.py[93387]: Starting module and watcher
Jan 31 02:23:13 np0005603621 ansible-async_wrapper.py[93387]: Start watching 93388 (30)
Jan 31 02:23:13 np0005603621 ansible-async_wrapper.py[93388]: Start module (93388)
Jan 31 02:23:13 np0005603621 ansible-async_wrapper.py[93384]: Return async_wrapper task started.
Jan 31 02:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:13 np0005603621 python3[93389]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:13 np0005603621 podman[93390]: 2026-01-31 07:23:13.76970812 +0000 UTC m=+0.062560740 container create 29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e (image=quay.io/ceph/ceph:v18, name=intelligent_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:23:13 np0005603621 systemd[1]: Started libpod-conmon-29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e.scope.
Jan 31 02:23:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c06b86e6513fc108cc49769882be21cfdb2a6c14b6b85d27186d4883842d969/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c06b86e6513fc108cc49769882be21cfdb2a6c14b6b85d27186d4883842d969/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:13 np0005603621 podman[93390]: 2026-01-31 07:23:13.746201529 +0000 UTC m=+0.039054229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:13 np0005603621 podman[93390]: 2026-01-31 07:23:13.855325688 +0000 UTC m=+0.148178308 container init 29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e (image=quay.io/ceph/ceph:v18, name=intelligent_goldwasser, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:23:13 np0005603621 podman[93390]: 2026-01-31 07:23:13.861427501 +0000 UTC m=+0.154280141 container start 29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e (image=quay.io/ceph/ceph:v18, name=intelligent_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:23:13 np0005603621 podman[93390]: 2026-01-31 07:23:13.865585899 +0000 UTC m=+0.158438549 container attach 29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e (image=quay.io/ceph/ceph:v18, name=intelligent_goldwasser, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:13 np0005603621 serene_banach[93274]: {
Jan 31 02:23:13 np0005603621 serene_banach[93274]:    "0": [
Jan 31 02:23:13 np0005603621 serene_banach[93274]:        {
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "devices": [
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "/dev/loop3"
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            ],
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "lv_name": "ceph_lv0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "lv_size": "7511998464",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "name": "ceph_lv0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "tags": {
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.cluster_name": "ceph",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.crush_device_class": "",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.encrypted": "0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.osd_id": "0",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.type": "block",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:                "ceph.vdo": "0"
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            },
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "type": "block",
Jan 31 02:23:13 np0005603621 serene_banach[93274]:            "vg_name": "ceph_vg0"
Jan 31 02:23:13 np0005603621 serene_banach[93274]:        }
Jan 31 02:23:13 np0005603621 serene_banach[93274]:    ]
Jan 31 02:23:13 np0005603621 serene_banach[93274]: }
Jan 31 02:23:13 np0005603621 systemd[1]: libpod-50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6.scope: Deactivated successfully.
Jan 31 02:23:13 np0005603621 podman[93214]: 2026-01-31 07:23:13.97959117 +0000 UTC m=+0.929577532 container died 50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-975ad63ea0e901e8501e5fa6f891ee9ecc00cfd3e328ab5abdef4f3d7e5183bd-merged.mount: Deactivated successfully.
Jan 31 02:23:14 np0005603621 podman[93214]: 2026-01-31 07:23:14.042191731 +0000 UTC m=+0.992178093 container remove 50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_banach, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:23:14 np0005603621 systemd[1]: libpod-conmon-50eab911e1617287e7bf6de3b75e7b68739a2aa165af9fce3f16d6fbe8d21be6.scope: Deactivated successfully.
Jan 31 02:23:14 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 02:23:14 np0005603621 intelligent_goldwasser[93405]: 
Jan 31 02:23:14 np0005603621 intelligent_goldwasser[93405]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 02:23:14 np0005603621 systemd[1]: libpod-29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e.scope: Deactivated successfully.
Jan 31 02:23:14 np0005603621 podman[93390]: 2026-01-31 07:23:14.435105757 +0000 UTC m=+0.727958397 container died 29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e (image=quay.io/ceph/ceph:v18, name=intelligent_goldwasser, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9c06b86e6513fc108cc49769882be21cfdb2a6c14b6b85d27186d4883842d969-merged.mount: Deactivated successfully.
Jan 31 02:23:14 np0005603621 podman[93390]: 2026-01-31 07:23:14.472708487 +0000 UTC m=+0.765561107 container remove 29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e (image=quay.io/ceph/ceph:v18, name=intelligent_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:23:14 np0005603621 systemd[1]: libpod-conmon-29e2e0fb9ddf7d298e53cae3178e665a97b90b93ce700927b78a841e0bad1c2e.scope: Deactivated successfully.
Jan 31 02:23:14 np0005603621 ansible-async_wrapper.py[93388]: Module complete (93388)
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.562864475 +0000 UTC m=+0.029771720 container create ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:23:14 np0005603621 systemd[1]: Started libpod-conmon-ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954.scope.
Jan 31 02:23:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.623061897 +0000 UTC m=+0.089969142 container init ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.627529856 +0000 UTC m=+0.094437101 container start ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hodgkin, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:23:14 np0005603621 vigilant_hodgkin[93616]: 167 167
Jan 31 02:23:14 np0005603621 systemd[1]: libpod-ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954.scope: Deactivated successfully.
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.631340233 +0000 UTC m=+0.098247488 container attach ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.631662334 +0000 UTC m=+0.098569589 container died ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.548630182 +0000 UTC m=+0.015537437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7a6b439aac140c9043fb97d27ed2acc526ee14375ddb7ac9e673af69deb860e0-merged.mount: Deactivated successfully.
Jan 31 02:23:14 np0005603621 podman[93599]: 2026-01-31 07:23:14.666527072 +0000 UTC m=+0.133434327 container remove ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:23:14 np0005603621 systemd[1]: libpod-conmon-ab151220e15eec3892ebfd4771f3c638ec01d0dd01c71b345cd11adfee500954.scope: Deactivated successfully.
Jan 31 02:23:14 np0005603621 podman[93689]: 2026-01-31 07:23:14.778253527 +0000 UTC m=+0.038847592 container create da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:23:14 np0005603621 systemd[1]: Started libpod-conmon-da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa.scope.
Jan 31 02:23:14 np0005603621 python3[93683]: ansible-ansible.legacy.async_status Invoked with jid=j721619293593.93384 mode=status _async_dir=/root/.ansible_async
Jan 31 02:23:14 np0005603621 podman[93689]: 2026-01-31 07:23:14.759347659 +0000 UTC m=+0.019941744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a567d53ab8c19bb3ff0ba1f1c1f2af9eb7962dae3088b5eba732e6112ab62a79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a567d53ab8c19bb3ff0ba1f1c1f2af9eb7962dae3088b5eba732e6112ab62a79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a567d53ab8c19bb3ff0ba1f1c1f2af9eb7962dae3088b5eba732e6112ab62a79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a567d53ab8c19bb3ff0ba1f1c1f2af9eb7962dae3088b5eba732e6112ab62a79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:14 np0005603621 podman[93689]: 2026-01-31 07:23:14.882542646 +0000 UTC m=+0.143136731 container init da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:23:14 np0005603621 podman[93689]: 2026-01-31 07:23:14.887678966 +0000 UTC m=+0.148273051 container start da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:23:14 np0005603621 podman[93689]: 2026-01-31 07:23:14.892157086 +0000 UTC m=+0.152751171 container attach da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:23:15 np0005603621 python3[93758]: ansible-ansible.legacy.async_status Invoked with jid=j721619293593.93384 mode=cleanup _async_dir=/root/.ansible_async
Jan 31 02:23:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v103: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:15 np0005603621 frosty_allen[93705]: {
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:        "osd_id": 0,
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:        "type": "bluestore"
Jan 31 02:23:15 np0005603621 frosty_allen[93705]:    }
Jan 31 02:23:15 np0005603621 frosty_allen[93705]: }
Jan 31 02:23:15 np0005603621 systemd[1]: libpod-da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa.scope: Deactivated successfully.
Jan 31 02:23:15 np0005603621 podman[93689]: 2026-01-31 07:23:15.686137797 +0000 UTC m=+0.946731872 container died da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:23:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a567d53ab8c19bb3ff0ba1f1c1f2af9eb7962dae3088b5eba732e6112ab62a79-merged.mount: Deactivated successfully.
Jan 31 02:23:15 np0005603621 python3[93792]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:15 np0005603621 podman[93689]: 2026-01-31 07:23:15.753597381 +0000 UTC m=+1.014191446 container remove da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:15 np0005603621 systemd[1]: libpod-conmon-da5c1f43b530162f5dad127a41cc1b3ff74304324a4c9a109e4e08ae0a6579aa.scope: Deactivated successfully.
Jan 31 02:23:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:15 np0005603621 podman[93813]: 2026-01-31 07:23:15.790753847 +0000 UTC m=+0.060865956 container create 2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e (image=quay.io/ceph/ceph:v18, name=determined_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:15 np0005603621 podman[93813]: 2026-01-31 07:23:15.764803343 +0000 UTC m=+0.034915462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:15 np0005603621 systemd[1]: Started libpod-conmon-2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e.scope.
Jan 31 02:23:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113285930a2d7317dcaa36efa48a053a58989d47c7114ff4eb1ee3fe2f72382d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/113285930a2d7317dcaa36efa48a053a58989d47c7114ff4eb1ee3fe2f72382d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:15 np0005603621 podman[93813]: 2026-01-31 07:23:15.906429292 +0000 UTC m=+0.176541461 container init 2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e (image=quay.io/ceph/ceph:v18, name=determined_moser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:15 np0005603621 podman[93813]: 2026-01-31 07:23:15.910575531 +0000 UTC m=+0.180687630 container start 2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e (image=quay.io/ceph/ceph:v18, name=determined_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:23:15 np0005603621 podman[93813]: 2026-01-31 07:23:15.920672046 +0000 UTC m=+0.190784205 container attach 2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e (image=quay.io/ceph/ceph:v18, name=determined_moser, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:23:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:16 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev b93e9d36-b998-4bc3-bdae-db5bc74b3fef (Updating rgw.rgw deployment (+3 -> 3))
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aejomu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aejomu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aejomu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:16 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.aejomu on compute-2
Jan 31 02:23:16 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.aejomu on compute-2
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aejomu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:23:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aejomu", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:23:16 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 02:23:16 np0005603621 determined_moser[93828]: 
Jan 31 02:23:16 np0005603621 determined_moser[93828]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 31 02:23:16 np0005603621 systemd[1]: libpod-2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e.scope: Deactivated successfully.
Jan 31 02:23:16 np0005603621 podman[93813]: 2026-01-31 07:23:16.486661807 +0000 UTC m=+0.756773906 container died 2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e (image=quay.io/ceph/ceph:v18, name=determined_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-113285930a2d7317dcaa36efa48a053a58989d47c7114ff4eb1ee3fe2f72382d-merged.mount: Deactivated successfully.
Jan 31 02:23:16 np0005603621 podman[93813]: 2026-01-31 07:23:16.529631926 +0000 UTC m=+0.799744025 container remove 2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e (image=quay.io/ceph/ceph:v18, name=determined_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:23:16 np0005603621 systemd[1]: libpod-conmon-2ac104105ada91c749399aa03793bc2180233ea2e59f88aa2bffdafa06b4b37e.scope: Deactivated successfully.
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: Deploying daemon rgw.rgw.compute-2.aejomu on compute-2
Jan 31 02:23:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v104: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:17 np0005603621 python3[93890]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:17 np0005603621 podman[93891]: 2026-01-31 07:23:17.445874663 +0000 UTC m=+0.031736906 container create cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645 (image=quay.io/ceph/ceph:v18, name=funny_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:17 np0005603621 systemd[1]: Started libpod-conmon-cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645.scope.
Jan 31 02:23:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411c9eb80b1477bff76331268846a35e694d887d73a7cc33cefa88aedbbfc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f411c9eb80b1477bff76331268846a35e694d887d73a7cc33cefa88aedbbfc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:17 np0005603621 podman[93891]: 2026-01-31 07:23:17.505290109 +0000 UTC m=+0.091152352 container init cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645 (image=quay.io/ceph/ceph:v18, name=funny_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:23:17 np0005603621 podman[93891]: 2026-01-31 07:23:17.510183912 +0000 UTC m=+0.096046155 container start cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645 (image=quay.io/ceph/ceph:v18, name=funny_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:17 np0005603621 podman[93891]: 2026-01-31 07:23:17.513779241 +0000 UTC m=+0.099641514 container attach cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645 (image=quay.io/ceph/ceph:v18, name=funny_austin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:17 np0005603621 podman[93891]: 2026-01-31 07:23:17.431064221 +0000 UTC m=+0.016926484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.bjsbdg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.bjsbdg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.bjsbdg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:17 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.bjsbdg on compute-1
Jan 31 02:23:17 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.bjsbdg on compute-1
Jan 31 02:23:18 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14367 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 02:23:18 np0005603621 funny_austin[93907]: 
Jan 31 02:23:18 np0005603621 funny_austin[93907]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 31 02:23:18 np0005603621 systemd[1]: libpod-cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645.scope: Deactivated successfully.
Jan 31 02:23:18 np0005603621 conmon[93907]: conmon cf9e4fb440dded831b99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645.scope/container/memory.events
Jan 31 02:23:18 np0005603621 podman[93934]: 2026-01-31 07:23:18.099041613 +0000 UTC m=+0.033261627 container died cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645 (image=quay.io/ceph/ceph:v18, name=funny_austin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:23:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2f411c9eb80b1477bff76331268846a35e694d887d73a7cc33cefa88aedbbfc7-merged.mount: Deactivated successfully.
Jan 31 02:23:18 np0005603621 podman[93934]: 2026-01-31 07:23:18.135572308 +0000 UTC m=+0.069792272 container remove cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645 (image=quay.io/ceph/ceph:v18, name=funny_austin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:23:18 np0005603621 systemd[1]: libpod-conmon-cf9e4fb440dded831b9965e9757fd395d72e422e0e82e32120e2fdca3762b645.scope: Deactivated successfully.
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.bjsbdg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.bjsbdg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: Deploying daemon rgw.rgw.compute-1.bjsbdg on compute-1
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 02:23:18 np0005603621 ansible-async_wrapper.py[93387]: Done in kid B.
Jan 31 02:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:19 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 32 pg[8.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [0] r=0 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnpmok", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnpmok", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnpmok", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 python3[93974]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:19 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.pnpmok on compute-0
Jan 31 02:23:19 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.pnpmok on compute-0
Jan 31 02:23:19 np0005603621 podman[93975]: 2026-01-31 07:23:19.272201804 +0000 UTC m=+0.043372323 container create 6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f (image=quay.io/ceph/ceph:v18, name=keen_gould, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:23:19 np0005603621 systemd[1]: Started libpod-conmon-6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f.scope.
Jan 31 02:23:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5467e8eb6478a0bbd21d9220d1f1e32d7b514934cb5231b2543510e668554b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5467e8eb6478a0bbd21d9220d1f1e32d7b514934cb5231b2543510e668554b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:19 np0005603621 podman[93975]: 2026-01-31 07:23:19.334465144 +0000 UTC m=+0.105635683 container init 6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f (image=quay.io/ceph/ceph:v18, name=keen_gould, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:19 np0005603621 podman[93975]: 2026-01-31 07:23:19.338895271 +0000 UTC m=+0.110065790 container start 6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f (image=quay.io/ceph/ceph:v18, name=keen_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:19 np0005603621 podman[93975]: 2026-01-31 07:23:19.341987234 +0000 UTC m=+0.113157783 container attach 6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f (image=quay.io/ceph/ceph:v18, name=keen_gould, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:23:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v106: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:19 np0005603621 podman[93975]: 2026-01-31 07:23:19.254822726 +0000 UTC m=+0.025993265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 31 02:23:19 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 33 pg[8.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [0] r=0 lpr=32 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.102:0/665557881' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnpmok", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.pnpmok", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 31 02:23:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.694287269 +0000 UTC m=+0.036878537 container create ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rubin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 31 02:23:19 np0005603621 systemd[1]: Started libpod-conmon-ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8.scope.
Jan 31 02:23:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.765213258 +0000 UTC m=+0.107804616 container init ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rubin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.772306283 +0000 UTC m=+0.114897551 container start ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rubin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.677013384 +0000 UTC m=+0.019604683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.775629864 +0000 UTC m=+0.118221232 container attach ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rubin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:19 np0005603621 gifted_rubin[94173]: 167 167
Jan 31 02:23:19 np0005603621 systemd[1]: libpod-ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8.scope: Deactivated successfully.
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.776654828 +0000 UTC m=+0.119246096 container died ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-33f2070e62fbab9f8d698b3d15cdf4e70065eb57c2321faa2b69877b52819757-merged.mount: Deactivated successfully.
Jan 31 02:23:19 np0005603621 podman[94139]: 2026-01-31 07:23:19.813772783 +0000 UTC m=+0.156364051 container remove ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:23:19 np0005603621 systemd[1]: libpod-conmon-ce514dcd11f1fafa3ac3334d93c465bfc8d27f5a996fb358512dde1d30c136c8.scope: Deactivated successfully.
Jan 31 02:23:19 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 31 02:23:19 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:19 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:19 np0005603621 keen_gould[94039]: 
Jan 31 02:23:19 np0005603621 keen_gould[94039]: [{"container_id": "287b42eb8058", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.84%", "created": "2026-01-31T07:21:14.391848Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-31T07:21:14.499493Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\"", "2026-01-31T07:23:05.287639Z daemon:crash.compute-0 [INFO] \"Reconfigured crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:22:03.871286Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2026-01-31T07:21:14.069105Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@crash.compute-0", "version": "18.2.7"}, {"container_id": "435a482d140b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.87%", "created": "2026-01-31T07:21:48.652257Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-31T07:21:48.703784Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\"", "2026-01-31T07:23:06.600781Z daemon:crash.compute-1 [INFO] \"Reconfigured crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T07:23:09.998206Z", "memory_usage": 11723079, "ports": [], "service_name": "crash", "started": "2026-01-31T07:21:48.559934Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@crash.compute-1", "version": "18.2.7"}, {"container_id": "263f01735433", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.26%", "created": "2026-01-31T07:22:39.155126Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-31T07:22:39.212220Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T07:23:09.798647Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2026-01-31T07:22:39.047028Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@crash.compute-2", "version": "18.2.7"}, {"container_id": "27d29d569229", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "39.36%", "created": "2026-01-31T07:19:50.465167Z", "daemon_id": "compute-0.ddmhwk", "daemon_name": "mgr.compute-0.ddmhwk", "daemon_type": "mgr", "events": ["2026-01-31T07:23:04.691500Z daemon:mgr.compute-0.ddmhwk [INFO] \"Reconfigured mgr.compute-0.ddmhwk on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2026-01-31T07:22:03.871167Z", "memory_usage": 546203238, "ports": [9283, 8765, 8765], "service_name": "mgr", "started": "2026-01-31T07:19:50.392080Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@mgr.compute-0.ddmhwk", "version": "18.2.7"}, {"container_id": "fd7ea65af399", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "63.63%", "created": "2026-01-31T07:22:37.649586Z", "daemon_id": "compute-1.gxjgok", "daemon_name": "mgr.compute-1.gxjgok", "daemon_type": "mgr", "events": ["2026-01-31T07:22:37.717619Z daemon:mgr.compute-1.gxjgok [INFO] \"Deployed mgr.compute-1.gxjgok on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-31T07:23:09.998429Z", "memory_usage": 513382809, "ports": [8765], "service_name": "mgr", "started": "2026-01-31T07:22:37.562114Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@mgr.compute-1.gxjgok", "version": "18.2.7"}, {"container_id": "b585f94a3e6b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "53.38%", "created": "2026-01-31T07:22:31.995919Z", "daemon_id": "compute-2.cdjvtw", "daemon_name": "mgr.compute-2.cdjvtw", "daemon_type": "mgr", "events": ["2026-01-31T07:22:36.305594Z daemon:mgr.compute-2.cdjvtw [INFO] \"Deployed mgr.compute-2.cdjvtw on host 'compute-2'\"", "2026-01-31T07:23:08.976627Z daemon:mgr.compute-2.cdjvtw [INFO] \"Reconfigured mgr.compute-2.cdjvtw on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-31T07:23:09.798526Z", "memory_usage": 516423680, "ports": [8765], "service_name": "mgr", "started": "2026-01-31T07:22:31.922861Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@mgr.compute-2.cdjvtw", "version": "18.2.7"}, {"container_id": "8a056797e460", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.41%", "created": "2026-01-31T07:19:45.534010Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-31T07:23:04.079336Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-31T07:22:03.871022Z", "memory_request": 2147483648, "memory_usage": 32589742, "ports": [], "service_name": "mon", "started": "2026-01-31T07:19:48.327559Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2@mon.compute-0", "version": "18.2.7"}, {"container_id": "f7c1f81768ff", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f4
Jan 31 02:23:19 np0005603621 podman[93975]: 2026-01-31 07:23:19.93280064 +0000 UTC m=+0.703971159 container died 6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f (image=quay.io/ceph/ceph:v18, name=keen_gould, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:23:20 np0005603621 systemd[1]: libpod-6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f.scope: Deactivated successfully.
Jan 31 02:23:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7d5467e8eb6478a0bbd21d9220d1f1e32d7b514934cb5231b2543510e668554b-merged.mount: Deactivated successfully.
Jan 31 02:23:20 np0005603621 podman[93975]: 2026-01-31 07:23:20.078199905 +0000 UTC m=+0.849370424 container remove 6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f (image=quay.io/ceph/ceph:v18, name=keen_gould, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:23:20 np0005603621 systemd[1]: libpod-conmon-6dbcf703e2eafca47a9ceca390a8477d3a2ee871828ed9ef11c26068d2cc737f.scope: Deactivated successfully.
Jan 31 02:23:20 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:20 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:20 np0005603621 rsyslogd[998]: message too long (14366) with configured size 8096, begin of message is: [{"container_id": "287b42eb8058", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 02:23:20 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:23:20 np0005603621 systemd[1]: Starting Ceph rgw.rgw.compute-0.pnpmok for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:23:20 np0005603621 podman[94332]: 2026-01-31 07:23:20.540367154 +0000 UTC m=+0.041182500 container create 3d2c60d2a43b72ac915708b2ea14f4ac8b5ff4660c01be5af30a821a7121dba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-rgw-rgw-compute-0-pnpmok, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 31 02:23:20 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 34 pg[9.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: Deploying daemon rgw.rgw.compute-0.pnpmok on compute-0
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.101:0/4082344861' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:23:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d3e66217641a68a15506f8b733a614fa76e9f2714dda89488ea10d9ad13c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d3e66217641a68a15506f8b733a614fa76e9f2714dda89488ea10d9ad13c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d3e66217641a68a15506f8b733a614fa76e9f2714dda89488ea10d9ad13c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a37d3e66217641a68a15506f8b733a614fa76e9f2714dda89488ea10d9ad13c5/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.pnpmok supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:20 np0005603621 podman[94332]: 2026-01-31 07:23:20.608525571 +0000 UTC m=+0.109340957 container init 3d2c60d2a43b72ac915708b2ea14f4ac8b5ff4660c01be5af30a821a7121dba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-rgw-rgw-compute-0-pnpmok, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:23:20 np0005603621 podman[94332]: 2026-01-31 07:23:20.613300799 +0000 UTC m=+0.114116145 container start 3d2c60d2a43b72ac915708b2ea14f4ac8b5ff4660c01be5af30a821a7121dba9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-rgw-rgw-compute-0-pnpmok, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:20 np0005603621 bash[94332]: 3d2c60d2a43b72ac915708b2ea14f4ac8b5ff4660c01be5af30a821a7121dba9
Jan 31 02:23:20 np0005603621 podman[94332]: 2026-01-31 07:23:20.521569299 +0000 UTC m=+0.022384665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:20 np0005603621 systemd[1]: Started Ceph rgw.rgw.compute-0.pnpmok for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:20 np0005603621 radosgw[94351]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:23:20 np0005603621 radosgw[94351]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 31 02:23:20 np0005603621 radosgw[94351]: framework: beast
Jan 31 02:23:20 np0005603621 radosgw[94351]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 31 02:23:20 np0005603621 radosgw[94351]: init_numa not setting numa affinity
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev b93e9d36-b998-4bc3-bdae-db5bc74b3fef (Updating rgw.rgw deployment (+3 -> 3))
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event b93e9d36-b998-4bc3-bdae-db5bc74b3fef (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 1c871b4d-5809-4030-b8fe-712d828c86fa (Updating mds.cephfs deployment (+3 -> 3))
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.asgtzy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.asgtzy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.asgtzy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.asgtzy on compute-2
Jan 31 02:23:20 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.asgtzy on compute-2
Jan 31 02:23:21 np0005603621 python3[94436]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:21 np0005603621 podman[94437]: 2026-01-31 07:23:21.146439488 +0000 UTC m=+0.059097767 container create 0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d (image=quay.io/ceph/ceph:v18, name=angry_elgamal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 31 02:23:21 np0005603621 systemd[1]: Started libpod-conmon-0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d.scope.
Jan 31 02:23:21 np0005603621 podman[94437]: 2026-01-31 07:23:21.123432062 +0000 UTC m=+0.036090321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa624611d47cf10bf9a0a147f91c342773008308b7655a895df90adf83d44932/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa624611d47cf10bf9a0a147f91c342773008308b7655a895df90adf83d44932/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:21 np0005603621 podman[94437]: 2026-01-31 07:23:21.256725655 +0000 UTC m=+0.169383954 container init 0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d (image=quay.io/ceph/ceph:v18, name=angry_elgamal, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:23:21 np0005603621 podman[94437]: 2026-01-31 07:23:21.267144171 +0000 UTC m=+0.179802410 container start 0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d (image=quay.io/ceph/ceph:v18, name=angry_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:21 np0005603621 podman[94437]: 2026-01-31 07:23:21.271637141 +0000 UTC m=+0.184295470 container attach 0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d (image=quay.io/ceph/ceph:v18, name=angry_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:23:21 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 6 completed events
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 31 02:23:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v109: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.102:0/665557881' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.asgtzy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.asgtzy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:21 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 35 pg[9.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 31 02:23:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636321276' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 31 02:23:21 np0005603621 angry_elgamal[94452]: 
Jan 31 02:23:21 np0005603621 angry_elgamal[94452]: {"fsid":"2f5ab832-5f2e-5a84-bd93-cf8bab960ee2","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":45,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":35,"num_osds":3,"num_up_osds":3,"osd_up_since":1769844176,"num_in_osds":3,"osd_in_since":1769844161,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"unknown","count":1}],"num_pgs":8,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":503279616,"bytes_avail":22032715776,"bytes_total":22535995392,"unknown_pgs_ratio":0.125},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-31T07:23:09.340878+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.gxjgok":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.cdjvtw":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b93e9d36-b998-4bc3-bdae-db5bc74b3fef":{"message":"Updating rgw.rgw deployment (+3 -> 3) (3s)\n      [==================..........] (remaining: 1s)","progress":0.66666668653488159,"add_to_ceph_s":true}}}
Jan 31 02:23:21 np0005603621 systemd[1]: libpod-0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d.scope: Deactivated successfully.
Jan 31 02:23:21 np0005603621 podman[94484]: 2026-01-31 07:23:21.958728038 +0000 UTC m=+0.027391652 container died 0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d (image=quay.io/ceph/ceph:v18, name=angry_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aa624611d47cf10bf9a0a147f91c342773008308b7655a895df90adf83d44932-merged.mount: Deactivated successfully.
Jan 31 02:23:22 np0005603621 podman[94484]: 2026-01-31 07:23:22.165981759 +0000 UTC m=+0.234645353 container remove 0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d (image=quay.io/ceph/ceph:v18, name=angry_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:23:22 np0005603621 systemd[1]: libpod-conmon-0561796ed9cb42e6125e2d93bc371626255fec40498d1bc469af650b4486c40d.scope: Deactivated successfully.
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jroeqh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jroeqh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e3 new map
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:03.855587+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.asgtzy{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: Deploying daemon mds.cephfs.compute-2.asgtzy on compute-2
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] up:boot
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] as mds.0
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.asgtzy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.asgtzy"} v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.asgtzy"}]: dispatch
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e3 all = 0
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3859572337' entity='client.rgw.rgw.compute-0.pnpmok' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e4 new map
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:22.914433+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jroeqh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:creating}
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:22 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.jroeqh on compute-0
Jan 31 02:23:22 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.jroeqh on compute-0
Jan 31 02:23:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.asgtzy is now active in filesystem cephfs as rank 0
Jan 31 02:23:23 np0005603621 python3[94525]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:23 np0005603621 podman[94600]: 2026-01-31 07:23:23.177761384 +0000 UTC m=+0.044832832 container create ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa (image=quay.io/ceph/ceph:v18, name=blissful_buck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:23:23 np0005603621 systemd[1]: Started libpod-conmon-ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa.scope.
Jan 31 02:23:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9e3f0dfbbc9478cf089c884e87872a16f497a35a48f020a25974752ba8af4c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df9e3f0dfbbc9478cf089c884e87872a16f497a35a48f020a25974752ba8af4c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:23 np0005603621 podman[94600]: 2026-01-31 07:23:23.160359835 +0000 UTC m=+0.027431313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:23 np0005603621 podman[94600]: 2026-01-31 07:23:23.264472757 +0000 UTC m=+0.131544245 container init ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa (image=quay.io/ceph/ceph:v18, name=blissful_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:23:23 np0005603621 podman[94600]: 2026-01-31 07:23:23.270526839 +0000 UTC m=+0.137598287 container start ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa (image=quay.io/ceph/ceph:v18, name=blissful_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:23 np0005603621 podman[94600]: 2026-01-31 07:23:23.276942202 +0000 UTC m=+0.144013740 container attach ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa (image=quay.io/ceph/ceph:v18, name=blissful_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v112: 10 pgs: 1 unknown, 9 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.511215562 +0000 UTC m=+0.056118647 container create 310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:23 np0005603621 systemd[1]: Started libpod-conmon-310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287.scope.
Jan 31 02:23:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.476031892 +0000 UTC m=+0.020934987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.58422121 +0000 UTC m=+0.129124315 container init 310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.589657401 +0000 UTC m=+0.134560476 container start 310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:23 np0005603621 confident_gagarin[94700]: 167 167
Jan 31 02:23:23 np0005603621 systemd[1]: libpod-310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287.scope: Deactivated successfully.
Jan 31 02:23:23 np0005603621 conmon[94700]: conmon 310f45423dd37a8fd9d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287.scope/container/memory.events
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.59656112 +0000 UTC m=+0.141464225 container attach 310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.597083777 +0000 UTC m=+0.141986852 container died 310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:23:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e7234ce43bd6f0e05d171f43bbaa3d0cbbad2d2bab0d83f207d16423d2568edb-merged.mount: Deactivated successfully.
Jan 31 02:23:23 np0005603621 podman[94683]: 2026-01-31 07:23:23.683448099 +0000 UTC m=+0.228351174 container remove 310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:23 np0005603621 systemd[1]: libpod-conmon-310f45423dd37a8fd9d75c7d5c0c02cdbbd4717292118f13bdb00d112c1e7287.scope: Deactivated successfully.
Jan 31 02:23:23 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:23 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:23 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:23 np0005603621 blissful_buck[94640]: 
Jan 31 02:23:23 np0005603621 blissful_buck[94640]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502912512","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.pnpmok","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.bjsbdg","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.aejomu","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 31 02:23:23 np0005603621 podman[94600]: 2026-01-31 07:23:23.833975875 +0000 UTC m=+0.701047323 container died ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa (image=quay.io/ceph/ceph:v18, name=blissful_buck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3859572337' entity='client.rgw.rgw.compute-0.pnpmok' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jroeqh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: daemon mds.cephfs.compute-2.asgtzy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3859572337' entity='client.rgw.rgw.compute-0.pnpmok' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.101:0/4082344861' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.102:0/665557881' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.jroeqh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: Deploying daemon mds.cephfs.compute-0.jroeqh on compute-0
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: daemon mds.cephfs.compute-2.asgtzy is now active in filesystem cephfs as rank 0
Jan 31 02:23:23 np0005603621 systemd[1]: libpod-ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa.scope: Deactivated successfully.
Jan 31 02:23:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-df9e3f0dfbbc9478cf089c884e87872a16f497a35a48f020a25974752ba8af4c-merged.mount: Deactivated successfully.
Jan 31 02:23:23 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e5 new map
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:23.967653+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] up:active
Jan 31 02:23:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active}
Jan 31 02:23:24 np0005603621 podman[94600]: 2026-01-31 07:23:24.001432374 +0000 UTC m=+0.868503852 container remove ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa (image=quay.io/ceph/ceph:v18, name=blissful_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:24 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:24 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:24 np0005603621 systemd[1]: libpod-conmon-ba0ebd89d3e98460f344593b21fc6d4987eaf452e170b98c3f012ac1ea2d89aa.scope: Deactivated successfully.
Jan 31 02:23:24 np0005603621 systemd[1]: Starting Ceph mds.cephfs.compute-0.jroeqh for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:23:24 np0005603621 podman[94899]: 2026-01-31 07:23:24.422568477 +0000 UTC m=+0.056089806 container create 41a77f8e2fff2b8ec08ee3239657a26d889a4c43656e169833ff12edd4843c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mds-cephfs-compute-0-jroeqh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:23:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434702c433a94fed853448d3da05cbf9280c7e9f6751da3dd50abb345b6d0e73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434702c433a94fed853448d3da05cbf9280c7e9f6751da3dd50abb345b6d0e73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434702c433a94fed853448d3da05cbf9280c7e9f6751da3dd50abb345b6d0e73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434702c433a94fed853448d3da05cbf9280c7e9f6751da3dd50abb345b6d0e73/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.jroeqh supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:24 np0005603621 podman[94899]: 2026-01-31 07:23:24.474576017 +0000 UTC m=+0.108097386 container init 41a77f8e2fff2b8ec08ee3239657a26d889a4c43656e169833ff12edd4843c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mds-cephfs-compute-0-jroeqh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:23:24 np0005603621 podman[94899]: 2026-01-31 07:23:24.482836731 +0000 UTC m=+0.116358100 container start 41a77f8e2fff2b8ec08ee3239657a26d889a4c43656e169833ff12edd4843c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mds-cephfs-compute-0-jroeqh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:23:24 np0005603621 bash[94899]: 41a77f8e2fff2b8ec08ee3239657a26d889a4c43656e169833ff12edd4843c0e
Jan 31 02:23:24 np0005603621 podman[94899]: 2026-01-31 07:23:24.400470952 +0000 UTC m=+0.033992361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:24 np0005603621 systemd[1]: Started Ceph mds.cephfs.compute-0.jroeqh for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:23:24 np0005603621 ceph-mds[94918]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:23:24 np0005603621 ceph-mds[94918]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 31 02:23:24 np0005603621 ceph-mds[94918]: main not setting numa affinity
Jan 31 02:23:24 np0005603621 ceph-mds[94918]: pidfile_write: ignore empty --pid-file
Jan 31 02:23:24 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mds-cephfs-compute-0-jroeqh[94914]: starting mds.cephfs.compute-0.jroeqh at 
Jan 31 02:23:24 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Updating MDS map to version 5 from mon.0
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bkrghs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bkrghs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bkrghs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.bkrghs on compute-1
Jan 31 02:23:24 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.bkrghs on compute-1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/3859572337' entity='client.rgw.rgw.compute-0.pnpmok' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bkrghs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bkrghs", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: Deploying daemon mds.cephfs.compute-1.bkrghs on compute-1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 31 02:23:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 38 pg[11.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:24 np0005603621 python3[94962]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e6 new map
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:23.967653+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jroeqh{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:23:25 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Updating MDS map to version 6 from mon.0
Jan 31 02:23:25 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Monitors have assigned me to become a standby.
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] up:boot
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active} 1 up:standby
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.jroeqh"} v 0) v1
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.jroeqh"}]: dispatch
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e6 all = 0
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e7 new map
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:23.967653+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jroeqh{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active} 1 up:standby
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.031060331 +0000 UTC m=+0.049098683 container create c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e (image=quay.io/ceph/ceph:v18, name=nervous_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:25 np0005603621 systemd[1]: Started libpod-conmon-c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e.scope.
Jan 31 02:23:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.002859713 +0000 UTC m=+0.020898155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e058dd077976bcbc5059f2b9fbc8cdcf07e3395366fa3501514de7842ef4fd98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e058dd077976bcbc5059f2b9fbc8cdcf07e3395366fa3501514de7842ef4fd98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.112266821 +0000 UTC m=+0.130305193 container init c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e (image=quay.io/ceph/ceph:v18, name=nervous_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.117485125 +0000 UTC m=+0.135523477 container start c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e (image=quay.io/ceph/ceph:v18, name=nervous_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.121797619 +0000 UTC m=+0.139835991 container attach c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e (image=quay.io/ceph/ceph:v18, name=nervous_hodgkin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v115: 11 pgs: 1 unknown, 10 active+clean; 451 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030200486' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 31 02:23:25 np0005603621 nervous_hodgkin[94979]: mimic
Jan 31 02:23:25 np0005603621 systemd[1]: libpod-c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e.scope: Deactivated successfully.
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.678980737 +0000 UTC m=+0.697019119 container died c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e (image=quay.io/ceph/ceph:v18, name=nervous_hodgkin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:23:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e058dd077976bcbc5059f2b9fbc8cdcf07e3395366fa3501514de7842ef4fd98-merged.mount: Deactivated successfully.
Jan 31 02:23:25 np0005603621 podman[94963]: 2026-01-31 07:23:25.727350094 +0000 UTC m=+0.745388436 container remove c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e (image=quay.io/ceph/ceph:v18, name=nervous_hodgkin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:25 np0005603621 systemd[1]: libpod-conmon-c1422328be086e2963382bfcd2492b8f61ec156c5062b1bf50aef89a5a0f611e.scope: Deactivated successfully.
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 02:23:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 39 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.102:0/1907859104' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.101:0/1698301580' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 1c871b4d-5809-4030-b8fe-712d828c86fa (Updating mds.cephfs deployment (+3 -> 3))
Jan 31 02:23:26 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 1c871b4d-5809-4030-b8fe-712d828c86fa (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 7 completed events
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev ab0ccf7c-8fa5-4033-a300-7fac759d33d1 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:26 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.evwczw on compute-0
Jan 31 02:23:26 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.evwczw on compute-0
Jan 31 02:23:26 np0005603621 python3[95112]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:23:26 np0005603621 podman[95140]: 2026-01-31 07:23:26.780403291 +0000 UTC m=+0.053131167 container create 78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1 (image=quay.io/ceph/ceph:v18, name=romantic_cartwright, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:23:26 np0005603621 systemd[1]: Started libpod-conmon-78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1.scope.
Jan 31 02:23:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e161c323fd8cb5574b4a364abb7c2f771fab3ffe5e5264ffa3f606cdfbaccb23/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e161c323fd8cb5574b4a364abb7c2f771fab3ffe5e5264ffa3f606cdfbaccb23/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:26 np0005603621 podman[95140]: 2026-01-31 07:23:26.761690879 +0000 UTC m=+0.034418805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:23:26 np0005603621 podman[95140]: 2026-01-31 07:23:26.864890361 +0000 UTC m=+0.137618257 container init 78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1 (image=quay.io/ceph/ceph:v18, name=romantic_cartwright, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:23:26 np0005603621 podman[95140]: 2026-01-31 07:23:26.869902577 +0000 UTC m=+0.142630453 container start 78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1 (image=quay.io/ceph/ceph:v18, name=romantic_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:23:26 np0005603621 podman[95140]: 2026-01-31 07:23:26.872921268 +0000 UTC m=+0.145649174 container attach 78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1 (image=quay.io/ceph/ceph:v18, name=romantic_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 31 02:23:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.102:0/1907859104' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.101:0/1698301580' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: Deploying daemon haproxy.rgw.default.compute-0.evwczw on compute-0
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? 192.168.122.100:0/487760849' entity='client.rgw.rgw.compute-0.pnpmok' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-2.aejomu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: from='client.? ' entity='client.rgw.rgw.compute-1.bjsbdg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e8 new map
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:27.037579+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jroeqh{-1:14409} state up:standby seq 1 addr [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bkrghs{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/4027255140,v1:192.168.122.101:6805/4027255140] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/4027255140,v1:192.168.122.101:6805/4027255140] up:boot
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] up:active
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active} 2 up:standby
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bkrghs"} v 0) v1
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bkrghs"}]: dispatch
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e8 all = 0
Jan 31 02:23:27 np0005603621 radosgw[94351]: LDAP not started since no server URIs were provided in the configuration.
Jan 31 02:23:27 np0005603621 radosgw[94351]: framework: beast
Jan 31 02:23:27 np0005603621 radosgw[94351]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 31 02:23:27 np0005603621 radosgw[94351]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 31 02:23:27 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-rgw-rgw-compute-0-pnpmok[94347]: 2026-01-31T07:23:27.223+0000 7fb1e877d940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 31 02:23:27 np0005603621 radosgw[94351]: starting handler: beast
Jan 31 02:23:27 np0005603621 radosgw[94351]: set uid:gid to 167:167 (ceph:ceph)
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 radosgw[94351]: mgrc service_daemon_register rgw.14397 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.pnpmok,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026,kernel_version=5.14.0-665.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864292,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=4a080c62-ffea-408b-958a-f1cf7a54b487,zone_name=default,zonegroup_id=4b3ae999-cc0f-4a4e-9689-957f89598a27,zonegroup_name=default}
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v118: 11 pgs: 11 active+clean; 454 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 4.0 KiB/s wr, 15 op/s
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 31 02:23:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/76668332' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 31 02:23:27 np0005603621 romantic_cartwright[95177]: 
Jan 31 02:23:27 np0005603621 systemd[1]: libpod-78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1.scope: Deactivated successfully.
Jan 31 02:23:27 np0005603621 romantic_cartwright[95177]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":12}}
Jan 31 02:23:27 np0005603621 podman[95140]: 2026-01-31 07:23:27.525539379 +0000 UTC m=+0.798267275 container died 78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1 (image=quay.io/ceph/ceph:v18, name=romantic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:23:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e161c323fd8cb5574b4a364abb7c2f771fab3ffe5e5264ffa3f606cdfbaccb23-merged.mount: Deactivated successfully.
Jan 31 02:23:27 np0005603621 podman[95140]: 2026-01-31 07:23:27.577997043 +0000 UTC m=+0.850724929 container remove 78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1 (image=quay.io/ceph/ceph:v18, name=romantic_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:23:27 np0005603621 systemd[1]: libpod-conmon-78ed4c9a1606e5cc6b749766ec6444fe965210f499a3db7637e4b48c42603ad1.scope: Deactivated successfully.
Jan 31 02:23:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 02:23:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 02:23:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:28 np0005603621 podman[95193]: 2026-01-31 07:23:28.997521956 +0000 UTC m=+2.130186305 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 02:23:29 np0005603621 podman[95193]: 2026-01-31 07:23:29.019701524 +0000 UTC m=+2.152365823 container create 6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f (image=quay.io/ceph/haproxy:2.3, name=happy_einstein)
Jan 31 02:23:29 np0005603621 systemd[1]: Started libpod-conmon-6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f.scope.
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: Cluster is now healthy
Jan 31 02:23:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:29 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Updating MDS map to version 9 from mon.0
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e9 new map
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:27.037579+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jroeqh{-1:14409} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bkrghs{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/4027255140,v1:192.168.122.101:6805/4027255140] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:23:29 np0005603621 podman[95193]: 2026-01-31 07:23:29.094916425 +0000 UTC m=+2.227580714 container init 6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f (image=quay.io/ceph/haproxy:2.3, name=happy_einstein)
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] up:standby
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active} 2 up:standby
Jan 31 02:23:29 np0005603621 podman[95193]: 2026-01-31 07:23:29.100466339 +0000 UTC m=+2.233130608 container start 6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f (image=quay.io/ceph/haproxy:2.3, name=happy_einstein)
Jan 31 02:23:29 np0005603621 happy_einstein[95883]: 0 0
Jan 31 02:23:29 np0005603621 systemd[1]: libpod-6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f.scope: Deactivated successfully.
Jan 31 02:23:29 np0005603621 podman[95193]: 2026-01-31 07:23:29.103350835 +0000 UTC m=+2.236015114 container attach 6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f (image=quay.io/ceph/haproxy:2.3, name=happy_einstein)
Jan 31 02:23:29 np0005603621 podman[95193]: 2026-01-31 07:23:29.103958476 +0000 UTC m=+2.236622735 container died 6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f (image=quay.io/ceph/haproxy:2.3, name=happy_einstein)
Jan 31 02:23:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d858087b96d7be01bc986c602bcb41a24b664f8c75be17e3274602d0e17169d5-merged.mount: Deactivated successfully.
Jan 31 02:23:29 np0005603621 podman[95193]: 2026-01-31 07:23:29.137845033 +0000 UTC m=+2.270509292 container remove 6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f (image=quay.io/ceph/haproxy:2.3, name=happy_einstein)
Jan 31 02:23:29 np0005603621 systemd[1]: libpod-conmon-6ac0c574436414c50776c242dd676f7acf9187b1e9a721734588f0f1ed620a0f.scope: Deactivated successfully.
Jan 31 02:23:29 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:29 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:29 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v119: 11 pgs: 11 active+clean; 454 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 2.9 KiB/s wr, 11 op/s
Jan 31 02:23:29 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:29 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:29 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:29 np0005603621 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.evwczw for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:23:29 np0005603621 podman[96027]: 2026-01-31 07:23:29.854238835 +0000 UTC m=+0.045696181 container create e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e384f85db41244ef040f6e286250dda74f0b80329a42a697d92a582124d8ad/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:29 np0005603621 podman[96027]: 2026-01-31 07:23:29.903219664 +0000 UTC m=+0.094677010 container init e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:29 np0005603621 podman[96027]: 2026-01-31 07:23:29.907857638 +0000 UTC m=+0.099314974 container start e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:29 np0005603621 bash[96027]: e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe
Jan 31 02:23:29 np0005603621 podman[96027]: 2026-01-31 07:23:29.832535163 +0000 UTC m=+0.023992539 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 31 02:23:29 np0005603621 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.evwczw for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:23:29 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw[96042]: [NOTICE] 030/072329 (2) : New worker #1 (4) forked
Jan 31 02:23:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000034s ======
Jan 31 02:23:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 02:23:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:30 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.yyrexo on compute-2
Jan 31 02:23:30 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.yyrexo on compute-2
Jan 31 02:23:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:31 np0005603621 ceph-mon[74394]: Deploying daemon haproxy.rgw.default.compute-2.yyrexo on compute-2
Jan 31 02:23:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e10 new map
Jan 31 02:23:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-31T07:23:03.855545+0000#012modified#0112026-01-31T07:23:27.037579+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.asgtzy{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/2751451154,v1:192.168.122.102:6805/2751451154] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.jroeqh{-1:14409} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/105956008,v1:192.168.122.100:6807/105956008] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.bkrghs{-1:24146} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/4027255140,v1:192.168.122.101:6805/4027255140] compat {c=[1],r=[1],i=[7ff]}]
Jan 31 02:23:31 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/4027255140,v1:192.168.122.101:6805/4027255140] up:standby
Jan 31 02:23:31 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active} 2 up:standby
Jan 31 02:23:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v120: 11 pgs: 11 active+clean; 454 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 2.5 KiB/s wr, 9 op/s
Jan 31 02:23:31 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 52ee85e6-65ae-4f69-a57d-08de4849f314 (Global Recovery Event) in 10 seconds
Jan 31 02:23:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:31.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:33.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v121: 11 pgs: 11 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 160 KiB/s rd, 6.0 KiB/s wr, 300 op/s
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.wujrgc on compute-0
Jan 31 02:23:33 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.wujrgc on compute-0
Jan 31 02:23:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:33.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:23:34 np0005603621 ceph-mon[74394]: Deploying daemon keepalived.rgw.default.compute-0.wujrgc on compute-0
Jan 31 02:23:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:35.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v122: 11 pgs: 11 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 136 KiB/s rd, 5.1 KiB/s wr, 255 op/s
Jan 31 02:23:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:36 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 8 completed events
Jan 31 02:23:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:23:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.416203838 +0000 UTC m=+2.499189825 container create 9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9 (image=quay.io/ceph/keepalived:2.2.4, name=wonderful_franklin, com.redhat.component=keepalived-container, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, version=2.2.4)
Jan 31 02:23:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.393244675 +0000 UTC m=+2.476230712 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 02:23:36 np0005603621 systemd[1]: Started libpod-conmon-9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9.scope.
Jan 31 02:23:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.485210804 +0000 UTC m=+2.568196781 container init 9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9 (image=quay.io/ceph/keepalived:2.2.4, name=wonderful_franklin, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, architecture=x86_64, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, name=keepalived)
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.490612593 +0000 UTC m=+2.573598550 container start 9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9 (image=quay.io/ceph/keepalived:2.2.4, name=wonderful_franklin, release=1793, description=keepalived for Ceph, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.493709876 +0000 UTC m=+2.576695843 container attach 9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9 (image=quay.io/ceph/keepalived:2.2.4, name=wonderful_franklin, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, architecture=x86_64)
Jan 31 02:23:36 np0005603621 wonderful_franklin[96295]: 0 0
Jan 31 02:23:36 np0005603621 systemd[1]: libpod-9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9.scope: Deactivated successfully.
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.496327033 +0000 UTC m=+2.579313000 container died 9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9 (image=quay.io/ceph/keepalived:2.2.4, name=wonderful_franklin, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container)
Jan 31 02:23:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f8bb96e935fb17b77023dd73bbe17575adcb21fd5ae6419779fbd457b41c29d4-merged.mount: Deactivated successfully.
Jan 31 02:23:36 np0005603621 podman[96197]: 2026-01-31 07:23:36.530803399 +0000 UTC m=+2.613789386 container remove 9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9 (image=quay.io/ceph/keepalived:2.2.4, name=wonderful_franklin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph)
Jan 31 02:23:36 np0005603621 systemd[1]: libpod-conmon-9c95425854135d556e1c35a19e18e8cbfc58c93f87b3a9240af613257fc66ba9.scope: Deactivated successfully.
Jan 31 02:23:36 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:36 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:36 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:36 np0005603621 systemd[1]: Reloading.
Jan 31 02:23:36 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:23:36 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:23:37 np0005603621 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.wujrgc for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2...
Jan 31 02:23:37 np0005603621 podman[96440]: 2026-01-31 07:23:37.255137836 +0000 UTC m=+0.051309758 container create 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, release=1793, version=2.2.4, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 02:23:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d957c390eafe5c7e51f1f2a29a3f2cfc4db809d1d248b9d6ce6764e2a4e8b0e3/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:37 np0005603621 podman[96440]: 2026-01-31 07:23:37.324662908 +0000 UTC m=+0.120834860 container init 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, name=keepalived, version=2.2.4, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.openshift.expose-services=, release=1793, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2)
Jan 31 02:23:37 np0005603621 podman[96440]: 2026-01-31 07:23:37.231612773 +0000 UTC m=+0.027784755 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 31 02:23:37 np0005603621 podman[96440]: 2026-01-31 07:23:37.329897511 +0000 UTC m=+0.126069433 container start 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 02:23:37 np0005603621 bash[96440]: 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790
Jan 31 02:23:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: Running on Linux 5.14.0-665.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Jan 22 12:30:22 UTC 2026 (built for Linux 5.14.0)
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: Starting VRRP child process, pid=4
Jan 31 02:23:37 np0005603621 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.wujrgc for 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2.
Jan 31 02:23:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v123: 11 pgs: 11 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 122 KiB/s rd, 3.1 KiB/s wr, 225 op/s
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: Startup complete
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: (VI_0) Entering BACKUP STATE (init)
Jan 31 02:23:37 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:37 2026: VRRP_Script(check_backend) succeeded
Jan 31 02:23:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:37.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.voilty on compute-2
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.voilty on compute-2
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:23:38
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.mgr']
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 31 02:23:39 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev aa5672c4-f15d-42e2-b372-027c1352321b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:39.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v125: 11 pgs: 11 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 127 KiB/s rd, 3.2 KiB/s wr, 234 op/s
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: Deploying daemon keepalived.rgw.default.compute-2.voilty on compute-2
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000034s ======
Jan 31 02:23:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:39.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 31 02:23:40 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 9ac1d91e-5c0e-4854-89d8-59fbc2fd6560 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:40 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:40 2026: (VI_0) Entering MASTER STATE
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 31 02:23:41 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 8db01486-9d37-4c98-8e06-5ad3c20168ba (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:41.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v128: 42 pgs: 31 unknown, 11 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Jan 31 02:23:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:41.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev ab0ccf7c-8fa5-4033-a300-7fac759d33d1 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 31 02:23:42 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event ab0ccf7c-8fa5-4033-a300-7fac759d33d1 (Updating ingress.rgw.default deployment (+4 -> 4)) in 16 seconds
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 31 02:23:42 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 62b4952c-d42d-414f-a7b8-bbce1a60194a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:42 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=44 pruub=12.389646530s) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active pruub 114.058708191s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:42 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=44 pruub=12.389646530s) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown pruub 114.058708191s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 podman[96739]: 2026-01-31 07:23:42.999541851 +0000 UTC m=+0.071913576 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:23:43 np0005603621 podman[96739]: 2026-01-31 07:23:43.084859537 +0000 UTC m=+0.157231262 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 31 02:23:43 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev f867fa0b-fbcd-4d71-8724-1ea1a63d4c49 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=17/18 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=44/45 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=17/17 les/c/f=18/18/0 sis=44) [0] r=0 lpr=44 pi=[17,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v131: 104 pgs: 1 peering, 93 unknown, 10 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:43.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:43 np0005603621 podman[96896]: 2026-01-31 07:23:43.635358128 +0000 UTC m=+0.048240264 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:43 np0005603621 podman[96896]: 2026-01-31 07:23:43.6450143 +0000 UTC m=+0.057896426 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:43 np0005603621 podman[96962]: 2026-01-31 07:23:43.794362441 +0000 UTC m=+0.044692798 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vcs-type=git, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, distribution-scope=public, vendor=Red Hat, Inc.)
Jan 31 02:23:43 np0005603621 podman[96962]: 2026-01-31 07:23:43.805088199 +0000 UTC m=+0.055418586 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, com.redhat.component=keepalived-container, release=1793, io.openshift.tags=Ceph keepalived, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d1380012-2178-40b7-9826-a3120b329671 does not exist
Jan 31 02:23:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 42c41749-13e0-43c7-bbc9-089e8232cc0b does not exist
Jan 31 02:23:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 03991e7b-08f5-4570-aea3-eeabed4594bf does not exist
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:43.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 31 02:23:44 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 6b02531f-8010-4ada-9bae-1cb887af2eca (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:44 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 46 pg[6.0( v 40'39 (0'0,40'39] local-lis/les=20/21 n=22 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=13.989885330s) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 36'38 mlcod 36'38 active pruub 117.240554810s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:44 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 46 pg[6.0( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46 pruub=13.989885330s) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 36'38 mlcod 0'0 unknown pruub 117.240554810s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.43360508 +0000 UTC m=+0.021516569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.542974957 +0000 UTC m=+0.130886446 container create d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:23:44 np0005603621 systemd[1]: Started libpod-conmon-d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183.scope.
Jan 31 02:23:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.687211424 +0000 UTC m=+0.275122953 container init d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.697128353 +0000 UTC m=+0.285039842 container start d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:44 np0005603621 infallible_perlman[97149]: 167 167
Jan 31 02:23:44 np0005603621 systemd[1]: libpod-d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183.scope: Deactivated successfully.
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.754794914 +0000 UTC m=+0.342706373 container attach d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.755237684 +0000 UTC m=+0.343149143 container died d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:23:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d1724c4ec464c8c213f5b154e017e1d800f8ce6b56de211f2a693a99ce1d064f-merged.mount: Deactivated successfully.
Jan 31 02:23:44 np0005603621 podman[97133]: 2026-01-31 07:23:44.802333749 +0000 UTC m=+0.390245238 container remove d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:44 np0005603621 systemd[1]: libpod-conmon-d2f4ae709031a693ded244176e77420fd93149eaf208211adbafdee06dde6183.scope: Deactivated successfully.
Jan 31 02:23:44 np0005603621 podman[97173]: 2026-01-31 07:23:44.956939897 +0000 UTC m=+0.042868535 container create 8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:23:44 np0005603621 systemd[1]: Started libpod-conmon-8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c.scope.
Jan 31 02:23:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47663b60430b1efe9f54fec496337613b00ab1e276f82e390729d20d8e116ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47663b60430b1efe9f54fec496337613b00ab1e276f82e390729d20d8e116ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47663b60430b1efe9f54fec496337613b00ab1e276f82e390729d20d8e116ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47663b60430b1efe9f54fec496337613b00ab1e276f82e390729d20d8e116ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47663b60430b1efe9f54fec496337613b00ab1e276f82e390729d20d8e116ca8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:45 np0005603621 podman[97173]: 2026-01-31 07:23:44.936173556 +0000 UTC m=+0.022102194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:45 np0005603621 podman[97173]: 2026-01-31 07:23:45.112414064 +0000 UTC m=+0.198342682 container init 8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bose, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:23:45 np0005603621 podman[97173]: 2026-01-31 07:23:45.119154987 +0000 UTC m=+0.205083605 container start 8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 31 02:23:45 np0005603621 podman[97173]: 2026-01-31 07:23:45.145504012 +0000 UTC m=+0.231432650 container attach 8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:23:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v133: 150 pgs: 1 peering, 108 unknown, 41 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:45.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 31 02:23:45 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev 9a7c0758-b6f9-4174-a173-183e0e43fc3b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.a( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.7( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.4( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.5( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=20/21 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.d( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.f( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.e( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.2( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.9( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.8( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.c( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=20/21 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.4( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.0( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 36'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.c( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.6( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 47 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=20/20 les/c/f=21/21/0 sis=46) [0] r=0 lpr=46 pi=[20,46)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:45 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc[96456]: Sat Jan 31 07:23:45 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 31 02:23:45 np0005603621 mystifying_bose[97190]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:23:45 np0005603621 mystifying_bose[97190]: --> relative data size: 1.0
Jan 31 02:23:45 np0005603621 mystifying_bose[97190]: --> All data devices are unavailable
Jan 31 02:23:45 np0005603621 systemd[1]: libpod-8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c.scope: Deactivated successfully.
Jan 31 02:23:45 np0005603621 podman[97207]: 2026-01-31 07:23:45.937288918 +0000 UTC m=+0.028995219 container died 8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bose, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:23:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:45.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:23:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-47663b60430b1efe9f54fec496337613b00ab1e276f82e390729d20d8e116ca8-merged.mount: Deactivated successfully.
Jan 31 02:23:45 np0005603621 podman[97207]: 2026-01-31 07:23:45.981054214 +0000 UTC m=+0.072760495 container remove 8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_bose, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:45 np0005603621 systemd[1]: libpod-conmon-8f71e227e20c8728fc350e8225e56129d45a6c9ceed26526c4caf3a46f31da3c.scope: Deactivated successfully.
Jan 31 02:23:46 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 9 completed events
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 31 02:23:46 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev a0bb6d82-8851-4a10-be9d-88f3ed2f211f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.484776877 +0000 UTC m=+0.041098082 container create 1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:46 np0005603621 systemd[1]: Started libpod-conmon-1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e.scope.
Jan 31 02:23:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.468222728 +0000 UTC m=+0.024544023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.5691024 +0000 UTC m=+0.125423635 container init 1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.575951864 +0000 UTC m=+0.132273069 container start 1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.578935097 +0000 UTC m=+0.135256342 container attach 1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:23:46 np0005603621 mystifying_curie[97377]: 167 167
Jan 31 02:23:46 np0005603621 systemd[1]: libpod-1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e.scope: Deactivated successfully.
Jan 31 02:23:46 np0005603621 conmon[97377]: conmon 1ac4ba3291c5b7a16e7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e.scope/container/memory.events
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.581199401 +0000 UTC m=+0.137520606 container died 1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:23:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-de671047b9297043f4e42b590657cff1fb32b1570efaf63e46ef0eef78af063e-merged.mount: Deactivated successfully.
Jan 31 02:23:46 np0005603621 podman[97362]: 2026-01-31 07:23:46.619766391 +0000 UTC m=+0.176087596 container remove 1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:23:46 np0005603621 systemd[1]: libpod-conmon-1ac4ba3291c5b7a16e7adc9c07b23e3494468d6cce42e824a9357ef35fe43b5e.scope: Deactivated successfully.
Jan 31 02:23:46 np0005603621 podman[97403]: 2026-01-31 07:23:46.785773953 +0000 UTC m=+0.082593342 container create 9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:23:46 np0005603621 podman[97403]: 2026-01-31 07:23:46.733944404 +0000 UTC m=+0.030763803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:46 np0005603621 systemd[1]: Started libpod-conmon-9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b.scope.
Jan 31 02:23:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1324d2868adefbf5c38d0213ec6af9b46c6d73f6de68d9e83cec6dd837bd3439/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1324d2868adefbf5c38d0213ec6af9b46c6d73f6de68d9e83cec6dd837bd3439/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1324d2868adefbf5c38d0213ec6af9b46c6d73f6de68d9e83cec6dd837bd3439/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1324d2868adefbf5c38d0213ec6af9b46c6d73f6de68d9e83cec6dd837bd3439/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:46 np0005603621 podman[97403]: 2026-01-31 07:23:46.877173247 +0000 UTC m=+0.173992686 container init 9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:23:46 np0005603621 podman[97403]: 2026-01-31 07:23:46.885462896 +0000 UTC m=+0.182282285 container start 9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:23:46 np0005603621 podman[97403]: 2026-01-31 07:23:46.889553614 +0000 UTC m=+0.186372993 container attach 9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:23:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v136: 181 pgs: 1 peering, 93 unknown, 87 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:23:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:47.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 31 02:23:47 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev df62371a-7397-40f3-a0ad-0e116615d88e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 31 02:23:47 np0005603621 lucid_austin[97420]: {
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:    "0": [
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:        {
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "devices": [
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "/dev/loop3"
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            ],
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "lv_name": "ceph_lv0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "lv_size": "7511998464",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "name": "ceph_lv0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "tags": {
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.cluster_name": "ceph",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.crush_device_class": "",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.encrypted": "0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.osd_id": "0",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.type": "block",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:                "ceph.vdo": "0"
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            },
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "type": "block",
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:            "vg_name": "ceph_vg0"
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:        }
Jan 31 02:23:47 np0005603621 lucid_austin[97420]:    ]
Jan 31 02:23:47 np0005603621 lucid_austin[97420]: }
Jan 31 02:23:47 np0005603621 systemd[1]: libpod-9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b.scope: Deactivated successfully.
Jan 31 02:23:47 np0005603621 podman[97403]: 2026-01-31 07:23:47.652087617 +0000 UTC m=+0.948907026 container died 9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:23:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1324d2868adefbf5c38d0213ec6af9b46c6d73f6de68d9e83cec6dd837bd3439-merged.mount: Deactivated successfully.
Jan 31 02:23:47 np0005603621 podman[97403]: 2026-01-31 07:23:47.702950873 +0000 UTC m=+0.999770212 container remove 9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:23:47 np0005603621 systemd[1]: libpod-conmon-9d2f98673120ef5a3a024ef917bbf552f7a70b41900105b59a51bc836001415b.scope: Deactivated successfully.
Jan 31 02:23:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 49 pg[8.0( v 33'8 (0'0,33'8] local-lis/les=32/33 n=6 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49 pruub=11.634441376s) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 lcod 33'7 mlcod 33'7 active pruub 118.379112244s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 49 pg[9.0( v 40'1015 (0'0,40'1015] local-lis/les=34/35 n=177 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=13.690087318s) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 40'1014 mlcod 40'1014 active pruub 120.434898376s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 49 pg[8.0( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49 pruub=11.634441376s) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 lcod 33'7 mlcod 0'0 unknown pruub 118.379112244s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:47.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 49 pg[9.0( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49 pruub=13.690087318s) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 40'1014 mlcod 0'0 unknown pruub 120.434898376s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.209434242 +0000 UTC m=+0.036629554 container create 6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:23:48 np0005603621 systemd[1]: Started libpod-conmon-6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0.scope.
Jan 31 02:23:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.26662564 +0000 UTC m=+0.093821012 container init 6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.271802125 +0000 UTC m=+0.098997437 container start 6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hugle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 31 02:23:48 np0005603621 silly_hugle[97602]: 167 167
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.275855333 +0000 UTC m=+0.103050695 container attach 6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hugle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:48 np0005603621 systemd[1]: libpod-6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0.scope: Deactivated successfully.
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.276225832 +0000 UTC m=+0.103421164 container died 6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hugle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.192001552 +0000 UTC m=+0.019196924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fd7f8d8a76a68281ab7eb11799c7c1ba8ab5626448ca74c09a4f5ca19332409c-merged.mount: Deactivated successfully.
Jan 31 02:23:48 np0005603621 podman[97585]: 2026-01-31 07:23:48.31391135 +0000 UTC m=+0.141106662 container remove 6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:48 np0005603621 systemd[1]: libpod-conmon-6b58a9e92b737ef5fff660dc6721fe0f2cd40f63b4682f192f1413c906de68d0.scope: Deactivated successfully.
Jan 31 02:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 31 02:23:48 np0005603621 podman[97625]: 2026-01-31 07:23:48.426221568 +0000 UTC m=+0.035757673 container create 72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:23:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 31 02:23:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] update: starting ev afd3535c-8e73-4db7-ba9e-045bf00d18bc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev aa5672c4-f15d-42e2-b372-027c1352321b (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event aa5672c4-f15d-42e2-b372-027c1352321b (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 9ac1d91e-5c0e-4854-89d8-59fbc2fd6560 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 9ac1d91e-5c0e-4854-89d8-59fbc2fd6560 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 8db01486-9d37-4c98-8e06-5ad3c20168ba (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 8db01486-9d37-4c98-8e06-5ad3c20168ba (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 62b4952c-d42d-414f-a7b8-bbce1a60194a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 62b4952c-d42d-414f-a7b8-bbce1a60194a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev f867fa0b-fbcd-4d71-8724-1ea1a63d4c49 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event f867fa0b-fbcd-4d71-8724-1ea1a63d4c49 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 6b02531f-8010-4ada-9bae-1cb887af2eca (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 6b02531f-8010-4ada-9bae-1cb887af2eca (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev 9a7c0758-b6f9-4174-a173-183e0e43fc3b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event 9a7c0758-b6f9-4174-a173-183e0e43fc3b (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev a0bb6d82-8851-4a10-be9d-88f3ed2f211f (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event a0bb6d82-8851-4a10-be9d-88f3ed2f211f (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev df62371a-7397-40f3-a0ad-0e116615d88e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event df62371a-7397-40f3-a0ad-0e116615d88e (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] complete: finished ev afd3535c-8e73-4db7-ba9e-045bf00d18bc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 31 02:23:48 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event afd3535c-8e73-4db7-ba9e-045bf00d18bc (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 31 02:23:48 np0005603621 systemd[1]: Started libpod-conmon-72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad.scope.
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.14( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1a( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.15( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1b( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1b( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1a( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.18( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.19( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.18( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.19( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1f( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1e( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1f( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1e( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1c( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1d( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1c( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1d( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.3( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.2( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.6( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.7( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.6( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.4( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.7( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.5( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.d( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.c( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.f( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.e( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1( v 33'8 (0'0,33'8] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.d( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.3( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.e( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.c( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.9( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.f( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.8( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.8( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.9( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.2( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.a( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.a( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.5( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.b( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.4( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.b( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.15( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.14( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.17( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.16( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.17( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.11( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.10( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.10( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.11( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.13( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.12( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.12( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.14( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1b( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1a( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1f( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.18( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1e( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1c( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.19( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.2( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.16( v 40'1015 lc 0'0 (0'0,40'1015] local-lis/les=34/35 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.13( v 33'8 lc 0'0 (0'0,33'8] local-lis/les=32/33 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'7 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.6( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1c( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.4( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.7( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1d( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.c( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.e( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.0( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 33'7 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.0( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 40'1014 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.5( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.1( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.d( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.c( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.8( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.9( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.f( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.3( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.2( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.4( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.b( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.14( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.15( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.10( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.11( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.12( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.16( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.a( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.17( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[8.13( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=32/32 les/c/f=33/33/0 sis=49) [0] r=0 lpr=49 pi=[32,49)/1 crt=33'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 50 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=34/34 les/c/f=35/35/0 sis=49) [0] r=0 lpr=49 pi=[34,49)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1054210e96dee29808cebf52947783861c601d5de3116ec5f10db24f9dd8846/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1054210e96dee29808cebf52947783861c601d5de3116ec5f10db24f9dd8846/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1054210e96dee29808cebf52947783861c601d5de3116ec5f10db24f9dd8846/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1054210e96dee29808cebf52947783861c601d5de3116ec5f10db24f9dd8846/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:48 np0005603621 podman[97625]: 2026-01-31 07:23:48.412123528 +0000 UTC m=+0.021659663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:48 np0005603621 podman[97625]: 2026-01-31 07:23:48.536895236 +0000 UTC m=+0.146431361 container init 72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:23:48 np0005603621 podman[97625]: 2026-01-31 07:23:48.54332555 +0000 UTC m=+0.152861655 container start 72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:48 np0005603621 podman[97625]: 2026-01-31 07:23:48.610771786 +0000 UTC m=+0.220307881 container attach 72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 31 02:23:49 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 31 02:23:49 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 31 02:23:49 np0005603621 magical_hellman[97641]: {
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:        "osd_id": 0,
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:        "type": "bluestore"
Jan 31 02:23:49 np0005603621 magical_hellman[97641]:    }
Jan 31 02:23:49 np0005603621 magical_hellman[97641]: }
Jan 31 02:23:49 np0005603621 systemd[1]: libpod-72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad.scope: Deactivated successfully.
Jan 31 02:23:49 np0005603621 podman[97625]: 2026-01-31 07:23:49.338825117 +0000 UTC m=+0.948361232 container died 72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v139: 243 pgs: 62 unknown, 181 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c1054210e96dee29808cebf52947783861c601d5de3116ec5f10db24f9dd8846-merged.mount: Deactivated successfully.
Jan 31 02:23:49 np0005603621 podman[97625]: 2026-01-31 07:23:49.462202521 +0000 UTC m=+1.071738626 container remove 72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:49 np0005603621 systemd[1]: libpod-conmon-72ee310e6f2d1968c1e14151f05b8aa2afbc708f3b9cba531ac3c7ce8a2129ad.scope: Deactivated successfully.
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 31 02:23:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5c537bb1-024c-4a28-89c6-54dcc91debfe does not exist
Jan 31 02:23:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4fb167e6-83cb-48e7-bce4-db909117b613 does not exist
Jan 31 02:23:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7b904e5c-9818-4ee3-8b78-b77836fb278b does not exist
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 31 02:23:50 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=15.325853348s) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active pruub 124.757347107s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:50 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 51 pg[11.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51 pruub=15.325853348s) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown pruub 124.757347107s@ mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.2 deep-scrub starts
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.2 deep-scrub ok
Jan 31 02:23:51 np0005603621 podman[97900]: 2026-01-31 07:23:51.228698915 +0000 UTC m=+0.049160015 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:23:51 np0005603621 podman[97900]: 2026-01-31 07:23:51.326273427 +0000 UTC m=+0.146734527 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:23:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v141: 305 pgs: 124 unknown, 181 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:23:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:51.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:23:51 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 19 completed events
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.11( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.12( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.13( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.15( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.16( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.7( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.8( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.9( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.2( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.3( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.5( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1f( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1e( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1d( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1c( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1b( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1a( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.18( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=38/39 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=38/38 les/c/f=39/39/0 sis=51) [0] r=0 lpr=51 pi=[38,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:23:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:51 np0005603621 podman[98057]: 2026-01-31 07:23:51.789341141 +0000 UTC m=+0.042811824 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:51 np0005603621 podman[98057]: 2026-01-31 07:23:51.799061654 +0000 UTC m=+0.052532317 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:23:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:51 np0005603621 podman[98126]: 2026-01-31 07:23:51.963704414 +0000 UTC m=+0.042670150 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, version=2.2.4, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 02:23:51 np0005603621 podman[98126]: 2026-01-31 07:23:51.976208735 +0000 UTC m=+0.055174461 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, version=2.2.4, vendor=Red Hat, Inc., description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, com.redhat.component=keepalived-container)
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:23:52 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.3 deep-scrub starts
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.3 deep-scrub ok
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 37dd0f13-12c0-4d32-b45a-bcf634603a9b does not exist
Jan 31 02:23:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6e8eec69-8e1f-4b43-923c-c25fd0429e5a does not exist
Jan 31 02:23:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 39a07dca-712f-4dfd-b607-bd7ae86cc755 does not exist
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.803106229 +0000 UTC m=+0.035885936 container create 36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:52 np0005603621 systemd[1]: Started libpod-conmon-36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f.scope.
Jan 31 02:23:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.865297458 +0000 UTC m=+0.098077175 container init 36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.86953105 +0000 UTC m=+0.102310757 container start 36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:23:52 np0005603621 naughty_williams[98318]: 167 167
Jan 31 02:23:52 np0005603621 systemd[1]: libpod-36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f.scope: Deactivated successfully.
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.874644513 +0000 UTC m=+0.107424240 container attach 36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.875035133 +0000 UTC m=+0.107814830 container died 36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.787340838 +0000 UTC m=+0.020120575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b831ceafd5489635dd24796328e0b7e0235c64e8e2d1e27825cde80483eef443-merged.mount: Deactivated successfully.
Jan 31 02:23:52 np0005603621 podman[98302]: 2026-01-31 07:23:52.910953038 +0000 UTC m=+0.143732765 container remove 36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:52 np0005603621 systemd[1]: libpod-conmon-36bc2e418cd5c6f6fb1344e2ff8fbab4c9a580bfcd1e9d0f00d5ac8160b05d2f.scope: Deactivated successfully.
Jan 31 02:23:53 np0005603621 podman[98342]: 2026-01-31 07:23:53.05411632 +0000 UTC m=+0.053912781 container create c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:23:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 31 02:23:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 31 02:23:53 np0005603621 systemd[1]: Started libpod-conmon-c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0.scope.
Jan 31 02:23:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:53 np0005603621 podman[98342]: 2026-01-31 07:23:53.029619059 +0000 UTC m=+0.029415560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79024eccd65239341d4de1bd22aea291ba5adeac7808dcf93d6dbcaf8cc58bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79024eccd65239341d4de1bd22aea291ba5adeac7808dcf93d6dbcaf8cc58bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79024eccd65239341d4de1bd22aea291ba5adeac7808dcf93d6dbcaf8cc58bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79024eccd65239341d4de1bd22aea291ba5adeac7808dcf93d6dbcaf8cc58bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79024eccd65239341d4de1bd22aea291ba5adeac7808dcf93d6dbcaf8cc58bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:53 np0005603621 podman[98342]: 2026-01-31 07:23:53.148556276 +0000 UTC m=+0.148352737 container init c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:23:53 np0005603621 podman[98342]: 2026-01-31 07:23:53.154302344 +0000 UTC m=+0.154098795 container start c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:53 np0005603621 podman[98342]: 2026-01-31 07:23:53.157952223 +0000 UTC m=+0.157748684 container attach c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:23:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 31 unknown, 274 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:53.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:53 np0005603621 gracious_mcnulty[98358]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:23:53 np0005603621 gracious_mcnulty[98358]: --> relative data size: 1.0
Jan 31 02:23:53 np0005603621 gracious_mcnulty[98358]: --> All data devices are unavailable
Jan 31 02:23:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:53.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:53 np0005603621 podman[98342]: 2026-01-31 07:23:53.9712827 +0000 UTC m=+0.971079121 container died c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:53 np0005603621 systemd[1]: libpod-c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0.scope: Deactivated successfully.
Jan 31 02:23:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f79024eccd65239341d4de1bd22aea291ba5adeac7808dcf93d6dbcaf8cc58bb-merged.mount: Deactivated successfully.
Jan 31 02:23:54 np0005603621 podman[98342]: 2026-01-31 07:23:54.019107022 +0000 UTC m=+1.018903443 container remove c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:23:54 np0005603621 systemd[1]: libpod-conmon-c8207746db4bca2612d46f5268b695d72e864ea31f7b08c58e56afdbc30c7fe0.scope: Deactivated successfully.
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.599168955 +0000 UTC m=+0.045638570 container create a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kalam, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:23:54 np0005603621 systemd[1]: Started libpod-conmon-a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02.scope.
Jan 31 02:23:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.574501761 +0000 UTC m=+0.020971476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.681031339 +0000 UTC m=+0.127500964 container init a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kalam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.685996818 +0000 UTC m=+0.132466433 container start a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kalam, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.689135904 +0000 UTC m=+0.135605549 container attach a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:23:54 np0005603621 beautiful_kalam[98541]: 167 167
Jan 31 02:23:54 np0005603621 systemd[1]: libpod-a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02.scope: Deactivated successfully.
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.691018849 +0000 UTC m=+0.137488484 container died a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:23:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4172b16ff3f2e8b8f8c945370b9a81ec3a3e6ba2887aa67540acacc5d3d743b2-merged.mount: Deactivated successfully.
Jan 31 02:23:54 np0005603621 podman[98525]: 2026-01-31 07:23:54.726021193 +0000 UTC m=+0.172490808 container remove a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:23:54 np0005603621 systemd[1]: libpod-conmon-a9edf8a985e7312d5ab01c1c3cf659352bc0e1671d965a595cbbccf655dc0b02.scope: Deactivated successfully.
Jan 31 02:23:54 np0005603621 podman[98567]: 2026-01-31 07:23:54.828821241 +0000 UTC m=+0.032372991 container create 6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:23:54 np0005603621 systemd[1]: Started libpod-conmon-6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97.scope.
Jan 31 02:23:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a178ee346bc4074707346a34fc0c892f5c5f473cfe41de51b7d5f0940dcfc93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a178ee346bc4074707346a34fc0c892f5c5f473cfe41de51b7d5f0940dcfc93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a178ee346bc4074707346a34fc0c892f5c5f473cfe41de51b7d5f0940dcfc93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a178ee346bc4074707346a34fc0c892f5c5f473cfe41de51b7d5f0940dcfc93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:54 np0005603621 podman[98567]: 2026-01-31 07:23:54.897853446 +0000 UTC m=+0.101405216 container init 6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:23:54 np0005603621 podman[98567]: 2026-01-31 07:23:54.904083265 +0000 UTC m=+0.107635015 container start 6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:23:54 np0005603621 podman[98567]: 2026-01-31 07:23:54.907441547 +0000 UTC m=+0.110993317 container attach 6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:23:54 np0005603621 podman[98567]: 2026-01-31 07:23:54.814714961 +0000 UTC m=+0.018266741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v144: 305 pgs: 305 active+clean; 457 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:55.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 31 02:23:55 np0005603621 agitated_germain[98583]: {
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:    "0": [
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:        {
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "devices": [
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "/dev/loop3"
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            ],
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "lv_name": "ceph_lv0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "lv_size": "7511998464",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "name": "ceph_lv0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "tags": {
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.cluster_name": "ceph",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.crush_device_class": "",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.encrypted": "0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.osd_id": "0",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.type": "block",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:                "ceph.vdo": "0"
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            },
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "type": "block",
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:            "vg_name": "ceph_vg0"
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:        }
Jan 31 02:23:55 np0005603621 agitated_germain[98583]:    ]
Jan 31 02:23:55 np0005603621 agitated_germain[98583]: }
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 31 02:23:55 np0005603621 systemd[1]: libpod-6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97.scope: Deactivated successfully.
Jan 31 02:23:55 np0005603621 podman[98567]: 2026-01-31 07:23:55.668980595 +0000 UTC m=+0.872532375 container died 6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.11( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.793218613s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281295776s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.636021614s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.124137878s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.642559052s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130661011s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.11( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.793174744s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281295776s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.12( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.793257713s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281379700s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.642498970s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130661011s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.12( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.793222427s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281379700s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838701248s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.326950073s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838687897s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.326950073s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838595390s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.326957703s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1f( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.635781288s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.124137878s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641895294s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130310059s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838544846s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.326957703s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641869545s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130310059s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.10( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792693138s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281257629s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838386536s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.326965332s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641674042s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130271912s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.10( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792668343s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281257629s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838362694s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.326965332s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641652107s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130271912s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.17( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792442322s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281143188s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.17( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792423248s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281143188s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.16( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792657852s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281425476s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838201523s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.326980591s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.838179588s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.326980591s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.16( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792629242s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281425476s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.19( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641471863s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130348206s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.15( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792206764s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281135559s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.19( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641454697s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130348206s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641133308s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130096436s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641113281s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130096436s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.4( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.791962624s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.280975342s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641198158s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130249023s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.4( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.791932106s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.280975342s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.641184807s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130249023s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.15( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.792052269s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281135559s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.732986450s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222198486s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.5( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.732970238s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222198486s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.b( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.791834831s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.281089783s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.837791443s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327087402s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.b( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.791798592s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.281089783s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.837766647s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327087402s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.837688446s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327110291s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.837671280s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327110291s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.a( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.791421890s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.280891418s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.6( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.640747070s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130233765s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.a( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.791402817s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.280891418s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.6( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.640718460s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130233765s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.640746117s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130340576s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.640727997s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130340576s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.9( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.790674210s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.280342102s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.837470055s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327194214s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.9( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.790638924s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.280342102s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.837451935s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327194214s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.732656479s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222183228s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.7( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.732354164s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222183228s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.3( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.640330315s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130264282s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.732314110s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222244263s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.3( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.640315056s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130264282s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.8( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.790297508s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.280296326s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.1( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.732273102s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222244263s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.8( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.790284157s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.280296326s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.f( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.790297508s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.280380249s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.f( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.790282249s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.280380249s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731943130s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222213745s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.3( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731909752s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222213745s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639966965s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130287170s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.d( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.787889481s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.278221130s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836887360s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327239990s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639936447s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130287170s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.3( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.789990425s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.280441284s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.3( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.789965630s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.280441284s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731698036s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222213745s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.d( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731675148s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222213745s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836846352s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327239990s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639636040s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130310059s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.d( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.787566185s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.278221130s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639616966s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130310059s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639590263s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130317688s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836650848s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327392578s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.c( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639566422s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130317688s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731500626s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222259521s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836627960s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327392578s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.f( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731475830s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222259521s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639816284s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130653381s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639787674s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130653381s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836522102s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327438354s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836500168s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327438354s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.c( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.786029816s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.277030945s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.c( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.786010742s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.277030945s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731319427s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222412109s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.b( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.731298447s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222412109s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.5( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.786520004s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.277664185s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639254570s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130401611s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639228821s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130401611s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.6( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.785449028s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.276741028s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639105797s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130416870s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.6( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.785426140s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.276741028s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.639081955s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130416870s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836123466s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327484131s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836070061s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327484131s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.730819702s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222274780s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836015701s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327499390s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=53 pruub=13.730797768s) [2] r=-1 lpr=53 pi=[46,53)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222274780s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835994720s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327499390s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836031914s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327644348s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.5( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.786071777s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.277664185s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638813019s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130432129s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.1c( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.785127640s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.276756287s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.836009026s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327644348s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638784409s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130432129s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.1c( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.785108566s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.276756287s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.2( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.783659935s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.275360107s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.2( v 33'8 (0'0,33'8] local-lis/les=49/50 n=1 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.783634186s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.275360107s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835826874s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327575684s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835806847s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327575684s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835807800s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327651978s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835767746s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327651978s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835765839s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327682495s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835743904s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327682495s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638466835s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130546570s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.1f( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.783125877s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.275207520s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638440132s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130546570s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835618973s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327751160s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835596085s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327751160s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.1f( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.783081055s) [2] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.275207520s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835576057s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327827454s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638558388s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130523682s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.15( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638416290s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130676270s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835553169s) [1] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327827454s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638245583s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130523682s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.19( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.783020020s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.275329590s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.19( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782986641s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.275329590s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.18( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782803535s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.275253296s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.15( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.638398170s) [2] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130676270s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.1b( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782487869s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.275047302s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.1b( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782466888s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.275047302s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835186005s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327835083s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835138321s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active pruub 126.327781677s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835165977s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327835083s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/38 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=11.835105896s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.327781677s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.18( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782775879s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.275253296s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.14( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782175064s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 active pruub 123.275047302s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[8.14( v 33'8 (0'0,33'8] local-lis/les=49/50 n=0 ec=49/32 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=8.782149315s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=33'8 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 123.275047302s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.637526512s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active pruub 126.130554199s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=44/45 n=0 ec=44/17 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.637500763s) [1] r=-1 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 126.130554199s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.19( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.17( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.a( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.6( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.2( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.4( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.b( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6a178ee346bc4074707346a34fc0c892f5c5f473cfe41de51b7d5f0940dcfc93-merged.mount: Deactivated successfully.
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.1e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.1( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.1e( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.5( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.e( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.b( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.18( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[10.1b( empty local-lis/les=0/0 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 podman[98567]: 2026-01-31 07:23:55.744438464 +0000 UTC m=+0.947990214 container remove 6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_germain, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:23:55 np0005603621 systemd[1]: libpod-conmon-6003f0ed54c4a498c4c4bcc5792d23865eaede07289b15a0769f1e2a6ea49a97.scope: Deactivated successfully.
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[7.10( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:23:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:55.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.245218785 +0000 UTC m=+0.047897295 container create 4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:23:56 np0005603621 systemd[1]: Started libpod-conmon-4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52.scope.
Jan 31 02:23:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.318991264 +0000 UTC m=+0.121669834 container init 4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.226994197 +0000 UTC m=+0.029672727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.324405695 +0000 UTC m=+0.127084205 container start 4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.32835622 +0000 UTC m=+0.131034750 container attach 4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:56 np0005603621 sharp_proskuriakova[98761]: 167 167
Jan 31 02:23:56 np0005603621 systemd[1]: libpod-4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52.scope: Deactivated successfully.
Jan 31 02:23:56 np0005603621 conmon[98761]: conmon 4f054265a1e44ff935bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52.scope/container/memory.events
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.330492352 +0000 UTC m=+0.133170862 container died 4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5f4638c65d7fa055aaec2bfe63f07efe0ba02535822b532012cdfc62bd1395f0-merged.mount: Deactivated successfully.
Jan 31 02:23:56 np0005603621 podman[98744]: 2026-01-31 07:23:56.370524027 +0000 UTC m=+0.173202537 container remove 4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:23:56 np0005603621 systemd[1]: libpod-conmon-4f054265a1e44ff935bbd8450fac20e7975e9ca1c3568f21ace805c5a50f5d52.scope: Deactivated successfully.
Jan 31 02:23:56 np0005603621 ceph-mgr[74689]: [progress INFO root] Completed event aa2579ec-e1de-4afa-a28a-1a233c358dc7 (Global Recovery Event) in 15 seconds
Jan 31 02:23:56 np0005603621 podman[98786]: 2026-01-31 07:23:56.468714563 +0000 UTC m=+0.030340942 container create 2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:23:56 np0005603621 systemd[1]: Started libpod-conmon-2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf.scope.
Jan 31 02:23:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:23:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333993cc15adc0182655670efe57d99bcf63bd9084cbc512f685967aa470d2ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333993cc15adc0182655670efe57d99bcf63bd9084cbc512f685967aa470d2ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333993cc15adc0182655670efe57d99bcf63bd9084cbc512f685967aa470d2ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333993cc15adc0182655670efe57d99bcf63bd9084cbc512f685967aa470d2ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:23:56 np0005603621 podman[98786]: 2026-01-31 07:23:56.455781642 +0000 UTC m=+0.017408041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:23:56 np0005603621 podman[98786]: 2026-01-31 07:23:56.560490066 +0000 UTC m=+0.122116465 container init 2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_gates, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:23:56 np0005603621 podman[98786]: 2026-01-31 07:23:56.570159449 +0000 UTC m=+0.131785868 container start 2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Jan 31 02:23:56 np0005603621 podman[98786]: 2026-01-31 07:23:56.573971861 +0000 UTC m=+0.135598270 container attach 2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_gates, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.1e( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.13( v 37'48 (0'0,37'48] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.19( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.15( v 52'51 lc 37'34 (0'0,52'51] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.14( v 52'51 lc 37'45 (0'0,52'51] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=52'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.6( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.b( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.1( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.8( v 37'48 (0'0,37'48] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.4( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.2( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.2( v 37'48 (0'0,37'48] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.b( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.5( v 37'48 (0'0,37'48] local-lis/les=53/54 n=1 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.10( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.a( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.e( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.18( v 37'48 (0'0,37'48] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.1b( v 37'48 (0'0,37'48] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=44/16 lis/c=44/44 les/c/f=45/45/0 sis=53) [0] r=0 lpr=53 pi=[44,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[5.17( empty local-lis/les=53/54 n=0 ec=46/19 lis/c=46/46 les/c/f=47/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[2.1e( empty local-lis/les=53/54 n=0 ec=42/14 lis/c=42/42 les/c/f=43/43/0 sis=53) [0] r=0 lpr=53 pi=[42,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/21 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 54 pg[10.19( v 37'48 (0'0,37'48] local-lis/les=53/54 n=0 ec=51/36 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=37'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:23:57 np0005603621 exciting_gates[98802]: {
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:        "osd_id": 0,
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:        "type": "bluestore"
Jan 31 02:23:57 np0005603621 exciting_gates[98802]:    }
Jan 31 02:23:57 np0005603621 exciting_gates[98802]: }
Jan 31 02:23:57 np0005603621 systemd[1]: libpod-2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf.scope: Deactivated successfully.
Jan 31 02:23:57 np0005603621 podman[98786]: 2026-01-31 07:23:57.330878108 +0000 UTC m=+0.892504497 container died 2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:23:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v147: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 02:23:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-333993cc15adc0182655670efe57d99bcf63bd9084cbc512f685967aa470d2ef-merged.mount: Deactivated successfully.
Jan 31 02:23:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:57.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:23:57 np0005603621 podman[98786]: 2026-01-31 07:23:57.384450508 +0000 UTC m=+0.946076887 container remove 2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:23:57 np0005603621 systemd[1]: libpod-conmon-2e4f62d93b8a62c4644436a2bbef4338bcfbc1bcdb6514601aca00841c231baf.scope: Deactivated successfully.
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3df9ca7a-d3b3-48eb-8168-6a5c3877c38f does not exist
Jan 31 02:23:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 03406729-264d-4d6e-825a-1b476d1dcbbc does not exist
Jan 31 02:23:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 82cf72de-a3c7-42b8-b9ce-dea40cf0a7d5 does not exist
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.680471420s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222198486s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.a( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.680378914s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222198486s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.6( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.680383682s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222396851s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.6( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.680306435s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222396851s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.679758072s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222244263s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.679718018s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222244263s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.679506302s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 128.222274780s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:23:57 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 55 pg[6.2( v 40'39 (0'0,40'39] local-lis/les=46/47 n=2 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=11.679359436s) [1] r=-1 lpr=55 pi=[46,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.222274780s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:23:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:57.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:23:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 31 02:23:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 31 02:23:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 31 02:23:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 02:23:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 31 02:23:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 1 active+recovery_wait, 7 active+recovery_wait+degraded, 2 active+recovering, 295 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 10/215 objects degraded (4.651%); 1/215 objects misplaced (0.465%); 22 B/s, 1 objects/s recovering
Jan 31 02:23:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:23:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:23:59.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:23:59 np0005603621 ceph-mon[74394]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 10/215 objects degraded (4.651%), 7 pgs degraded (PG_DEGRADED)
Jan 31 02:23:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:23:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:23:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:23:59.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:00 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 31 02:24:00 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 31 02:24:00 np0005603621 ceph-mon[74394]: Health check failed: Degraded data redundancy: 10/215 objects degraded (4.651%), 7 pgs degraded (PG_DEGRADED)
Jan 31 02:24:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 1 active+recovery_wait, 7 active+recovery_wait+degraded, 2 active+recovering, 295 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 10/215 objects degraded (4.651%); 1/215 objects misplaced (0.465%); 16 B/s, 1 objects/s recovering
Jan 31 02:24:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:01.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:01 np0005603621 ceph-mgr[74689]: [progress INFO root] Writing back 20 completed events
Jan 31 02:24:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 31 02:24:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:01 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:01.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 1 active+recovery_wait+degraded, 1 active+recovering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1/215 objects degraded (0.465%); 196 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 02:24:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:03.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 163 B/s, 1 keys/s, 2 objects/s recovering
Jan 31 02:24:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 31 02:24:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 02:24:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 31 02:24:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 02:24:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:05.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/215 objects degraded (0.465%), 1 pg degraded)
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 31 02:24:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.904987335s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.281524658s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.904897690s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.281524658s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.903845787s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.281356812s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.903804779s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.281356812s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.903292656s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.281188965s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.903173447s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.281188965s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.898409843s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.277297974s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.898321152s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.277297974s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.898141861s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.277420044s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.897985458s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.277420044s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.897537231s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.276962280s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.897396088s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.276962280s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.900627136s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.280471802s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.895713806s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.275604248s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.900493622s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.280471802s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:06 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 57 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=13.895528793s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.275604248s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/215 objects degraded (0.465%), 1 pg degraded)
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: Cluster is now healthy
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:07 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 58 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 152 B/s, 1 keys/s, 1 objects/s recovering
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 31 02:24:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 02:24:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:07.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:07.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 59 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[49,58)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 31 02:24:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.738675117s) [2] async=[2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179199219s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.17( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.738554955s) [2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179199219s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.737518311s) [2] async=[2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179092407s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.f( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.737421036s) [2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179092407s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.728878975s) [2] async=[2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.170669556s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.3( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.728790283s) [2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.170669556s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.737278938s) [2] async=[2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179275513s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.737170219s) [2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179275513s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.736989021s) [2] async=[2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179138184s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:08 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 60 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=60 pruub=15.736874580s) [2] r=-1 lpr=60 pi=[49,60)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179138184s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 31 02:24:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 02:24:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:09.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 31 02:24:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.754400253s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.280838013s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.754278183s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.280838013s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=61 pruub=14.652816772s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179534912s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=61 pruub=14.653287888s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179580688s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.b( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=61 pruub=14.652476311s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179534912s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.13( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=5 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=61 pruub=14.652431488s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179580688s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=61 pruub=14.652213097s) [2] async=[2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 143.179519653s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.749692917s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.277130127s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.7( v 40'1015 (0'0,40'1015] local-lis/les=58/59 n=6 ec=49/34 lis/c=58/49 les/c/f=59/50/0 sis=61 pruub=14.652161598s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 143.179519653s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.749668121s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.277130127s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.747844696s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.275619507s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.747557640s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.275573730s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.747712135s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.275619507s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 61 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=61 pruub=10.747538567s) [2] r=-1 lpr=61 pi=[49,61)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.275573730s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:09.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 02:24:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 31 02:24:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 31 02:24:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 31 02:24:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:10 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 62 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 159 B/s, 2 keys/s, 1 objects/s recovering
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 02:24:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:11.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 31 02:24:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.637660027s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.282196045s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.637579918s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.282196045s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.633061409s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.278472900s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.632853508s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.278472900s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.629736900s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.275680542s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.629681587s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.275680542s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.628995895s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 139.275619507s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=63 pruub=8.628771782s) [1] r=-1 lpr=63 pi=[49,63)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 139.275619507s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[6.6( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:11 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 63 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=62) [2]/[0] async=[2] r=0 lpr=62 pi=[49,62)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:11.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 31 02:24:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 31 02:24:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 02:24:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 31 02:24:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 31 02:24:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=6 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.034096718s) [2] async=[2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 146.675201416s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.5( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=6 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.033517838s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.675201416s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=6 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.041275978s) [2] async=[2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 146.683868408s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.d( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=6 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.041060448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.683868408s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=5 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.040904999s) [2] async=[2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 146.683792114s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=5 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.038414955s) [2] async=[2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 146.681304932s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.15( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=5 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.040860176s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.683792114s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[9.1d( v 40'1015 (0'0,40'1015] local-lis/les=62/63 n=5 ec=49/34 lis/c=62/49 les/c/f=63/50/0 sis=64 pruub=15.038352013s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.681304932s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[6.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=63/64 n=2 ec=46/20 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:12 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 64 pg[6.e( v 40'39 lc 36'19 (0'0,40'39] local-lis/les=63/64 n=1 ec=46/20 lis/c=55/55 les/c/f=56/56/0 sis=63) [0] r=0 lpr=63 pi=[55,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:13 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.12 deep-scrub starts
Jan 31 02:24:13 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.12 deep-scrub ok
Jan 31 02:24:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 302 B/s, 9 objects/s recovering
Jan 31 02:24:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:13.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 31 02:24:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 31 02:24:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 31 02:24:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:13.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 65 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] async=[1] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 65 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] async=[1] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 65 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] async=[1] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 65 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=64) [1]/[0] async=[1] r=0 lpr=64 pi=[49,64)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:14 np0005603621 python3[98970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:14 np0005603621 podman[98971]: 2026-01-31 07:24:14.312686948 +0000 UTC m=+0.053602643 container create b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:24:14 np0005603621 systemd[1]: Started libpod-conmon-b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd.scope.
Jan 31 02:24:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:24:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4856db0bb82810af198369191522514f9ea8cee807287ed8d5fa99df43bab8de/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:24:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4856db0bb82810af198369191522514f9ea8cee807287ed8d5fa99df43bab8de/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:24:14 np0005603621 podman[98971]: 2026-01-31 07:24:14.28413553 +0000 UTC m=+0.025051275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:24:14 np0005603621 podman[98971]: 2026-01-31 07:24:14.397235505 +0000 UTC m=+0.138151180 container init b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:24:14 np0005603621 podman[98971]: 2026-01-31 07:24:14.402078633 +0000 UTC m=+0.142994308 container start b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:24:14 np0005603621 podman[98971]: 2026-01-31 07:24:14.405141467 +0000 UTC m=+0.146057172 container attach b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:24:14 np0005603621 modest_sanderson[98986]: could not fetch user info: no user info saved
Jan 31 02:24:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 31 02:24:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 31 02:24:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.174504280s) [1] async=[1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 148.946289062s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.e( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.174382210s) [1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.946289062s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.174103737s) [1] async=[1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 148.946136475s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.173676491s) [1] async=[1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 148.946121216s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.6( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=6 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.173611641s) [1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.946121216s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.167893410s) [1] async=[1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 148.940536499s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.167511940s) [1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.940536499s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:14 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 66 pg[9.16( v 40'1015 (0'0,40'1015] local-lis/les=64/65 n=5 ec=49/34 lis/c=64/49 les/c/f=65/50/0 sis=66 pruub=15.173329353s) [1] r=-1 lpr=66 pi=[49,66)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 148.946136475s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:14 np0005603621 systemd[1]: libpod-b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd.scope: Deactivated successfully.
Jan 31 02:24:14 np0005603621 podman[98971]: 2026-01-31 07:24:14.999288699 +0000 UTC m=+0.740204404 container died b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd (image=quay.io/ceph/ceph:v18, name=modest_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:24:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4856db0bb82810af198369191522514f9ea8cee807287ed8d5fa99df43bab8de-merged.mount: Deactivated successfully.
Jan 31 02:24:15 np0005603621 podman[98971]: 2026-01-31 07:24:15.037595913 +0000 UTC m=+0.778511588 container remove b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd (image=quay.io/ceph/ceph:v18, name=modest_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:24:15 np0005603621 systemd[1]: libpod-conmon-b151414d923930ba51ece966d85dcd2b26630daf699755a66ce1a9debfa93bbd.scope: Deactivated successfully.
Jan 31 02:24:15 np0005603621 python3[99108]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 342 B/s, 10 objects/s recovering
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.372589088 +0000 UTC m=+0.041742597 container create 636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9 (image=quay.io/ceph/ceph:v18, name=festive_buck, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:24:15 np0005603621 systemd[1]: Started libpod-conmon-636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9.scope.
Jan 31 02:24:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:15.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:24:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33ceeee41fda0637175455412d14b418b3f0826bbd9a76cac4b00247df3e77/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:24:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33ceeee41fda0637175455412d14b418b3f0826bbd9a76cac4b00247df3e77/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.350100106 +0000 UTC m=+0.019253625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.459278018 +0000 UTC m=+0.128431587 container init 636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9 (image=quay.io/ceph/ceph:v18, name=festive_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.467139327 +0000 UTC m=+0.136292806 container start 636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9 (image=quay.io/ceph/ceph:v18, name=festive_buck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.471094183 +0000 UTC m=+0.140247742 container attach 636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9 (image=quay.io/ceph/ceph:v18, name=festive_buck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:24:15 np0005603621 festive_buck[99126]: {
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "user_id": "openstack",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "display_name": "openstack",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "email": "",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "suspended": 0,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "max_buckets": 1000,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "subusers": [],
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "keys": [
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        {
Jan 31 02:24:15 np0005603621 festive_buck[99126]:            "user": "openstack",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:            "access_key": "PNJO4XB76WZ659CX1VL4",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:            "secret_key": "AfcXWgoS8OV5xZNwJ4evfAdaPn6too3pabj798BN"
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        }
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    ],
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "swift_keys": [],
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "caps": [],
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "op_mask": "read, write, delete",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "default_placement": "",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "default_storage_class": "",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "placement_tags": [],
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "bucket_quota": {
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "enabled": false,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "check_on_raw": false,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "max_size": -1,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "max_size_kb": 0,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "max_objects": -1
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    },
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "user_quota": {
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "enabled": false,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "check_on_raw": false,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "max_size": -1,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "max_size_kb": 0,
Jan 31 02:24:15 np0005603621 festive_buck[99126]:        "max_objects": -1
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    },
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "temp_url_keys": [],
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "type": "rgw",
Jan 31 02:24:15 np0005603621 festive_buck[99126]:    "mfa_ids": []
Jan 31 02:24:15 np0005603621 festive_buck[99126]: }
Jan 31 02:24:15 np0005603621 festive_buck[99126]: 
Jan 31 02:24:15 np0005603621 systemd[1]: libpod-636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9.scope: Deactivated successfully.
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.742238399 +0000 UTC m=+0.411391868 container died 636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9 (image=quay.io/ceph/ceph:v18, name=festive_buck, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:24:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9c33ceeee41fda0637175455412d14b418b3f0826bbd9a76cac4b00247df3e77-merged.mount: Deactivated successfully.
Jan 31 02:24:15 np0005603621 podman[99109]: 2026-01-31 07:24:15.778499403 +0000 UTC m=+0.447652872 container remove 636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9 (image=quay.io/ceph/ceph:v18, name=festive_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:24:15 np0005603621 systemd[1]: libpod-conmon-636af4aa168f7784c9c80d69e163c7e2b8218e9189a9f0038a14b8de1dfb49f9.scope: Deactivated successfully.
Jan 31 02:24:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 31 02:24:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 31 02:24:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 31 02:24:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:15.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 4 remapped+peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 451 B/s wr, 2 op/s; 35 B/s, 1 objects/s recovering
Jan 31 02:24:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:17.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:17.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s; 162 B/s, 8 objects/s recovering
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 02:24:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:19.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 31 02:24:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 31 02:24:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:19.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:20 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Jan 31 02:24:20 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Jan 31 02:24:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 02:24:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 31 02:24:21 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 31 02:24:21 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 31 02:24:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 319 B/s wr, 2 op/s; 127 B/s, 7 objects/s recovering
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 02:24:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:21.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 02:24:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 31 02:24:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:21.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 69 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=69 pruub=14.009536743s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 155.281097412s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 69 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=69 pruub=14.009471893s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.281097412s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 69 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=69 pruub=10.950692177s) [1] r=-1 lpr=69 pi=[46,69)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 152.222885132s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 69 pg[6.8( v 40'39 (0'0,40'39] local-lis/les=46/47 n=1 ec=46/20 lis/c=46/46 les/c/f=47/47/0 sis=69 pruub=10.950662613s) [1] r=-1 lpr=69 pi=[46,69)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 152.222885132s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 69 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=69 pruub=14.003809929s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 155.276290894s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 69 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=69 pruub=14.003770828s) [2] r=-1 lpr=69 pi=[49,69)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.276290894s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 31 02:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 02:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 31 02:24:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 31 02:24:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 70 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2]/[0] r=0 lpr=70 pi=[49,70)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 70 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2]/[0] r=0 lpr=70 pi=[49,70)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 70 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2]/[0] r=0 lpr=70 pi=[49,70)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:22 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 70 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2]/[0] r=0 lpr=70 pi=[49,70)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 0 op/s; 135 B/s, 7 objects/s recovering
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 02:24:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:23.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=12.736867905s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 155.280136108s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=12.736797333s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.280136108s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=12.732548714s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 155.276412964s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=12.732396126s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.276412964s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=70/71 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[49,70)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 02:24:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 31 02:24:23 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 71 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=70/71 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[49,70)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:23.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 31 02:24:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 31 02:24:24 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=70/71 n=5 ec=49/34 lis/c=70/49 les/c/f=71/50/0 sis=72 pruub=14.996663094s) [2] async=[2] r=-1 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 158.554702759s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=70/71 n=6 ec=49/34 lis/c=70/49 les/c/f=71/50/0 sis=72 pruub=14.990535736s) [2] async=[2] r=-1 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 158.548583984s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.18( v 40'1015 (0'0,40'1015] local-lis/les=70/71 n=5 ec=49/34 lis/c=70/49 les/c/f=71/50/0 sis=72 pruub=14.996572495s) [2] r=-1 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.554702759s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.8( v 40'1015 (0'0,40'1015] local-lis/les=70/71 n=6 ec=49/34 lis/c=70/49 les/c/f=71/50/0 sis=72 pruub=14.990433693s) [2] r=-1 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.548583984s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[0] r=0 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[0] r=0 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[0] r=0 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[0] r=0 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 72 pg[6.9( v 40'39 (0'0,40'39] local-lis/les=71/72 n=1 ec=46/20 lis/c=53/53 les/c/f=54/54/0 sis=71) [0] r=0 lpr=71 pi=[53,71)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 02:24:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 31 02:24:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 02:24:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:25.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 31 02:24:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 73 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=10.719699860s) [1] r=-1 lpr=73 pi=[49,73)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 155.281097412s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 73 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=10.719619751s) [1] r=-1 lpr=73 pi=[49,73)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.281097412s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 73 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=10.713315010s) [1] r=-1 lpr=73 pi=[49,73)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 155.276519775s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 73 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=73 pruub=10.713281631s) [1] r=-1 lpr=73 pi=[49,73)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 155.276519775s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 31 02:24:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 73 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=72/73 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 73 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=72/73 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 02:24:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 31 02:24:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:25.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 31 02:24:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 31 02:24:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 31 02:24:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=74) [1]/[0] r=0 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=74) [1]/[0] r=0 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=72/73 n=5 ec=49/34 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.994070053s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 160.572143555s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=72/73 n=6 ec=49/34 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.994052887s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 160.572052002s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.9( v 40'1015 (0'0,40'1015] local-lis/les=72/73 n=6 ec=49/34 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993851662s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.572052002s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=74) [1]/[0] r=0 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=72/73 n=5 ec=49/34 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993805885s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 160.572143555s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 74 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=74) [1]/[0] r=0 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 2 remapped+peering, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:24:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 31 02:24:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:27.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 31 02:24:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 31 02:24:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 31 02:24:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 31 02:24:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 31 02:24:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:27.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 31 02:24:27 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 75 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=74/75 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 75 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=74/75 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=74) [1]/[0] async=[1] r=0 lpr=74 pi=[49,74)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 31 02:24:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 31 02:24:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 31 02:24:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 76 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=74/75 n=6 ec=49/34 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.356637001s) [1] async=[1] r=-1 lpr=76 pi=[49,76)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 162.808654785s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 76 pg[9.a( v 40'1015 (0'0,40'1015] local-lis/les=74/75 n=6 ec=49/34 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.356509209s) [1] r=-1 lpr=76 pi=[49,76)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.808654785s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 76 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=74/75 n=5 ec=49/34 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.349847794s) [1] async=[1] r=-1 lpr=76 pi=[49,76)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 162.802169800s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 76 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=74/75 n=5 ec=49/34 lis/c=74/49 les/c/f=75/50/0 sis=76 pruub=15.349697113s) [1] r=-1 lpr=76 pi=[49,76)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 162.802169800s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 192 B/s, 8 objects/s recovering
Jan 31 02:24:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:29.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 31 02:24:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 31 02:24:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 31 02:24:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:31 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 31 02:24:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 4 peering, 301 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 166 B/s, 7 objects/s recovering
Jan 31 02:24:31 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 31 02:24:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:31.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 128 B/s, 5 objects/s recovering
Jan 31 02:24:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:33.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:24:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:33.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:24:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 459 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 B/s, 1 objects/s recovering
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 02:24:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:35.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 02:24:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 31 02:24:35 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=57/57 les/c/f=58/58/0 sis=78) [0] r=0 lpr=78 pi=[57,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:36.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 31 02:24:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 02:24:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 31 02:24:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 31 02:24:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 31 02:24:36 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 79 pg[6.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=78/79 n=1 ec=46/20 lis/c=57/57 les/c/f=58/58/0 sis=78) [0] r=0 lpr=78 pi=[57,78)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 459 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 02:24:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:37.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 02:24:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 31 02:24:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:38.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:24:38
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:24:38 np0005603621 systemd-logind[818]: New session 34 of user zuul.
Jan 31 02:24:38 np0005603621 systemd[1]: Started Session 34 of User zuul.
Jan 31 02:24:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 02:24:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 31 02:24:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 459 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 02:24:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:39.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:39 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 31 02:24:39 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 31 02:24:39 np0005603621 python3.9[99436]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 02:24:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 31 02:24:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:40.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 31 02:24:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 02:24:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 31 02:24:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 31 02:24:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 31 02:24:41 np0005603621 python3.9[99651]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:24:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 459 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 02:24:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:41.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:41 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 31 02:24:41 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 31 02:24:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 31 02:24:41 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 83 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=63/64 n=1 ec=46/20 lis/c=63/63 les/c/f=64/64/0 sis=83 pruub=10.942709923s) [1] r=-1 lpr=83 pi=[63,83)/1 crt=40'39 mlcod 40'39 active pruub 171.647338867s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:41 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 83 pg[6.e( v 40'39 (0'0,40'39] local-lis/les=63/64 n=1 ec=46/20 lis/c=63/63 les/c/f=64/64/0 sis=83 pruub=10.942598343s) [1] r=-1 lpr=83 pi=[63,83)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 171.647338867s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:42.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 31 02:24:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 02:24:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 31 02:24:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 31 02:24:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 31 02:24:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 2 active+remapped, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 02:24:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 31 02:24:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 31 02:24:43 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 85 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/20 lis/c=57/57 les/c/f=58/58/0 sis=85) [0] r=0 lpr=85 pi=[57,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:44 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Jan 31 02:24:44 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Jan 31 02:24:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 31 02:24:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 02:24:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 31 02:24:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 31 02:24:44 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 31 02:24:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 86 pg[6.f( v 40'39 lc 36'1 (0'0,40'39] local-lis/les=85/86 n=1 ec=46/20 lis/c=57/57 les/c/f=58/58/0 sis=85) [0] r=0 lpr=85 pi=[57,85)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 31 02:24:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:45.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 31 02:24:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 31 02:24:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 31 02:24:46 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Jan 31 02:24:46 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Jan 31 02:24:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 31 02:24:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 31 02:24:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 31 02:24:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 2 peering, 303 active+clean; 459 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 150 B/s, 0 objects/s recovering
Jan 31 02:24:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:47.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 31 02:24:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 31 02:24:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.817536757863391e-06 of space, bias 4.0, pg target 0.002181044109436069 quantized to 16 (current 16)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:24:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:24:48 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 31 02:24:48 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 31 02:24:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 305 active+clean; 459 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 187 B/s, 3 objects/s recovering
Jan 31 02:24:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 31 02:24:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 02:24:49 np0005603621 systemd[1]: session-34.scope: Deactivated successfully.
Jan 31 02:24:49 np0005603621 systemd[1]: session-34.scope: Consumed 8.187s CPU time.
Jan 31 02:24:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:49.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:49 np0005603621 systemd-logind[818]: Session 34 logged out. Waiting for processes to exit.
Jan 31 02:24:49 np0005603621 systemd-logind[818]: Removed session 34.
Jan 31 02:24:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:50.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 31 02:24:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 02:24:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 31 02:24:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 31 02:24:50 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 90 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=90 pruub=10.365588188s) [1] r=-1 lpr=90 pi=[49,90)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 179.282257080s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:50 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 90 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=90 pruub=10.365479469s) [1] r=-1 lpr=90 pi=[49,90)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.282257080s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 31 02:24:50 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 31 02:24:50 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 31 02:24:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 31 02:24:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 31 02:24:51 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 31 02:24:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 91 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=91) [1]/[0] r=0 lpr=91 pi=[49,91)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:51 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 91 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=91) [1]/[0] r=0 lpr=91 pi=[49,91)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 31 02:24:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 459 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 2 objects/s recovering
Jan 31 02:24:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 31 02:24:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 02:24:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:51.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:51 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 31 02:24:51 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 31 02:24:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 31 02:24:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 02:24:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 31 02:24:52 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 31 02:24:52 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 92 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=92 pruub=8.334322929s) [1] r=-1 lpr=92 pi=[49,92)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 179.282043457s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:52 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 92 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=92 pruub=8.334246635s) [1] r=-1 lpr=92 pi=[49,92)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.282043457s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 31 02:24:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 31 02:24:52 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 92 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=91/92 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=91) [1]/[0] async=[1] r=0 lpr=91 pi=[49,91)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 31 02:24:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 31 02:24:53 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 31 02:24:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 93 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=93) [1]/[0] r=0 lpr=93 pi=[49,93)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 93 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=91/92 n=6 ec=49/34 lis/c=91/49 les/c/f=92/50/0 sis=93 pruub=14.992338181s) [1] async=[1] r=-1 lpr=93 pi=[49,93)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 186.960311890s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 93 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=93) [1]/[0] r=0 lpr=93 pi=[49,93)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:24:53 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 93 pg[9.10( v 40'1015 (0'0,40'1015] local-lis/les=91/92 n=6 ec=49/34 lis/c=91/49 les/c/f=92/50/0 sis=93 pruub=14.992252350s) [1] r=-1 lpr=93 pi=[49,93)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.960311890s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 1 remapped+peering, 304 active+clean; 459 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:24:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:53.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 31 02:24:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 31 02:24:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:24:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:54.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:24:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 31 02:24:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 31 02:24:54 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 31 02:24:54 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 94 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=93/94 n=6 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[49,93)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:24:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 31 02:24:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 31 02:24:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 31 02:24:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 95 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=93/94 n=6 ec=49/34 lis/c=93/49 les/c/f=94/50/0 sis=95 pruub=14.939438820s) [1] async=[1] r=-1 lpr=95 pi=[49,95)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 188.990661621s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:55 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 95 pg[9.11( v 40'1015 (0'0,40'1015] local-lis/les=93/94 n=6 ec=49/34 lis/c=93/49 les/c/f=94/50/0 sis=95 pruub=14.939312935s) [1] r=-1 lpr=95 pi=[49,95)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 188.990661621s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 1 peering, 304 active+clean; 459 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 02:24:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:24:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:55.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:24:55 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 31 02:24:55 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 31 02:24:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:56.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 31 02:24:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 31 02:24:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 31 02:24:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 31 02:24:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:57.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:57 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 31 02:24:57 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 31 02:24:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:24:58.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:58 np0005603621 podman[99940]: 2026-01-31 07:24:58.613663729 +0000 UTC m=+0.078145197 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:24:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:24:58 np0005603621 podman[99940]: 2026-01-31 07:24:58.782000445 +0000 UTC m=+0.246481893 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:24:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:24:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:24:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 02:24:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:24:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:24:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:24:59.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:24:59 np0005603621 podman[100092]: 2026-01-31 07:24:59.516724911 +0000 UTC m=+0.079150480 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:24:59 np0005603621 podman[100092]: 2026-01-31 07:24:59.524924961 +0000 UTC m=+0.087350550 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 podman[100158]: 2026-01-31 07:24:59.749634501 +0000 UTC m=+0.047842063 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vendor=Red Hat, Inc., release=1793, vcs-type=git, version=2.2.4)
Jan 31 02:24:59 np0005603621 podman[100158]: 2026-01-31 07:24:59.758991878 +0000 UTC m=+0.057199450 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=keepalived, release=1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.28.2)
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 31 02:24:59 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 97 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=97 pruub=8.493535995s) [1] r=-1 lpr=97 pi=[49,97)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 187.282363892s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:24:59 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 97 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=97 pruub=8.493323326s) [1] r=-1 lpr=97 pi=[49,97)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.282363892s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:24:59 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 31 02:25:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:00 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 31 02:25:00 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:25:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e90ad963-9e03-4b54-ac33-6f538eac67cd does not exist
Jan 31 02:25:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 86abfb9d-a016-4622-b383-cd845ee26b0a does not exist
Jan 31 02:25:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ee656ae6-c7fd-446b-98bf-2e339d0c8cf2 does not exist
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:25:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 31 02:25:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 31 02:25:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 31 02:25:01 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 98 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=98) [1]/[0] r=0 lpr=98 pi=[49,98)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:01 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 98 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=49/50 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=98) [1]/[0] r=0 lpr=98 pi=[49,98)/1 crt=40'1015 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.149341433 +0000 UTC m=+0.045422176 container create e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lehmann, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:25:01 np0005603621 systemd[1]: Started libpod-conmon-e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5.scope.
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.123359296 +0000 UTC m=+0.019440059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:25:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.238375136 +0000 UTC m=+0.134455939 container init e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.244563693 +0000 UTC m=+0.140644396 container start e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lehmann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.24763519 +0000 UTC m=+0.143716003 container attach e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:25:01 np0005603621 friendly_lehmann[100478]: 167 167
Jan 31 02:25:01 np0005603621 systemd[1]: libpod-e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5.scope: Deactivated successfully.
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.253061003 +0000 UTC m=+0.149141696 container died e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lehmann, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:25:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-22d8992f61c3329c0ee22eed9fab000e5fd916ac635ece36228a523ac4ebfecd-merged.mount: Deactivated successfully.
Jan 31 02:25:01 np0005603621 podman[100462]: 2026-01-31 07:25:01.302803165 +0000 UTC m=+0.198883868 container remove e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lehmann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:25:01 np0005603621 systemd[1]: libpod-conmon-e9890cf6a6ea47ab4b4e2a6a4854b7a2583722433c35f1ec25979c86639d96b5.scope: Deactivated successfully.
Jan 31 02:25:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 31 02:25:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 31 02:25:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 02:25:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:01.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:01 np0005603621 podman[100503]: 2026-01-31 07:25:01.497686035 +0000 UTC m=+0.090461278 container create 3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:25:01 np0005603621 podman[100503]: 2026-01-31 07:25:01.441972653 +0000 UTC m=+0.034747986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:25:01 np0005603621 systemd[1]: Started libpod-conmon-3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b.scope.
Jan 31 02:25:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:25:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a24a7f9b515858efe65e2ca1fdf612183cf2d719cefb9f74b1b84ce52d6bfbd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a24a7f9b515858efe65e2ca1fdf612183cf2d719cefb9f74b1b84ce52d6bfbd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a24a7f9b515858efe65e2ca1fdf612183cf2d719cefb9f74b1b84ce52d6bfbd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a24a7f9b515858efe65e2ca1fdf612183cf2d719cefb9f74b1b84ce52d6bfbd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a24a7f9b515858efe65e2ca1fdf612183cf2d719cefb9f74b1b84ce52d6bfbd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:01 np0005603621 podman[100503]: 2026-01-31 07:25:01.581579374 +0000 UTC m=+0.174354657 container init 3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:25:01 np0005603621 podman[100503]: 2026-01-31 07:25:01.58710423 +0000 UTC m=+0.179879503 container start 3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:25:01 np0005603621 podman[100503]: 2026-01-31 07:25:01.590062314 +0000 UTC m=+0.182837757 container attach 3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:25:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 31 02:25:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 02:25:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 31 02:25:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:02.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:02 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 31 02:25:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 31 02:25:02 np0005603621 quizzical_dirac[100519]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:25:02 np0005603621 quizzical_dirac[100519]: --> relative data size: 1.0
Jan 31 02:25:02 np0005603621 quizzical_dirac[100519]: --> All data devices are unavailable
Jan 31 02:25:02 np0005603621 systemd[1]: libpod-3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b.scope: Deactivated successfully.
Jan 31 02:25:02 np0005603621 conmon[100519]: conmon 3b51bdbff436e2c19bee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b.scope/container/memory.events
Jan 31 02:25:02 np0005603621 podman[100503]: 2026-01-31 07:25:02.411435927 +0000 UTC m=+1.004211190 container died 3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:25:02 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 99 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=98/99 n=5 ec=49/34 lis/c=49/49 les/c/f=50/50/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[49,98)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:25:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0a24a7f9b515858efe65e2ca1fdf612183cf2d719cefb9f74b1b84ce52d6bfbd-merged.mount: Deactivated successfully.
Jan 31 02:25:02 np0005603621 podman[100503]: 2026-01-31 07:25:02.472243371 +0000 UTC m=+1.065018634 container remove 3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:25:02 np0005603621 systemd[1]: libpod-conmon-3b51bdbff436e2c19bee53f6f2e78ef343eeb1e590e5ee59661c65e18666165b.scope: Deactivated successfully.
Jan 31 02:25:02 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 31 02:25:02 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 31 02:25:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 31 02:25:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 31 02:25:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 31 02:25:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 31 02:25:03 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 100 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=98/99 n=5 ec=49/34 lis/c=98/49 les/c/f=99/50/0 sis=100 pruub=15.357340813s) [1] async=[1] r=-1 lpr=100 pi=[49,100)/1 crt=40'1015 lcod 0'0 mlcod 0'0 active pruub 197.249679565s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:03 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 100 pg[9.12( v 40'1015 (0'0,40'1015] local-lis/les=98/99 n=5 ec=49/34 lis/c=98/49 les/c/f=99/50/0 sis=100 pruub=15.357175827s) [1] r=-1 lpr=100 pi=[49,100)/1 crt=40'1015 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.249679565s@ mbc={}] state<Start>: transitioning to Stray
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.248611802 +0000 UTC m=+0.049900919 container create dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:25:03 np0005603621 systemd[1]: Started libpod-conmon-dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9.scope.
Jan 31 02:25:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.228551683 +0000 UTC m=+0.029840800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.33720413 +0000 UTC m=+0.138493327 container init dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.347919101 +0000 UTC m=+0.149208218 container start dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:25:03 np0005603621 clever_pare[100755]: 167 167
Jan 31 02:25:03 np0005603621 systemd[1]: libpod-dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9.scope: Deactivated successfully.
Jan 31 02:25:03 np0005603621 conmon[100755]: conmon dcdd4450e78438ccca32 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9.scope/container/memory.events
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.353652843 +0000 UTC m=+0.154942030 container attach dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.355278106 +0000 UTC m=+0.156567243 container died dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:25:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d68e1283dd897ad52554fb372d5630d307648965c746db3da5f89d3d8b8ecf7a-merged.mount: Deactivated successfully.
Jan 31 02:25:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 remapped+peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:03 np0005603621 podman[100738]: 2026-01-31 07:25:03.402445376 +0000 UTC m=+0.203734473 container remove dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pare, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:25:03 np0005603621 systemd[1]: libpod-conmon-dcdd4450e78438ccca325b1da4dd4e5e7c74c9cedbcebebfee9bc0545713abd9.scope: Deactivated successfully.
Jan 31 02:25:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:03.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:03 np0005603621 podman[100780]: 2026-01-31 07:25:03.569692377 +0000 UTC m=+0.058397339 container create 0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:25:03 np0005603621 systemd[1]: Started libpod-conmon-0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34.scope.
Jan 31 02:25:03 np0005603621 podman[100780]: 2026-01-31 07:25:03.540593801 +0000 UTC m=+0.029298813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:25:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:25:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8134bee72bbc08b86a2b2a9018a793338a4fb9eedb035693d883dd591c4da2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8134bee72bbc08b86a2b2a9018a793338a4fb9eedb035693d883dd591c4da2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8134bee72bbc08b86a2b2a9018a793338a4fb9eedb035693d883dd591c4da2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8134bee72bbc08b86a2b2a9018a793338a4fb9eedb035693d883dd591c4da2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:03 np0005603621 podman[100780]: 2026-01-31 07:25:03.681012848 +0000 UTC m=+0.169717860 container init 0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:25:03 np0005603621 podman[100780]: 2026-01-31 07:25:03.695933003 +0000 UTC m=+0.184637965 container start 0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_keller, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:25:03 np0005603621 podman[100780]: 2026-01-31 07:25:03.700025254 +0000 UTC m=+0.188730286 container attach 0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_keller, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:25:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:04.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 31 02:25:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 31 02:25:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 31 02:25:04 np0005603621 gracious_keller[100797]: {
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:    "0": [
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:        {
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "devices": [
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "/dev/loop3"
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            ],
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "lv_name": "ceph_lv0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "lv_size": "7511998464",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "name": "ceph_lv0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "tags": {
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.cluster_name": "ceph",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.crush_device_class": "",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.encrypted": "0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.osd_id": "0",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.type": "block",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:                "ceph.vdo": "0"
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            },
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "type": "block",
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:            "vg_name": "ceph_vg0"
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:        }
Jan 31 02:25:04 np0005603621 gracious_keller[100797]:    ]
Jan 31 02:25:04 np0005603621 gracious_keller[100797]: }
Jan 31 02:25:04 np0005603621 systemd[1]: libpod-0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34.scope: Deactivated successfully.
Jan 31 02:25:04 np0005603621 podman[100780]: 2026-01-31 07:25:04.552126473 +0000 UTC m=+1.040831415 container died 0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:25:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2d8134bee72bbc08b86a2b2a9018a793338a4fb9eedb035693d883dd591c4da2-merged.mount: Deactivated successfully.
Jan 31 02:25:04 np0005603621 podman[100780]: 2026-01-31 07:25:04.621699036 +0000 UTC m=+1.110403988 container remove 0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:25:04 np0005603621 systemd[1]: libpod-conmon-0ea6ed7b9b5e7a6002cfd8c4b9558a1899a9d9c136cce102fb3eaa0731407c34.scope: Deactivated successfully.
Jan 31 02:25:04 np0005603621 systemd-logind[818]: New session 35 of user zuul.
Jan 31 02:25:04 np0005603621 systemd[1]: Started Session 35 of User zuul.
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.308427215 +0000 UTC m=+0.048750993 container create 79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:25:05 np0005603621 systemd[1]: Started libpod-conmon-79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543.scope.
Jan 31 02:25:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.286951672 +0000 UTC m=+0.027275510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:25:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 25 B/s, 0 objects/s recovering
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.416464051 +0000 UTC m=+0.156787829 container init 79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.424344713 +0000 UTC m=+0.164668481 container start 79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cannon, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.428325779 +0000 UTC m=+0.168649547 container attach 79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:25:05 np0005603621 zen_cannon[101128]: 167 167
Jan 31 02:25:05 np0005603621 systemd[1]: libpod-79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543.scope: Deactivated successfully.
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.433977729 +0000 UTC m=+0.174301497 container died 79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:25:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-93b217303432564028540015f1be7115bf7a5e9b610492897654c8ed06d741e8-merged.mount: Deactivated successfully.
Jan 31 02:25:05 np0005603621 podman[101085]: 2026-01-31 07:25:05.468805217 +0000 UTC m=+0.209128985 container remove 79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cannon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:25:05 np0005603621 python3.9[101125]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 31 02:25:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:05 np0005603621 systemd[1]: libpod-conmon-79c1277ca677b8b929064e7be49d3b27c8e3130bac3d1c0175b5462feb8b3543.scope: Deactivated successfully.
Jan 31 02:25:05 np0005603621 podman[101178]: 2026-01-31 07:25:05.629026585 +0000 UTC m=+0.054042831 container create 6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:25:05 np0005603621 systemd[1]: Started libpod-conmon-6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2.scope.
Jan 31 02:25:05 np0005603621 podman[101178]: 2026-01-31 07:25:05.601677174 +0000 UTC m=+0.026693510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:25:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:25:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b0231a520dd031032ca58025aa25f6b49dc8126806aaedf9df62322cfca650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b0231a520dd031032ca58025aa25f6b49dc8126806aaedf9df62322cfca650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b0231a520dd031032ca58025aa25f6b49dc8126806aaedf9df62322cfca650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b0231a520dd031032ca58025aa25f6b49dc8126806aaedf9df62322cfca650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:25:05 np0005603621 podman[101178]: 2026-01-31 07:25:05.75053255 +0000 UTC m=+0.175548876 container init 6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:25:05 np0005603621 podman[101178]: 2026-01-31 07:25:05.757878083 +0000 UTC m=+0.182894349 container start 6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:25:05 np0005603621 podman[101178]: 2026-01-31 07:25:05.761781058 +0000 UTC m=+0.186797334 container attach 6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:25:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:06.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]: {
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:        "osd_id": 0,
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:        "type": "bluestore"
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]:    }
Jan 31 02:25:06 np0005603621 stupefied_faraday[101217]: }
Jan 31 02:25:06 np0005603621 systemd[1]: libpod-6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2.scope: Deactivated successfully.
Jan 31 02:25:06 np0005603621 podman[101178]: 2026-01-31 07:25:06.609586701 +0000 UTC m=+1.034602947 container died 6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:25:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d5b0231a520dd031032ca58025aa25f6b49dc8126806aaedf9df62322cfca650-merged.mount: Deactivated successfully.
Jan 31 02:25:06 np0005603621 podman[101178]: 2026-01-31 07:25:06.679799875 +0000 UTC m=+1.104816161 container remove 6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:25:06 np0005603621 systemd[1]: libpod-conmon-6dbdf3cb6e80e5a1a396db499c06ca718a5bb404bdc894dd11271096cd558cf2.scope: Deactivated successfully.
Jan 31 02:25:06 np0005603621 python3.9[101350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:25:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:25:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:25:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:25:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:25:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8ab2397d-dc5f-4000-80af-2c6cd7b18a60 does not exist
Jan 31 02:25:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1854f377-aa84-45cd-86a3-c908ff736b70 does not exist
Jan 31 02:25:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4b8143e6-3c8e-4196-a003-dac577652ca1 does not exist
Jan 31 02:25:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 31 02:25:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:07.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:25:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:25:07 np0005603621 python3.9[101585]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:25:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:08.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:25:08 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 31 02:25:08 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 31 02:25:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:08 np0005603621 python3.9[101738]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:25:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 02:25:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:09.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:09 np0005603621 python3.9[101893]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 31 02:25:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 31 02:25:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:10.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:10 np0005603621 python3.9[102045]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:25:10 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 31 02:25:10 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 31 02:25:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 31 02:25:11 np0005603621 python3.9[102195]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:25:11 np0005603621 network[102212]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:25:11 np0005603621 network[102213]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:25:11 np0005603621 network[102214]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:25:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 02:25:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:11.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:11 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 31 02:25:11 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 31 02:25:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 31 02:25:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:12.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 31 02:25:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 31 02:25:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 31 02:25:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 31 02:25:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 02:25:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:13 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 31 02:25:13 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 31 02:25:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 31 02:25:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:14.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 31 02:25:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 31 02:25:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 31 02:25:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 31 02:25:15 np0005603621 python3.9[102476]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:25:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 02:25:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:15.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 31 02:25:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 31 02:25:15 np0005603621 python3.9[102627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:25:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:16 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 31 02:25:16 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 31 02:25:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 31 02:25:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 31 02:25:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 31 02:25:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 31 02:25:17 np0005603621 python3.9[102781]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:25:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 460 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 02:25:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:17 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 31 02:25:17 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 31 02:25:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 31 02:25:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:18.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:18 np0005603621 python3.9[102940]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:25:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 31 02:25:19 np0005603621 python3.9[103024]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:25:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 48 B/s, 1 objects/s recovering
Jan 31 02:25:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:25:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:19.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:25:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 31 02:25:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:21.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:21 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 31 02:25:21 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 31 02:25:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 29 B/s, 1 objects/s recovering
Jan 31 02:25:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 31 02:25:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 02:25:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:23.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:23 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 31 02:25:23 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 31 02:25:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 31 02:25:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 02:25:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 31 02:25:24 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 31 02:25:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:24.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:24 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 110 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=74/74 les/c/f=75/75/0 sis=110) [0] r=0 lpr=110 pi=[74,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 31 02:25:24 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 31 02:25:24 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 31 02:25:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 31 02:25:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 31 02:25:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 31 02:25:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 31 02:25:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 111 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=74/74 les/c/f=75/75/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[74,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:25 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 111 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=74/74 les/c/f=75/75/0 sis=111) [0]/[2] r=-1 lpr=111 pi=[74,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:25:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Jan 31 02:25:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 31 02:25:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 02:25:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:25.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:25 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 31 02:25:25 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 31 02:25:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:26.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 31 02:25:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 31 02:25:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 02:25:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 31 02:25:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 31 02:25:26 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 112 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=76/76 les/c/f=77/77/0 sis=112) [0] r=0 lpr=112 pi=[76,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:26 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 31 02:25:26 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 31 02:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 31 02:25:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 31 02:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 31 02:25:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 31 02:25:27 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 113 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=111/74 les/c/f=112/75/0 sis=113) [0] r=0 lpr=113 pi=[74,113)/1 luod=0'0 crt=40'1015 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:27 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 113 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=111/74 les/c/f=112/75/0 sis=113) [0] r=0 lpr=113 pi=[74,113)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:27 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 113 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=76/76 les/c/f=77/77/0 sis=113) [0]/[1] r=-1 lpr=113 pi=[76,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:27 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 113 pg[9.1a( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=76/76 les/c/f=77/77/0 sis=113) [0]/[1] r=-1 lpr=113 pi=[76,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:25:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:27.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:25:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:28.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 31 02:25:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 114 pg[9.19( v 40'1015 (0'0,40'1015] local-lis/les=113/114 n=5 ec=49/34 lis/c=111/74 les/c/f=112/75/0 sis=113) [0] r=0 lpr=113 pi=[74,113)/1 crt=40'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 31 02:25:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 31 02:25:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 115 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=113/76 les/c/f=114/77/0 sis=115) [0] r=0 lpr=115 pi=[76,115)/1 luod=0'0 crt=40'1015 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:28 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 115 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=113/76 les/c/f=114/77/0 sis=115) [0] r=0 lpr=115 pi=[76,115)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:29.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 31 02:25:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 31 02:25:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 31 02:25:29 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 116 pg[9.1a( v 40'1015 (0'0,40'1015] local-lis/les=115/116 n=5 ec=49/34 lis/c=113/76 les/c/f=114/77/0 sis=115) [0] r=0 lpr=115 pi=[76,115)/1 crt=40'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:25:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:30 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 31 02:25:30 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 31 02:25:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:32.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:32 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Jan 31 02:25:32 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Jan 31 02:25:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s; 0 B/s, 1 objects/s recovering
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 02:25:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 31 02:25:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 31 02:25:33 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 117 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=60/60 les/c/f=61/61/0 sis=117) [0] r=0 lpr=117 pi=[60,117)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:34.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 31 02:25:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 31 02:25:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 31 02:25:34 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 118 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=60/60 les/c/f=61/61/0 sis=118) [0]/[2] r=-1 lpr=118 pi=[60,118)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:34 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 118 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=60/60 les/c/f=61/61/0 sis=118) [0]/[2] r=-1 lpr=118 pi=[60,118)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:25:34 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 31 02:25:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 0 B/s wr, 10 op/s; 39 B/s, 2 objects/s recovering
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 02:25:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:35.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:35 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 31 02:25:35 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 31 02:25:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 31 02:25:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.003000096s ======
Jan 31 02:25:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:36.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000096s
Jan 31 02:25:36 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 31 02:25:36 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:36.971143) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844336971274, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7398, "num_deletes": 251, "total_data_size": 9581481, "memory_usage": 9826880, "flush_reason": "Manual Compaction"}
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 31 02:25:36 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 120 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=118/60 les/c/f=119/61/0 sis=120) [0] r=0 lpr=120 pi=[60,120)/1 luod=0'0 crt=40'1015 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 31 02:25:36 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 120 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=118/60 les/c/f=119/61/0 sis=120) [0] r=0 lpr=120 pi=[60,120)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844337023483, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7760422, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 7534, "table_properties": {"data_size": 7733265, "index_size": 17826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 76117, "raw_average_key_size": 23, "raw_value_size": 7669630, "raw_average_value_size": 2339, "num_data_blocks": 787, "num_entries": 3279, "num_filter_entries": 3279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843988, "oldest_key_time": 1769843988, "file_creation_time": 1769844336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 52419 microseconds, and 23419 cpu microseconds.
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.023560) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7760422 bytes OK
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.023594) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.025911) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.025935) EVENT_LOG_v1 {"time_micros": 1769844337025928, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.025967) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9549340, prev total WAL file size 9549340, number of live WAL files 2.
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.027925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7578KB) 13(53KB) 8(1944B)]
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844337028050, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7817215, "oldest_snapshot_seqno": -1}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3091 keys, 7772610 bytes, temperature: kUnknown
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844337077554, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7772610, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7745937, "index_size": 17859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 74022, "raw_average_key_size": 23, "raw_value_size": 7684053, "raw_average_value_size": 2485, "num_data_blocks": 793, "num_entries": 3091, "num_filter_entries": 3091, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769844337, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.078132) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7772610 bytes
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.079609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.7 rd, 155.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3383, records dropped: 292 output_compression: NoCompression
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.079697) EVENT_LOG_v1 {"time_micros": 1769844337079626, "job": 4, "event": "compaction_finished", "compaction_time_micros": 49872, "compaction_time_cpu_micros": 24824, "output_level": 6, "num_output_files": 1, "total_output_size": 7772610, "num_input_records": 3383, "num_output_records": 3091, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844337081537, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844337081976, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844337082264, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:25:37.027650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:25:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 02:25:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:37.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:37 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 31 02:25:37 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 31 02:25:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 31 02:25:38 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 121 pg[9.1b( v 40'1015 (0'0,40'1015] local-lis/les=120/121 n=5 ec=49/34 lis/c=118/60 les/c/f=119/61/0 sis=120) [0] r=0 lpr=120 pi=[60,120)/1 crt=40'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:25:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:38.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:25:38
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'vms', 'images', 'backups', 'cephfs.cephfs.data']
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:25:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 31 02:25:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 31 02:25:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 31 02:25:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 31 02:25:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 1 remapped+peering, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:39.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 31 02:25:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 31 02:25:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 31 02:25:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 31 02:25:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 31 02:25:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 31 02:25:40 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 31 02:25:40 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 31 02:25:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 1 remapped+peering, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:41.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 31 02:25:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 31 02:25:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 31 02:25:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 69 B/s, 1 objects/s recovering
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 02:25:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:43.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 31 02:25:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 31 02:25:44 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 126 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=126) [0] r=0 lpr=126 pi=[66,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:44.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 31 02:25:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 31 02:25:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 31 02:25:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 127 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[66,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:45 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 127 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=66/66 les/c/f=67/67/0 sis=127) [0]/[1] r=-1 lpr=127 pi=[66,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:25:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 31 02:25:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 70 B/s, 1 objects/s recovering
Jan 31 02:25:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 31 02:25:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:25:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:45.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 31 02:25:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 31 02:25:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:46.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:25:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 31 02:25:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 31 02:25:46 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 128 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=88/88 les/c/f=89/89/0 sis=128) [0] r=0 lpr=128 pi=[88,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 31 02:25:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 31 02:25:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 31 02:25:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 129 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=88/88 les/c/f=89/89/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[88,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 129 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=127/66 les/c/f=128/67/0 sis=129) [0] r=0 lpr=129 pi=[66,129)/1 luod=0'0 crt=40'1015 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 129 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/34 lis/c=88/88 les/c/f=89/89/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[88,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 31 02:25:47 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 129 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=127/66 les/c/f=128/67/0 sis=129) [0] r=0 lpr=129 pi=[66,129)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 31 02:25:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:47 np0005603621 systemd[76060]: Created slice User Background Tasks Slice.
Jan 31 02:25:47 np0005603621 systemd[76060]: Starting Cleanup of User's Temporary Files and Directories...
Jan 31 02:25:47 np0005603621 systemd[76060]: Finished Cleanup of User's Temporary Files and Directories.
Jan 31 02:25:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:47.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:48.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 31 02:25:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 130 pg[9.1e( v 40'1015 (0'0,40'1015] local-lis/les=129/130 n=5 ec=49/34 lis/c=127/66 les/c/f=128/67/0 sis=129) [0] r=0 lpr=129 pi=[66,129)/1 crt=40'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:25:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 31 02:25:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 31 02:25:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 131 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=129/88 les/c/f=130/89/0 sis=131) [0] r=0 lpr=131 pi=[88,131)/1 luod=0'0 crt=40'1015 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 31 02:25:48 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 131 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=0/0 n=5 ec=49/34 lis/c=129/88 les/c/f=130/89/0 sis=131) [0] r=0 lpr=131 pi=[88,131)/1 crt=40'1015 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 31 02:25:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:49.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 31 02:25:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 31 02:25:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 02:25:49 np0005603621 ceph-osd[84880]: osd.0 pg_epoch: 132 pg[9.1f( v 40'1015 (0'0,40'1015] local-lis/les=131/132 n=5 ec=49/34 lis/c=129/88 les/c/f=130/89/0 sis=131) [0] r=0 lpr=131 pi=[88,131)/1 crt=40'1015 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 31 02:25:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:50.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:50 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.12 deep-scrub starts
Jan 31 02:25:50 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.12 deep-scrub ok
Jan 31 02:25:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 1 unknown, 304 active+clean; 459 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:25:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:51.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:52.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 170 B/s wr, 11 op/s; 54 B/s, 3 objects/s recovering
Jan 31 02:25:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:25:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:54.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:54 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 31 02:25:54 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 31 02:25:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 5.5 KiB/s rd, 140 B/s wr, 10 op/s; 45 B/s, 2 objects/s recovering
Jan 31 02:25:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:25:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:55.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:25:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:56.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 117 B/s wr, 8 op/s; 37 B/s, 2 objects/s recovering
Jan 31 02:25:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:25:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:57.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:25:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:25:58.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:25:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 7 op/s; 32 B/s, 1 objects/s recovering
Jan 31 02:25:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:25:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:25:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:25:59.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:25:59 np0005603621 python3.9[103443]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:26:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:00.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 0 B/s wr, 6 op/s; 28 B/s, 1 objects/s recovering
Jan 31 02:26:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:01.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:01 np0005603621 python3.9[103731]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 31 02:26:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:02.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:02 np0005603621 python3.9[103883]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 31 02:26:03 np0005603621 python3.9[104035]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:26:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Jan 31 02:26:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:03.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:03 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 31 02:26:03 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 31 02:26:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:04.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:04 np0005603621 python3.9[104238]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 31 02:26:04 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 31 02:26:04 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 31 02:26:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:05.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:05 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Jan 31 02:26:05 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Jan 31 02:26:06 np0005603621 python3.9[104391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:26:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:06.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 31 02:26:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 31 02:26:06 np0005603621 python3.9[104543]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:26:07 np0005603621 python3.9[104621]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:26:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:07 np0005603621 podman[104817]: 2026-01-31 07:26:07.868249594 +0000 UTC m=+0.071866146 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:26:07 np0005603621 podman[104817]: 2026-01-31 07:26:07.967184884 +0000 UTC m=+0.170801406 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:26:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:08.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:26:08 np0005603621 podman[105099]: 2026-01-31 07:26:08.554167172 +0000 UTC m=+0.060998167 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:26:08 np0005603621 python3.9[105066]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:26:08 np0005603621 podman[105099]: 2026-01-31 07:26:08.563128772 +0000 UTC m=+0.069959777 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 podman[105191]: 2026-01-31 07:26:08.751911668 +0000 UTC m=+0.055406351 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.component=keepalived-container, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Jan 31 02:26:08 np0005603621 podman[105191]: 2026-01-31 07:26:08.760800276 +0000 UTC m=+0.064294899 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, description=keepalived for Ceph, release=1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, architecture=x86_64)
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 804b97b0-163c-4b82-9b23-29f27bc9421d does not exist
Jan 31 02:26:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2e296be0-635b-480c-abfa-a3366dcb7955 does not exist
Jan 31 02:26:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ccac0b30-a22a-4b2e-8fc2-8ae9cee79c92 does not exist
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:26:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:09.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:09 np0005603621 python3.9[105558]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:10.000698199 +0000 UTC m=+0.042585581 container create d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:26:10 np0005603621 systemd[1]: Started libpod-conmon-d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d.scope.
Jan 31 02:26:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:09.976667559 +0000 UTC m=+0.018554931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:10.086853921 +0000 UTC m=+0.128741343 container init d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:10.093356144 +0000 UTC m=+0.135243526 container start d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:26:10 np0005603621 frosty_chebyshev[105665]: 167 167
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:10.098698991 +0000 UTC m=+0.140586473 container attach d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:10.099765935 +0000 UTC m=+0.141653317 container died d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:26:10 np0005603621 systemd[1]: libpod-d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d.scope: Deactivated successfully.
Jan 31 02:26:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a1601c7e7f45f7af70bfe85ce1532c111b91b5f770bc80e54a8c4b268a48ccaa-merged.mount: Deactivated successfully.
Jan 31 02:26:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:10.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:10 np0005603621 podman[105649]: 2026-01-31 07:26:10.150119758 +0000 UTC m=+0.192007130 container remove d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_chebyshev, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:26:10 np0005603621 systemd[1]: libpod-conmon-d84ec33583ad8f949c8d382ac44fdb7f6ed53d3d108b55bc2ae0e4a738fbfc8d.scope: Deactivated successfully.
Jan 31 02:26:10 np0005603621 podman[105714]: 2026-01-31 07:26:10.269547058 +0000 UTC m=+0.040449924 container create 1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:26:10 np0005603621 systemd[1]: Started libpod-conmon-1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2.scope.
Jan 31 02:26:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:26:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b4fb1cc4938167b25517bf303a8b0c419d1cb3399a6e459133d09d9793396/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b4fb1cc4938167b25517bf303a8b0c419d1cb3399a6e459133d09d9793396/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b4fb1cc4938167b25517bf303a8b0c419d1cb3399a6e459133d09d9793396/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b4fb1cc4938167b25517bf303a8b0c419d1cb3399a6e459133d09d9793396/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b4fb1cc4938167b25517bf303a8b0c419d1cb3399a6e459133d09d9793396/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:10 np0005603621 podman[105714]: 2026-01-31 07:26:10.250192163 +0000 UTC m=+0.021095049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:26:10 np0005603621 podman[105714]: 2026-01-31 07:26:10.381288959 +0000 UTC m=+0.152191845 container init 1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wiles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:26:10 np0005603621 podman[105714]: 2026-01-31 07:26:10.387067889 +0000 UTC m=+0.157970745 container start 1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:26:10 np0005603621 podman[105714]: 2026-01-31 07:26:10.397091663 +0000 UTC m=+0.167994529 container attach 1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wiles, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:26:10 np0005603621 python3.9[105840]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 31 02:26:11 np0005603621 wonderful_wiles[105772]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:26:11 np0005603621 wonderful_wiles[105772]: --> relative data size: 1.0
Jan 31 02:26:11 np0005603621 wonderful_wiles[105772]: --> All data devices are unavailable
Jan 31 02:26:11 np0005603621 systemd[1]: libpod-1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2.scope: Deactivated successfully.
Jan 31 02:26:11 np0005603621 podman[105928]: 2026-01-31 07:26:11.23252471 +0000 UTC m=+0.045652097 container died 1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wiles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:26:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d62b4fb1cc4938167b25517bf303a8b0c419d1cb3399a6e459133d09d9793396-merged.mount: Deactivated successfully.
Jan 31 02:26:11 np0005603621 podman[105928]: 2026-01-31 07:26:11.346725238 +0000 UTC m=+0.159852605 container remove 1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:26:11 np0005603621 systemd[1]: libpod-conmon-1e77104add409f75ea51043564b007cb4fe0a51fd89b49129dea229122727bf2.scope: Deactivated successfully.
Jan 31 02:26:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:11.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:11 np0005603621 python3.9[106037]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:26:11 np0005603621 podman[106204]: 2026-01-31 07:26:11.923887138 +0000 UTC m=+0.041269510 container create 8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:26:11 np0005603621 systemd[1]: Started libpod-conmon-8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3.scope.
Jan 31 02:26:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:26:12 np0005603621 podman[106204]: 2026-01-31 07:26:12.003822135 +0000 UTC m=+0.121204527 container init 8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:26:12 np0005603621 podman[106204]: 2026-01-31 07:26:11.909157727 +0000 UTC m=+0.026540119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:26:12 np0005603621 podman[106204]: 2026-01-31 07:26:12.014419526 +0000 UTC m=+0.131801938 container start 8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_taussig, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:26:12 np0005603621 podman[106204]: 2026-01-31 07:26:12.018472792 +0000 UTC m=+0.135855194 container attach 8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_taussig, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:26:12 np0005603621 priceless_taussig[106267]: 167 167
Jan 31 02:26:12 np0005603621 systemd[1]: libpod-8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3.scope: Deactivated successfully.
Jan 31 02:26:12 np0005603621 podman[106204]: 2026-01-31 07:26:12.019709781 +0000 UTC m=+0.137092183 container died 8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:26:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fd7975d5875fd3aac987675deca63aaf994abf7b24ea86fc20626d8c685da522-merged.mount: Deactivated successfully.
Jan 31 02:26:12 np0005603621 podman[106204]: 2026-01-31 07:26:12.056317605 +0000 UTC m=+0.173699997 container remove 8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:26:12 np0005603621 systemd[1]: libpod-conmon-8e496a64ad6de476c94fb5e80a2e0f9057bd340fafb36ab9ee56119b51adacf3.scope: Deactivated successfully.
Jan 31 02:26:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:12.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:12 np0005603621 podman[106349]: 2026-01-31 07:26:12.195490312 +0000 UTC m=+0.044354686 container create fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:26:12 np0005603621 systemd[1]: Started libpod-conmon-fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b.scope.
Jan 31 02:26:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:26:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7573a27790afa47a864ee79f92388956182aafd14ea1ef426e9413266635242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7573a27790afa47a864ee79f92388956182aafd14ea1ef426e9413266635242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7573a27790afa47a864ee79f92388956182aafd14ea1ef426e9413266635242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7573a27790afa47a864ee79f92388956182aafd14ea1ef426e9413266635242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:12 np0005603621 podman[106349]: 2026-01-31 07:26:12.174457756 +0000 UTC m=+0.023322170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:26:12 np0005603621 podman[106349]: 2026-01-31 07:26:12.277485334 +0000 UTC m=+0.126349738 container init fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:26:12 np0005603621 podman[106349]: 2026-01-31 07:26:12.283780241 +0000 UTC m=+0.132644625 container start fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:26:12 np0005603621 podman[106349]: 2026-01-31 07:26:12.288939932 +0000 UTC m=+0.137804316 container attach fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:26:12 np0005603621 python3.9[106344]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 31 02:26:12 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.5 deep-scrub starts
Jan 31 02:26:12 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.5 deep-scrub ok
Jan 31 02:26:13 np0005603621 zen_fermat[106365]: {
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:    "0": [
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:        {
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "devices": [
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "/dev/loop3"
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            ],
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "lv_name": "ceph_lv0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "lv_size": "7511998464",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "name": "ceph_lv0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "tags": {
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.cluster_name": "ceph",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.crush_device_class": "",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.encrypted": "0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.osd_id": "0",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.type": "block",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:                "ceph.vdo": "0"
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            },
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "type": "block",
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:            "vg_name": "ceph_vg0"
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:        }
Jan 31 02:26:13 np0005603621 zen_fermat[106365]:    ]
Jan 31 02:26:13 np0005603621 zen_fermat[106365]: }
Jan 31 02:26:13 np0005603621 systemd[1]: libpod-fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b.scope: Deactivated successfully.
Jan 31 02:26:13 np0005603621 podman[106349]: 2026-01-31 07:26:13.054025002 +0000 UTC m=+0.902889386 container died fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:26:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a7573a27790afa47a864ee79f92388956182aafd14ea1ef426e9413266635242-merged.mount: Deactivated successfully.
Jan 31 02:26:13 np0005603621 podman[106349]: 2026-01-31 07:26:13.158544067 +0000 UTC m=+1.007408441 container remove fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:26:13 np0005603621 systemd[1]: libpod-conmon-fcdcc1fc0e6eb703ca9fc6f31ae9cb2af6e0633459c862bb6f42bed576363c8b.scope: Deactivated successfully.
Jan 31 02:26:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:13 np0005603621 python3.9[106539]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:26:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:13.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:13 np0005603621 podman[106683]: 2026-01-31 07:26:13.789237509 +0000 UTC m=+0.091885061 container create c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:26:13 np0005603621 podman[106683]: 2026-01-31 07:26:13.734699875 +0000 UTC m=+0.037347447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:26:13 np0005603621 systemd[1]: Started libpod-conmon-c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47.scope.
Jan 31 02:26:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:26:13 np0005603621 podman[106683]: 2026-01-31 07:26:13.921613925 +0000 UTC m=+0.224261487 container init c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:26:13 np0005603621 podman[106683]: 2026-01-31 07:26:13.928769218 +0000 UTC m=+0.231416770 container start c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:26:13 np0005603621 musing_fermi[106700]: 167 167
Jan 31 02:26:13 np0005603621 systemd[1]: libpod-c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47.scope: Deactivated successfully.
Jan 31 02:26:13 np0005603621 podman[106683]: 2026-01-31 07:26:13.968274102 +0000 UTC m=+0.270921674 container attach c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:26:13 np0005603621 podman[106683]: 2026-01-31 07:26:13.969018136 +0000 UTC m=+0.271665708 container died c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:26:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8991a12708acf785cadf3c9278a1ab0b02a5cfab9fa3d6d0125258d1dc5dc1a3-merged.mount: Deactivated successfully.
Jan 31 02:26:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:14.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:14 np0005603621 podman[106683]: 2026-01-31 07:26:14.155393238 +0000 UTC m=+0.458040790 container remove c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:26:14 np0005603621 systemd[1]: libpod-conmon-c8e8c06481d40b582aca0b72d1cae9360a3e3dfc8c2cf33df1839e6a13bb7f47.scope: Deactivated successfully.
Jan 31 02:26:14 np0005603621 podman[106724]: 2026-01-31 07:26:14.290665944 +0000 UTC m=+0.041009833 container create c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yalow, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:26:14 np0005603621 systemd[1]: Started libpod-conmon-c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc.scope.
Jan 31 02:26:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:26:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521ecfec31ed1c9aba29ae940fb525f9ffccfa2028928899d4565637f150ef56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521ecfec31ed1c9aba29ae940fb525f9ffccfa2028928899d4565637f150ef56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521ecfec31ed1c9aba29ae940fb525f9ffccfa2028928899d4565637f150ef56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521ecfec31ed1c9aba29ae940fb525f9ffccfa2028928899d4565637f150ef56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:26:14 np0005603621 podman[106724]: 2026-01-31 07:26:14.36515922 +0000 UTC m=+0.115503129 container init c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:26:14 np0005603621 podman[106724]: 2026-01-31 07:26:14.273135866 +0000 UTC m=+0.023479775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:26:14 np0005603621 podman[106724]: 2026-01-31 07:26:14.37184452 +0000 UTC m=+0.122188399 container start c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:26:14 np0005603621 podman[106724]: 2026-01-31 07:26:14.386037203 +0000 UTC m=+0.136381112 container attach c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yalow, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:26:14 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 31 02:26:14 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]: {
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:        "osd_id": 0,
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:        "type": "bluestore"
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]:    }
Jan 31 02:26:15 np0005603621 quizzical_yalow[106741]: }
Jan 31 02:26:15 np0005603621 systemd[1]: libpod-c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc.scope: Deactivated successfully.
Jan 31 02:26:15 np0005603621 podman[106724]: 2026-01-31 07:26:15.171893262 +0000 UTC m=+0.922237241 container died c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:26:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-521ecfec31ed1c9aba29ae940fb525f9ffccfa2028928899d4565637f150ef56-merged.mount: Deactivated successfully.
Jan 31 02:26:15 np0005603621 podman[106724]: 2026-01-31 07:26:15.226541429 +0000 UTC m=+0.976885328 container remove c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:26:15 np0005603621 systemd[1]: libpod-conmon-c7e8919f6ca0e9cbce0bfda3e48ec432de6089a4fcf6cbd8a1a40b1f57d835bc.scope: Deactivated successfully.
Jan 31 02:26:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:26:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:26:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d7e3838e-84ce-46ae-b83f-5fdbf5c843dc does not exist
Jan 31 02:26:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d43a7ca7-23ec-4e04-8542-3476161d041a does not exist
Jan 31 02:26:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 32cf5d4d-12c0-41ed-8b66-c7f7a74f1c53 does not exist
Jan 31 02:26:15 np0005603621 python3.9[106924]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:26:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:15.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:16 np0005603621 python3.9[107130]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:26:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:16.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:26:16 np0005603621 python3.9[107208]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:26:17 np0005603621 python3.9[107360]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:26:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:17 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.3 deep-scrub starts
Jan 31 02:26:17 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.3 deep-scrub ok
Jan 31 02:26:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:17 np0005603621 python3.9[107439]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:26:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:26:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:17.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:26:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:18.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:18 np0005603621 python3.9[107591]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:26:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:19.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:20.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:20 np0005603621 python3.9[107743]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:26:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:21.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:21 np0005603621 python3.9[107896]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 31 02:26:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:22.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:22 np0005603621 python3.9[108046]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:26:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:23.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:24 np0005603621 python3.9[108249]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:26:24 np0005603621 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 31 02:26:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:24 np0005603621 systemd[1]: tuned.service: Deactivated successfully.
Jan 31 02:26:24 np0005603621 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 31 02:26:24 np0005603621 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 31 02:26:24 np0005603621 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 31 02:26:24 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.1e deep-scrub starts
Jan 31 02:26:24 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.1e deep-scrub ok
Jan 31 02:26:24 np0005603621 python3.9[108410]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 31 02:26:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:25.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:26 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 31 02:26:26 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 31 02:26:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:27.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:28 np0005603621 python3.9[108564]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:26:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:29 np0005603621 python3.9[108718]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:26:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:29.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:30.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:30 np0005603621 systemd[1]: session-35.scope: Deactivated successfully.
Jan 31 02:26:30 np0005603621 systemd[1]: session-35.scope: Consumed 1min 2.217s CPU time.
Jan 31 02:26:30 np0005603621 systemd-logind[818]: Session 35 logged out. Waiting for processes to exit.
Jan 31 02:26:30 np0005603621 systemd-logind[818]: Removed session 35.
Jan 31 02:26:30 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 31 02:26:30 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 31 02:26:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:31 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 31 02:26:31 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 31 02:26:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:26:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:31.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:26:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:33 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 31 02:26:33 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 31 02:26:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:33.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:34.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:34 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 31 02:26:34 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 31 02:26:35 np0005603621 systemd-logind[818]: New session 36 of user zuul.
Jan 31 02:26:35 np0005603621 systemd[1]: Started Session 36 of User zuul.
Jan 31 02:26:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:35.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:26:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:36.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:26:36 np0005603621 python3.9[108902]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:26:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:37 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 31 02:26:37 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 31 02:26:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:37.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:37 np0005603621 python3.9[109059]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 31 02:26:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:38.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:26:38
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:26:38 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:26:38 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:26:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:38 np0005603621 python3.9[109212]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:26:39 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 31 02:26:39 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 31 02:26:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:39 np0005603621 python3.9[109296]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:26:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:39.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:41 np0005603621 python3.9[109451]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:26:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:41.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:42 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.15 deep-scrub starts
Jan 31 02:26:42 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.15 deep-scrub ok
Jan 31 02:26:43 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Jan 31 02:26:43 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Jan 31 02:26:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:43.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:44 np0005603621 python3.9[109655]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:26:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:44 np0005603621 python3.9[109808]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:26:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:26:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:45.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:26:45 np0005603621 python3.9[109961]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 31 02:26:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:46 np0005603621 python3.9[110111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:26:47 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 31 02:26:47 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 31 02:26:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:47.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:47 np0005603621 python3.9[110270]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:26:48 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 31 02:26:48 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 31 02:26:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:26:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:26:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:49.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:50 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Jan 31 02:26:50 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Jan 31 02:26:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:50.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:50 np0005603621 python3.9[110424]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:26:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:51 np0005603621 python3.9[110712]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 31 02:26:52 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Jan 31 02:26:52 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Jan 31 02:26:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:26:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:52.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:26:52 np0005603621 python3.9[110862]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:26:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Jan 31 02:26:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Jan 31 02:26:53 np0005603621 python3.9[111016]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:26:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:54.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:55 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 31 02:26:55 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 31 02:26:55 np0005603621 python3.9[111170]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:26:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:55.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:26:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:56.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:26:57 np0005603621 python3.9[111324]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:26:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:26:58 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 31 02:26:58 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 31 02:26:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:26:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:26:58.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:26:58 np0005603621 python3.9[111479]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 31 02:26:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:26:59 np0005603621 systemd[1]: session-36.scope: Deactivated successfully.
Jan 31 02:26:59 np0005603621 systemd[1]: session-36.scope: Consumed 17.525s CPU time.
Jan 31 02:26:59 np0005603621 systemd-logind[818]: Session 36 logged out. Waiting for processes to exit.
Jan 31 02:26:59 np0005603621 systemd-logind[818]: Removed session 36.
Jan 31 02:26:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:26:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:26:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:26:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:26:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:00.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:01 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 31 02:27:01 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 31 02:27:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:01.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:02.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:04.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:04 np0005603621 systemd-logind[818]: New session 37 of user zuul.
Jan 31 02:27:04 np0005603621 systemd[1]: Started Session 37 of User zuul.
Jan 31 02:27:05 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 31 02:27:05 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 31 02:27:05 np0005603621 python3.9[111710]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:27:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:05.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 31 02:27:06 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 31 02:27:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:06.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:06 np0005603621 python3.9[111865]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:27:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:07 np0005603621 python3.9[112058]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:27:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:07.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:08 np0005603621 systemd[1]: session-37.scope: Deactivated successfully.
Jan 31 02:27:08 np0005603621 systemd[1]: session-37.scope: Consumed 2.122s CPU time.
Jan 31 02:27:08 np0005603621 systemd-logind[818]: Session 37 logged out. Waiting for processes to exit.
Jan 31 02:27:08 np0005603621 systemd-logind[818]: Removed session 37.
Jan 31 02:27:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:08.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:27:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:09.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:10.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:13 np0005603621 systemd-logind[818]: New session 38 of user zuul.
Jan 31 02:27:13 np0005603621 systemd[1]: Started Session 38 of User zuul.
Jan 31 02:27:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:13.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:14 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 31 02:27:14 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 31 02:27:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:14.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:14 np0005603621 python3.9[112241]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:27:15 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 31 02:27:15 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 31 02:27:15 np0005603621 python3.9[112395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:27:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:15.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:16.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:16 np0005603621 python3.9[112654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:27:17 np0005603621 python3.9[112767]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:27:17 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Jan 31 02:27:17 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Jan 31 02:27:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:17.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:27:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:27:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:18 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 31 02:27:18 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 31 02:27:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:27:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5b1615d1-2dd5-4299-a5fc-19d6085f1235 does not exist
Jan 31 02:27:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0c7b3a67-fd48-40a7-abf5-e44b96a395d2 does not exist
Jan 31 02:27:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cca65093-d8ca-425a-be2c-b6abc219887d does not exist
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:27:18 np0005603621 python3.9[112946]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.030196178 +0000 UTC m=+0.040696442 container create 631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:27:19 np0005603621 systemd[1]: Started libpod-conmon-631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c.scope.
Jan 31 02:27:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.010338843 +0000 UTC m=+0.020839157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.121211781 +0000 UTC m=+0.131712075 container init 631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.12785862 +0000 UTC m=+0.138358884 container start 631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.131901858 +0000 UTC m=+0.142402132 container attach 631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:27:19 np0005603621 adoring_ellis[113118]: 167 167
Jan 31 02:27:19 np0005603621 systemd[1]: libpod-631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c.scope: Deactivated successfully.
Jan 31 02:27:19 np0005603621 conmon[113118]: conmon 631ed3b814c3c39a8a8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c.scope/container/memory.events
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.135584213 +0000 UTC m=+0.146084497 container died 631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:27:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c51ff018688fc7dffeea989153084fcf14305a39a9c4825dd0fc14e6cf785e85-merged.mount: Deactivated successfully.
Jan 31 02:27:19 np0005603621 podman[113080]: 2026-01-31 07:27:19.216839699 +0000 UTC m=+0.227339963 container remove 631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ellis, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:27:19 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 31 02:27:19 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 31 02:27:19 np0005603621 systemd[1]: libpod-conmon-631ed3b814c3c39a8a8c8dfea17edeb9f3644fd934a977104f7957e62324409c.scope: Deactivated successfully.
Jan 31 02:27:19 np0005603621 podman[113168]: 2026-01-31 07:27:19.343138162 +0000 UTC m=+0.055151946 container create 321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:27:19 np0005603621 systemd[1]: Started libpod-conmon-321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71.scope.
Jan 31 02:27:19 np0005603621 podman[113168]: 2026-01-31 07:27:19.308270615 +0000 UTC m=+0.020284459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:27:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:27:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5fa8ea6a5a70e0d358092c105764a0eb385c25e32d9dca7e1a43f8149044e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5fa8ea6a5a70e0d358092c105764a0eb385c25e32d9dca7e1a43f8149044e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5fa8ea6a5a70e0d358092c105764a0eb385c25e32d9dca7e1a43f8149044e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5fa8ea6a5a70e0d358092c105764a0eb385c25e32d9dca7e1a43f8149044e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5fa8ea6a5a70e0d358092c105764a0eb385c25e32d9dca7e1a43f8149044e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:19 np0005603621 podman[113168]: 2026-01-31 07:27:19.437085707 +0000 UTC m=+0.149099591 container init 321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:27:19 np0005603621 podman[113168]: 2026-01-31 07:27:19.451560402 +0000 UTC m=+0.163574216 container start 321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:27:19 np0005603621 podman[113168]: 2026-01-31 07:27:19.45657705 +0000 UTC m=+0.168590864 container attach 321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:27:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:19.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:20 np0005603621 python3.9[113318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:20 np0005603621 keen_gagarin[113187]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:27:20 np0005603621 keen_gagarin[113187]: --> relative data size: 1.0
Jan 31 02:27:20 np0005603621 keen_gagarin[113187]: --> All data devices are unavailable
Jan 31 02:27:20 np0005603621 systemd[1]: libpod-321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71.scope: Deactivated successfully.
Jan 31 02:27:20 np0005603621 podman[113168]: 2026-01-31 07:27:20.259850827 +0000 UTC m=+0.971864631 container died 321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:27:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ab5fa8ea6a5a70e0d358092c105764a0eb385c25e32d9dca7e1a43f8149044e3-merged.mount: Deactivated successfully.
Jan 31 02:27:20 np0005603621 podman[113168]: 2026-01-31 07:27:20.305317087 +0000 UTC m=+1.017330871 container remove 321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_gagarin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:27:20 np0005603621 systemd[1]: libpod-conmon-321eac6a1fb95be34cd9e1720225641ab4754646306ad579866c5685578d0f71.scope: Deactivated successfully.
Jan 31 02:27:20 np0005603621 python3.9[113594]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:27:20 np0005603621 podman[113634]: 2026-01-31 07:27:20.775659802 +0000 UTC m=+0.031065488 container create 65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:27:20 np0005603621 systemd[1]: Started libpod-conmon-65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a.scope.
Jan 31 02:27:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:27:20 np0005603621 podman[113634]: 2026-01-31 07:27:20.827704259 +0000 UTC m=+0.083109945 container init 65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:27:20 np0005603621 podman[113634]: 2026-01-31 07:27:20.833045117 +0000 UTC m=+0.088450783 container start 65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:27:20 np0005603621 systemd[1]: libpod-65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a.scope: Deactivated successfully.
Jan 31 02:27:20 np0005603621 reverent_bartik[113663]: 167 167
Jan 31 02:27:20 np0005603621 podman[113634]: 2026-01-31 07:27:20.838195549 +0000 UTC m=+0.093601225 container attach 65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:27:20 np0005603621 podman[113648]: 2026-01-31 07:27:20.838722076 +0000 UTC m=+0.058376207 container died 65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:27:20 np0005603621 podman[113634]: 2026-01-31 07:27:20.761701763 +0000 UTC m=+0.017107459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:27:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-be63f19a7dfc5a66758186972c7f579f9caebed2ba5d890e9cf717714abad3bd-merged.mount: Deactivated successfully.
Jan 31 02:27:20 np0005603621 podman[113634]: 2026-01-31 07:27:20.87828088 +0000 UTC m=+0.133686536 container remove 65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bartik, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:27:20 np0005603621 systemd[1]: libpod-conmon-65f475229afb00af1172351ea0adf6a01130590db7f467c8947d6138cd8e3d6a.scope: Deactivated successfully.
Jan 31 02:27:20 np0005603621 podman[113712]: 2026-01-31 07:27:20.989411806 +0000 UTC m=+0.036111687 container create 98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:27:21 np0005603621 systemd[1]: Started libpod-conmon-98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb.scope.
Jan 31 02:27:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:27:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18913a0f7898a018550a6818ae68fea9f6d326568826e0062e46c0427cb6062e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18913a0f7898a018550a6818ae68fea9f6d326568826e0062e46c0427cb6062e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18913a0f7898a018550a6818ae68fea9f6d326568826e0062e46c0427cb6062e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18913a0f7898a018550a6818ae68fea9f6d326568826e0062e46c0427cb6062e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:21 np0005603621 podman[113712]: 2026-01-31 07:27:20.975276491 +0000 UTC m=+0.021976382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:27:21 np0005603621 podman[113712]: 2026-01-31 07:27:21.079162299 +0000 UTC m=+0.125862250 container init 98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:27:21 np0005603621 podman[113712]: 2026-01-31 07:27:21.085636613 +0000 UTC m=+0.132336524 container start 98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 02:27:21 np0005603621 podman[113712]: 2026-01-31 07:27:21.091118405 +0000 UTC m=+0.137818306 container attach 98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:27:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:21 np0005603621 python3.9[113862]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]: {
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:    "0": [
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:        {
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "devices": [
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "/dev/loop3"
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            ],
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "lv_name": "ceph_lv0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "lv_size": "7511998464",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "name": "ceph_lv0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "tags": {
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.cluster_name": "ceph",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.crush_device_class": "",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.encrypted": "0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.osd_id": "0",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.type": "block",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:                "ceph.vdo": "0"
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            },
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "type": "block",
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:            "vg_name": "ceph_vg0"
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:        }
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]:    ]
Jan 31 02:27:21 np0005603621 eloquent_swirles[113756]: }
Jan 31 02:27:21 np0005603621 systemd[1]: libpod-98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb.scope: Deactivated successfully.
Jan 31 02:27:21 np0005603621 podman[113712]: 2026-01-31 07:27:21.80657102 +0000 UTC m=+0.853270901 container died 98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:27:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-18913a0f7898a018550a6818ae68fea9f6d326568826e0062e46c0427cb6062e-merged.mount: Deactivated successfully.
Jan 31 02:27:21 np0005603621 podman[113712]: 2026-01-31 07:27:21.857078978 +0000 UTC m=+0.903778839 container remove 98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_swirles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:27:21 np0005603621 systemd[1]: libpod-conmon-98847eea019f45124f1425089dd09e798c2221497344d283e9caae555e12fdbb.scope: Deactivated successfully.
Jan 31 02:27:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:22 np0005603621 python3.9[113958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:22 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 31 02:27:22 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 31 02:27:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:22.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:22 np0005603621 podman[114199]: 2026-01-31 07:27:22.412876352 +0000 UTC m=+0.059404130 container create f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:27:22 np0005603621 systemd[1]: Started libpod-conmon-f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5.scope.
Jan 31 02:27:22 np0005603621 podman[114199]: 2026-01-31 07:27:22.370254521 +0000 UTC m=+0.016782329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:27:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:27:22 np0005603621 python3.9[114265]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:27:22 np0005603621 podman[114199]: 2026-01-31 07:27:22.692635271 +0000 UTC m=+0.339163109 container init f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 02:27:22 np0005603621 podman[114199]: 2026-01-31 07:27:22.698469715 +0000 UTC m=+0.344997533 container start f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:27:22 np0005603621 hungry_colden[114268]: 167 167
Jan 31 02:27:22 np0005603621 systemd[1]: libpod-f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5.scope: Deactivated successfully.
Jan 31 02:27:22 np0005603621 podman[114199]: 2026-01-31 07:27:22.722662016 +0000 UTC m=+0.369189794 container attach f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:27:22 np0005603621 podman[114199]: 2026-01-31 07:27:22.726668602 +0000 UTC m=+0.373196390 container died f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:27:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3423c3b0cbb6786fcbe36e7d3e9756e269157c2d88af74d6ec470b585db1b960-merged.mount: Deactivated successfully.
Jan 31 02:27:23 np0005603621 python3.9[114360]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:27:23 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 31 02:27:23 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 31 02:27:23 np0005603621 podman[114199]: 2026-01-31 07:27:23.419585507 +0000 UTC m=+1.066113285 container remove f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:27:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:23 np0005603621 systemd[1]: libpod-conmon-f69730df72f435dbf92c2842d0ab3164fff72cd12829d073699f6eb27ea6e7f5.scope: Deactivated successfully.
Jan 31 02:27:23 np0005603621 podman[114446]: 2026-01-31 07:27:23.580899761 +0000 UTC m=+0.023607313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:27:23 np0005603621 podman[114446]: 2026-01-31 07:27:23.843911885 +0000 UTC m=+0.286619417 container create 48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:27:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:23.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:24 np0005603621 python3.9[114585]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:27:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:24 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 31 02:27:24 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 31 02:27:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:24.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:24 np0005603621 systemd[1]: Started libpod-conmon-48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c.scope.
Jan 31 02:27:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:27:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c327e284a4d5208cdee5e5e7b23b8f03b09cd69957b975b7695d0fc89c04d7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c327e284a4d5208cdee5e5e7b23b8f03b09cd69957b975b7695d0fc89c04d7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c327e284a4d5208cdee5e5e7b23b8f03b09cd69957b975b7695d0fc89c04d7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c327e284a4d5208cdee5e5e7b23b8f03b09cd69957b975b7695d0fc89c04d7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:27:24 np0005603621 podman[114446]: 2026-01-31 07:27:24.463325738 +0000 UTC m=+0.906033360 container init 48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:27:24 np0005603621 podman[114446]: 2026-01-31 07:27:24.477601168 +0000 UTC m=+0.920308710 container start 48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:27:24 np0005603621 podman[114446]: 2026-01-31 07:27:24.509331836 +0000 UTC m=+0.952039378 container attach 48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:27:24 np0005603621 python3.9[114742]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:27:25 np0005603621 python3.9[114896]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]: {
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:        "osd_id": 0,
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:        "type": "bluestore"
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]:    }
Jan 31 02:27:25 np0005603621 zen_wescoff[114711]: }
Jan 31 02:27:25 np0005603621 systemd[1]: libpod-48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c.scope: Deactivated successfully.
Jan 31 02:27:25 np0005603621 podman[114446]: 2026-01-31 07:27:25.322458593 +0000 UTC m=+1.765166145 container died 48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:27:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c327e284a4d5208cdee5e5e7b23b8f03b09cd69957b975b7695d0fc89c04d7a4-merged.mount: Deactivated successfully.
Jan 31 02:27:25 np0005603621 podman[114446]: 2026-01-31 07:27:25.688227289 +0000 UTC m=+2.130934841 container remove 48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_wescoff, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:27:25 np0005603621 systemd[1]: libpod-conmon-48c18622f70952dd189620fb5d14fc7d7407373d9cd3cb4a5e25c0fd80aa347c.scope: Deactivated successfully.
Jan 31 02:27:25 np0005603621 python3.9[115078]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:27:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:27:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:27:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:25.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b91e2765-fcd7-44c6-9689-95e062157c86 does not exist
Jan 31 02:27:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ed2f0a09-0255-45c3-8999-b926a80495d8 does not exist
Jan 31 02:27:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7049b893-b481-4587-b39b-50a5e8dfb47f does not exist
Jan 31 02:27:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:26.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:26 np0005603621 python3.9[115280]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:27:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:27:27 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 31 02:27:27 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 31 02:27:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:27.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:27:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:27:28 np0005603621 python3.9[115434]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:27:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:29 np0005603621 python3.9[115588]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:27:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:29.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:30 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 31 02:27:30 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 31 02:27:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:30.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:30 np0005603621 python3.9[115741]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:27:31 np0005603621 python3.9[115893]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:27:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:31.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:32 np0005603621 python3.9[116047]: ansible-service_facts Invoked
Jan 31 02:27:32 np0005603621 network[116064]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:27:32 np0005603621 network[116065]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:27:32 np0005603621 network[116066]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:27:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:33.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:34.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:36.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:37 np0005603621 python3.9[116520]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:27:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 459 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:38 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 31 02:27:38 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 31 02:27:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:38.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:27:38
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'backups', '.mgr', 'default.rgw.log', 'images', 'volumes']
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:27:39 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 31 02:27:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:39 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 31 02:27:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:39.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:39 np0005603621 python3.9[116675]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 31 02:27:40 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 31 02:27:40 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 31 02:27:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 31 02:27:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 31 02:27:41 np0005603621 python3.9[116827]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:27:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:41 np0005603621 python3.9[116906]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:42.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:42 np0005603621 python3.9[117058]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:27:43 np0005603621 python3.9[117136]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:43 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 31 02:27:43 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 31 02:27:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:44 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Jan 31 02:27:44 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Jan 31 02:27:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:44.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:44 np0005603621 python3.9[117339]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:45 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 31 02:27:45 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 31 02:27:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:46.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:46 np0005603621 python3.9[117492]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:27:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:47 np0005603621 python3.9[117577]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:27:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:48 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 31 02:27:48 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 31 02:27:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:48.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:27:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:27:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:49 np0005603621 systemd[1]: session-38.scope: Deactivated successfully.
Jan 31 02:27:49 np0005603621 systemd[1]: session-38.scope: Consumed 21.058s CPU time.
Jan 31 02:27:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:49 np0005603621 systemd-logind[818]: Session 38 logged out. Waiting for processes to exit.
Jan 31 02:27:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:49 np0005603621 systemd-logind[818]: Removed session 38.
Jan 31 02:27:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:51 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 31 02:27:51 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 31 02:27:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:27:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:27:52 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 31 02:27:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:52 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 31 02:27:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 31 02:27:53 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 31 02:27:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:53.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:54.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:27:54 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 31 02:27:54 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 31 02:27:55 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 31 02:27:55 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 31 02:27:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:55 np0005603621 systemd-logind[818]: New session 39 of user zuul.
Jan 31 02:27:55 np0005603621 systemd[1]: Started Session 39 of User zuul.
Jan 31 02:27:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:55.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:56.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:56 np0005603621 python3.9[117763]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:57 np0005603621 python3.9[117915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:27:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 1 active+clean+scrubbing+deep, 304 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:57 np0005603621 python3.9[117994]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:27:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:27:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:57.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:27:58 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 31 02:27:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:27:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:27:58.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:27:58 np0005603621 ceph-osd[84880]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 31 02:27:58 np0005603621 systemd-logind[818]: Session 39 logged out. Waiting for processes to exit.
Jan 31 02:27:58 np0005603621 systemd[1]: session-39.scope: Deactivated successfully.
Jan 31 02:27:58 np0005603621 systemd[1]: session-39.scope: Consumed 1.281s CPU time.
Jan 31 02:27:58 np0005603621 systemd-logind[818]: Removed session 39.
Jan 31 02:27:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:27:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:27:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:27:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:27:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:27:59.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:00.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:01.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:02.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:03.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:04 np0005603621 systemd-logind[818]: New session 40 of user zuul.
Jan 31 02:28:04 np0005603621 systemd[1]: Started Session 40 of User zuul.
Jan 31 02:28:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:05 np0005603621 python3.9[118226]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:28:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:06.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:06 np0005603621 python3.9[118382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:07 np0005603621 python3.9[118558]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:28:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:28:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:08.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:08 np0005603621 python3.9[118636]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.dx2lks1g recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:28:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:09 np0005603621 python3.9[118788]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:09 np0005603621 python3.9[118867]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.js3rdawy recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:09.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:10.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:10 np0005603621 python3.9[119019]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:28:11 np0005603621 python3.9[119171]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:28:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:11.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:28:12 np0005603621 python3.9[119250]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:28:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:12.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:12 np0005603621 python3.9[119402]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:13 np0005603621 python3.9[119480]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:28:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:13 np0005603621 python3.9[119633]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:13.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:14.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:14 np0005603621 python3.9[119785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:14 np0005603621 python3.9[119863]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:15 np0005603621 python3.9[120016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:15.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:16 np0005603621 python3.9[120094]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:28:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:16.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:28:17 np0005603621 python3.9[120246]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:28:17 np0005603621 systemd[1]: Reloading.
Jan 31 02:28:17 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:28:17 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:28:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:17.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:18.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:18 np0005603621 python3.9[120436]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:19 np0005603621 python3.9[120514]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:19 np0005603621 python3.9[120667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:19.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:20.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:20 np0005603621 python3.9[120745]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:21 np0005603621 python3.9[120897]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:28:21 np0005603621 systemd[1]: Reloading.
Jan 31 02:28:21 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:28:21 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:28:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:21 np0005603621 systemd[1]: Starting Create netns directory...
Jan 31 02:28:21 np0005603621 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:28:21 np0005603621 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:28:21 np0005603621 systemd[1]: Finished Create netns directory.
Jan 31 02:28:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:22 np0005603621 python3.9[121089]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:28:22 np0005603621 network[121106]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:28:22 np0005603621 network[121107]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:28:22 np0005603621 network[121108]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:28:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:23.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.646338) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844504646456, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 251, "total_data_size": 3394975, "memory_usage": 3458544, "flush_reason": "Manual Compaction"}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844504682250, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3315327, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7535, "largest_seqno": 9940, "table_properties": {"data_size": 3305440, "index_size": 5677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 27086, "raw_average_key_size": 21, "raw_value_size": 3282604, "raw_average_value_size": 2628, "num_data_blocks": 252, "num_entries": 1249, "num_filter_entries": 1249, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844337, "oldest_key_time": 1769844337, "file_creation_time": 1769844504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 35956 microseconds, and 8089 cpu microseconds.
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.682310) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3315327 bytes OK
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.682341) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.685040) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.685060) EVENT_LOG_v1 {"time_micros": 1769844504685054, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.685086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3384446, prev total WAL file size 3384446, number of live WAL files 2.
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.686020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3237KB)], [20(7590KB)]
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844504686248, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11087937, "oldest_snapshot_seqno": -1}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3819 keys, 9565275 bytes, temperature: kUnknown
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844504802578, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9565275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9534083, "index_size": 20522, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 91932, "raw_average_key_size": 24, "raw_value_size": 9459636, "raw_average_value_size": 2476, "num_data_blocks": 896, "num_entries": 3819, "num_filter_entries": 3819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769844504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.802825) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9565275 bytes
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.804506) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.3 rd, 82.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 4340, records dropped: 521 output_compression: NoCompression
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.804524) EVENT_LOG_v1 {"time_micros": 1769844504804514, "job": 6, "event": "compaction_finished", "compaction_time_micros": 116371, "compaction_time_cpu_micros": 26301, "output_level": 6, "num_output_files": 1, "total_output_size": 9565275, "num_input_records": 4340, "num_output_records": 3819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844504805029, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844504805646, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.685804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.805777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.805785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.805787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.805790) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:28:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:28:24.805792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:28:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:25.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:26 np0005603621 python3.9[121422]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:26 np0005603621 python3.9[121548]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:27 np0005603621 python3.9[121784]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:28:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:28:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:27.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:28 np0005603621 python3.9[121937]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev addae9c2-926b-4cf7-8e85-f0eec982bdbd does not exist
Jan 31 02:28:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1d34bbef-1772-4f48-a69c-6a9da6ee30aa does not exist
Jan 31 02:28:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 64b99728-ce2b-4147-8173-20239bb63bff does not exist
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:28:28 np0005603621 python3.9[122015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:28:28 np0005603621 podman[122181]: 2026-01-31 07:28:28.985236836 +0000 UTC m=+0.045286482 container create e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:28:29 np0005603621 systemd[1]: Started libpod-conmon-e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b.scope.
Jan 31 02:28:29 np0005603621 podman[122181]: 2026-01-31 07:28:28.963035924 +0000 UTC m=+0.023085560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:28:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:28:29 np0005603621 podman[122181]: 2026-01-31 07:28:29.081334683 +0000 UTC m=+0.141384429 container init e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:28:29 np0005603621 podman[122181]: 2026-01-31 07:28:29.08948518 +0000 UTC m=+0.149534806 container start e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldstine, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 02:28:29 np0005603621 podman[122181]: 2026-01-31 07:28:29.092886388 +0000 UTC m=+0.152936114 container attach e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldstine, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:28:29 np0005603621 xenodochial_goldstine[122197]: 167 167
Jan 31 02:28:29 np0005603621 systemd[1]: libpod-e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b.scope: Deactivated successfully.
Jan 31 02:28:29 np0005603621 podman[122181]: 2026-01-31 07:28:29.099082934 +0000 UTC m=+0.159132590 container died e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:28:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9737f52ae7e9283d3ef1f630ef5b8b33257036f1cd433f0f811f3f76e98d460d-merged.mount: Deactivated successfully.
Jan 31 02:28:29 np0005603621 podman[122181]: 2026-01-31 07:28:29.147150862 +0000 UTC m=+0.207200478 container remove e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldstine, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:28:29 np0005603621 systemd[1]: libpod-conmon-e4bc557c61f668a1071f6c898c860ca87c52461135edd89685026202f1b3401b.scope: Deactivated successfully.
Jan 31 02:28:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:29 np0005603621 podman[122272]: 2026-01-31 07:28:29.266147953 +0000 UTC m=+0.040310065 container create e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_varahamihira, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:28:29 np0005603621 systemd[1]: Started libpod-conmon-e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39.scope.
Jan 31 02:28:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:28:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca897a30623740c7264b75e36915acbb90cbcf7f844b94725526aa9b6d955f1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca897a30623740c7264b75e36915acbb90cbcf7f844b94725526aa9b6d955f1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca897a30623740c7264b75e36915acbb90cbcf7f844b94725526aa9b6d955f1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca897a30623740c7264b75e36915acbb90cbcf7f844b94725526aa9b6d955f1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca897a30623740c7264b75e36915acbb90cbcf7f844b94725526aa9b6d955f1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:29 np0005603621 podman[122272]: 2026-01-31 07:28:29.247692689 +0000 UTC m=+0.021854831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:28:29 np0005603621 podman[122272]: 2026-01-31 07:28:29.354589617 +0000 UTC m=+0.128751739 container init e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_varahamihira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:28:29 np0005603621 podman[122272]: 2026-01-31 07:28:29.359044518 +0000 UTC m=+0.133206600 container start e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_varahamihira, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:28:29 np0005603621 podman[122272]: 2026-01-31 07:28:29.362135566 +0000 UTC m=+0.136297658 container attach e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:28:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:29 np0005603621 python3.9[122370]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 31 02:28:29 np0005603621 systemd[1]: Starting Time & Date Service...
Jan 31 02:28:29 np0005603621 systemd[1]: Started Time & Date Service.
Jan 31 02:28:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:29.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:30 np0005603621 determined_varahamihira[122289]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:28:30 np0005603621 determined_varahamihira[122289]: --> relative data size: 1.0
Jan 31 02:28:30 np0005603621 determined_varahamihira[122289]: --> All data devices are unavailable
Jan 31 02:28:30 np0005603621 systemd[1]: libpod-e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39.scope: Deactivated successfully.
Jan 31 02:28:30 np0005603621 podman[122272]: 2026-01-31 07:28:30.092482955 +0000 UTC m=+0.866645047 container died e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:28:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ca897a30623740c7264b75e36915acbb90cbcf7f844b94725526aa9b6d955f1a-merged.mount: Deactivated successfully.
Jan 31 02:28:30 np0005603621 podman[122272]: 2026-01-31 07:28:30.141923697 +0000 UTC m=+0.916085829 container remove e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_varahamihira, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:28:30 np0005603621 systemd[1]: libpod-conmon-e72f9b8120b2f3ce4004fb951d80a52c4c7b421010d70fc60fb36ed7b3607d39.scope: Deactivated successfully.
Jan 31 02:28:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.572320706 +0000 UTC m=+0.037637999 container create f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:28:30 np0005603621 systemd[1]: Started libpod-conmon-f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38.scope.
Jan 31 02:28:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.64964358 +0000 UTC m=+0.114960903 container init f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.556377433 +0000 UTC m=+0.021694766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.6553396 +0000 UTC m=+0.120656903 container start f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:28:30 np0005603621 confident_bardeen[122709]: 167 167
Jan 31 02:28:30 np0005603621 systemd[1]: libpod-f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38.scope: Deactivated successfully.
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.661553436 +0000 UTC m=+0.126870739 container attach f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.662280509 +0000 UTC m=+0.127597832 container died f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:28:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9084d9ebbb05756d075919e677f6f9c8e1607cf5499fc30725de7a74d1565a9d-merged.mount: Deactivated successfully.
Jan 31 02:28:30 np0005603621 podman[122690]: 2026-01-31 07:28:30.702528811 +0000 UTC m=+0.167846114 container remove f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:28:30 np0005603621 systemd[1]: libpod-conmon-f126d969d1bd7b537f96115f844494c39c7246a8993355b298bd7f476cd2bb38.scope: Deactivated successfully.
Jan 31 02:28:30 np0005603621 python3.9[122698]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:30 np0005603621 podman[122738]: 2026-01-31 07:28:30.81642798 +0000 UTC m=+0.038085994 container create 385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:28:30 np0005603621 systemd[1]: Started libpod-conmon-385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1.scope.
Jan 31 02:28:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:28:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38da58e57e2163fc9cd974855ea3907e88e2dff6fb331500d52d033d1c1cd466/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38da58e57e2163fc9cd974855ea3907e88e2dff6fb331500d52d033d1c1cd466/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38da58e57e2163fc9cd974855ea3907e88e2dff6fb331500d52d033d1c1cd466/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38da58e57e2163fc9cd974855ea3907e88e2dff6fb331500d52d033d1c1cd466/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:30 np0005603621 podman[122738]: 2026-01-31 07:28:30.893897298 +0000 UTC m=+0.115555382 container init 385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:28:30 np0005603621 podman[122738]: 2026-01-31 07:28:30.798472702 +0000 UTC m=+0.020130736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:28:30 np0005603621 podman[122738]: 2026-01-31 07:28:30.898605847 +0000 UTC m=+0.120263851 container start 385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:28:30 np0005603621 podman[122738]: 2026-01-31 07:28:30.901718906 +0000 UTC m=+0.123377000 container attach 385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_faraday, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:28:31 np0005603621 python3.9[122907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]: {
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:    "0": [
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:        {
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "devices": [
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "/dev/loop3"
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            ],
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "lv_name": "ceph_lv0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "lv_size": "7511998464",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "name": "ceph_lv0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "tags": {
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.cluster_name": "ceph",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.crush_device_class": "",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.encrypted": "0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.osd_id": "0",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.type": "block",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:                "ceph.vdo": "0"
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            },
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "type": "block",
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:            "vg_name": "ceph_vg0"
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:        }
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]:    ]
Jan 31 02:28:31 np0005603621 beautiful_faraday[122775]: }
Jan 31 02:28:31 np0005603621 systemd[1]: libpod-385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1.scope: Deactivated successfully.
Jan 31 02:28:31 np0005603621 podman[122738]: 2026-01-31 07:28:31.641523062 +0000 UTC m=+0.863181096 container died 385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:28:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-38da58e57e2163fc9cd974855ea3907e88e2dff6fb331500d52d033d1c1cd466-merged.mount: Deactivated successfully.
Jan 31 02:28:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:31.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:32 np0005603621 python3.9[123002]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:32 np0005603621 podman[122738]: 2026-01-31 07:28:32.131577038 +0000 UTC m=+1.353235052 container remove 385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:28:32 np0005603621 systemd[1]: libpod-conmon-385c3adc4072d60808b2c29adeaef333183823d2574df013a438c2a358c46dd1.scope: Deactivated successfully.
Jan 31 02:28:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:32 np0005603621 podman[123291]: 2026-01-31 07:28:32.631388362 +0000 UTC m=+0.067434452 container create 8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamarr, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:28:32 np0005603621 python3.9[123271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:32 np0005603621 podman[123291]: 2026-01-31 07:28:32.58323726 +0000 UTC m=+0.019283360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:28:32 np0005603621 systemd[1]: Started libpod-conmon-8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9.scope.
Jan 31 02:28:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:28:32 np0005603621 podman[123291]: 2026-01-31 07:28:32.994250148 +0000 UTC m=+0.430296338 container init 8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamarr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:28:33 np0005603621 podman[123291]: 2026-01-31 07:28:33.001227769 +0000 UTC m=+0.437273889 container start 8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:28:33 np0005603621 musing_lamarr[123356]: 167 167
Jan 31 02:28:33 np0005603621 systemd[1]: libpod-8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9.scope: Deactivated successfully.
Jan 31 02:28:33 np0005603621 python3.9[123387]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tr9_jjlf recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:33 np0005603621 podman[123291]: 2026-01-31 07:28:33.11582033 +0000 UTC m=+0.551866520 container attach 8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:28:33 np0005603621 podman[123291]: 2026-01-31 07:28:33.116537032 +0000 UTC m=+0.552583112 container died 8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:28:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bf76495d83e86eca2e92d673583bf8c896ece2bd431811bba3b024003bf635bb-merged.mount: Deactivated successfully.
Jan 31 02:28:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:33 np0005603621 podman[123291]: 2026-01-31 07:28:33.712610278 +0000 UTC m=+1.148656358 container remove 8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamarr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:28:33 np0005603621 systemd[1]: libpod-conmon-8c87bcf160ad15f222e00443b234dc0e7fd893293f7c069f2185e412f8b1b9c9.scope: Deactivated successfully.
Jan 31 02:28:33 np0005603621 python3.9[123554]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:33 np0005603621 podman[123566]: 2026-01-31 07:28:33.838820675 +0000 UTC m=+0.056497575 container create dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 31 02:28:33 np0005603621 systemd[1]: Started libpod-conmon-dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91.scope.
Jan 31 02:28:33 np0005603621 podman[123566]: 2026-01-31 07:28:33.811610796 +0000 UTC m=+0.029287716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:28:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:28:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2eb4d482a7a09ce16fa16f99149e193ab52adab18966ace4f18a8158288407/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2eb4d482a7a09ce16fa16f99149e193ab52adab18966ace4f18a8158288407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2eb4d482a7a09ce16fa16f99149e193ab52adab18966ace4f18a8158288407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c2eb4d482a7a09ce16fa16f99149e193ab52adab18966ace4f18a8158288407/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:28:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:33.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:33 np0005603621 podman[123566]: 2026-01-31 07:28:33.992622906 +0000 UTC m=+0.210299856 container init dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:28:34 np0005603621 podman[123566]: 2026-01-31 07:28:34.004852552 +0000 UTC m=+0.222529492 container start dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:28:34 np0005603621 podman[123566]: 2026-01-31 07:28:34.046333303 +0000 UTC m=+0.264010293 container attach dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:28:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:34 np0005603621 python3.9[123665]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]: {
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:        "osd_id": 0,
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:        "type": "bluestore"
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]:    }
Jan 31 02:28:34 np0005603621 hopeful_lehmann[123594]: }
Jan 31 02:28:34 np0005603621 systemd[1]: libpod-dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91.scope: Deactivated successfully.
Jan 31 02:28:34 np0005603621 podman[123566]: 2026-01-31 07:28:34.854564673 +0000 UTC m=+1.072241603 container died dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:28:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6c2eb4d482a7a09ce16fa16f99149e193ab52adab18966ace4f18a8158288407-merged.mount: Deactivated successfully.
Jan 31 02:28:35 np0005603621 podman[123566]: 2026-01-31 07:28:35.229141179 +0000 UTC m=+1.446818109 container remove dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:28:35 np0005603621 python3.9[123845]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:28:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:28:35 np0005603621 systemd[1]: libpod-conmon-dbf0faebc7d58ec04db1fb4ce4c458f69133b86528450683742f67a81e4dfe91.scope: Deactivated successfully.
Jan 31 02:28:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:28:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 308dc90b-dcb1-4f3c-98fe-3547cc45027b does not exist
Jan 31 02:28:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 86d54806-af19-4dd0-86c0-479cbb6b6d40 does not exist
Jan 31 02:28:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4baa3b6d-6998-41de-858c-10a634cb4c56 does not exist
Jan 31 02:28:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:36 np0005603621 python3[124049]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:28:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:28:37 np0005603621 python3.9[124201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:37 np0005603621 python3.9[124279]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:38 np0005603621 python3.9[124432]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:28:38
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'vms', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:28:39 np0005603621 python3.9[124557]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844517.815191-899-187358509223671/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:39 np0005603621 python3.9[124710]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:40.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:40 np0005603621 python3.9[124788]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:41 np0005603621 python3.9[124940]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:41.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:42 np0005603621 python3.9[125019]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:42 np0005603621 python3.9[125171]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:43 np0005603621 python3.9[125249]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:44 np0005603621 python3.9[125402]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:28:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:44 np0005603621 python3.9[125607]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:45 np0005603621 python3.9[125760]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:46.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:46.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:46 np0005603621 python3.9[125912]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:47 np0005603621 python3.9[126064]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 02:28:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:48.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:48 np0005603621 python3.9[126217]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 31 02:28:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:28:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:28:48 np0005603621 systemd[1]: session-40.scope: Deactivated successfully.
Jan 31 02:28:48 np0005603621 systemd[1]: session-40.scope: Consumed 25.816s CPU time.
Jan 31 02:28:48 np0005603621 systemd-logind[818]: Session 40 logged out. Waiting for processes to exit.
Jan 31 02:28:48 np0005603621 systemd-logind[818]: Removed session 40.
Jan 31 02:28:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:50.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:50.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:52.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:54.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:54 np0005603621 systemd-logind[818]: New session 41 of user zuul.
Jan 31 02:28:54 np0005603621 systemd[1]: Started Session 41 of User zuul.
Jan 31 02:28:55 np0005603621 python3.9[126400]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 31 02:28:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:28:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:56.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:28:56 np0005603621 python3.9[126553]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:28:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:57 np0005603621 python3.9[126707]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 31 02:28:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:57 np0005603621 python3.9[126860]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.yucszeno follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:28:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:28:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:28:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:28:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:28:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:28:58 np0005603621 python3.9[126985]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.yucszeno mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844537.1747568-107-43733520265841/.source.yucszeno _original_basename=.i40oqngs follow=False checksum=894df0945bb562bf664b2d53d15fbd1da03ff944 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:28:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:28:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:28:59 np0005603621 python3.9[127137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:29:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:00 np0005603621 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 02:29:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:29:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:29:00 np0005603621 python3.9[127293]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXc4C5rCDfOfKEuMVHI9SatZ+NRO9lp335K0yZ19CDCOGSUNO2lblpRlgxO3tw3S+UGGiC/7/HHeZBA2Zd+SUVMb7ytbl5c3+XuZIIQF6DyIIDSELf0FoE0NhuSjKFilPsxyxxGYgH+gVaTZkuGhDoljaywQBSPGZdDwejVKWPVuui5xe0X4T0WVfT5avLSpIL3WjJ9hmzEaR0dUqrbKvPUAXJPDqQOZbQZbpXDIi48NPUDFwByej1xHWHRQaPJ/M6AsyrZKP/hiF2xt0mCIk1FANldusq4OUs9r/0KTVrPRCpSrsSimKBtEMJVdxqxAasE7H07sSdwFcWNC21LtsH8+/LM0oofIZ3D0Lom0NoLaC+Ocy2vqbIhOPYJ6c7Q8J/p4NFiA/lD+bgyjOOnm3Ls4VaaHXUyknu259henkVzJ+iZuRNY8ki345nrzPLoLYyxVwRkSuONyYlRp36jjp0QIL9kXLFlJ2OTHvb9FUhlG7RnxzPeHZhsihSHJv1rgU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAT2MDVMbPz3xtbIO31qZj2gzOQiz4a8pTNWAmd0+CUW#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFU8ym/rLGJxMpEsk09j3JHOh1hW4Vrm23tIOjn4/YJIrK1UFRFiQLDm+yZuj1NhWfbg71SK8ZuZ2miEJ20BHno=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAgshePGnD7oc3Zg8kfD9lUGSfPfE1OzPUGBHE12jLoyHnXwKTxYFYSMTWRcYgdFu4HaP0ShO1gEQF+1nDXxrozH/m2qxK/YPC5cVYCPvscwRdlyUNPOV0rpiruVZptTQ1iibsmRwMbxliXD2t13CtsrNjy9iuLgtvvnkfUh0wZKcZ8Jglg6E4vRTBPgXo3fJCfPF9Iz7GE50DpWAU8OnoLNlOf54/tcd8CyOrmLF9RwHTgNtN9FXscdQ3/A8avCF0WPWNUmfLFc20yOtfrq/xxjJMLn4KOZu1D1yjK5BSJu2pv/j0NPrTFKgPKYWjiXPdttcyubkXNZP96jkK9dgTgsEGRKuM83QpDIu7823wv4/GtEi+IsJeyqCN+3VAJo9hDB9eES8qlX4jAg6Kxen1oNkL9M+tz7N0BSdnxbS3skWEw6MsHlsBLOw7KMYe8gq8JoqHLBKBFQZZbjwaK5kNTeu6l5zAYERpt8uAEZkplq2vV5+4EOh7RPncmKuH0Xs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF//s6MNfOt3MK/jBcrJ5VkyeSY5eg1jUHN32BLTGZtT#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEtIHGkRVmmqcsRoXLuIEWyuaX3BoKld3DircbfvRpdFLzOwbxRaZ6uUN5f7sBun3oAcQLdnixnG3R/YK8L7HpM=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCSEo2WrFN8DnR2/d+p3YtsWos96nHz1MZInXN3md5cJXE0icMDwEWJuGIDUd5e0SA6Q7i33i/WIEmt/wGMoNhoTI+f3plB2NyAn5vyVQGTZv7m+tOLQI3/k50Kxnpu0c5gO509yln6RcLe4MutF0imS/fINCM+Nznh7oKbn6hELTDlxDz0JH8dNsZGmtVmgnhwIrglpxAg/WpeOWkCmuuXmysx1JcAhIK5016MzaM9cOtHAGzj5s0GE7nQoH4yG0Ak3zMU/DPKr91Xq/m9PCnGKautoHmHgrEG6u+1WubtakbBxlfmroKbvrIFL6KKQzY0SiTrBsH3nZRaFGCqE0ZEyHvJz8AO3quWg2oaXRJWN98f7k3l5dtVJIuwyJxVnv6fUGuLbGxOp4T6UDPqC7b2Eg17EtpUjy77F/+8yrX6NH+hXwcWBwHelRCDSiceGQTm1uexb8Xo1R1Wt9h24H2yRKPFrqzf1R9J2vipDouDo7RLefAiCXEJDdlewdKUM5c=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILwGOCpzCDE8uIHb4RBldbKfEvxhUdsBT4K7sPU4vZLU#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLS8teLqq0Lmt8g22OKhtEhLCXd5cBLM6W2oDJcWxQl8DloBMMFjgDlHt0rzjMKEL0SpxkPbH7sPV1zbWKKJI9M=#012 create=True mode=0644 path=/tmp/ansible.yucszeno state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:01 np0005603621 python3.9[127446]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yucszeno' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:29:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:02.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:29:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:29:02 np0005603621 python3.9[127600]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yucszeno state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:02 np0005603621 systemd[1]: session-41.scope: Deactivated successfully.
Jan 31 02:29:02 np0005603621 systemd[1]: session-41.scope: Consumed 4.315s CPU time.
Jan 31 02:29:02 np0005603621 systemd-logind[818]: Session 41 logged out. Waiting for processes to exit.
Jan 31 02:29:02 np0005603621 systemd-logind[818]: Removed session 41.
Jan 31 02:29:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:04.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:06.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:29:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:29:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:08.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:29:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:09 np0005603621 systemd-logind[818]: New session 42 of user zuul.
Jan 31 02:29:09 np0005603621 systemd[1]: Started Session 42 of User zuul.
Jan 31 02:29:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:10.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:10 np0005603621 python3.9[127833]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:29:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:12 np0005603621 python3.9[127990]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:29:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:12.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:12 np0005603621 python3.9[128144]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:29:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:13 np0005603621 python3.9[128298]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:29:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:14 np0005603621 python3.9[128451]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:29:15 np0005603621 python3.9[128603]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:15 np0005603621 systemd[1]: session-42.scope: Deactivated successfully.
Jan 31 02:29:15 np0005603621 systemd[1]: session-42.scope: Consumed 3.438s CPU time.
Jan 31 02:29:15 np0005603621 systemd-logind[818]: Session 42 logged out. Waiting for processes to exit.
Jan 31 02:29:15 np0005603621 systemd-logind[818]: Removed session 42.
Jan 31 02:29:15 np0005603621 systemd-logind[818]: Session 18 logged out. Waiting for processes to exit.
Jan 31 02:29:15 np0005603621 systemd[1]: session-18.scope: Deactivated successfully.
Jan 31 02:29:15 np0005603621 systemd[1]: session-18.scope: Consumed 1min 10.659s CPU time.
Jan 31 02:29:15 np0005603621 systemd-logind[818]: Removed session 18.
Jan 31 02:29:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:18.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:20.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:21 np0005603621 systemd-logind[818]: New session 43 of user zuul.
Jan 31 02:29:21 np0005603621 systemd[1]: Started Session 43 of User zuul.
Jan 31 02:29:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:22 np0005603621 python3.9[128787]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:29:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:23 np0005603621 python3.9[128943]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:29:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:24 np0005603621 python3.9[129028]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 31 02:29:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:24.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:26 np0005603621 python3.9[129230]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:29:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:27 np0005603621 python3.9[129382]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:29:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:28.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:28 np0005603621 python3.9[129532]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:29:29 np0005603621 python3.9[129682]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:29 np0005603621 systemd-logind[818]: Session 43 logged out. Waiting for processes to exit.
Jan 31 02:29:29 np0005603621 systemd[1]: session-43.scope: Deactivated successfully.
Jan 31 02:29:29 np0005603621 systemd[1]: session-43.scope: Consumed 5.530s CPU time.
Jan 31 02:29:29 np0005603621 systemd-logind[818]: Removed session 43.
Jan 31 02:29:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:30.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:30.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:32.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.239311) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844574239371, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 811, "num_deletes": 250, "total_data_size": 1241341, "memory_usage": 1260720, "flush_reason": "Manual Compaction"}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844574248467, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 785535, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9941, "largest_seqno": 10751, "table_properties": {"data_size": 782043, "index_size": 1272, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8741, "raw_average_key_size": 19, "raw_value_size": 774753, "raw_average_value_size": 1768, "num_data_blocks": 56, "num_entries": 438, "num_filter_entries": 438, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844505, "oldest_key_time": 1769844505, "file_creation_time": 1769844574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9203 microseconds, and 2639 cpu microseconds.
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.248519) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 785535 bytes OK
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.248537) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.250520) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.250539) EVENT_LOG_v1 {"time_micros": 1769844574250534, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.250556) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1237406, prev total WAL file size 1237406, number of live WAL files 2.
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.251427) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(767KB)], [23(9341KB)]
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844574251482, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10350810, "oldest_snapshot_seqno": -1}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3771 keys, 7750738 bytes, temperature: kUnknown
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844574326107, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7750738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7722705, "index_size": 17491, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 91404, "raw_average_key_size": 24, "raw_value_size": 7651804, "raw_average_value_size": 2029, "num_data_blocks": 764, "num_entries": 3771, "num_filter_entries": 3771, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769844574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.326367) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7750738 bytes
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.339286) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.5 rd, 103.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(23.0) write-amplify(9.9) OK, records in: 4257, records dropped: 486 output_compression: NoCompression
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.339344) EVENT_LOG_v1 {"time_micros": 1769844574339324, "job": 8, "event": "compaction_finished", "compaction_time_micros": 74713, "compaction_time_cpu_micros": 30127, "output_level": 6, "num_output_files": 1, "total_output_size": 7750738, "num_input_records": 4257, "num_output_records": 3771, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844574339631, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844574340532, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.251250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.340638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.340644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.340645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.340647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:29:34.340649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:29:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:34.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:35 np0005603621 systemd-logind[818]: New session 44 of user zuul.
Jan 31 02:29:35 np0005603621 systemd[1]: Started Session 44 of User zuul.
Jan 31 02:29:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:36.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:36 np0005603621 python3.9[129939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:29:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:36.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:29:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b93f4b72-c78b-4f1f-b1c8-2dc9cb07ac27 does not exist
Jan 31 02:29:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 478cfb9b-f47c-4b78-ad88-cc2f2b63816d does not exist
Jan 31 02:29:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c7175e52-0bbc-43ad-9683-1c54f6967a9c does not exist
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:29:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:37.005512918 +0000 UTC m=+0.035142381 container create 0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pascal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:29:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:29:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:29:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:29:37 np0005603621 systemd[1]: Started libpod-conmon-0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe.scope.
Jan 31 02:29:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:37.081531263 +0000 UTC m=+0.111160726 container init 0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pascal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:36.990145679 +0000 UTC m=+0.019775142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:37.088264361 +0000 UTC m=+0.117893804 container start 0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pascal, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:37.092197159 +0000 UTC m=+0.121826622 container attach 0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pascal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:29:37 np0005603621 agitated_pascal[130181]: 167 167
Jan 31 02:29:37 np0005603621 systemd[1]: libpod-0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe.scope: Deactivated successfully.
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:37.093887924 +0000 UTC m=+0.123517397 container died 0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:29:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7b87ab59961ffef91bb9dd77cee93e8716aff22872231cfdb1a697e7db6575fa-merged.mount: Deactivated successfully.
Jan 31 02:29:37 np0005603621 podman[130164]: 2026-01-31 07:29:37.133992865 +0000 UTC m=+0.163622308 container remove 0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:29:37 np0005603621 systemd[1]: libpod-conmon-0d18564dfe4d260f937d210b953ccd2c2c75d0fbc45882a8c53752611e1891fe.scope: Deactivated successfully.
Jan 31 02:29:37 np0005603621 podman[130205]: 2026-01-31 07:29:37.245611654 +0000 UTC m=+0.037520898 container create 101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:29:37 np0005603621 systemd[1]: Started libpod-conmon-101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c.scope.
Jan 31 02:29:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:29:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786394db5627ae11c7e06d478da033058bae5b99bd8bd3dabfe2fd67aabe7c92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786394db5627ae11c7e06d478da033058bae5b99bd8bd3dabfe2fd67aabe7c92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786394db5627ae11c7e06d478da033058bae5b99bd8bd3dabfe2fd67aabe7c92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786394db5627ae11c7e06d478da033058bae5b99bd8bd3dabfe2fd67aabe7c92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:37 np0005603621 podman[130205]: 2026-01-31 07:29:37.225523173 +0000 UTC m=+0.017432427 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:29:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/786394db5627ae11c7e06d478da033058bae5b99bd8bd3dabfe2fd67aabe7c92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:37 np0005603621 podman[130205]: 2026-01-31 07:29:37.349250205 +0000 UTC m=+0.141159479 container init 101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:29:37 np0005603621 podman[130205]: 2026-01-31 07:29:37.354480615 +0000 UTC m=+0.146389859 container start 101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:29:37 np0005603621 podman[130205]: 2026-01-31 07:29:37.447547013 +0000 UTC m=+0.239456357 container attach 101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:29:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:37 np0005603621 python3.9[130355]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:38 np0005603621 infallible_banzai[130222]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:29:38 np0005603621 infallible_banzai[130222]: --> relative data size: 1.0
Jan 31 02:29:38 np0005603621 infallible_banzai[130222]: --> All data devices are unavailable
Jan 31 02:29:38 np0005603621 systemd[1]: libpod-101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c.scope: Deactivated successfully.
Jan 31 02:29:38 np0005603621 podman[130205]: 2026-01-31 07:29:38.135488501 +0000 UTC m=+0.927397775 container died 101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:29:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:38.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:29:38
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', '.rgw.root']
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:29:38 np0005603621 python3.9[130529]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-786394db5627ae11c7e06d478da033058bae5b99bd8bd3dabfe2fd67aabe7c92-merged.mount: Deactivated successfully.
Jan 31 02:29:38 np0005603621 podman[130205]: 2026-01-31 07:29:38.850092905 +0000 UTC m=+1.642002169 container remove 101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:29:38 np0005603621 systemd[1]: libpod-conmon-101b4d923a9b9ebd1e05c49a18ab669c85c8c335c251ba920c55116cd2f4006c.scope: Deactivated successfully.
Jan 31 02:29:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:39 np0005603621 python3.9[130782]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.420041638 +0000 UTC m=+0.050098996 container create 908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wescoff, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:29:39 np0005603621 systemd[1]: Started libpod-conmon-908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f.scope.
Jan 31 02:29:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.493046835 +0000 UTC m=+0.123104213 container init 908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.498688238 +0000 UTC m=+0.128745626 container start 908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.403513172 +0000 UTC m=+0.033570540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.501905163 +0000 UTC m=+0.131962521 container attach 908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:29:39 np0005603621 intelligent_wescoff[130863]: 167 167
Jan 31 02:29:39 np0005603621 systemd[1]: libpod-908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f.scope: Deactivated successfully.
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.503277747 +0000 UTC m=+0.133335145 container died 908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wescoff, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:29:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8ef5b5f0dbd5eb8e13b0ae125d429bc890dfe38023510d1b8f507e544213a56a-merged.mount: Deactivated successfully.
Jan 31 02:29:39 np0005603621 podman[130823]: 2026-01-31 07:29:39.537815987 +0000 UTC m=+0.167873345 container remove 908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_wescoff, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:29:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:39 np0005603621 systemd[1]: libpod-conmon-908064c8b1867c562a05e5f56520b4d4e77cfb8f8f3e726b37607c5df15fe96f.scope: Deactivated successfully.
Jan 31 02:29:39 np0005603621 podman[130914]: 2026-01-31 07:29:39.664587348 +0000 UTC m=+0.045458095 container create 9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:29:39 np0005603621 systemd[1]: Started libpod-conmon-9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f.scope.
Jan 31 02:29:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:29:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eaadec6fbd085f362a90b58d568ef1e8c1f1abb4731d20bdb80449d87cbd24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eaadec6fbd085f362a90b58d568ef1e8c1f1abb4731d20bdb80449d87cbd24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eaadec6fbd085f362a90b58d568ef1e8c1f1abb4731d20bdb80449d87cbd24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9eaadec6fbd085f362a90b58d568ef1e8c1f1abb4731d20bdb80449d87cbd24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:39 np0005603621 podman[130914]: 2026-01-31 07:29:39.649073055 +0000 UTC m=+0.029943822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:29:39 np0005603621 podman[130914]: 2026-01-31 07:29:39.758075409 +0000 UTC m=+0.138946176 container init 9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:29:39 np0005603621 podman[130914]: 2026-01-31 07:29:39.76301986 +0000 UTC m=+0.143890607 container start 9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:29:39 np0005603621 podman[130914]: 2026-01-31 07:29:39.768099945 +0000 UTC m=+0.148970692 container attach 9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:29:39 np0005603621 python3.9[131011]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844578.8011305-156-37582931459275/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d61b9e921126ce4abac8c8b6c262ff93dd25222d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:40.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]: {
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:    "0": [
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:        {
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "devices": [
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "/dev/loop3"
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            ],
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "lv_name": "ceph_lv0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "lv_size": "7511998464",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "name": "ceph_lv0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "tags": {
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.cluster_name": "ceph",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.crush_device_class": "",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.encrypted": "0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.osd_id": "0",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.type": "block",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:                "ceph.vdo": "0"
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            },
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "type": "block",
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:            "vg_name": "ceph_vg0"
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:        }
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]:    ]
Jan 31 02:29:40 np0005603621 amazing_goldwasser[130978]: }
Jan 31 02:29:40 np0005603621 systemd[1]: libpod-9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f.scope: Deactivated successfully.
Jan 31 02:29:40 np0005603621 podman[130914]: 2026-01-31 07:29:40.511923706 +0000 UTC m=+0.892794553 container died 9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:29:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d9eaadec6fbd085f362a90b58d568ef1e8c1f1abb4731d20bdb80449d87cbd24-merged.mount: Deactivated successfully.
Jan 31 02:29:40 np0005603621 podman[130914]: 2026-01-31 07:29:40.568393047 +0000 UTC m=+0.949263814 container remove 9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_goldwasser, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:29:40 np0005603621 systemd[1]: libpod-conmon-9c89814c43e699010640551913d74464cc46ec108065395248184b5e9a46232f.scope: Deactivated successfully.
Jan 31 02:29:40 np0005603621 python3.9[131165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:41 np0005603621 python3.9[131407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844580.1485722-156-268830953524669/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a66cd34ae464c50bbe4c963e6eef9b60dc2a1e49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.088794303 +0000 UTC m=+0.046311292 container create 4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:29:41 np0005603621 systemd[1]: Started libpod-conmon-4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe.scope.
Jan 31 02:29:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.070606393 +0000 UTC m=+0.028123402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.174890955 +0000 UTC m=+0.132407964 container init 4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.180431674 +0000 UTC m=+0.137948663 container start 4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:29:41 np0005603621 naughty_kepler[131482]: 167 167
Jan 31 02:29:41 np0005603621 systemd[1]: libpod-4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe.scope: Deactivated successfully.
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.185414466 +0000 UTC m=+0.142931485 container attach 4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.186585314 +0000 UTC m=+0.144102313 container died 4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:29:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-506dff5052266e3c0f153481cc89375179eb2add45c7e0ca0b43c944675dad59-merged.mount: Deactivated successfully.
Jan 31 02:29:41 np0005603621 podman[131441]: 2026-01-31 07:29:41.22531018 +0000 UTC m=+0.182827169 container remove 4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:29:41 np0005603621 systemd[1]: libpod-conmon-4675025bca8a57866572d506b0f88fbe6563436e764ae7969953c7fda7b6c2fe.scope: Deactivated successfully.
Jan 31 02:29:41 np0005603621 podman[131580]: 2026-01-31 07:29:41.356857296 +0000 UTC m=+0.037491667 container create db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:29:41 np0005603621 systemd[1]: Started libpod-conmon-db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e.scope.
Jan 31 02:29:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:29:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9051889d8245d56a6fb0618dedfb626b2014d4b3271a9425a988391123ecfb8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:41 np0005603621 podman[131580]: 2026-01-31 07:29:41.342300903 +0000 UTC m=+0.022935304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:29:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9051889d8245d56a6fb0618dedfb626b2014d4b3271a9425a988391123ecfb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9051889d8245d56a6fb0618dedfb626b2014d4b3271a9425a988391123ecfb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9051889d8245d56a6fb0618dedfb626b2014d4b3271a9425a988391123ecfb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:29:41 np0005603621 podman[131580]: 2026-01-31 07:29:41.452034932 +0000 UTC m=+0.132669333 container init db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:29:41 np0005603621 podman[131580]: 2026-01-31 07:29:41.459020829 +0000 UTC m=+0.139655220 container start db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:29:41 np0005603621 podman[131580]: 2026-01-31 07:29:41.463768673 +0000 UTC m=+0.144403054 container attach db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 02:29:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:41 np0005603621 python3.9[131654]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:42.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:42 np0005603621 python3.9[131777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844581.208907-156-126551942981362/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8a78abb5517d1e850bb0919bbbe9c44fb24ca327 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:42 np0005603621 youthful_moser[131621]: {
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:        "osd_id": 0,
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:        "type": "bluestore"
Jan 31 02:29:42 np0005603621 youthful_moser[131621]:    }
Jan 31 02:29:42 np0005603621 youthful_moser[131621]: }
Jan 31 02:29:42 np0005603621 podman[131580]: 2026-01-31 07:29:42.323988509 +0000 UTC m=+1.004622890 container died db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:29:42 np0005603621 systemd[1]: libpod-db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e.scope: Deactivated successfully.
Jan 31 02:29:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c9051889d8245d56a6fb0618dedfb626b2014d4b3271a9425a988391123ecfb8-merged.mount: Deactivated successfully.
Jan 31 02:29:42 np0005603621 podman[131580]: 2026-01-31 07:29:42.37522089 +0000 UTC m=+1.055855281 container remove db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:29:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:42 np0005603621 systemd[1]: libpod-conmon-db4ba6bf889f56f6644f6f171b4814399949b9db68f20f0f860807578fca961e.scope: Deactivated successfully.
Jan 31 02:29:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:29:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:29:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:29:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:29:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 93296f56-22e7-448a-9614-9decdac0431f does not exist
Jan 31 02:29:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 05a83e63-6147-407f-97d8-058cffe594b0 does not exist
Jan 31 02:29:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d22ee115-a66d-4871-b8f4-f5d411c47dae does not exist
Jan 31 02:29:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:29:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:29:42 np0005603621 python3.9[132009]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:43 np0005603621 python3.9[132161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:44.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:44 np0005603621 python3.9[132314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:44.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:44 np0005603621 python3.9[132437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844583.6673152-346-70972309972279/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4d01f00aa166a231934505d44129993d33315474 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:45 np0005603621 python3.9[132639]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:45 np0005603621 python3.9[132763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844584.7288885-346-133729452052846/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=649eeea41a1e15889a1c750fd61fb88aa589bc91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:46 np0005603621 python3.9[132915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:29:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:46.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:29:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:46.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:46 np0005603621 python3.9[133038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844585.708836-346-133020395370682/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=88bd33761f45e4942b9f6021c5462ba4f92a4c23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:47 np0005603621 python3.9[133190]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:47 np0005603621 python3.9[133343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:48.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:48 np0005603621 python3.9[133495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:48.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:29:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2408 writes, 10K keys, 2408 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
Cumulative WAL: 2408 writes, 2408 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2408 writes, 10K keys, 2408 commit groups, 1.0 writes per commit group, ingest: 13.78 MB, 0.02 MB/s
Interval WAL: 2408 writes, 2408 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    111.6      0.10              0.03         4    0.025       0      0       0.0       0.0
  L6      1/0    7.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    115.8     99.3      0.24              0.08         3    0.080     11K   1299       0.0       0.0
 Sum      1/0    7.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     81.4    103.0      0.34              0.12         7    0.049     11K   1299       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     82.4    104.1      0.34              0.12         6    0.056     11K   1299       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    115.8     99.3      0.24              0.08         3    0.080     11K   1299       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    115.9      0.10              0.03         3    0.033       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.011, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 1.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(55,910.47 KB,0.292477%) FilterBlock(8,42.11 KB,0.0135271%) IndexBlock(8,92.08 KB,0.029579%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:29:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:29:48 np0005603621 python3.9[133618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844587.956654-519-8353398731262/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eb646dcb3ca83677069d3f448351150445e6ad6b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:49 np0005603621 python3.9[133770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:49 np0005603621 python3.9[133894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844589.0552793-519-100994760317661/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=649eeea41a1e15889a1c750fd61fb88aa589bc91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:50.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:29:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:29:50 np0005603621 python3.9[134046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:50 np0005603621 python3.9[134169]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844590.0385573-519-249607669418258/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=0dfdec9f280cffe412a03a8532735939f77a5087 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:52.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:52 np0005603621 python3.9[134322]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:52.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:52 np0005603621 python3.9[134474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:53 np0005603621 python3.9[134597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844592.2717502-718-277014067067095/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:53 np0005603621 python3.9[134750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:29:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:54.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:29:54 np0005603621 python3.9[134902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:54 np0005603621 python3.9[135025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844594.0961933-796-255974276288904/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:55 np0005603621 python3.9[135178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:56 np0005603621 python3.9[135330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:56 np0005603621 python3.9[135453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844595.7376676-858-194448027998767/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:57 np0005603621 python3.9[135605]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:57 np0005603621 python3.9[135758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:29:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:29:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:58 np0005603621 python3.9[135881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844597.4263678-930-131187876227855/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:29:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:29:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:29:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:29:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:29:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:29:59 np0005603621 python3.9[136033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:29:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:29:59 np0005603621 python3.9[136186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 02:30:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:30:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:00.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:30:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:00 np0005603621 python3.9[136309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844599.4940755-1013-112928637419475/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 02:30:00 np0005603621 python3.9[136461]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:30:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:01 np0005603621 python3.9[136614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:02 np0005603621 python3.9[136737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844601.138216-1063-18218721993352/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=95f204ee8062e227608bf68163d0c9f95531c74c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:02.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:30:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:02.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:30:03 np0005603621 systemd[1]: session-44.scope: Deactivated successfully.
Jan 31 02:30:03 np0005603621 systemd[1]: session-44.scope: Consumed 19.488s CPU time.
Jan 31 02:30:03 np0005603621 systemd-logind[818]: Session 44 logged out. Waiting for processes to exit.
Jan 31 02:30:03 np0005603621 systemd-logind[818]: Removed session 44.
Jan 31 02:30:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:04.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:30:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:06.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:30:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:30:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:30:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:08.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:30:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:30:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:09 np0005603621 systemd-logind[818]: New session 45 of user zuul.
Jan 31 02:30:09 np0005603621 systemd[1]: Started Session 45 of User zuul.
Jan 31 02:30:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:09 np0005603621 python3.9[136971]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:10.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:10.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:10 np0005603621 python3.9[137123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:11 np0005603621 python3.9[137246]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844610.2895913-62-238358456172413/.source.conf _original_basename=ceph.conf follow=False checksum=23cbd0a652332596774a4195d9b5b25af094d504 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:12 np0005603621 python3.9[137399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:12.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:12.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:12 np0005603621 python3.9[137522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844611.594183-62-201117548237838/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=35152db97829fbbc30ac5e5c6e1f42921e77a1a7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:12 np0005603621 systemd-logind[818]: Session 45 logged out. Waiting for processes to exit.
Jan 31 02:30:12 np0005603621 systemd[1]: session-45.scope: Deactivated successfully.
Jan 31 02:30:12 np0005603621 systemd[1]: session-45.scope: Consumed 2.051s CPU time.
Jan 31 02:30:12 np0005603621 systemd-logind[818]: Removed session 45.
Jan 31 02:30:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:14.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:16.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:16.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:18 np0005603621 systemd-logind[818]: New session 46 of user zuul.
Jan 31 02:30:18 np0005603621 systemd[1]: Started Session 46 of User zuul.
Jan 31 02:30:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:18.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:18.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:19 np0005603621 python3.9[137703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:30:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:20 np0005603621 python3.9[137860]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:30:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:20.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:20.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:20 np0005603621 python3.9[138012]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:30:21 np0005603621 python3.9[138162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:30:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000034s ======
Jan 31 02:30:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:22.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 31 02:30:22 np0005603621 python3.9[138315]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 02:30:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:22.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:23 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 31 02:30:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:24 np0005603621 python3.9[138472]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:30:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:25 np0005603621 python3.9[138606]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:30:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:26.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:26.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:27 np0005603621 python3.9[138760]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:30:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:28 np0005603621 python3[138916]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 31 02:30:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:28.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000034s ======
Jan 31 02:30:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 31 02:30:28 np0005603621 python3.9[139068]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:29 np0005603621 python3.9[139221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:30 np0005603621 python3.9[139299]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:30.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:30.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:30 np0005603621 python3.9[139451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:31 np0005603621 python3.9[139529]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tracxkyu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:31 np0005603621 python3.9[139682]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:32.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:32 np0005603621 python3.9[139760]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:32.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:32 np0005603621 python3.9[139912]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:33 np0005603621 python3[140066]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:30:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:34 np0005603621 python3.9[140218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:34.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:35 np0005603621 python3.9[140343]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844633.9606571-431-117065141945799/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:36 np0005603621 python3.9[140496]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:36.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:36.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:36 np0005603621 python3.9[140621]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844635.387604-476-190795041958878/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:37 np0005603621 python3.9[140773]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:37 np0005603621 python3.9[140899]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844636.7180924-521-192137634073109/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:38.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:38 np0005603621 python3.9[141051]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:30:38
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes']
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:30:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:38.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:30:38 np0005603621 python3.9[141176]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844637.9620316-566-164162242389432/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:39 np0005603621 python3.9[141329]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:40.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:40 np0005603621 python3.9[141454]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844639.1632693-611-274279849086049/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:40.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:40 np0005603621 python3.9[141606]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:41 np0005603621 python3.9[141758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:42.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:42 np0005603621 python3.9[141914]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:42.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:43 np0005603621 python3.9[142141]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:30:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e09e8d4f-a89f-43ab-be77-ce7daf7ad258 does not exist
Jan 31 02:30:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev be853faf-e791-4d11-9b17-e9e821aad1f2 does not exist
Jan 31 02:30:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 96fed5de-5b32-46b5-b5e5-25871bc1a113 does not exist
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:30:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:43 np0005603621 python3.9[142376]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:30:43 np0005603621 podman[142529]: 2026-01-31 07:30:43.934268388 +0000 UTC m=+0.036247838 container create c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:30:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:30:43 np0005603621 systemd[1]: Started libpod-conmon-c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820.scope.
Jan 31 02:30:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:30:44 np0005603621 podman[142529]: 2026-01-31 07:30:44.00904017 +0000 UTC m=+0.111019620 container init c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:30:44 np0005603621 podman[142529]: 2026-01-31 07:30:44.014618402 +0000 UTC m=+0.116597852 container start c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:30:44 np0005603621 podman[142529]: 2026-01-31 07:30:43.917817062 +0000 UTC m=+0.019796542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:30:44 np0005603621 podman[142529]: 2026-01-31 07:30:44.01833691 +0000 UTC m=+0.120316420 container attach c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:30:44 np0005603621 amazing_khayyam[142583]: 167 167
Jan 31 02:30:44 np0005603621 systemd[1]: libpod-c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820.scope: Deactivated successfully.
Jan 31 02:30:44 np0005603621 podman[142529]: 2026-01-31 07:30:44.019596363 +0000 UTC m=+0.121575813 container died c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:30:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-47d8ecda769e97236fecce0876bf296ceca3c27191a71243f65c64d196848fb2-merged.mount: Deactivated successfully.
Jan 31 02:30:44 np0005603621 podman[142529]: 2026-01-31 07:30:44.072690829 +0000 UTC m=+0.174670289 container remove c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:30:44 np0005603621 systemd[1]: libpod-conmon-c2fe08870c850c9a74c679fc95aaa96cae69054b47c7fc114d245866d2efb820.scope: Deactivated successfully.
Jan 31 02:30:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:44.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:44 np0005603621 podman[142677]: 2026-01-31 07:30:44.186498085 +0000 UTC m=+0.042397300 container create e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:30:44 np0005603621 systemd[1]: Started libpod-conmon-e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3.scope.
Jan 31 02:30:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:30:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b016ba21ec37191fd981987d2d795f990a754cbfe0fac6681fa7ceb591609985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b016ba21ec37191fd981987d2d795f990a754cbfe0fac6681fa7ceb591609985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b016ba21ec37191fd981987d2d795f990a754cbfe0fac6681fa7ceb591609985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b016ba21ec37191fd981987d2d795f990a754cbfe0fac6681fa7ceb591609985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b016ba21ec37191fd981987d2d795f990a754cbfe0fac6681fa7ceb591609985/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:44 np0005603621 podman[142677]: 2026-01-31 07:30:44.170532345 +0000 UTC m=+0.026431590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:30:44 np0005603621 podman[142677]: 2026-01-31 07:30:44.272242244 +0000 UTC m=+0.128141489 container init e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:30:44 np0005603621 podman[142677]: 2026-01-31 07:30:44.277174144 +0000 UTC m=+0.133073369 container start e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:30:44 np0005603621 podman[142677]: 2026-01-31 07:30:44.280018602 +0000 UTC m=+0.135917827 container attach e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:30:44 np0005603621 python3.9[142691]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:44.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:44 np0005603621 python3.9[142859]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:45 np0005603621 magical_jennings[142700]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:30:45 np0005603621 magical_jennings[142700]: --> relative data size: 1.0
Jan 31 02:30:45 np0005603621 magical_jennings[142700]: --> All data devices are unavailable
Jan 31 02:30:45 np0005603621 systemd[1]: libpod-e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3.scope: Deactivated successfully.
Jan 31 02:30:45 np0005603621 podman[142677]: 2026-01-31 07:30:45.049252626 +0000 UTC m=+0.905151891 container died e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:30:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b016ba21ec37191fd981987d2d795f990a754cbfe0fac6681fa7ceb591609985-merged.mount: Deactivated successfully.
Jan 31 02:30:45 np0005603621 podman[142677]: 2026-01-31 07:30:45.106302648 +0000 UTC m=+0.962201893 container remove e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:30:45 np0005603621 systemd[1]: libpod-conmon-e1cd2ed9cac5f11c022686337f3c4dce523122c28e54f784808a0c106bf636f3.scope: Deactivated successfully.
Jan 31 02:30:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.655132019 +0000 UTC m=+0.044554144 container create 3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:30:45 np0005603621 systemd[1]: Started libpod-conmon-3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a.scope.
Jan 31 02:30:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.631212907 +0000 UTC m=+0.020635092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.731278379 +0000 UTC m=+0.120700514 container init 3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.738462386 +0000 UTC m=+0.127884481 container start 3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.742565707 +0000 UTC m=+0.131987812 container attach 3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:30:45 np0005603621 frosty_wu[143162]: 167 167
Jan 31 02:30:45 np0005603621 systemd[1]: libpod-3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a.scope: Deactivated successfully.
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.745482508 +0000 UTC m=+0.134904603 container died 3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:30:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d7dd9cdb1288d217a4bb35107de2ebf6a65cefb31247e068659ba609189a6cea-merged.mount: Deactivated successfully.
Jan 31 02:30:45 np0005603621 podman[143097]: 2026-01-31 07:30:45.786568201 +0000 UTC m=+0.175990296 container remove 3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wu, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:30:45 np0005603621 systemd[1]: libpod-conmon-3ac022cfb6e3161d7d16f38af4ff68bd3d188688ad23a91fa5b459b41b015e1a.scope: Deactivated successfully.
Jan 31 02:30:45 np0005603621 podman[143245]: 2026-01-31 07:30:45.951329369 +0000 UTC m=+0.083053268 container create 2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:30:45 np0005603621 podman[143245]: 2026-01-31 07:30:45.891562203 +0000 UTC m=+0.023286122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:30:46 np0005603621 systemd[1]: Started libpod-conmon-2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d.scope.
Jan 31 02:30:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:30:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f440b67c0c7e855a7ed4dc12de8b1ab3ce000913fb5a32c479ae67e721ac870/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f440b67c0c7e855a7ed4dc12de8b1ab3ce000913fb5a32c479ae67e721ac870/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f440b67c0c7e855a7ed4dc12de8b1ab3ce000913fb5a32c479ae67e721ac870/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f440b67c0c7e855a7ed4dc12de8b1ab3ce000913fb5a32c479ae67e721ac870/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:46 np0005603621 podman[143245]: 2026-01-31 07:30:46.093221591 +0000 UTC m=+0.224945490 container init 2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 02:30:46 np0005603621 podman[143245]: 2026-01-31 07:30:46.102129256 +0000 UTC m=+0.233853155 container start 2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:30:46 np0005603621 podman[143245]: 2026-01-31 07:30:46.126803856 +0000 UTC m=+0.258527805 container attach 2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:30:46 np0005603621 python3.9[143268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:30:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]: {
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:    "0": [
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:        {
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "devices": [
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "/dev/loop3"
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            ],
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "lv_name": "ceph_lv0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "lv_size": "7511998464",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "name": "ceph_lv0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "tags": {
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.cluster_name": "ceph",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.crush_device_class": "",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.encrypted": "0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.osd_id": "0",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.type": "block",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:                "ceph.vdo": "0"
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            },
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "type": "block",
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:            "vg_name": "ceph_vg0"
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:        }
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]:    ]
Jan 31 02:30:46 np0005603621 wizardly_merkle[143279]: }
Jan 31 02:30:46 np0005603621 systemd[1]: libpod-2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d.scope: Deactivated successfully.
Jan 31 02:30:46 np0005603621 podman[143245]: 2026-01-31 07:30:46.828247717 +0000 UTC m=+0.959971696 container died 2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:30:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8f440b67c0c7e855a7ed4dc12de8b1ab3ce000913fb5a32c479ae67e721ac870-merged.mount: Deactivated successfully.
Jan 31 02:30:46 np0005603621 podman[143245]: 2026-01-31 07:30:46.890610433 +0000 UTC m=+1.022334332 container remove 2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_merkle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:30:46 np0005603621 systemd[1]: libpod-conmon-2a6ced6ee7a99752d7fcd40d53151ba1d034281d23e10a074dcd6b69a6eea90d.scope: Deactivated successfully.
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.401119725 +0000 UTC m=+0.042592216 container create 08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:30:47 np0005603621 systemd[1]: Started libpod-conmon-08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536.scope.
Jan 31 02:30:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.45504222 +0000 UTC m=+0.096514721 container init 08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.459308637 +0000 UTC m=+0.100781118 container start 08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chatelet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 02:30:47 np0005603621 suspicious_chatelet[143611]: 167 167
Jan 31 02:30:47 np0005603621 systemd[1]: libpod-08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536.scope: Deactivated successfully.
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.463243512 +0000 UTC m=+0.104716123 container attach 08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.46347721 +0000 UTC m=+0.104949701 container died 08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chatelet, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:30:47 np0005603621 python3.9[143565]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:9e:41:65:cf" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.382399031 +0000 UTC m=+0.023871542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:30:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d3a8b3272cefe3b16a0e131cabaa3246640409059f21e907152dfb95a990dd27-merged.mount: Deactivated successfully.
Jan 31 02:30:47 np0005603621 ovs-vsctl[143624]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:9e:41:65:cf external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 31 02:30:47 np0005603621 podman[143594]: 2026-01-31 07:30:47.495784712 +0000 UTC m=+0.137257203 container remove 08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:30:47 np0005603621 systemd[1]: libpod-conmon-08a9b9966ed28aade0e139237e184636618d76136a6fcf6b8a115fe1466c6536.scope: Deactivated successfully.
Jan 31 02:30:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:47 np0005603621 podman[143660]: 2026-01-31 07:30:47.614982483 +0000 UTC m=+0.043079614 container create 0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:30:47 np0005603621 systemd[1]: Started libpod-conmon-0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547.scope.
Jan 31 02:30:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:30:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29145afcaf8fff1a81d4247140e612f59b45b4c3900eeffcec6c0ecac53bc5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29145afcaf8fff1a81d4247140e612f59b45b4c3900eeffcec6c0ecac53bc5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29145afcaf8fff1a81d4247140e612f59b45b4c3900eeffcec6c0ecac53bc5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a29145afcaf8fff1a81d4247140e612f59b45b4c3900eeffcec6c0ecac53bc5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:30:47 np0005603621 podman[143660]: 2026-01-31 07:30:47.595017736 +0000 UTC m=+0.023114957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:30:47 np0005603621 podman[143660]: 2026-01-31 07:30:47.694731297 +0000 UTC m=+0.122828448 container init 0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:30:47 np0005603621 podman[143660]: 2026-01-31 07:30:47.699224271 +0000 UTC m=+0.127321402 container start 0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:30:47 np0005603621 podman[143660]: 2026-01-31 07:30:47.703021182 +0000 UTC m=+0.131118343 container attach 0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:30:48 np0005603621 python3.9[143809]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:48.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:48 np0005603621 boring_hugle[143677]: {
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:        "osd_id": 0,
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:        "type": "bluestore"
Jan 31 02:30:48 np0005603621 boring_hugle[143677]:    }
Jan 31 02:30:48 np0005603621 boring_hugle[143677]: }
Jan 31 02:30:48 np0005603621 podman[143660]: 2026-01-31 07:30:48.488487513 +0000 UTC m=+0.916584644 container died 0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:30:48 np0005603621 systemd[1]: libpod-0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547.scope: Deactivated successfully.
Jan 31 02:30:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a29145afcaf8fff1a81d4247140e612f59b45b4c3900eeffcec6c0ecac53bc5a-merged.mount: Deactivated successfully.
Jan 31 02:30:48 np0005603621 podman[143660]: 2026-01-31 07:30:48.544094056 +0000 UTC m=+0.972191187 container remove 0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hugle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:30:48 np0005603621 systemd[1]: libpod-conmon-0b6617b125b350cd9802dda8387672fc263965cade1ce04a14fa83eb2dfec547.scope: Deactivated successfully.
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:30:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:30:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:30:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:30:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 85496182-c552-4381-9ac2-5b76980ba21b does not exist
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev baddff99-a524-4839-8e84-4a5c2443049f does not exist
Jan 31 02:30:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4e4d9d8e-0040-470a-83d1-460b04ecf1e5 does not exist
Jan 31 02:30:48 np0005603621 python3.9[144041]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:30:48 np0005603621 ovs-vsctl[144042]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 31 02:30:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:49 np0005603621 python3.9[144193]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:30:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:30:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:30:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000034s ======
Jan 31 02:30:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000034s
Jan 31 02:30:50 np0005603621 python3.9[144347]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:30:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:50 np0005603621 python3.9[144499]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:51 np0005603621 python3.9[144577]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:30:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:51 np0005603621 python3.9[144730]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:52.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:52 np0005603621 python3.9[144808]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:30:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:53 np0005603621 python3.9[144960]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:53 np0005603621 python3.9[145113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:54 np0005603621 python3.9[145191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:54 np0005603621 python3.9[145343]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:55 np0005603621 python3.9[145421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:55 np0005603621 python3.9[145574]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:30:55 np0005603621 systemd[1]: Reloading.
Jan 31 02:30:56 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:30:56 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:30:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:56.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:57 np0005603621 python3.9[145763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:57 np0005603621 python3.9[145841]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:58 np0005603621 python3.9[145994]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:30:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:30:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:30:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:30:58 np0005603621 python3.9[146072]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:30:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:30:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:30:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:30:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:30:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:30:59 np0005603621 python3.9[146224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:30:59 np0005603621 systemd[1]: Reloading.
Jan 31 02:30:59 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:30:59 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:30:59 np0005603621 systemd[1]: Starting Create netns directory...
Jan 31 02:30:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:30:59 np0005603621 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:30:59 np0005603621 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:30:59 np0005603621 systemd[1]: Finished Create netns directory.
Jan 31 02:31:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:31:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:31:00 np0005603621 python3.9[146417]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:01 np0005603621 python3.9[146572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:02 np0005603621 python3.9[146695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844661.219847-1364-207415009100142/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:31:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:02.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:31:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:02.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:03 np0005603621 python3.9[146847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:03 np0005603621 python3.9[147000]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:04.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:04 np0005603621 python3.9[147152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:05 np0005603621 python3.9[147275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844664.179417-1463-153950438061026/.source.json _original_basename=.kypfly71 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:05 np0005603621 python3.9[147476]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:31:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:31:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000035s ======
Jan 31 02:31:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:06.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000035s
Jan 31 02:31:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:07 np0005603621 python3.9[147900]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 31 02:31:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:08.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:31:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:08 np0005603621 python3.9[148052]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:31:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:10 np0005603621 python3[148205]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:31:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000070s ======
Jan 31 02:31:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:10.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000070s
Jan 31 02:31:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:12.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:12.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:14 np0005603621 podman[148218]: 2026-01-31 07:31:14.116471515 +0000 UTC m=+4.055187900 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 02:31:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:14 np0005603621 podman[148338]: 2026-01-31 07:31:14.212656853 +0000 UTC m=+0.035444080 container create d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 02:31:14 np0005603621 podman[148338]: 2026-01-31 07:31:14.19425084 +0000 UTC m=+0.017038087 image pull 9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 02:31:14 np0005603621 python3[148205]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 31 02:31:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:14.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 02:31:15 np0005603621 python3.9[148524]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:31:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:16 np0005603621 python3.9[148679]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:16 np0005603621 python3.9[148755]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:31:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:16.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:17 np0005603621 python3.9[148906]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844676.4746828-1697-9637500134213/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:17 np0005603621 python3.9[148982]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:31:17 np0005603621 systemd[1]: Reloading.
Jan 31 02:31:17 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:31:17 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:31:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:31:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:18.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:31:18 np0005603621 python3.9[149095]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:31:18 np0005603621 systemd[1]: Reloading.
Jan 31 02:31:18 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:31:18 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:31:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:18.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:18 np0005603621 systemd[1]: Starting ovn_controller container...
Jan 31 02:31:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56776b456fbbd22c6ccc926ba646f22d38cc6cae41edd11817bf4fc2e9ec4db0/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:18 np0005603621 systemd[1]: Started /usr/bin/podman healthcheck run d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613.
Jan 31 02:31:18 np0005603621 podman[149137]: 2026-01-31 07:31:18.784717192 +0000 UTC m=+0.112750586 container init d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:31:18 np0005603621 ovn_controller[149152]: + sudo -E kolla_set_configs
Jan 31 02:31:18 np0005603621 podman[149137]: 2026-01-31 07:31:18.816993534 +0000 UTC m=+0.145026938 container start d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 31 02:31:18 np0005603621 edpm-start-podman-container[149137]: ovn_controller
Jan 31 02:31:18 np0005603621 systemd[1]: Created slice User Slice of UID 0.
Jan 31 02:31:18 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 31 02:31:18 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 31 02:31:18 np0005603621 systemd[1]: Starting User Manager for UID 0...
Jan 31 02:31:18 np0005603621 edpm-start-podman-container[149136]: Creating additional drop-in dependency for "ovn_controller" (d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613)
Jan 31 02:31:18 np0005603621 systemd[1]: Reloading.
Jan 31 02:31:18 np0005603621 podman[149159]: 2026-01-31 07:31:18.932623982 +0000 UTC m=+0.098116784 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:31:18 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:31:18 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:31:18 np0005603621 systemd[149186]: Queued start job for default target Main User Target.
Jan 31 02:31:19 np0005603621 systemd[149186]: Created slice User Application Slice.
Jan 31 02:31:19 np0005603621 systemd[149186]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 31 02:31:19 np0005603621 systemd[149186]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:31:19 np0005603621 systemd[149186]: Reached target Paths.
Jan 31 02:31:19 np0005603621 systemd[149186]: Reached target Timers.
Jan 31 02:31:19 np0005603621 systemd[149186]: Starting D-Bus User Message Bus Socket...
Jan 31 02:31:19 np0005603621 systemd[149186]: Starting Create User's Volatile Files and Directories...
Jan 31 02:31:19 np0005603621 systemd[149186]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:31:19 np0005603621 systemd[149186]: Reached target Sockets.
Jan 31 02:31:19 np0005603621 systemd[149186]: Finished Create User's Volatile Files and Directories.
Jan 31 02:31:19 np0005603621 systemd[149186]: Reached target Basic System.
Jan 31 02:31:19 np0005603621 systemd[149186]: Reached target Main User Target.
Jan 31 02:31:19 np0005603621 systemd[149186]: Startup finished in 131ms.
Jan 31 02:31:19 np0005603621 systemd[1]: Started User Manager for UID 0.
Jan 31 02:31:19 np0005603621 systemd[1]: Started ovn_controller container.
Jan 31 02:31:19 np0005603621 systemd[1]: d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613-280d62f938195e00.service: Main process exited, code=exited, status=1/FAILURE
Jan 31 02:31:19 np0005603621 systemd[1]: d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613-280d62f938195e00.service: Failed with result 'exit-code'.
Jan 31 02:31:19 np0005603621 systemd[1]: Started Session c1 of User root.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: INFO:__main__:Validating config file
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: INFO:__main__:Writing out command to execute
Jan 31 02:31:19 np0005603621 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: ++ cat /run_command
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + ARGS=
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + sudo kolla_copy_cacerts
Jan 31 02:31:19 np0005603621 systemd[1]: Started Session c2 of User root.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + [[ ! -n '' ]]
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + . kolla_extend_start
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + umask 0022
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 31 02:31:19 np0005603621 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.2551] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.2561] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <warn>  [1769844679.2564] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.2576] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.2584] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.2590] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:31:19 np0005603621 kernel: br-int: entered promiscuous mode
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 31 02:31:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:31:19 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:19Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.2811] manager: (ovn-71aaf7-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 31 02:31:19 np0005603621 systemd-udevd[149285]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:31:19 np0005603621 kernel: genev_sys_6081: entered promiscuous mode
Jan 31 02:31:19 np0005603621 systemd-udevd[149287]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.3045] device (genev_sys_6081): carrier: link connected
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.3050] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.3469] manager: (ovn-7ec8bf-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 31 02:31:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:19 np0005603621 NetworkManager[49013]: <info>  [1769844679.7412] manager: (ovn-bd097f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 31 02:31:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:20.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:20 np0005603621 python3.9[149415]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 02:31:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:20.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:21 np0005603621 python3.9[149567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:21 np0005603621 python3.9[149691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844681.0200057-1832-214512298032518/.source.yaml _original_basename=.k789vpkb follow=False checksum=869a4744df33825307102f8d7b13c7e3fcbb8f59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:22.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:22 np0005603621 python3.9[149843]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:31:22 np0005603621 ovs-vsctl[149844]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 31 02:31:23 np0005603621 python3.9[149996]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:31:23 np0005603621 ovs-vsctl[149998]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 31 02:31:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:24 np0005603621 python3.9[150152]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:31:24 np0005603621 ovs-vsctl[150153]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 31 02:31:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:24.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:31:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:31:24 np0005603621 systemd-logind[818]: Session 46 logged out. Waiting for processes to exit.
Jan 31 02:31:24 np0005603621 systemd[1]: session-46.scope: Deactivated successfully.
Jan 31 02:31:24 np0005603621 systemd[1]: session-46.scope: Consumed 47.759s CPU time.
Jan 31 02:31:24 np0005603621 systemd-logind[818]: Removed session 46.
Jan 31 02:31:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:26.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:26.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:31:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:28.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:31:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:28.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:29 np0005603621 systemd[1]: Stopping User Manager for UID 0...
Jan 31 02:31:29 np0005603621 systemd[149186]: Activating special unit Exit the Session...
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped target Main User Target.
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped target Basic System.
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped target Paths.
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped target Sockets.
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped target Timers.
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:31:29 np0005603621 systemd[149186]: Closed D-Bus User Message Bus Socket.
Jan 31 02:31:29 np0005603621 systemd[149186]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:31:29 np0005603621 systemd[149186]: Removed slice User Application Slice.
Jan 31 02:31:29 np0005603621 systemd[149186]: Reached target Shutdown.
Jan 31 02:31:29 np0005603621 systemd[149186]: Finished Exit the Session.
Jan 31 02:31:29 np0005603621 systemd[149186]: Reached target Exit the Session.
Jan 31 02:31:29 np0005603621 systemd[1]: user@0.service: Deactivated successfully.
Jan 31 02:31:29 np0005603621 systemd[1]: Stopped User Manager for UID 0.
Jan 31 02:31:29 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 31 02:31:29 np0005603621 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 31 02:31:29 np0005603621 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 31 02:31:29 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 31 02:31:29 np0005603621 systemd[1]: Removed slice User Slice of UID 0.
Jan 31 02:31:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:30.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:30 np0005603621 systemd-logind[818]: New session 48 of user zuul.
Jan 31 02:31:30 np0005603621 systemd[1]: Started Session 48 of User zuul.
Jan 31 02:31:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:30.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:31 np0005603621 python3.9[150387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:31:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:32.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:32 np0005603621 python3.9[150544]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:32.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:32 np0005603621 python3.9[150696]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:33 np0005603621 python3.9[150848]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:34 np0005603621 python3.9[151001]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:34.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:34.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:34 np0005603621 python3.9[151154]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:35 np0005603621 python3.9[151305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:31:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:36.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:36 np0005603621 python3.9[151457]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 31 02:31:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:31:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:36.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:31:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:38 np0005603621 python3.9[151609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:31:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:38.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:31:38
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:31:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:38.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:38 np0005603621 python3.9[151730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844697.5503201-217-177697235833995/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:39 np0005603621 python3.9[151888]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:40 np0005603621 python3.9[152009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844699.2451425-262-62637868249669/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:40.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:41 np0005603621 python3.9[152161]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:31:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:42 np0005603621 python3.9[152246]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:31:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:42.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:42.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:44.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:45 np0005603621 python3.9[152400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:31:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:45 np0005603621 python3.9[152604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000066s ======
Jan 31 02:31:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:46.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000066s
Jan 31 02:31:46 np0005603621 python3.9[152725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844705.5612352-373-184057087012888/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:46 np0005603621 python3.9[152875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:47 np0005603621 python3.9[152996]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844706.541253-373-222843915831280/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:48.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:48 np0005603621 python3.9[153147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:31:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:31:48 np0005603621 python3.9[153268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844708.1148674-505-146479502586105/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:49 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:49Z|00025|memory|INFO|17280 kB peak resident set size after 30.0 seconds
Jan 31 02:31:49 np0005603621 ovn_controller[149152]: 2026-01-31T07:31:49Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:31:49 np0005603621 podman[153499]: 2026-01-31 07:31:49.259256805 +0000 UTC m=+0.066340350 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:31:49 np0005603621 python3.9[153544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:31:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:49 np0005603621 python3.9[153685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844709.019188-505-18592708242491/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:50.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:50.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:50 np0005603621 python3.9[153963]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:31:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:31:51 np0005603621 python3.9[154117]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 19b87db4-08bc-42fc-a4b1-1c0876ca5d02 does not exist
Jan 31 02:31:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d608e1c4-829c-4726-b2a2-96b231be2063 does not exist
Jan 31 02:31:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5b9c38c0-3679-48b2-8233-00bf4feb3ebb does not exist
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:31:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.16930523 +0000 UTC m=+0.057712791 container create 7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:31:52 np0005603621 python3.9[154395]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:52 np0005603621 systemd[1]: Started libpod-conmon-7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6.scope.
Jan 31 02:31:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:52.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.145500773 +0000 UTC m=+0.033908374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.245358563 +0000 UTC m=+0.133766134 container init 7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.253396472 +0000 UTC m=+0.141804053 container start 7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:31:52 np0005603621 wonderful_mclaren[154429]: 167 167
Jan 31 02:31:52 np0005603621 conmon[154429]: conmon 7041c88c3025f3faa03c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6.scope/container/memory.events
Jan 31 02:31:52 np0005603621 systemd[1]: libpod-7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6.scope: Deactivated successfully.
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.257543186 +0000 UTC m=+0.145950747 container attach 7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.259111437 +0000 UTC m=+0.147519018 container died 7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:31:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-52d6f0351e3060a24d9ddfb47bb995fbbc315cab55398f59eefae728e4ef4575-merged.mount: Deactivated successfully.
Jan 31 02:31:52 np0005603621 podman[154411]: 2026-01-31 07:31:52.302229567 +0000 UTC m=+0.190637148 container remove 7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:31:52 np0005603621 systemd[1]: libpod-conmon-7041c88c3025f3faa03c54737a700db0f99dc1c48b8797ab2239caf4cc504bf6.scope: Deactivated successfully.
Jan 31 02:31:52 np0005603621 podman[154506]: 2026-01-31 07:31:52.435210495 +0000 UTC m=+0.035883868 container create e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_colden, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:31:52 np0005603621 systemd[1]: Started libpod-conmon-e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce.scope.
Jan 31 02:31:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9643493ebda6aa30773408816b2893e0a476fbe279ebd62454efa3ac9bfc62af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9643493ebda6aa30773408816b2893e0a476fbe279ebd62454efa3ac9bfc62af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9643493ebda6aa30773408816b2893e0a476fbe279ebd62454efa3ac9bfc62af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9643493ebda6aa30773408816b2893e0a476fbe279ebd62454efa3ac9bfc62af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9643493ebda6aa30773408816b2893e0a476fbe279ebd62454efa3ac9bfc62af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:31:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:52.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:31:52 np0005603621 podman[154506]: 2026-01-31 07:31:52.419251481 +0000 UTC m=+0.019924874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:31:52 np0005603621 podman[154506]: 2026-01-31 07:31:52.540597495 +0000 UTC m=+0.141270918 container init e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:31:52 np0005603621 podman[154506]: 2026-01-31 07:31:52.549626996 +0000 UTC m=+0.150300369 container start e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:31:52 np0005603621 podman[154506]: 2026-01-31 07:31:52.553017975 +0000 UTC m=+0.153691388 container attach e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:31:52 np0005603621 python3.9[154542]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:53 np0005603621 python3.9[154702]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:53 np0005603621 strange_colden[154546]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:31:53 np0005603621 strange_colden[154546]: --> relative data size: 1.0
Jan 31 02:31:53 np0005603621 strange_colden[154546]: --> All data devices are unavailable
Jan 31 02:31:53 np0005603621 systemd[1]: libpod-e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce.scope: Deactivated successfully.
Jan 31 02:31:53 np0005603621 podman[154506]: 2026-01-31 07:31:53.318904324 +0000 UTC m=+0.919577707 container died e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_colden, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:31:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9643493ebda6aa30773408816b2893e0a476fbe279ebd62454efa3ac9bfc62af-merged.mount: Deactivated successfully.
Jan 31 02:31:53 np0005603621 podman[154506]: 2026-01-31 07:31:53.553306193 +0000 UTC m=+1.153979606 container remove e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_colden, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:31:53 np0005603621 systemd[1]: libpod-conmon-e0f20f15be7c0230ae27a1212ce6173f3e915dabfcab6bab1245e9e0c6d4c9ce.scope: Deactivated successfully.
Jan 31 02:31:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:53 np0005603621 python3.9[154802]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.068024042 +0000 UTC m=+0.048655129 container create 1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:31:54 np0005603621 systemd[1]: Started libpod-conmon-1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425.scope.
Jan 31 02:31:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.04034994 +0000 UTC m=+0.020981097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.141053377 +0000 UTC m=+0.121684514 container init 1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pike, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.149030424 +0000 UTC m=+0.129661471 container start 1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.152464715 +0000 UTC m=+0.133095852 container attach 1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:31:54 np0005603621 unruffled_pike[154987]: 167 167
Jan 31 02:31:54 np0005603621 systemd[1]: libpod-1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425.scope: Deactivated successfully.
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.154347276 +0000 UTC m=+0.134978323 container died 1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pike, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:31:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1ef6b30e516d020591ac83ed67a932401e849b60129e2e6e827aaaf8282fa094-merged.mount: Deactivated successfully.
Jan 31 02:31:54 np0005603621 podman[154971]: 2026-01-31 07:31:54.195542005 +0000 UTC m=+0.176173092 container remove 1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_pike, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:31:54 np0005603621 systemd[1]: libpod-conmon-1e2df46fb03a4125fdb37c7934265d8a5c9683f990eb060533989cb8957f3425.scope: Deactivated successfully.
Jan 31 02:31:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:54.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:31:54 np0005603621 podman[155009]: 2026-01-31 07:31:54.340400556 +0000 UTC m=+0.053795266 container create 8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:31:54 np0005603621 systemd[1]: Started libpod-conmon-8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92.scope.
Jan 31 02:31:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f3a78ae36eb1fd33afba83d37b55e5a66d09508136cfb1e54623a67e7ba337/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f3a78ae36eb1fd33afba83d37b55e5a66d09508136cfb1e54623a67e7ba337/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f3a78ae36eb1fd33afba83d37b55e5a66d09508136cfb1e54623a67e7ba337/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f3a78ae36eb1fd33afba83d37b55e5a66d09508136cfb1e54623a67e7ba337/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:54 np0005603621 podman[155009]: 2026-01-31 07:31:54.319164952 +0000 UTC m=+0.032559682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:31:54 np0005603621 podman[155009]: 2026-01-31 07:31:54.435191003 +0000 UTC m=+0.148585723 container init 8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 02:31:54 np0005603621 podman[155009]: 2026-01-31 07:31:54.443942775 +0000 UTC m=+0.157337495 container start 8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:31:54 np0005603621 podman[155009]: 2026-01-31 07:31:54.448221903 +0000 UTC m=+0.161616623 container attach 8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:31:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:31:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:54.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]: {
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:    "0": [
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:        {
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "devices": [
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "/dev/loop3"
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            ],
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "lv_name": "ceph_lv0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "lv_size": "7511998464",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "name": "ceph_lv0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "tags": {
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.cluster_name": "ceph",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.crush_device_class": "",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.encrypted": "0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.osd_id": "0",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.type": "block",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:                "ceph.vdo": "0"
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            },
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "type": "block",
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:            "vg_name": "ceph_vg0"
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:        }
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]:    ]
Jan 31 02:31:55 np0005603621 strange_bhabha[155026]: }
Jan 31 02:31:55 np0005603621 systemd[1]: libpod-8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92.scope: Deactivated successfully.
Jan 31 02:31:55 np0005603621 podman[155009]: 2026-01-31 07:31:55.139098134 +0000 UTC m=+0.852492854 container died 8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:31:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f4f3a78ae36eb1fd33afba83d37b55e5a66d09508136cfb1e54623a67e7ba337-merged.mount: Deactivated successfully.
Jan 31 02:31:55 np0005603621 podman[155009]: 2026-01-31 07:31:55.195618506 +0000 UTC m=+0.909013226 container remove 8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bhabha, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:31:55 np0005603621 systemd[1]: libpod-conmon-8fc2e92b93d9cbcfe5968357d930a155a859a61d02fbb897b18eaeb5baebbb92.scope: Deactivated successfully.
Jan 31 02:31:55 np0005603621 python3.9[155223]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:55 np0005603621 podman[155338]: 2026-01-31 07:31:55.744543168 +0000 UTC m=+0.041004633 container create 97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:31:55 np0005603621 systemd[1]: Started libpod-conmon-97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5.scope.
Jan 31 02:31:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:55 np0005603621 podman[155338]: 2026-01-31 07:31:55.81871099 +0000 UTC m=+0.115172535 container init 97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:31:55 np0005603621 podman[155338]: 2026-01-31 07:31:55.725163213 +0000 UTC m=+0.021624718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:31:55 np0005603621 podman[155338]: 2026-01-31 07:31:55.824394424 +0000 UTC m=+0.120855889 container start 97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:31:55 np0005603621 awesome_burnell[155382]: 167 167
Jan 31 02:31:55 np0005603621 systemd[1]: libpod-97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5.scope: Deactivated successfully.
Jan 31 02:31:55 np0005603621 podman[155338]: 2026-01-31 07:31:55.916761982 +0000 UTC m=+0.213223507 container attach 97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:31:55 np0005603621 podman[155338]: 2026-01-31 07:31:55.918049743 +0000 UTC m=+0.214511238 container died 97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:31:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8b20fb830f5f6ab43606395bdb94242f86ea77067fda0cec2519571df916a647-merged.mount: Deactivated successfully.
Jan 31 02:31:56 np0005603621 podman[155338]: 2026-01-31 07:31:56.077819426 +0000 UTC m=+0.374280891 container remove 97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:31:56 np0005603621 systemd[1]: libpod-conmon-97d2d3204f24c9c56224ce6f7e7b94d26aa0081a32b64f1578314265474c01a5.scope: Deactivated successfully.
Jan 31 02:31:56 np0005603621 python3.9[155497]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:56 np0005603621 podman[155506]: 2026-01-31 07:31:56.193656072 +0000 UTC m=+0.039917319 container create 8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:31:56 np0005603621 systemd[1]: Started libpod-conmon-8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3.scope.
Jan 31 02:31:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:56.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:31:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd11b7beccfc1caf184aff2feb01549f6fd7c1ac08ff2755a25766e35c6191b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd11b7beccfc1caf184aff2feb01549f6fd7c1ac08ff2755a25766e35c6191b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd11b7beccfc1caf184aff2feb01549f6fd7c1ac08ff2755a25766e35c6191b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd11b7beccfc1caf184aff2feb01549f6fd7c1ac08ff2755a25766e35c6191b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:31:56 np0005603621 podman[155506]: 2026-01-31 07:31:56.174318088 +0000 UTC m=+0.020579365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:31:56 np0005603621 podman[155506]: 2026-01-31 07:31:56.281952109 +0000 UTC m=+0.128213386 container init 8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jang, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:31:56 np0005603621 podman[155506]: 2026-01-31 07:31:56.291312121 +0000 UTC m=+0.137573368 container start 8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:31:56 np0005603621 podman[155506]: 2026-01-31 07:31:56.29468082 +0000 UTC m=+0.140942067 container attach 8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:31:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:31:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:31:56 np0005603621 python3.9[155604]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:57 np0005603621 magical_jang[155524]: {
Jan 31 02:31:57 np0005603621 magical_jang[155524]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:31:57 np0005603621 magical_jang[155524]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:31:57 np0005603621 magical_jang[155524]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:31:57 np0005603621 magical_jang[155524]:        "osd_id": 0,
Jan 31 02:31:57 np0005603621 magical_jang[155524]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:31:57 np0005603621 magical_jang[155524]:        "type": "bluestore"
Jan 31 02:31:57 np0005603621 magical_jang[155524]:    }
Jan 31 02:31:57 np0005603621 magical_jang[155524]: }
Jan 31 02:31:57 np0005603621 systemd[1]: libpod-8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3.scope: Deactivated successfully.
Jan 31 02:31:57 np0005603621 podman[155506]: 2026-01-31 07:31:57.060585599 +0000 UTC m=+0.906846856 container died 8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jang, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:31:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6fd11b7beccfc1caf184aff2feb01549f6fd7c1ac08ff2755a25766e35c6191b-merged.mount: Deactivated successfully.
Jan 31 02:31:57 np0005603621 podman[155506]: 2026-01-31 07:31:57.104269488 +0000 UTC m=+0.950530745 container remove 8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jang, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:31:57 np0005603621 systemd[1]: libpod-conmon-8e523861b7cebed66c7a99ef9b860edf8d0a0bc3f21562504206cd9a84d567c3.scope: Deactivated successfully.
Jan 31 02:31:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:31:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:31:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4a0319ff-ea2f-421e-b472-05cd1a789a95 does not exist
Jan 31 02:31:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 32819d64-fd34-4570-bb38-8378ce583719 does not exist
Jan 31 02:31:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5ec3f7f8-6bd0-4ddb-84e5-3140a000628d does not exist
Jan 31 02:31:57 np0005603621 python3.9[155783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:31:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:31:57 np0005603621 python3.9[155912]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:31:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:31:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:31:58.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:58 np0005603621 python3.9[156064]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:31:58 np0005603621 systemd[1]: Reloading.
Jan 31 02:31:58 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:31:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:31:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:31:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:31:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:31:58 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:32:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:00.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:00 np0005603621 python3.9[156254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:32:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:00.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:00 np0005603621 python3.9[156332]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:32:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 7483 writes, 31K keys, 7483 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7483 writes, 1437 syncs, 5.21 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7483 writes, 31K keys, 7483 commit groups, 1.0 writes per commit group, ingest: 20.39 MB, 0.03 MB/s#012Interval WAL: 7483 writes, 1437 syncs, 5.21 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000122 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000122 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 31 02:32:01 np0005603621 python3.9[156485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:32:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:02 np0005603621 python3.9[156563]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:02.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:02.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:02 np0005603621 python3.9[156715]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:02 np0005603621 systemd[1]: Reloading.
Jan 31 02:32:02 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:32:02 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:32:03 np0005603621 systemd[1]: Starting Create netns directory...
Jan 31 02:32:03 np0005603621 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 31 02:32:03 np0005603621 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 31 02:32:03 np0005603621 systemd[1]: Finished Create netns directory.
Jan 31 02:32:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:04 np0005603621 python3.9[156909]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:32:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:32:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:04.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:32:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:04.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:04 np0005603621 python3.9[157061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:32:05 np0005603621 python3.9[157184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769844724.3751113-958-199282650662834/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:32:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:06 np0005603621 python3.9[157387]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:32:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:06.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:32:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:06 np0005603621 python3.9[157539]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:32:07 np0005603621 python3.9[157691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:08 np0005603621 python3.9[157815]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844727.0858417-1057-265796637189133/.source.json _original_basename=.h0heg72s follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:08.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:32:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:08.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:08 np0005603621 python3.9[157965]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.144453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844731144504, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1535, "num_deletes": 251, "total_data_size": 2765007, "memory_usage": 2805888, "flush_reason": "Manual Compaction"}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844731159343, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2702218, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10752, "largest_seqno": 12286, "table_properties": {"data_size": 2695154, "index_size": 4135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14061, "raw_average_key_size": 19, "raw_value_size": 2681102, "raw_average_value_size": 3698, "num_data_blocks": 187, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844574, "oldest_key_time": 1769844574, "file_creation_time": 1769844731, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 14918 microseconds, and 3655 cpu microseconds.
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.159379) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2702218 bytes OK
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.159394) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.161499) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.161510) EVENT_LOG_v1 {"time_micros": 1769844731161506, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.161526) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2758595, prev total WAL file size 2758595, number of live WAL files 2.
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.162094) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2638KB)], [26(7569KB)]
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844731162124, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10452956, "oldest_snapshot_seqno": -1}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3979 keys, 8238984 bytes, temperature: kUnknown
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844731227155, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8238984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8209837, "index_size": 18093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 96397, "raw_average_key_size": 24, "raw_value_size": 8135479, "raw_average_value_size": 2044, "num_data_blocks": 782, "num_entries": 3979, "num_filter_entries": 3979, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769844731, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.227380) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8238984 bytes
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.228632) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.5 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.4 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.9) write-amplify(3.0) OK, records in: 4496, records dropped: 517 output_compression: NoCompression
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.228653) EVENT_LOG_v1 {"time_micros": 1769844731228642, "job": 10, "event": "compaction_finished", "compaction_time_micros": 65115, "compaction_time_cpu_micros": 13899, "output_level": 6, "num_output_files": 1, "total_output_size": 8238984, "num_input_records": 4496, "num_output_records": 3979, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844731228989, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844731229776, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.161994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.229921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.229932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.229937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.229942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:32:11.229947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:32:11 np0005603621 python3.9[158389]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 31 02:32:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 02:32:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:12.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:12 np0005603621 python3.9[158542]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:32:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:12.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:13 np0005603621 python3[158694]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:32:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:14.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:14.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:16.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:16.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:32:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:18.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:32:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:20 np0005603621 podman[158791]: 2026-01-31 07:32:20.146914097 +0000 UTC m=+0.699386534 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:32:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:20.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:21 np0005603621 podman[158706]: 2026-01-31 07:32:21.600266839 +0000 UTC m=+8.213842640 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:32:21 np0005603621 podman[158860]: 2026-01-31 07:32:21.70638304 +0000 UTC m=+0.045554894 container create 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 31 02:32:21 np0005603621 podman[158860]: 2026-01-31 07:32:21.678461341 +0000 UTC m=+0.017633215 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:32:21 np0005603621 python3[158694]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:32:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:22.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:22.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:24.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:24.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:24 np0005603621 python3.9[159051]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:32:25 np0005603621 python3.9[159205]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:25 np0005603621 python3.9[159330]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:32:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:26.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:26 np0005603621 python3.9[159483]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769844745.7376306-1291-23986908210732/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:26.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:26 np0005603621 python3.9[159559]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:32:26 np0005603621 systemd[1]: Reloading.
Jan 31 02:32:26 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:32:26 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:32:27 np0005603621 python3.9[159670]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:27 np0005603621 systemd[1]: Reloading.
Jan 31 02:32:27 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:32:27 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:32:27 np0005603621 systemd[1]: Starting ovn_metadata_agent container...
Jan 31 02:32:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:28.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:32:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ac0e02931eacf6eed2f2c4c6b1921bb30c2aafd54d5acd018b9c101edb32e6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9ac0e02931eacf6eed2f2c4c6b1921bb30c2aafd54d5acd018b9c101edb32e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:28.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:28 np0005603621 systemd[1]: Started /usr/bin/podman healthcheck run 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52.
Jan 31 02:32:28 np0005603621 podman[159711]: 2026-01-31 07:32:28.685013883 +0000 UTC m=+0.674299767 container init 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + sudo -E kolla_set_configs
Jan 31 02:32:28 np0005603621 podman[159711]: 2026-01-31 07:32:28.703843541 +0000 UTC m=+0.693129425 container start 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 02:32:28 np0005603621 edpm-start-podman-container[159711]: ovn_metadata_agent
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Validating config file
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Copying service configuration files
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Writing out command to execute
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 31 02:32:28 np0005603621 edpm-start-podman-container[159710]: Creating additional drop-in dependency for "ovn_metadata_agent" (1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52)
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: ++ cat /run_command
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + CMD=neutron-ovn-metadata-agent
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + ARGS=
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + sudo kolla_copy_cacerts
Jan 31 02:32:28 np0005603621 systemd[1]: Reloading.
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: Running command: 'neutron-ovn-metadata-agent'
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + [[ ! -n '' ]]
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + . kolla_extend_start
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + umask 0022
Jan 31 02:32:28 np0005603621 ovn_metadata_agent[159729]: + exec neutron-ovn-metadata-agent
Jan 31 02:32:28 np0005603621 podman[159736]: 2026-01-31 07:32:28.781513859 +0000 UTC m=+0.068396553 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 02:32:28 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:32:28 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:32:28 np0005603621 systemd[1]: Started ovn_metadata_agent container.
Jan 31 02:32:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:30 np0005603621 python3.9[159970]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 31 02:32:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.413 159734 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.414 159734 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.414 159734 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.414 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.414 159734 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.414 159734 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.415 159734 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.416 159734 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.417 159734 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.418 159734 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.419 159734 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.420 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.421 159734 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.422 159734 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.423 159734 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.424 159734 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.425 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.426 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.427 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.428 159734 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.429 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.430 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.431 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.432 159734 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.433 159734 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.434 159734 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.435 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.436 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.437 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.438 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.439 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.440 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.441 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.442 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.443 159734 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.444 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.445 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.446 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.447 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.448 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.448 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.448 159734 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.448 159734 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.464 159734 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.465 159734 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.465 159734 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.465 159734 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.465 159734 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.478 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 59a8b96c-18d5-4426-968c-99837b56953c (UUID: 59a8b96c-18d5-4426-968c-99837b56953c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.502 159734 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.502 159734 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.502 159734 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.502 159734 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.505 159734 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.511 159734 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.515 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '59a8b96c-18d5-4426-968c-99837b56953c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], external_ids={}, name=59a8b96c-18d5-4426-968c-99837b56953c, nb_cfg_timestamp=1769844687279, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.516 159734 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe29effff40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.517 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.517 159734 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.518 159734 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.518 159734 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.521 159734 DEBUG oslo_service.service [-] Started child 159995 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.523 159734 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp07osgxqf/privsep.sock']#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.525 159995 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1934556'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 31 02:32:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:30.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.561 159995 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.561 159995 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.562 159995 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.566 159995 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.574 159995 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 31 02:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.583 159995 INFO eventlet.wsgi.server [-] (159995) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 31 02:32:31 np0005603621 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.091 159734 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.092 159734 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp07osgxqf/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:30.998 160056 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.001 160056 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.003 160056 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.003 160056 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160056#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.095 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6c16b9fa-7669-40b6-9a6b-1517ccc8e69c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:31 np0005603621 python3.9[160131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.554 160056 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.554 160056 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:32:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:31.555 160056 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:32:31 np0005603621 python3.9[160258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844750.922772-1426-2349833192724/.source.yaml _original_basename=.xn88uqj2 follow=False checksum=87ad539680adb8db4e1be011e7c446590196a675 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.038 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e21eb617-790b-4c4a-a380-c1d401411219]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.040 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, column=external_ids, values=({'neutron:ovn-metadata-id': '954b69e7-b9d1-58ac-9687-bc424bd8356f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.065 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.100 159734 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.100 159734 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.100 159734 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.100 159734 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.100 159734 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.100 159734 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.101 159734 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.102 159734 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.102 159734 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.102 159734 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.102 159734 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.102 159734 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.103 159734 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.104 159734 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.105 159734 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.106 159734 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.107 159734 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.107 159734 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.107 159734 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.107 159734 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.108 159734 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.109 159734 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.110 159734 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.111 159734 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.112 159734 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.112 159734 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.112 159734 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.112 159734 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.112 159734 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.113 159734 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.114 159734 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.115 159734 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.115 159734 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.115 159734 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.115 159734 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.115 159734 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.116 159734 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.117 159734 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.118 159734 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.118 159734 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.118 159734 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.118 159734 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.118 159734 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.118 159734 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.119 159734 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.120 159734 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.121 159734 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.122 159734 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.122 159734 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.122 159734 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.122 159734 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.123 159734 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.124 159734 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.125 159734 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.126 159734 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.126 159734 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.126 159734 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.126 159734 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.126 159734 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.127 159734 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.127 159734 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.127 159734 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.127 159734 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.127 159734 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.127 159734 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.128 159734 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.129 159734 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.130 159734 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.130 159734 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.130 159734 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.130 159734 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.130 159734 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.130 159734 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.131 159734 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.132 159734 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.133 159734 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.133 159734 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.133 159734 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.133 159734 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.133 159734 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.134 159734 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.135 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.136 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.137 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.138 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.138 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.138 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.138 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.138 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.138 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.139 159734 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.140 159734 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:32:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:32:32.140 159734 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:32:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:32 np0005603621 systemd-logind[818]: Session 48 logged out. Waiting for processes to exit.
Jan 31 02:32:32 np0005603621 systemd[1]: session-48.scope: Deactivated successfully.
Jan 31 02:32:32 np0005603621 systemd[1]: session-48.scope: Consumed 45.624s CPU time.
Jan 31 02:32:32 np0005603621 systemd-logind[818]: Removed session 48.
Jan 31 02:32:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:32.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:34.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:36.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
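The radosgw "beast" access-log lines above follow a fixed layout. A minimal Python sketch for pulling out the client IP, HTTP status, and latency from such a line; the field names and layout are my own inference from the sample lines, not from radosgw documentation:

```python
import re

# Assumed layout (inferred from the log samples above):
#   beast: <req-ptr>: <ip> - <user> [<timestamp>] "<request>" <status> <bytes> ... latency=<sec>s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of fields from a beast access-log line, or None."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None

sample = ('beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous '
          '[31/Jan/2026:07:32:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.001000030s')
rec = parse_beast(sample)
print(rec["ip"], rec["status"], float(rec["latency"]))
```

Fed the full journal, a parser like this makes it easy to spot that these are periodic anonymous `HEAD /` health probes from 192.168.122.100 and .102, all returning 200 with sub-millisecond latency.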
Jan 31 02:32:37 np0005603621 systemd-logind[818]: New session 49 of user zuul.
Jan 31 02:32:37 np0005603621 systemd[1]: Started Session 49 of User zuul.
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:32:38
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'backups', 'volumes', '.rgw.root', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:32:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:38 np0005603621 python3.9[160439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:32:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:39 np0005603621 python3.9[160596]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:32:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:40.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:41 np0005603621 python3.9[160761]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:32:41 np0005603621 systemd[1]: Reloading.
Jan 31 02:32:41 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:32:41 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:32:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:42.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:42 np0005603621 python3.9[160948]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:32:42 np0005603621 network[160965]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:32:42 np0005603621 network[160966]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:32:42 np0005603621 network[160967]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:32:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:42.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:44.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:44.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:46.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:46.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:47 np0005603621 python3.9[161281]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:47 np0005603621 python3.9[161435]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:48.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:48 np0005603621 python3.9[161588]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:48.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:32:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
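The pg_autoscaler per-pool lines above also have a regular shape. A small sketch for extracting the pool name, space usage, bias, and pg targets from one of these lines; the group names are hypothetical, derived only from the visible format:

```python
import re

# Assumed format (from the pg_autoscaler lines above):
#   Pool '<name>' root_id <id> using <frac> of space, bias <b>,
#   pg target <t> quantized to <q> (current <c>)
POOL_RE = re.compile(
    r"Pool '(?P<pool>[^']+)' root_id (?P<root>-?\d+) "
    r"using (?P<used>[\d.e+-]+) of space, bias (?P<bias>[\d.]+), "
    r"pg target (?P<target>[\d.e+-]+) quantized to (?P<quantized>\d+) "
    r"\(current (?P<current>\d+)\)"
)

line = ("Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, "
        "bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)")
m = POOL_RE.search(line)
print(m.group("pool"), m.group("quantized"), m.group("current"))
```

Across this pass every pool's quantized value equals its current pg_num, which is consistent with the autoscaler deciding no resize was warranted.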
Jan 31 02:32:49 np0005603621 python3.9[161741]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:49 np0005603621 python3.9[161895]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:50.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:50 np0005603621 python3.9[162048]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:50.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:50 np0005603621 podman[162201]: 2026-01-31 07:32:50.970484161 +0000 UTC m=+0.100928099 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 02:32:51 np0005603621 python3.9[162202]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:32:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:52 np0005603621 python3.9[162382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:52.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:52 np0005603621 python3.9[162534]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:53 np0005603621 python3.9[162686]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:53 np0005603621 python3.9[162839]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:54.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:54.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:54 np0005603621 python3.9[162991]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:55 np0005603621 python3.9[163143]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:55 np0005603621 python3.9[163296]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:56.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:32:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:56.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:32:56 np0005603621 python3.9[163448]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:57 np0005603621 python3.9[163600]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:32:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:32:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:32:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:32:58 np0005603621 python3.9[163860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 02:32:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:32:58.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:32:58 np0005603621 python3.9[164036]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:32:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f60c79bc-0efe-4075-8d43-ac6c9fb45e2e does not exist
Jan 31 02:32:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ff31a9ab-b252-438f-9208-1361805b00c6 does not exist
Jan 31 02:32:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 55400466-0a05-40b2-acc0-2c3e6c04da77 does not exist
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:32:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:32:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:32:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:32:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:32:58.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:59.013512523 +0000 UTC m=+0.035878676 container create e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:32:59 np0005603621 systemd[1]: Started libpod-conmon-e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3.scope.
Jan 31 02:32:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:59.086615315 +0000 UTC m=+0.108981478 container init e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:58.994468628 +0000 UTC m=+0.016834791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:59.092601342 +0000 UTC m=+0.114967485 container start e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:32:59 np0005603621 exciting_sinoussi[164347]: 167 167
Jan 31 02:32:59 np0005603621 systemd[1]: libpod-e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3.scope: Deactivated successfully.
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:59.098505327 +0000 UTC m=+0.120871570 container attach e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:59.09926261 +0000 UTC m=+0.121628793 container died e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:32:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-be345ff805384f81f99114b9e08b812b01972cbc591d51d77c2c920524640051-merged.mount: Deactivated successfully.
Jan 31 02:32:59 np0005603621 podman[164344]: 2026-01-31 07:32:59.132128645 +0000 UTC m=+0.086132008 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:32:59 np0005603621 podman[164329]: 2026-01-31 07:32:59.138848685 +0000 UTC m=+0.161214828 container remove e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:32:59 np0005603621 systemd[1]: libpod-conmon-e8e32e0fd3d6dda8c2f843ef07a1f8d866435da14f9528db6bb466ac86c59de3.scope: Deactivated successfully.
Jan 31 02:32:59 np0005603621 python3.9[164300]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:32:59 np0005603621 podman[164410]: 2026-01-31 07:32:59.259342434 +0000 UTC m=+0.036054392 container create 9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:32:59 np0005603621 systemd[1]: Started libpod-conmon-9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754.scope.
Jan 31 02:32:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:32:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e90aef21c68e1a6c70dcc3c0ff876e7c85ddbf13a2bdee5a9bea3e991577e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e90aef21c68e1a6c70dcc3c0ff876e7c85ddbf13a2bdee5a9bea3e991577e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e90aef21c68e1a6c70dcc3c0ff876e7c85ddbf13a2bdee5a9bea3e991577e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e90aef21c68e1a6c70dcc3c0ff876e7c85ddbf13a2bdee5a9bea3e991577e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e90aef21c68e1a6c70dcc3c0ff876e7c85ddbf13a2bdee5a9bea3e991577e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:32:59 np0005603621 podman[164410]: 2026-01-31 07:32:59.32253192 +0000 UTC m=+0.099243868 container init 9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:32:59 np0005603621 podman[164410]: 2026-01-31 07:32:59.328544999 +0000 UTC m=+0.105256927 container start 9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:32:59 np0005603621 podman[164410]: 2026-01-31 07:32:59.334153965 +0000 UTC m=+0.110865893 container attach 9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:32:59 np0005603621 podman[164410]: 2026-01-31 07:32:59.242936927 +0000 UTC m=+0.019648875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:32:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:32:59 np0005603621 python3.9[164560]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:33:00 np0005603621 hopeful_aryabhata[164454]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:33:00 np0005603621 hopeful_aryabhata[164454]: --> relative data size: 1.0
Jan 31 02:33:00 np0005603621 hopeful_aryabhata[164454]: --> All data devices are unavailable
Jan 31 02:33:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:00 np0005603621 systemd[1]: libpod-9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754.scope: Deactivated successfully.
Jan 31 02:33:00 np0005603621 podman[164410]: 2026-01-31 07:33:00.04409766 +0000 UTC m=+0.820809608 container died 9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:33:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-68e90aef21c68e1a6c70dcc3c0ff876e7c85ddbf13a2bdee5a9bea3e991577e4-merged.mount: Deactivated successfully.
Jan 31 02:33:00 np0005603621 podman[164410]: 2026-01-31 07:33:00.094178747 +0000 UTC m=+0.870890675 container remove 9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:33:00 np0005603621 systemd[1]: libpod-conmon-9e324407b7b2aa2c35c79458a99ff9cb46d472e0cff1a9531fc4e2b5569fd754.scope: Deactivated successfully.
Jan 31 02:33:00 np0005603621 python3.9[164722]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:33:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:00.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.550934852 +0000 UTC m=+0.036713431 container create 7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_thompson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:33:00 np0005603621 systemd[1]: Started libpod-conmon-7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443.scope.
Jan 31 02:33:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:00.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.613121778 +0000 UTC m=+0.098900367 container init 7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.61891299 +0000 UTC m=+0.104691589 container start 7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_thompson, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.622309341 +0000 UTC m=+0.108087970 container attach 7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_thompson, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:33:00 np0005603621 festive_thompson[164915]: 167 167
Jan 31 02:33:00 np0005603621 systemd[1]: libpod-7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443.scope: Deactivated successfully.
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.623412814 +0000 UTC m=+0.109191403 container died 7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_thompson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.537249276 +0000 UTC m=+0.023027875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:33:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-180382ca0e7ab7f77e76abe304dc01ff8fd51b1b2381695f11df8f2d13f18ebc-merged.mount: Deactivated successfully.
Jan 31 02:33:00 np0005603621 podman[164899]: 2026-01-31 07:33:00.662046022 +0000 UTC m=+0.147824611 container remove 7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_thompson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:33:00 np0005603621 systemd[1]: libpod-conmon-7128a5f53ce41f03391f4d40168e1e53f54943b751503cf4756d253b7b253443.scope: Deactivated successfully.
Jan 31 02:33:00 np0005603621 podman[164938]: 2026-01-31 07:33:00.800617486 +0000 UTC m=+0.038440612 container create d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:33:00 np0005603621 systemd[1]: Started libpod-conmon-d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7.scope.
Jan 31 02:33:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:33:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994ffcee41a4120367256f20ddc5c2a33def6833bb1ce935bc170d4ca8a9f6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994ffcee41a4120367256f20ddc5c2a33def6833bb1ce935bc170d4ca8a9f6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994ffcee41a4120367256f20ddc5c2a33def6833bb1ce935bc170d4ca8a9f6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994ffcee41a4120367256f20ddc5c2a33def6833bb1ce935bc170d4ca8a9f6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:00 np0005603621 podman[164938]: 2026-01-31 07:33:00.869591945 +0000 UTC m=+0.107415091 container init d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:33:00 np0005603621 podman[164938]: 2026-01-31 07:33:00.875280324 +0000 UTC m=+0.113103460 container start d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:33:00 np0005603621 podman[164938]: 2026-01-31 07:33:00.878329125 +0000 UTC m=+0.116152271 container attach d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:33:00 np0005603621 podman[164938]: 2026-01-31 07:33:00.78522361 +0000 UTC m=+0.023046776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]: {
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:    "0": [
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:        {
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "devices": [
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "/dev/loop3"
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            ],
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "lv_name": "ceph_lv0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "lv_size": "7511998464",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "name": "ceph_lv0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "tags": {
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.cluster_name": "ceph",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.crush_device_class": "",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.encrypted": "0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.osd_id": "0",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.type": "block",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:                "ceph.vdo": "0"
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            },
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "type": "block",
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:            "vg_name": "ceph_vg0"
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:        }
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]:    ]
Jan 31 02:33:01 np0005603621 distracted_agnesi[164955]: }
Jan 31 02:33:01 np0005603621 systemd[1]: libpod-d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7.scope: Deactivated successfully.
Jan 31 02:33:01 np0005603621 podman[164938]: 2026-01-31 07:33:01.605879812 +0000 UTC m=+0.843702988 container died d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:33:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3994ffcee41a4120367256f20ddc5c2a33def6833bb1ce935bc170d4ca8a9f6e-merged.mount: Deactivated successfully.
Jan 31 02:33:01 np0005603621 podman[164938]: 2026-01-31 07:33:01.686888628 +0000 UTC m=+0.924711764 container remove d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:33:01 np0005603621 systemd[1]: libpod-conmon-d6cf73974209fa56fca3885e8ce129d7205635433292b888be5e7cc072af64b7.scope: Deactivated successfully.
Jan 31 02:33:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:02 np0005603621 python3.9[165205]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.159135782 +0000 UTC m=+0.035076422 container create c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:33:02 np0005603621 systemd[1]: Started libpod-conmon-c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848.scope.
Jan 31 02:33:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.237725396 +0000 UTC m=+0.113666056 container init c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_almeida, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.14289624 +0000 UTC m=+0.018836900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.242803867 +0000 UTC m=+0.118744507 container start c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_almeida, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.245694063 +0000 UTC m=+0.121634733 container attach c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:33:02 np0005603621 clever_almeida[165264]: 167 167
Jan 31 02:33:02 np0005603621 systemd[1]: libpod-c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848.scope: Deactivated successfully.
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.247583129 +0000 UTC m=+0.123523769 container died c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:33:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0cca646530ef3f82b4b68c540d7c37dd8f706b017c08b416ddd21cffec611f91-merged.mount: Deactivated successfully.
Jan 31 02:33:02 np0005603621 podman[165246]: 2026-01-31 07:33:02.276651612 +0000 UTC m=+0.152592252 container remove c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_almeida, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:33:02 np0005603621 systemd[1]: libpod-conmon-c0e572d8c567465fb66b1c1ac17d84bf64efb1ab8f87f822d99f46fee3d57848.scope: Deactivated successfully.
Jan 31 02:33:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:02.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:02 np0005603621 podman[165360]: 2026-01-31 07:33:02.401239553 +0000 UTC m=+0.042348969 container create 24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:33:02 np0005603621 systemd[1]: Started libpod-conmon-24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6.scope.
Jan 31 02:33:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:33:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda84c175f3f3c45f0a4610bd8a731084622beba1f9dea85be3f1244f0fb884b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda84c175f3f3c45f0a4610bd8a731084622beba1f9dea85be3f1244f0fb884b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda84c175f3f3c45f0a4610bd8a731084622beba1f9dea85be3f1244f0fb884b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eda84c175f3f3c45f0a4610bd8a731084622beba1f9dea85be3f1244f0fb884b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:33:02 np0005603621 podman[165360]: 2026-01-31 07:33:02.464056927 +0000 UTC m=+0.105166343 container init 24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:33:02 np0005603621 podman[165360]: 2026-01-31 07:33:02.468627254 +0000 UTC m=+0.109736660 container start 24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dijkstra, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:33:02 np0005603621 podman[165360]: 2026-01-31 07:33:02.47254986 +0000 UTC m=+0.113659276 container attach 24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dijkstra, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:33:02 np0005603621 podman[165360]: 2026-01-31 07:33:02.378454736 +0000 UTC m=+0.019564252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:33:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:02.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:02 np0005603621 python3.9[165458]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]: {
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:        "osd_id": 0,
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:        "type": "bluestore"
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]:    }
Jan 31 02:33:03 np0005603621 pedantic_dijkstra[165380]: }
Jan 31 02:33:03 np0005603621 systemd[1]: libpod-24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6.scope: Deactivated successfully.
Jan 31 02:33:03 np0005603621 podman[165360]: 2026-01-31 07:33:03.286501113 +0000 UTC m=+0.927610539 container died 24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:33:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eda84c175f3f3c45f0a4610bd8a731084622beba1f9dea85be3f1244f0fb884b-merged.mount: Deactivated successfully.
Jan 31 02:33:03 np0005603621 podman[165360]: 2026-01-31 07:33:03.334600691 +0000 UTC m=+0.975710107 container remove 24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_dijkstra, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:33:03 np0005603621 systemd[1]: libpod-conmon-24a5bb2824c79d4530c5eddd40c1812438ca94b3b8a2544e039a7ad94b786cf6.scope: Deactivated successfully.
Jan 31 02:33:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:33:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:33:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:33:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:33:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a407d2a0-81f2-4fa4-a80c-a3c5a72709e1 does not exist
Jan 31 02:33:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 155dd508-3ae5-466a-aae1-d5671eff26b7 does not exist
Jan 31 02:33:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bc383737-a94f-4496-91ab-0b229e8c6529 does not exist
Jan 31 02:33:03 np0005603621 python3.9[165639]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:33:03 np0005603621 systemd[1]: Reloading.
Jan 31 02:33:03 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:33:03 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:33:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:04.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:33:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:33:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:04.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:04 np0005603621 python3.9[165876]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:05 np0005603621 python3.9[166029]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:05 np0005603621 python3.9[166183]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:06 np0005603621 python3.9[166386]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:06.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:07 np0005603621 python3.9[166539]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:07 np0005603621 python3.9[166693]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:08 np0005603621 python3.9[166846]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:33:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:33:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:08.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:10.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:10.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:10 np0005603621 python3.9[167000]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 31 02:33:11 np0005603621 python3.9[167153]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:33:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:12.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:12 np0005603621 python3.9[167312]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:33:12 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:33:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:13 np0005603621 python3.9[167474]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:33:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:14.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:14.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:14 np0005603621 python3.9[167558]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:33:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:16.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:16.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:18.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 31 02:33:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:20.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 31 02:33:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:21 np0005603621 podman[167670]: 2026-01-31 07:33:21.545072168 +0000 UTC m=+0.097507408 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:33:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:22.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:22.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:24.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:24.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:26.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 31 02:33:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 31 02:33:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 31 02:33:27 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 02:33:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 31 02:33:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:28.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:29 np0005603621 podman[167834]: 2026-01-31 07:33:29.487730053 +0000 UTC m=+0.044727467 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Jan 31 02:33:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:30.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:33:30.450 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:33:30.450 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:33:30.450 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:33:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:30.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 39 op/s
Jan 31 02:33:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:32.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Jan 31 02:33:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 458 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 111 op/s
Jan 31 02:33:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:36.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:37 np0005603621 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:33:37 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 97 KiB/s rd, 0 B/s wr, 161 op/s
Jan 31 02:33:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:33:38
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root']
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:33:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 31 02:33:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:40.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:40.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 152 op/s
Jan 31 02:33:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:42.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 0 B/s wr, 121 op/s
Jan 31 02:33:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:44.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:44.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:45 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 31 02:33:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 0 B/s wr, 89 op/s
Jan 31 02:33:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:46.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:46.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:47 np0005603621 kernel: SELinux:  Converting 2780 SID table entries...
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:33:47 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 49 op/s
Jan 31 02:33:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:48.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:33:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:33:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:48.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:50.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:50.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:52.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:52 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 31 02:33:52 np0005603621 podman[167930]: 2026-01-31 07:33:52.526298343 +0000 UTC m=+0.076907342 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 02:33:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:54.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:33:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:54.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:33:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:56.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:33:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:33:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:33:58.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:33:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:33:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:33:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:33:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:00.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:00 np0005603621 podman[169205]: 2026-01-31 07:34:00.473650956 +0000 UTC m=+0.036936628 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 02:34:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:02.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:02.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:04.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:34:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 83606194-6c7e-4335-a284-2c89452ee05f does not exist
Jan 31 02:34:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a1e23e7e-3bd8-4f34-9151-57b4ae20f2b0 does not exist
Jan 31 02:34:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9d5c0565-4361-4571-a75b-ed65ea06821c does not exist
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:34:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:04.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:04 np0005603621 podman[174040]: 2026-01-31 07:34:04.900333652 +0000 UTC m=+0.042841985 container create 1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:34:04 np0005603621 systemd[1]: Started libpod-conmon-1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda.scope.
Jan 31 02:34:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:34:04 np0005603621 podman[174040]: 2026-01-31 07:34:04.969648123 +0000 UTC m=+0.112156476 container init 1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chandrasekhar, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:34:04 np0005603621 podman[174040]: 2026-01-31 07:34:04.876985914 +0000 UTC m=+0.019494277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:34:04 np0005603621 podman[174040]: 2026-01-31 07:34:04.976247113 +0000 UTC m=+0.118755446 container start 1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chandrasekhar, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:34:04 np0005603621 podman[174040]: 2026-01-31 07:34:04.979443274 +0000 UTC m=+0.121951607 container attach 1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:34:04 np0005603621 zealous_chandrasekhar[174166]: 167 167
Jan 31 02:34:04 np0005603621 systemd[1]: libpod-1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda.scope: Deactivated successfully.
Jan 31 02:34:04 np0005603621 podman[174040]: 2026-01-31 07:34:04.981651583 +0000 UTC m=+0.124159916 container died 1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:34:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dadeec2dcd2ee35df6c0c6eaefaa4e92054ce2283e8ca7f10f18aa4767cce392-merged.mount: Deactivated successfully.
Jan 31 02:34:05 np0005603621 podman[174040]: 2026-01-31 07:34:05.020945206 +0000 UTC m=+0.163453539 container remove 1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:34:05 np0005603621 systemd[1]: libpod-conmon-1720cc65ef6d496948287e32b0e1291deb7b564d3c40c0c65e6c58d8c6072fda.scope: Deactivated successfully.
Jan 31 02:34:05 np0005603621 podman[174332]: 2026-01-31 07:34:05.137014967 +0000 UTC m=+0.036364981 container create 35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:34:05 np0005603621 systemd[1]: Started libpod-conmon-35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a.scope.
Jan 31 02:34:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:34:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683275758c03973389b8fa892eb06b2eb7a9b3fdd33f81d615cf6369c843e5bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683275758c03973389b8fa892eb06b2eb7a9b3fdd33f81d615cf6369c843e5bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683275758c03973389b8fa892eb06b2eb7a9b3fdd33f81d615cf6369c843e5bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683275758c03973389b8fa892eb06b2eb7a9b3fdd33f81d615cf6369c843e5bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683275758c03973389b8fa892eb06b2eb7a9b3fdd33f81d615cf6369c843e5bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:05 np0005603621 podman[174332]: 2026-01-31 07:34:05.123452738 +0000 UTC m=+0.022802772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:34:05 np0005603621 podman[174332]: 2026-01-31 07:34:05.248406639 +0000 UTC m=+0.147756673 container init 35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:34:05 np0005603621 podman[174332]: 2026-01-31 07:34:05.254362407 +0000 UTC m=+0.153712431 container start 35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:34:05 np0005603621 podman[174332]: 2026-01-31 07:34:05.262615168 +0000 UTC m=+0.161965182 container attach 35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:34:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:34:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:34:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:34:05 np0005603621 flamboyant_ardinghelli[174452]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:34:05 np0005603621 flamboyant_ardinghelli[174452]: --> relative data size: 1.0
Jan 31 02:34:05 np0005603621 flamboyant_ardinghelli[174452]: --> All data devices are unavailable
Jan 31 02:34:05 np0005603621 systemd[1]: libpod-35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a.scope: Deactivated successfully.
Jan 31 02:34:05 np0005603621 podman[174332]: 2026-01-31 07:34:05.998730635 +0000 UTC m=+0.898080649 container died 35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:34:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-683275758c03973389b8fa892eb06b2eb7a9b3fdd33f81d615cf6369c843e5bd-merged.mount: Deactivated successfully.
Jan 31 02:34:06 np0005603621 podman[174332]: 2026-01-31 07:34:06.048127006 +0000 UTC m=+0.947477030 container remove 35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:34:06 np0005603621 systemd[1]: libpod-conmon-35320c153f1bc33e503f229baccc038e802593796c6f4c8d4cc3aa834873a09a.scope: Deactivated successfully.
Jan 31 02:34:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:06.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.504638412 +0000 UTC m=+0.037044973 container create 977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:34:06 np0005603621 systemd[1]: Started libpod-conmon-977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb.scope.
Jan 31 02:34:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.563526044 +0000 UTC m=+0.095932645 container init 977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.570287268 +0000 UTC m=+0.102693839 container start 977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.573292463 +0000 UTC m=+0.105699034 container attach 977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:34:06 np0005603621 sleepy_rhodes[176151]: 167 167
Jan 31 02:34:06 np0005603621 systemd[1]: libpod-977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb.scope: Deactivated successfully.
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.578158627 +0000 UTC m=+0.110565218 container died 977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.486401775 +0000 UTC m=+0.018808376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:34:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2af44ffa378adb2006740984d7c69ed5b911087a70965619fdd2d7806b7fe484-merged.mount: Deactivated successfully.
Jan 31 02:34:06 np0005603621 podman[176061]: 2026-01-31 07:34:06.611854792 +0000 UTC m=+0.144261373 container remove 977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:34:06 np0005603621 systemd[1]: libpod-conmon-977c45243f437b49bba7d16dd909cbbef00cfd2a03866207555fe7bea1230bcb.scope: Deactivated successfully.
Jan 31 02:34:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:06.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:06 np0005603621 podman[176306]: 2026-01-31 07:34:06.736390839 +0000 UTC m=+0.043766334 container create de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 31 02:34:06 np0005603621 systemd[1]: Started libpod-conmon-de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7.scope.
Jan 31 02:34:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:34:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe7f15e76bec5cd6315e3f4f04947d53b87d87e3c85eeaf6d768f7feac20e46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe7f15e76bec5cd6315e3f4f04947d53b87d87e3c85eeaf6d768f7feac20e46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe7f15e76bec5cd6315e3f4f04947d53b87d87e3c85eeaf6d768f7feac20e46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebe7f15e76bec5cd6315e3f4f04947d53b87d87e3c85eeaf6d768f7feac20e46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:06 np0005603621 podman[176306]: 2026-01-31 07:34:06.712139773 +0000 UTC m=+0.019515318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:34:06 np0005603621 podman[176306]: 2026-01-31 07:34:06.821108399 +0000 UTC m=+0.128483954 container init de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lovelace, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:34:06 np0005603621 podman[176306]: 2026-01-31 07:34:06.825926051 +0000 UTC m=+0.133301556 container start de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lovelace, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:34:06 np0005603621 podman[176306]: 2026-01-31 07:34:06.828651567 +0000 UTC m=+0.136027082 container attach de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lovelace, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]: {
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:    "0": [
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:        {
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "devices": [
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "/dev/loop3"
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            ],
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "lv_name": "ceph_lv0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "lv_size": "7511998464",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "name": "ceph_lv0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "tags": {
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.cluster_name": "ceph",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.crush_device_class": "",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.encrypted": "0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.osd_id": "0",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.type": "block",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:                "ceph.vdo": "0"
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            },
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "type": "block",
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:            "vg_name": "ceph_vg0"
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:        }
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]:    ]
Jan 31 02:34:07 np0005603621 pensive_lovelace[176408]: }
Jan 31 02:34:07 np0005603621 systemd[1]: libpod-de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7.scope: Deactivated successfully.
Jan 31 02:34:07 np0005603621 podman[176306]: 2026-01-31 07:34:07.601044331 +0000 UTC m=+0.908419826 container died de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:34:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ebe7f15e76bec5cd6315e3f4f04947d53b87d87e3c85eeaf6d768f7feac20e46-merged.mount: Deactivated successfully.
Jan 31 02:34:07 np0005603621 podman[176306]: 2026-01-31 07:34:07.648970326 +0000 UTC m=+0.956345831 container remove de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lovelace, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:34:07 np0005603621 systemd[1]: libpod-conmon-de2b4dfe5ce2199b91422ccaf5a9d05668852b2d2ac20247446af616d19117f7.scope: Deactivated successfully.
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.164208878 +0000 UTC m=+0.035285966 container create ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:34:08 np0005603621 systemd[1]: Started libpod-conmon-ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e.scope.
Jan 31 02:34:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.236079702 +0000 UTC m=+0.107156820 container init ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.24108219 +0000 UTC m=+0.112159278 container start ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.243847747 +0000 UTC m=+0.114924835 container attach ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:34:08 np0005603621 magical_jemison[177817]: 167 167
Jan 31 02:34:08 np0005603621 systemd[1]: libpod-ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e.scope: Deactivated successfully.
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.246844312 +0000 UTC m=+0.117921420 container died ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.150847976 +0000 UTC m=+0.021925084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:34:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-39ded03be6d080708d1ac396cc83feec6b421271f0d892b85ced4c55170b70d8-merged.mount: Deactivated successfully.
Jan 31 02:34:08 np0005603621 podman[177716]: 2026-01-31 07:34:08.281743575 +0000 UTC m=+0.152820653 container remove ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jemison, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:34:08 np0005603621 systemd[1]: libpod-conmon-ce35abb4f78d7c8292c163eed6a77d3b2feeec4ee715a3b5fca5a33483c9ea5e.scope: Deactivated successfully.
Jan 31 02:34:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:08 np0005603621 podman[177991]: 2026-01-31 07:34:08.413264254 +0000 UTC m=+0.037249128 container create 0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:34:08 np0005603621 systemd[1]: Started libpod-conmon-0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35.scope.
Jan 31 02:34:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:34:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646be30d713d5d7d371360f7e6afecdb396c33b59399e3e759645812b3d77b90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646be30d713d5d7d371360f7e6afecdb396c33b59399e3e759645812b3d77b90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646be30d713d5d7d371360f7e6afecdb396c33b59399e3e759645812b3d77b90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646be30d713d5d7d371360f7e6afecdb396c33b59399e3e759645812b3d77b90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:34:08 np0005603621 podman[177991]: 2026-01-31 07:34:08.396820674 +0000 UTC m=+0.020805568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:34:08 np0005603621 podman[177991]: 2026-01-31 07:34:08.500716859 +0000 UTC m=+0.124701753 container init 0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:34:08 np0005603621 podman[177991]: 2026-01-31 07:34:08.506774961 +0000 UTC m=+0.130759835 container start 0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:34:08 np0005603621 podman[177991]: 2026-01-31 07:34:08.510265752 +0000 UTC m=+0.134250636 container attach 0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:34:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:08.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]: {
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:        "osd_id": 0,
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:        "type": "bluestore"
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]:    }
Jan 31 02:34:09 np0005603621 vigilant_chaplygin[178089]: }
Jan 31 02:34:09 np0005603621 systemd[1]: libpod-0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35.scope: Deactivated successfully.
Jan 31 02:34:09 np0005603621 podman[177991]: 2026-01-31 07:34:09.281405295 +0000 UTC m=+0.905390169 container died 0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:34:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-646be30d713d5d7d371360f7e6afecdb396c33b59399e3e759645812b3d77b90-merged.mount: Deactivated successfully.
Jan 31 02:34:09 np0005603621 podman[177991]: 2026-01-31 07:34:09.333838314 +0000 UTC m=+0.957823188 container remove 0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:34:09 np0005603621 systemd[1]: libpod-conmon-0c32c766372ba786bbf903cbc6d01050ee76f2cc974519b327afa69086d22f35.scope: Deactivated successfully.
Jan 31 02:34:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:34:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:34:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:34:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:34:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4bb072e7-fe2b-4946-8f51-bab26020c0ca does not exist
Jan 31 02:34:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 966af9e6-1780-4bab-a24b-c10d9070af01 does not exist
Jan 31 02:34:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1a4a083a-16d3-4b16-9661-551c6aeee594 does not exist
Jan 31 02:34:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:34:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:34:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:12.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:12.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:14.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:14.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:16.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:16.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:18.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:18.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:20.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:23 np0005603621 podman[185772]: 2026-01-31 07:34:23.588995584 +0000 UTC m=+0.136854758 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Jan 31 02:34:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:24.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:34:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:26.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:34:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:26.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:28.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:30 np0005603621 kernel: SELinux:  Converting 2781 SID table entries...
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability network_peer_controls=1
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability open_perms=1
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability extended_socket_class=1
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability always_check_network=0
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 31 02:34:30 np0005603621 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 31 02:34:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:30.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:34:30.451 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:34:30.452 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:34:30.452 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:34:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:30 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 31 02:34:31 np0005603621 podman[185859]: 2026-01-31 07:34:31.044561134 +0000 UTC m=+0.068712076 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:34:31 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:34:31 np0005603621 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Jan 31 02:34:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:32.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:32.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:34.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:36.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:36.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:34:38
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:34:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:38.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:34:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:38.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:39 np0005603621 systemd[1]: Stopping OpenSSH server daemon...
Jan 31 02:34:39 np0005603621 systemd[1]: sshd.service: Deactivated successfully.
Jan 31 02:34:39 np0005603621 systemd[1]: Stopped OpenSSH server daemon.
Jan 31 02:34:39 np0005603621 systemd[1]: sshd.service: Consumed 2.003s CPU time, read 32.0K from disk, written 0B to disk.
Jan 31 02:34:39 np0005603621 systemd[1]: Stopped target sshd-keygen.target.
Jan 31 02:34:39 np0005603621 systemd[1]: Stopping sshd-keygen.target...
Jan 31 02:34:39 np0005603621 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:34:39 np0005603621 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:34:39 np0005603621 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 31 02:34:39 np0005603621 systemd[1]: Reached target sshd-keygen.target.
Jan 31 02:34:39 np0005603621 systemd[1]: Starting OpenSSH server daemon...
Jan 31 02:34:39 np0005603621 systemd[1]: Started OpenSSH server daemon.
Jan 31 02:34:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:40 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:34:40 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:34:40 np0005603621 systemd[1]: Reloading.
Jan 31 02:34:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:40.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:40 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:34:40 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:34:40 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:34:40 np0005603621 auditd[697]: Audit daemon rotating log files
Jan 31 02:34:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:42.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:42.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:44.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:44.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:45 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:34:45 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:34:45 np0005603621 systemd[1]: man-db-cache-update.service: Consumed 6.358s CPU time.
Jan 31 02:34:45 np0005603621 systemd[1]: run-r35fc6e82b0654eea9a1aea6908b50fc7.service: Deactivated successfully.
Jan 31 02:34:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:34:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:48.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:34:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:34:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:48.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:34:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:34:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:50.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:34:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:52.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:34:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:52.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:54 np0005603621 podman[195464]: 2026-01-31 07:34:54.527359367 +0000 UTC m=+0.089321992 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:34:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:34:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:54.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:56.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:34:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:34:58.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:34:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:34:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:34:58.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:34:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:00.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:01 np0005603621 podman[195593]: 2026-01-31 07:35:01.137537489 +0000 UTC m=+0.043866035 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 02:35:01 np0005603621 python3.9[195640]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:35:01 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:01 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:01 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:02 np0005603621 python3.9[195831]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:35:02 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:02 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:02 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:02.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:03 np0005603621 python3.9[196021]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:35:03 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:03 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:03 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:04 np0005603621 python3.9[196212]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:35:04 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:04.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:04 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:04 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:04.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:05 np0005603621 python3.9[196402]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:05 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:05 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:05 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:06.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:06 np0005603621 python3.9[196593]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:06 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:06 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:06 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:07 np0005603621 python3.9[196833]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:07 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:07 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:07 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:35:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:08 np0005603621 python3.9[197023]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:09 np0005603621 python3.9[197178]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:09 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:09 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:09 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:10 np0005603621 python3.9[197469]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 31 02:35:10 np0005603621 systemd[1]: Reloading.
Jan 31 02:35:10 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:35:10 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:35:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:10 np0005603621 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 31 02:35:10 np0005603621 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 31 02:35:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:35:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:35:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:11 np0005603621 python3.9[197693]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4fd17fcd-4e8d-4deb-8cdd-333339bd2477 does not exist
Jan 31 02:35:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 704a8981-6969-4b55-a0a3-6e9ae34f8796 does not exist
Jan 31 02:35:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bb6d5f89-9e45-4efd-a9f7-212e7b2c263f does not exist
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:35:12 np0005603621 python3.9[197849]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.703463342 +0000 UTC m=+0.048386750 container create 0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:35:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:12.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:12 np0005603621 systemd[1]: Started libpod-conmon-0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189.scope.
Jan 31 02:35:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.674977756 +0000 UTC m=+0.019901184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.770153202 +0000 UTC m=+0.115076630 container init 0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wilson, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.774946075 +0000 UTC m=+0.119869473 container start 0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wilson, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.778862329 +0000 UTC m=+0.123785737 container attach 0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:35:12 np0005603621 systemd[1]: libpod-0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189.scope: Deactivated successfully.
Jan 31 02:35:12 np0005603621 compassionate_wilson[198131]: 167 167
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.780707858 +0000 UTC m=+0.125631256 container died 0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:12 np0005603621 conmon[198131]: conmon 0a732a034aca1c122172 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189.scope/container/memory.events
Jan 31 02:35:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d242db720dd0cad78b8b24c470050ef870318f37dcfa060c2950835ce8dff3b6-merged.mount: Deactivated successfully.
Jan 31 02:35:12 np0005603621 podman[198089]: 2026-01-31 07:35:12.819128659 +0000 UTC m=+0.164052067 container remove 0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:35:12 np0005603621 systemd[1]: libpod-conmon-0a732a034aca1c122172d8a5b500e20eca2c2d1dd47ffbe0b06c08d7f0d1d189.scope: Deactivated successfully.
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:35:12 np0005603621 podman[198184]: 2026-01-31 07:35:12.94970654 +0000 UTC m=+0.046511229 container create 94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:35:13 np0005603621 systemd[1]: Started libpod-conmon-94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76.scope.
Jan 31 02:35:13 np0005603621 podman[198184]: 2026-01-31 07:35:12.92859503 +0000 UTC m=+0.025399789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:35:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:35:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d7af7e93136df98710b739acd9be447eeaaf23d2612864b8f4e57ab0c4c84f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d7af7e93136df98710b739acd9be447eeaaf23d2612864b8f4e57ab0c4c84f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d7af7e93136df98710b739acd9be447eeaaf23d2612864b8f4e57ab0c4c84f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d7af7e93136df98710b739acd9be447eeaaf23d2612864b8f4e57ab0c4c84f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21d7af7e93136df98710b739acd9be447eeaaf23d2612864b8f4e57ab0c4c84f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:13 np0005603621 podman[198184]: 2026-01-31 07:35:13.055287296 +0000 UTC m=+0.152091985 container init 94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elgamal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:35:13 np0005603621 podman[198184]: 2026-01-31 07:35:13.067824925 +0000 UTC m=+0.164629634 container start 94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:35:13 np0005603621 podman[198184]: 2026-01-31 07:35:13.071868534 +0000 UTC m=+0.168673274 container attach 94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elgamal, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:35:13 np0005603621 python3.9[198172]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:13 np0005603621 python3.9[198361]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:13 np0005603621 sharp_elgamal[198201]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:35:13 np0005603621 sharp_elgamal[198201]: --> relative data size: 1.0
Jan 31 02:35:13 np0005603621 sharp_elgamal[198201]: --> All data devices are unavailable
Jan 31 02:35:13 np0005603621 systemd[1]: libpod-94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76.scope: Deactivated successfully.
Jan 31 02:35:13 np0005603621 podman[198184]: 2026-01-31 07:35:13.934910411 +0000 UTC m=+1.031715080 container died 94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:35:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-21d7af7e93136df98710b739acd9be447eeaaf23d2612864b8f4e57ab0c4c84f-merged.mount: Deactivated successfully.
Jan 31 02:35:13 np0005603621 podman[198184]: 2026-01-31 07:35:13.982833304 +0000 UTC m=+1.079638003 container remove 94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elgamal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:35:13 np0005603621 systemd[1]: libpod-conmon-94dc59f6ac9c2a1afe92831baf8368bd9f6a69957b2bb9436752370bee7abd76.scope: Deactivated successfully.
Jan 31 02:35:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:14.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.533054937 +0000 UTC m=+0.032253347 container create dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:14 np0005603621 systemd[1]: Started libpod-conmon-dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f.scope.
Jan 31 02:35:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.614487155 +0000 UTC m=+0.113685655 container init dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.520123045 +0000 UTC m=+0.019321455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:35:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.626123925 +0000 UTC m=+0.125322365 container start dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.629920116 +0000 UTC m=+0.129118606 container attach dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:14 np0005603621 goofy_solomon[198698]: 167 167
Jan 31 02:35:14 np0005603621 systemd[1]: libpod-dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f.scope: Deactivated successfully.
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.636435234 +0000 UTC m=+0.135633704 container died dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9754db4b47b5d3c1234b238ddc915daf92abaf0b3453cea083ea251c6df985a6-merged.mount: Deactivated successfully.
Jan 31 02:35:14 np0005603621 podman[198681]: 2026-01-31 07:35:14.681855107 +0000 UTC m=+0.181053547 container remove dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_solomon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:14 np0005603621 python3.9[198654]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:14 np0005603621 systemd[1]: libpod-conmon-dd435387bcf4461069afd11a3de92a7d69fbf1ca7516c8c81b9a1d31d63c898f.scope: Deactivated successfully.
Jan 31 02:35:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 02:35:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:14.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 02:35:14 np0005603621 podman[198724]: 2026-01-31 07:35:14.850988944 +0000 UTC m=+0.044560407 container create 93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:35:14 np0005603621 systemd[1]: Started libpod-conmon-93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26.scope.
Jan 31 02:35:14 np0005603621 podman[198724]: 2026-01-31 07:35:14.83230374 +0000 UTC m=+0.025875233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:35:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:35:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0a2978159531d5841a4f1e47d77943ce45f7e394e501129be4176a8173096a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0a2978159531d5841a4f1e47d77943ce45f7e394e501129be4176a8173096a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0a2978159531d5841a4f1e47d77943ce45f7e394e501129be4176a8173096a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f0a2978159531d5841a4f1e47d77943ce45f7e394e501129be4176a8173096a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:14 np0005603621 podman[198724]: 2026-01-31 07:35:14.951305463 +0000 UTC m=+0.144876976 container init 93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:35:14 np0005603621 podman[198724]: 2026-01-31 07:35:14.960369451 +0000 UTC m=+0.153940914 container start 93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:35:14 np0005603621 podman[198724]: 2026-01-31 07:35:14.964363509 +0000 UTC m=+0.157934972 container attach 93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:35:15 np0005603621 python3.9[198896]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]: {
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:    "0": [
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:        {
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "devices": [
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "/dev/loop3"
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            ],
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "lv_name": "ceph_lv0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "lv_size": "7511998464",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "name": "ceph_lv0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "tags": {
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.cluster_name": "ceph",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.crush_device_class": "",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.encrypted": "0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.osd_id": "0",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.type": "block",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:                "ceph.vdo": "0"
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            },
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "type": "block",
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:            "vg_name": "ceph_vg0"
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:        }
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]:    ]
Jan 31 02:35:15 np0005603621 brave_ritchie[198777]: }
Jan 31 02:35:15 np0005603621 systemd[1]: libpod-93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26.scope: Deactivated successfully.
Jan 31 02:35:15 np0005603621 podman[198724]: 2026-01-31 07:35:15.72093529 +0000 UTC m=+0.914506793 container died 93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:35:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8f0a2978159531d5841a4f1e47d77943ce45f7e394e501129be4176a8173096a-merged.mount: Deactivated successfully.
Jan 31 02:35:15 np0005603621 podman[198724]: 2026-01-31 07:35:15.784640346 +0000 UTC m=+0.978211809 container remove 93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:15 np0005603621 systemd[1]: libpod-conmon-93b54a64635d3b7324045a6df49bfec0b6fea1dc369ff88b7300d917ac775b26.scope: Deactivated successfully.
Jan 31 02:35:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.358218191 +0000 UTC m=+0.059915227 container create fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:16 np0005603621 systemd[1]: Started libpod-conmon-fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92.scope.
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.318230909 +0000 UTC m=+0.019927945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:35:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.431878992 +0000 UTC m=+0.133576028 container init fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.436797839 +0000 UTC m=+0.138494855 container start fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:35:16 np0005603621 happy_wescoff[199228]: 167 167
Jan 31 02:35:16 np0005603621 systemd[1]: libpod-fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92.scope: Deactivated successfully.
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.47331923 +0000 UTC m=+0.175016266 container attach fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.473817885 +0000 UTC m=+0.175514901 container died fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:35:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:16.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:16 np0005603621 python3.9[199176]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1a164e8e249e3a5fc9a65d30b5f3d9991f34f446d4d81dd051143046c224dc13-merged.mount: Deactivated successfully.
Jan 31 02:35:16 np0005603621 podman[199212]: 2026-01-31 07:35:16.72594601 +0000 UTC m=+0.427643066 container remove fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:35:16 np0005603621 systemd[1]: libpod-conmon-fdc3d37b950c6ec1489c5aeb45f8e4145ad9a0f69d9db74caf69f9756ea07b92.scope: Deactivated successfully.
Jan 31 02:35:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:16.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:16 np0005603621 podman[199332]: 2026-01-31 07:35:16.881774005 +0000 UTC m=+0.041486781 container create d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:35:16 np0005603621 systemd[1]: Started libpod-conmon-d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd.scope.
Jan 31 02:35:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:35:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e2a8e2984026f7a82cc5a3daba08226b60de9e7dd30871b10b0d069c517dda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e2a8e2984026f7a82cc5a3daba08226b60de9e7dd30871b10b0d069c517dda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e2a8e2984026f7a82cc5a3daba08226b60de9e7dd30871b10b0d069c517dda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0e2a8e2984026f7a82cc5a3daba08226b60de9e7dd30871b10b0d069c517dda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:35:16 np0005603621 podman[199332]: 2026-01-31 07:35:16.864676661 +0000 UTC m=+0.024389457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:35:17 np0005603621 podman[199332]: 2026-01-31 07:35:17.019323697 +0000 UTC m=+0.179036503 container init d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:35:17 np0005603621 podman[199332]: 2026-01-31 07:35:17.025024718 +0000 UTC m=+0.184737494 container start d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:35:17 np0005603621 podman[199332]: 2026-01-31 07:35:17.037708492 +0000 UTC m=+0.197421258 container attach d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:35:17 np0005603621 python3.9[199428]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]: {
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:        "osd_id": 0,
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:        "type": "bluestore"
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]:    }
Jan 31 02:35:17 np0005603621 lucid_merkle[199372]: }
Jan 31 02:35:17 np0005603621 systemd[1]: libpod-d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd.scope: Deactivated successfully.
Jan 31 02:35:17 np0005603621 podman[199332]: 2026-01-31 07:35:17.901814742 +0000 UTC m=+1.061527538 container died d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:35:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a0e2a8e2984026f7a82cc5a3daba08226b60de9e7dd30871b10b0d069c517dda-merged.mount: Deactivated successfully.
Jan 31 02:35:18 np0005603621 python3.9[199590]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:18 np0005603621 podman[199332]: 2026-01-31 07:35:18.111115507 +0000 UTC m=+1.270828303 container remove d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:35:18 np0005603621 systemd[1]: libpod-conmon-d7210d0e77e8d2bb8c0c4f74dc7645a3b7a03fd39a4b690a5211c9bd169757bd.scope: Deactivated successfully.
Jan 31 02:35:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:35:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:35:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d2d7ae2b-f670-4086-b0de-1f1b25d97272 does not exist
Jan 31 02:35:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0addefe6-31dc-4532-a7aa-dd0e07bd4d3a does not exist
Jan 31 02:35:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 74d4b412-b22b-4769-a334-60f60a7a6022 does not exist
Jan 31 02:35:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:18.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:18.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:18 np0005603621 python3.9[199819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:35:19 np0005603621 python3.9[199974]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.909660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844919909706, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1757, "num_deletes": 251, "total_data_size": 3282204, "memory_usage": 3330192, "flush_reason": "Manual Compaction"}
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844919988336, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 3225713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12287, "largest_seqno": 14043, "table_properties": {"data_size": 3217646, "index_size": 4946, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15085, "raw_average_key_size": 18, "raw_value_size": 3201662, "raw_average_value_size": 3962, "num_data_blocks": 221, "num_entries": 808, "num_filter_entries": 808, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844732, "oldest_key_time": 1769844732, "file_creation_time": 1769844919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 78729 microseconds, and 6031 cpu microseconds.
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.988390) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 3225713 bytes OK
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.988409) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.995359) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.995377) EVENT_LOG_v1 {"time_micros": 1769844919995371, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.995395) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3275018, prev total WAL file size 3275018, number of live WAL files 2.
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.996041) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(3150KB)], [29(8045KB)]
Jan 31 02:35:19 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844919996121, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 11464697, "oldest_snapshot_seqno": -1}
Jan 31 02:35:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4267 keys, 10960855 bytes, temperature: kUnknown
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844920177287, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 10960855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10927289, "index_size": 21808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10693, "raw_key_size": 103898, "raw_average_key_size": 24, "raw_value_size": 10845347, "raw_average_value_size": 2541, "num_data_blocks": 929, "num_entries": 4267, "num_filter_entries": 4267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769844919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.177711) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10960855 bytes
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.181257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.3 rd, 60.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.9 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(7.0) write-amplify(3.4) OK, records in: 4787, records dropped: 520 output_compression: NoCompression
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.181288) EVENT_LOG_v1 {"time_micros": 1769844920181274, "job": 12, "event": "compaction_finished", "compaction_time_micros": 181229, "compaction_time_cpu_micros": 18294, "output_level": 6, "num_output_files": 1, "total_output_size": 10960855, "num_input_records": 4787, "num_output_records": 4267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844920182104, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769844920183522, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:19.995913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.183625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.183630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.183631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.183633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:35:20 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:35:20.183635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:35:20 np0005603621 python3.9[200130]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:21 np0005603621 python3.9[200285]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:21 np0005603621 python3.9[200441]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 31 02:35:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:22.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:22.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:22 np0005603621 python3.9[200596]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:35:23 np0005603621 python3.9[200748]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:35:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:24 np0005603621 python3.9[200901]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:35:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:24 np0005603621 podman[201053]: 2026-01-31 07:35:24.638094995 +0000 UTC m=+0.076112591 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 02:35:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:24.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:24 np0005603621 python3.9[201054]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:25 np0005603621 python3.9[201229]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:35:25 np0005603621 python3.9[201382]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:35:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:26.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:26 np0005603621 python3.9[201532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:35:27 np0005603621 python3.9[201734]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:28 np0005603621 python3.9[201860]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844927.0029438-1646-136586398191957/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:28 np0005603621 python3.9[202012]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:29 np0005603621 python3.9[202137]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844928.4247885-1646-246875037564645/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:29 np0005603621 python3.9[202290]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:30 np0005603621 python3.9[202415]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844929.4562216-1646-204358579683925/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:35:30.452 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:35:30.453 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:35:30.453 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:35:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:30.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:30 np0005603621 python3.9[202567]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:31 np0005603621 podman[202664]: 2026-01-31 07:35:31.242350519 +0000 UTC m=+0.052186380 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:35:31 np0005603621 python3.9[202711]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844930.483779-1646-17776219324958/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:31 np0005603621 python3.9[202865]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:32 np0005603621 python3.9[202990]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844931.5589905-1646-128567989902626/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:32.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:33 np0005603621 python3.9[203142]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:33 np0005603621 python3.9[203268]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844932.593349-1646-220986739827848/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:34 np0005603621 python3.9[203420]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:34.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:34 np0005603621 python3.9[203543]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844933.7077906-1646-10863971384734/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:34.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:35 np0005603621 python3.9[203695]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:35 np0005603621 python3.9[203821]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769844934.7519796-1646-199194325620958/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:36.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:37 np0005603621 python3.9[203973]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 31 02:35:37 np0005603621 python3.9[204127]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:35:38
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.meta', 'images', 'vms', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:35:38 np0005603621 python3.9[204279]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:35:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:35:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:35:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:38.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:35:39 np0005603621 python3.9[204431]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:39 np0005603621 python3.9[204584]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:40 np0005603621 python3.9[204736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:40.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:40 np0005603621 python3.9[204888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:41 np0005603621 python3.9[205040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:42 np0005603621 python3.9[205193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:42.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:42 np0005603621 python3.9[205345]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:43 np0005603621 python3.9[205497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:43 np0005603621 python3.9[205650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:44 np0005603621 python3.9[205802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:45 np0005603621 python3.9[205954]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:45 np0005603621 python3.9[206106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:46 np0005603621 python3.9[206259]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:46.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:46 np0005603621 python3.9[206419]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844945.7647357-2309-184001557005139/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:47 np0005603621 python3.9[206584]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:48 np0005603621 python3.9[206708]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844946.969005-2309-236929697140445/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:48.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:48 np0005603621 python3.9[206860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:35:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:35:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:48.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:49 np0005603621 python3.9[206983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844948.1588502-2309-164153517543583/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:49 np0005603621 python3.9[207136]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:50 np0005603621 python3.9[207259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844949.308782-2309-26763807076557/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:50 np0005603621 python3.9[207411]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:50.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:51 np0005603621 python3.9[207534]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844950.371733-2309-208352325519933/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:51 np0005603621 python3.9[207687]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:52 np0005603621 python3.9[207810]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844951.3854187-2309-240860148847997/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:52 np0005603621 python3.9[207962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:53 np0005603621 python3.9[208085]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844952.484064-2309-77741799508151/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:54 np0005603621 python3.9[208238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:54.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:54 np0005603621 python3.9[208361]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844953.6070933-2309-42238739558094/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:54.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:35:55 np0005603621 podman[208485]: 2026-01-31 07:35:55.067298526 +0000 UTC m=+0.074011045 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:35:55 np0005603621 python3.9[208528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:55 np0005603621 python3.9[208661]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844954.7317214-2309-61817416180539/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:56 np0005603621 python3.9[208813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:35:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:35:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:35:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:56.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:35:56 np0005603621 python3.9[208936]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844955.909016-2309-230467658398824/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:57 np0005603621 python3.9[209088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:58 np0005603621 python3.9[209212]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844957.0860798-2309-264811237498622/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:35:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:35:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:35:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:35:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:35:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:35:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:35:58.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:35:58 np0005603621 python3.9[209364]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:35:59 np0005603621 python3.9[209487]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844958.338512-2309-163702007601887/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:35:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:00 np0005603621 python3.9[209640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:00.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:01 np0005603621 python3.9[209763]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844959.84726-2309-248231682343354/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:01 np0005603621 podman[209888]: 2026-01-31 07:36:01.496524564 +0000 UTC m=+0.049991634 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 02:36:01 np0005603621 python3.9[209936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:02 np0005603621 python3.9[210060]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844961.1986141-2309-11802026443502/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:02.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:02.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:03 np0005603621 python3.9[210210]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:03 np0005603621 python3.9[210366]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 31 02:36:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:04.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:04.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:06 np0005603621 dbus-broker-launch[808]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 31 02:36:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:06.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:06.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:06 np0005603621 python3.9[210524]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:07 np0005603621 python3.9[210726]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:07 np0005603621 python3.9[210879]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:36:08 np0005603621 python3.9[211031]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:08.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:09 np0005603621 python3.9[211183]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:10 np0005603621 python3.9[211336]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:10 np0005603621 python3.9[211488]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:11 np0005603621 python3.9[211640]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:11 np0005603621 python3.9[211793]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:12 np0005603621 python3.9[211945]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:13 np0005603621 python3.9[212097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:36:13 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:13 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:13 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:13 np0005603621 systemd[1]: Starting libvirt logging daemon socket...
Jan 31 02:36:13 np0005603621 systemd[1]: Listening on libvirt logging daemon socket.
Jan 31 02:36:13 np0005603621 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 31 02:36:13 np0005603621 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 31 02:36:13 np0005603621 systemd[1]: Starting libvirt logging daemon...
Jan 31 02:36:13 np0005603621 systemd[1]: Started libvirt logging daemon.
Jan 31 02:36:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:14 np0005603621 python3.9[212292]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:36:14 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:14 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:14 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:14 np0005603621 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 31 02:36:14 np0005603621 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 31 02:36:14 np0005603621 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 31 02:36:14 np0005603621 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 31 02:36:14 np0005603621 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 31 02:36:14 np0005603621 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 31 02:36:14 np0005603621 systemd[1]: Starting libvirt nodedev daemon...
Jan 31 02:36:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:14.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:14 np0005603621 systemd[1]: Started libvirt nodedev daemon.
Jan 31 02:36:15 np0005603621 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 31 02:36:15 np0005603621 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 31 02:36:15 np0005603621 python3.9[212509]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:36:15 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:15 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:15 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:15 np0005603621 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 31 02:36:15 np0005603621 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 31 02:36:15 np0005603621 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 31 02:36:15 np0005603621 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 31 02:36:15 np0005603621 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 31 02:36:15 np0005603621 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 31 02:36:15 np0005603621 systemd[1]: Starting libvirt proxy daemon...
Jan 31 02:36:15 np0005603621 systemd[1]: Started libvirt proxy daemon.
Jan 31 02:36:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:16.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:16 np0005603621 setroubleshoot[212433]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4b97d5d1-6c9f-454b-a9bb-19a9cbc8e231
Jan 31 02:36:16 np0005603621 setroubleshoot[212433]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 31 02:36:16 np0005603621 setroubleshoot[212433]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 4b97d5d1-6c9f-454b-a9bb-19a9cbc8e231
Jan 31 02:36:16 np0005603621 setroubleshoot[212433]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 31 02:36:16 np0005603621 python3.9[212729]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:36:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:16.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:16 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:16 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:16 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:17 np0005603621 systemd[1]: Listening on libvirt locking daemon socket.
Jan 31 02:36:17 np0005603621 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 31 02:36:17 np0005603621 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 31 02:36:17 np0005603621 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 31 02:36:17 np0005603621 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 31 02:36:17 np0005603621 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 31 02:36:17 np0005603621 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 31 02:36:17 np0005603621 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 31 02:36:17 np0005603621 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 31 02:36:17 np0005603621 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 31 02:36:17 np0005603621 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 02:36:17 np0005603621 systemd[1]: Started libvirt QEMU daemon.
Jan 31 02:36:17 np0005603621 python3.9[212945]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:36:17 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:18 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:18 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:18 np0005603621 systemd[1]: Starting libvirt secret daemon socket...
Jan 31 02:36:18 np0005603621 systemd[1]: Listening on libvirt secret daemon socket.
Jan 31 02:36:18 np0005603621 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 31 02:36:18 np0005603621 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 31 02:36:18 np0005603621 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 31 02:36:18 np0005603621 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 31 02:36:18 np0005603621 systemd[1]: Starting libvirt secret daemon...
Jan 31 02:36:18 np0005603621 systemd[1]: Started libvirt secret daemon.
Jan 31 02:36:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:18.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:18.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:19 np0005603621 python3.9[213256]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:19 np0005603621 podman[213351]: 2026-01-31 07:36:19.497985779 +0000 UTC m=+0.191657768 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:36:19 np0005603621 podman[213351]: 2026-01-31 07:36:19.593176153 +0000 UTC m=+0.286848102 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:36:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:19 np0005603621 python3.9[213510]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:36:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:20 np0005603621 podman[213756]: 2026-01-31 07:36:20.427064152 +0000 UTC m=+0.108379421 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:36:20 np0005603621 podman[213756]: 2026-01-31 07:36:20.475158675 +0000 UTC m=+0.156473924 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:36:20 np0005603621 python3.9[213797]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:20 np0005603621 podman[213882]: 2026-01-31 07:36:20.756562684 +0000 UTC m=+0.066422344 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 31 02:36:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:20.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:20 np0005603621 podman[213882]: 2026-01-31 07:36:20.805480393 +0000 UTC m=+0.115340013 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.component=keepalived-container, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, description=keepalived for Ceph, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 02:36:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:36:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:36:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:21 np0005603621 python3.9[214138]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 18fc5291-db82-417d-9755-10f580972c48 does not exist
Jan 31 02:36:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3ede002b-075c-46d2-b587-edc2be65585d does not exist
Jan 31 02:36:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 085ae60b-2245-40b1-b44f-78c3980c6389 does not exist
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:36:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:36:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:36:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:22 np0005603621 python3.9[214421]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:22 np0005603621 podman[214465]: 2026-01-31 07:36:22.303728045 +0000 UTC m=+0.021510732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:22 np0005603621 podman[214465]: 2026-01-31 07:36:22.401474129 +0000 UTC m=+0.119256846 container create ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:36:22 np0005603621 systemd[1]: Started libpod-conmon-ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4.scope.
Jan 31 02:36:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:22.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:36:22 np0005603621 podman[214465]: 2026-01-31 07:36:22.738032244 +0000 UTC m=+0.455814951 container init ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:36:22 np0005603621 podman[214465]: 2026-01-31 07:36:22.74451877 +0000 UTC m=+0.462301437 container start ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:36:22 np0005603621 exciting_mccarthy[214551]: 167 167
Jan 31 02:36:22 np0005603621 systemd[1]: libpod-ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4.scope: Deactivated successfully.
Jan 31 02:36:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:22.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:22 np0005603621 python3.9[214604]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844981.691762-3383-246680159820003/.source.xml follow=False _original_basename=secret.xml.j2 checksum=450e5279e3f961806683176060af91f2a100b4e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:22 np0005603621 podman[214465]: 2026-01-31 07:36:22.985154218 +0000 UTC m=+0.702936935 container attach ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:36:22 np0005603621 podman[214465]: 2026-01-31 07:36:22.986077246 +0000 UTC m=+0.703859933 container died ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:36:23 np0005603621 python3.9[214769]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ff2dc2547b1ddf2001ee7c99ca78cb86d66b71febd59cdda515bd855197f8f22-merged.mount: Deactivated successfully.
Jan 31 02:36:24 np0005603621 podman[214465]: 2026-01-31 07:36:24.034942903 +0000 UTC m=+1.752725580 container remove ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:36:24 np0005603621 systemd[1]: libpod-conmon-ded49a79f1a843d09973bae9ed78a3fc68d1bda956f7cb6a73b053ced4bbb5d4.scope: Deactivated successfully.
Jan 31 02:36:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:24 np0005603621 podman[214839]: 2026-01-31 07:36:24.136198748 +0000 UTC m=+0.027235354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:24 np0005603621 podman[214839]: 2026-01-31 07:36:24.358055962 +0000 UTC m=+0.249092518 container create f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:36:24 np0005603621 python3.9[214955]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:24 np0005603621 systemd[1]: Started libpod-conmon-f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da.scope.
Jan 31 02:36:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:24.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:36:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193cd9956c7457008a06bf014b5e2b379be78c165b5d69b73d51f1a26b5a4e21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193cd9956c7457008a06bf014b5e2b379be78c165b5d69b73d51f1a26b5a4e21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193cd9956c7457008a06bf014b5e2b379be78c165b5d69b73d51f1a26b5a4e21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193cd9956c7457008a06bf014b5e2b379be78c165b5d69b73d51f1a26b5a4e21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/193cd9956c7457008a06bf014b5e2b379be78c165b5d69b73d51f1a26b5a4e21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:24.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:24 np0005603621 podman[214839]: 2026-01-31 07:36:24.865766404 +0000 UTC m=+0.756803020 container init f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:36:24 np0005603621 podman[214839]: 2026-01-31 07:36:24.877062201 +0000 UTC m=+0.768098767 container start f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:36:25 np0005603621 podman[214839]: 2026-01-31 07:36:25.029634251 +0000 UTC m=+0.920670907 container attach f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:36:25 np0005603621 podman[215141]: 2026-01-31 07:36:25.534719411 +0000 UTC m=+0.093797540 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:36:25 np0005603621 epic_cohen[214958]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:36:25 np0005603621 epic_cohen[214958]: --> relative data size: 1.0
Jan 31 02:36:25 np0005603621 epic_cohen[214958]: --> All data devices are unavailable
Jan 31 02:36:25 np0005603621 systemd[1]: libpod-f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da.scope: Deactivated successfully.
Jan 31 02:36:25 np0005603621 podman[214839]: 2026-01-31 07:36:25.669726336 +0000 UTC m=+1.560762932 container died f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:36:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-193cd9956c7457008a06bf014b5e2b379be78c165b5d69b73d51f1a26b5a4e21-merged.mount: Deactivated successfully.
Jan 31 02:36:25 np0005603621 podman[214839]: 2026-01-31 07:36:25.767868883 +0000 UTC m=+1.658905449 container remove f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_cohen, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:36:25 np0005603621 systemd[1]: libpod-conmon-f7702f2cc11b2e9148e54356061e24a0d748264e1645695488d2c4d1612a14da.scope: Deactivated successfully.
Jan 31 02:36:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:26 np0005603621 podman[215518]: 2026-01-31 07:36:26.470662792 +0000 UTC m=+0.102793726 container create 3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:36:26 np0005603621 podman[215518]: 2026-01-31 07:36:26.400119869 +0000 UTC m=+0.032250903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:26 np0005603621 systemd[1]: Started libpod-conmon-3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0.scope.
Jan 31 02:36:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:26.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:36:26 np0005603621 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 31 02:36:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:26.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:26 np0005603621 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 31 02:36:26 np0005603621 podman[215518]: 2026-01-31 07:36:26.865709139 +0000 UTC m=+0.497840123 container init 3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:36:26 np0005603621 podman[215518]: 2026-01-31 07:36:26.873174505 +0000 UTC m=+0.505305439 container start 3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:36:26 np0005603621 vigorous_einstein[215630]: 167 167
Jan 31 02:36:26 np0005603621 systemd[1]: libpod-3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0.scope: Deactivated successfully.
Jan 31 02:36:26 np0005603621 python3.9[215635]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:26 np0005603621 podman[215518]: 2026-01-31 07:36:26.947522658 +0000 UTC m=+0.579653612 container attach 3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:36:26 np0005603621 podman[215518]: 2026-01-31 07:36:26.948310453 +0000 UTC m=+0.580441397 container died 3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:36:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ee454b72a87f19d7e688ad2176c587cdfff7cc5e548331a7f62543a04daabf56-merged.mount: Deactivated successfully.
Jan 31 02:36:27 np0005603621 podman[215518]: 2026-01-31 07:36:27.305720058 +0000 UTC m=+0.937850982 container remove 3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:36:27 np0005603621 systemd[1]: libpod-conmon-3d22ca2061aafc9a0b54acd7c29f8980712a2081a33853ae745f1610274fbbb0.scope: Deactivated successfully.
Jan 31 02:36:27 np0005603621 podman[215861]: 2026-01-31 07:36:27.478407466 +0000 UTC m=+0.042530578 container create 1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:36:27 np0005603621 podman[215861]: 2026-01-31 07:36:27.459798836 +0000 UTC m=+0.023921968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:27 np0005603621 systemd[1]: Started libpod-conmon-1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c.scope.
Jan 31 02:36:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:36:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2fbad868f17259653f595920299498bb5ce01aac821eee37cbad70356a0ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2fbad868f17259653f595920299498bb5ce01aac821eee37cbad70356a0ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2fbad868f17259653f595920299498bb5ce01aac821eee37cbad70356a0ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef2fbad868f17259653f595920299498bb5ce01aac821eee37cbad70356a0ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:27 np0005603621 python3.9[215854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:27 np0005603621 podman[215861]: 2026-01-31 07:36:27.619921285 +0000 UTC m=+0.184044417 container init 1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:36:27 np0005603621 podman[215861]: 2026-01-31 07:36:27.624332555 +0000 UTC m=+0.188455667 container start 1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:36:27 np0005603621 podman[215861]: 2026-01-31 07:36:27.630759858 +0000 UTC m=+0.194883020 container attach 1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:36:28 np0005603621 python3.9[216006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769844987.1161354-3548-222022995636666/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]: {
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:    "0": [
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:        {
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "devices": [
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "/dev/loop3"
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            ],
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "lv_name": "ceph_lv0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "lv_size": "7511998464",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "name": "ceph_lv0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "tags": {
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.cluster_name": "ceph",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.crush_device_class": "",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.encrypted": "0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.osd_id": "0",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.type": "block",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:                "ceph.vdo": "0"
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            },
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "type": "block",
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:            "vg_name": "ceph_vg0"
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:        }
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]:    ]
Jan 31 02:36:28 np0005603621 vibrant_cray[215879]: }
Jan 31 02:36:28 np0005603621 podman[215861]: 2026-01-31 07:36:28.387453524 +0000 UTC m=+0.951576646 container died 1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:36:28 np0005603621 systemd[1]: libpod-1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c.scope: Deactivated successfully.
Jan 31 02:36:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:28.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ef2fbad868f17259653f595920299498bb5ce01aac821eee37cbad70356a0ec5-merged.mount: Deactivated successfully.
Jan 31 02:36:28 np0005603621 podman[215861]: 2026-01-31 07:36:28.745920143 +0000 UTC m=+1.310043255 container remove 1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_cray, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:36:28 np0005603621 systemd[1]: libpod-conmon-1230de689260de69c5f69e75a24df084d366f7aa9c22ecbcdad95236399c861c.scope: Deactivated successfully.
Jan 31 02:36:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:28.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:28 np0005603621 python3.9[216173]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.293052054 +0000 UTC m=+0.064649127 container create d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.251080735 +0000 UTC m=+0.022677828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:29 np0005603621 systemd[1]: Started libpod-conmon-d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c.scope.
Jan 31 02:36:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.404144281 +0000 UTC m=+0.175741374 container init d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.408810758 +0000 UTC m=+0.180407831 container start d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:36:29 np0005603621 reverent_gould[216476]: 167 167
Jan 31 02:36:29 np0005603621 systemd[1]: libpod-d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c.scope: Deactivated successfully.
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.437154856 +0000 UTC m=+0.208751939 container attach d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.438037154 +0000 UTC m=+0.209634227 container died d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:36:29 np0005603621 python3.9[216480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-690e10807dbd90b3e34f95a1b3d88bf8ca34f430af17708ef9aaf1fc0353714e-merged.mount: Deactivated successfully.
Jan 31 02:36:29 np0005603621 podman[216386]: 2026-01-31 07:36:29.73512862 +0000 UTC m=+0.506725693 container remove d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:36:29 np0005603621 systemd[1]: libpod-conmon-d80dd2994d4eb43528a935b94cc3ccf002be67157fd103d714643098aec28b1c.scope: Deactivated successfully.
Jan 31 02:36:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:29 np0005603621 podman[216582]: 2026-01-31 07:36:29.8498119 +0000 UTC m=+0.036689982 container create e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:36:29 np0005603621 systemd[1]: Started libpod-conmon-e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e.scope.
Jan 31 02:36:29 np0005603621 podman[216582]: 2026-01-31 07:36:29.83150405 +0000 UTC m=+0.018382142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:36:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:36:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49995ae26988b42c87dc6d461cae8f868e6d079e37f10c16740296176925778/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49995ae26988b42c87dc6d461cae8f868e6d079e37f10c16740296176925778/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49995ae26988b42c87dc6d461cae8f868e6d079e37f10c16740296176925778/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f49995ae26988b42c87dc6d461cae8f868e6d079e37f10c16740296176925778/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:36:29 np0005603621 podman[216582]: 2026-01-31 07:36:29.956334743 +0000 UTC m=+0.143212835 container init e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:36:29 np0005603621 podman[216582]: 2026-01-31 07:36:29.961652891 +0000 UTC m=+0.148530963 container start e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:36:29 np0005603621 python3.9[216574]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:29 np0005603621 podman[216582]: 2026-01-31 07:36:29.971653617 +0000 UTC m=+0.158531709 container attach e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:36:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:36:30.453 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:36:30.454 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:36:30.454 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:36:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:30.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]: {
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:        "osd_id": 0,
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:        "type": "bluestore"
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]:    }
Jan 31 02:36:30 np0005603621 gallant_lovelace[216598]: }
Jan 31 02:36:30 np0005603621 systemd[1]: libpod-e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e.scope: Deactivated successfully.
Jan 31 02:36:30 np0005603621 conmon[216598]: conmon e1ebea2f5d326005754b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e.scope/container/memory.events
Jan 31 02:36:30 np0005603621 podman[216582]: 2026-01-31 07:36:30.669220501 +0000 UTC m=+0.856098603 container died e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:36:30 np0005603621 python3.9[216759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:30.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f49995ae26988b42c87dc6d461cae8f868e6d079e37f10c16740296176925778-merged.mount: Deactivated successfully.
Jan 31 02:36:30 np0005603621 podman[216582]: 2026-01-31 07:36:30.92439529 +0000 UTC m=+1.111273362 container remove e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lovelace, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:36:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:36:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:36:30 np0005603621 systemd[1]: libpod-conmon-e1ebea2f5d326005754bc5e0488faa14d187d326305e833f135277a506c0535e.scope: Deactivated successfully.
Jan 31 02:36:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1389cd8b-f680-48ad-a0df-f241c9e6e8ed does not exist
Jan 31 02:36:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a5e71f77-1aff-4428-8bcf-8364ccc66753 does not exist
Jan 31 02:36:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ef924ad7-2855-45f8-9518-50643b848ba2 does not exist
Jan 31 02:36:31 np0005603621 python3.9[216861]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.j5d0b2u3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:31 np0005603621 podman[217036]: 2026-01-31 07:36:31.667475134 +0000 UTC m=+0.044393696 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 02:36:31 np0005603621 python3.9[217079]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:36:32 np0005603621 python3.9[217159]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:32.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:32 np0005603621 python3.9[217311]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:32.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:33 np0005603621 python3[217464]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 31 02:36:34 np0005603621 python3.9[217617]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:34 np0005603621 python3.9[217695]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:34.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:34.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:35 np0005603621 python3.9[217847]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:36 np0005603621 python3.9[217973]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844994.8340251-3815-274964988981905/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:36.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:36 np0005603621 python3.9[218125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:36.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:37 np0005603621 python3.9[218203]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:37 np0005603621 python3.9[218356]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:38 np0005603621 python3.9[218434]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:36:38
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'volumes', 'images', '.rgw.root', 'default.rgw.control']
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:36:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:38.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:38.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:38 np0005603621 python3.9[218586]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:39 np0005603621 python3.9[218711]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769844998.3620312-3932-189711690579560/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:40 np0005603621 python3.9[218864]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:40 np0005603621 python3.9[219016]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:40.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:41 np0005603621 python3.9[219172]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:42 np0005603621 python3.9[219324]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:42.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:43 np0005603621 python3.9[219477]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:36:44 np0005603621 python3.9[219632]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:36:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:44.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:44.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:45 np0005603621 python3.9[219787]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:45 np0005603621 python3.9[219940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:46 np0005603621 python3.9[220063]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845005.2658322-4148-122614056580000/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:46.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:46.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:46 np0005603621 python3.9[220215]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:47 np0005603621 python3.9[220388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845006.443042-4193-274267550639018/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:48 np0005603621 python3.9[220541]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:48.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:36:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:36:48 np0005603621 python3.9[220664]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845007.6181414-4238-174292857447520/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:36:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:49 np0005603621 python3.9[220816]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:36:49 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:49 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:49 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:49 np0005603621 systemd[1]: Reached target edpm_libvirt.target.
Jan 31 02:36:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:50.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:50 np0005603621 python3.9[221008]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 31 02:36:50 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:50 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:50 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:36:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:36:51 np0005603621 systemd[1]: Reloading.
Jan 31 02:36:51 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:36:51 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:36:51 np0005603621 systemd[1]: session-49.scope: Deactivated successfully.
Jan 31 02:36:51 np0005603621 systemd[1]: session-49.scope: Consumed 2min 55.310s CPU time.
Jan 31 02:36:51 np0005603621 systemd-logind[818]: Session 49 logged out. Waiting for processes to exit.
Jan 31 02:36:51 np0005603621 systemd-logind[818]: Removed session 49.
Jan 31 02:36:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:36:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:52.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:36:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:52.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:54.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:36:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:54.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:56 np0005603621 podman[221109]: 2026-01-31 07:36:56.533706257 +0000 UTC m=+0.094647440 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 02:36:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:56.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:57 np0005603621 systemd-logind[818]: New session 50 of user zuul.
Jan 31 02:36:57 np0005603621 systemd[1]: Started Session 50 of User zuul.
Jan 31 02:36:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:36:58 np0005603621 python3.9[221290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:36:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:36:58.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:36:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:36:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:36:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:36:59 np0005603621 python3.9[221444]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:36:59 np0005603621 network[221462]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:36:59 np0005603621 network[221463]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:36:59 np0005603621 network[221464]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:36:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:00.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:01 np0005603621 podman[221565]: 2026-01-31 07:37:01.800317724 +0000 UTC m=+0.096051305 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 02:37:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:02.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:37:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:02.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:37:04 np0005603621 python3.9[221758]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 31 02:37:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:04.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:05 np0005603621 python3.9[221842]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:37:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:06.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:06.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:37:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:08.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:08.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:10.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:10.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:11 np0005603621 python3.9[222048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:37:12 np0005603621 python3.9[222201]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:37:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:12.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:12 np0005603621 python3.9[222354]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:37:13 np0005603621 python3.9[222506]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:37:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:14 np0005603621 python3.9[222660]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:37:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:14.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:37:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:14.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:37:14 np0005603621 python3.9[222783]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845033.8303416-245-88499311219399/.source.iscsi _original_basename=.gjazr8bv follow=False checksum=dfe19bc68f1a5a7e644bc65fdec007089e55fefe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:15 np0005603621 python3.9[222936]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:16 np0005603621 python3.9[223088]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:16.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:16.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:17 np0005603621 python3.9[223241]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:37:17 np0005603621 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 31 02:37:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:18 np0005603621 python3.9[223397]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:37:18 np0005603621 systemd[1]: Reloading.
Jan 31 02:37:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:18.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:18 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:37:18 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:37:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:18.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:18 np0005603621 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 02:37:18 np0005603621 systemd[1]: Starting Open-iSCSI...
Jan 31 02:37:18 np0005603621 kernel: Loading iSCSI transport class v2.0-870.
Jan 31 02:37:18 np0005603621 systemd[1]: Started Open-iSCSI.
Jan 31 02:37:18 np0005603621 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 31 02:37:18 np0005603621 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 31 02:37:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:20 np0005603621 python3.9[223598]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:37:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:20 np0005603621 network[223615]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:37:20 np0005603621 network[223616]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:37:20 np0005603621 network[223617]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:37:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:20.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:20.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:22.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:22.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:24 np0005603621 python3.9[223891]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:37:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:24.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:24.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:26 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:37:26 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:37:26 np0005603621 systemd[1]: Reloading.
Jan 31 02:37:26 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:37:26 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:37:26 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:37:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:37:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:26.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:37:26 np0005603621 podman[223963]: 2026-01-31 07:37:26.757528592 +0000 UTC m=+0.130317952 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 02:37:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:26.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:27 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:37:27 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:37:27 np0005603621 systemd[1]: run-r5bc4e3fb921e4fe1a8d5eb730773559c.service: Deactivated successfully.
Jan 31 02:37:28 np0005603621 python3.9[224284]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 02:37:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:28.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:28 np0005603621 python3.9[224436]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 31 02:37:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:28.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:29 np0005603621 python3.9[224592]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:37:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:29 np0005603621 python3.9[224716]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845049.0486925-509-224716961893367/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:37:30.453 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:37:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:37:30.454 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:37:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:37:30.454 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:37:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:30.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:30 np0005603621 python3.9[224868]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:30.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:37:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 90bf9309-03c0-4c49-bc1d-7d7b541e02ef does not exist
Jan 31 02:37:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bc706ee6-0f97-438a-82d3-4233eae255f6 does not exist
Jan 31 02:37:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b002cd46-1f17-4e58-80bd-09fa5ee22a55 does not exist
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:37:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:37:31 np0005603621 python3.9[225138]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:37:32 np0005603621 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 02:37:32 np0005603621 systemd[1]: Stopped Load Kernel Modules.
Jan 31 02:37:32 np0005603621 systemd[1]: Stopping Load Kernel Modules...
Jan 31 02:37:32 np0005603621 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:37:32 np0005603621 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:37:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:37:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:37:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:37:32 np0005603621 podman[225167]: 2026-01-31 07:37:32.053277758 +0000 UTC m=+0.060396434 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:37:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.426053786 +0000 UTC m=+0.046279154 container create 56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:37:32 np0005603621 systemd[1]: Started libpod-conmon-56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be.scope.
Jan 31 02:37:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.408502129 +0000 UTC m=+0.028727557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.503947984 +0000 UTC m=+0.124173382 container init 56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.512217101 +0000 UTC m=+0.132442469 container start 56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.516772793 +0000 UTC m=+0.136998181 container attach 56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:37:32 np0005603621 focused_solomon[225432]: 167 167
Jan 31 02:37:32 np0005603621 systemd[1]: libpod-56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be.scope: Deactivated successfully.
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.519998244 +0000 UTC m=+0.140223642 container died 56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:37:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cded6314e58df57a3648920d256170f3a14cece98770f5d034413a1f825ab7b1-merged.mount: Deactivated successfully.
Jan 31 02:37:32 np0005603621 podman[225372]: 2026-01-31 07:37:32.568875918 +0000 UTC m=+0.189101286 container remove 56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:37:32 np0005603621 systemd[1]: libpod-conmon-56673c20fd8367e3c50c5eede0ffcc960609e33e858657ddaf6ab09ed9cd85be.scope: Deactivated successfully.
Jan 31 02:37:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:32.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:32 np0005603621 podman[225510]: 2026-01-31 07:37:32.710503181 +0000 UTC m=+0.047799691 container create ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:37:32 np0005603621 systemd[1]: Started libpod-conmon-ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b.scope.
Jan 31 02:37:32 np0005603621 podman[225510]: 2026-01-31 07:37:32.693994547 +0000 UTC m=+0.031291057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:37:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:37:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d11c46658eeeca1e29c8dc435756d100b1d57cac8ada48f00aa634168174685/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d11c46658eeeca1e29c8dc435756d100b1d57cac8ada48f00aa634168174685/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d11c46658eeeca1e29c8dc435756d100b1d57cac8ada48f00aa634168174685/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d11c46658eeeca1e29c8dc435756d100b1d57cac8ada48f00aa634168174685/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d11c46658eeeca1e29c8dc435756d100b1d57cac8ada48f00aa634168174685/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:32 np0005603621 podman[225510]: 2026-01-31 07:37:32.824334649 +0000 UTC m=+0.161631169 container init ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:37:32 np0005603621 podman[225510]: 2026-01-31 07:37:32.830499801 +0000 UTC m=+0.167796351 container start ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:37:32 np0005603621 podman[225510]: 2026-01-31 07:37:32.834348681 +0000 UTC m=+0.171645201 container attach ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:37:32 np0005603621 python3.9[225504]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:37:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:33 np0005603621 goofy_lichterman[225526]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:37:33 np0005603621 goofy_lichterman[225526]: --> relative data size: 1.0
Jan 31 02:37:33 np0005603621 goofy_lichterman[225526]: --> All data devices are unavailable
Jan 31 02:37:33 np0005603621 systemd[1]: libpod-ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b.scope: Deactivated successfully.
Jan 31 02:37:33 np0005603621 podman[225510]: 2026-01-31 07:37:33.580998222 +0000 UTC m=+0.918294762 container died ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:37:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0d11c46658eeeca1e29c8dc435756d100b1d57cac8ada48f00aa634168174685-merged.mount: Deactivated successfully.
Jan 31 02:37:33 np0005603621 podman[225510]: 2026-01-31 07:37:33.644404468 +0000 UTC m=+0.981700998 container remove ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:37:33 np0005603621 systemd[1]: libpod-conmon-ef4321a6664c0647c141deff89fa6663e410996e4bcba703610904c72ed82e6b.scope: Deactivated successfully.
Jan 31 02:37:33 np0005603621 python3.9[225690]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:37:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:34 np0005603621 podman[225980]: 2026-01-31 07:37:34.248812576 +0000 UTC m=+0.039351887 container create 125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:37:34 np0005603621 systemd[1]: Started libpod-conmon-125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f.scope.
Jan 31 02:37:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:37:34 np0005603621 podman[225980]: 2026-01-31 07:37:34.326688074 +0000 UTC m=+0.117227405 container init 125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:37:34 np0005603621 podman[225980]: 2026-01-31 07:37:34.232777616 +0000 UTC m=+0.023316917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:37:34 np0005603621 podman[225980]: 2026-01-31 07:37:34.334208008 +0000 UTC m=+0.124747289 container start 125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:37:34 np0005603621 podman[225980]: 2026-01-31 07:37:34.33748932 +0000 UTC m=+0.128028621 container attach 125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:37:34 np0005603621 crazy_bhaskara[226014]: 167 167
Jan 31 02:37:34 np0005603621 systemd[1]: libpod-125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f.scope: Deactivated successfully.
Jan 31 02:37:34 np0005603621 conmon[226014]: conmon 125fded51727281459b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f.scope/container/memory.events
Jan 31 02:37:34 np0005603621 podman[226019]: 2026-01-31 07:37:34.385537798 +0000 UTC m=+0.031415540 container died 125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:37:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5a97fc806f6db8f69591bdd29f8338ad00683b319adcf600dff9c05a7149d39f-merged.mount: Deactivated successfully.
Jan 31 02:37:34 np0005603621 podman[226019]: 2026-01-31 07:37:34.416962897 +0000 UTC m=+0.062840619 container remove 125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:37:34 np0005603621 python3.9[226008]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:37:34 np0005603621 systemd[1]: libpod-conmon-125fded51727281459b696d5388091b24cba8eea7118cfe1d60ca925100da80f.scope: Deactivated successfully.
Jan 31 02:37:34 np0005603621 podman[226064]: 2026-01-31 07:37:34.545587375 +0000 UTC m=+0.048987597 container create f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pike, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:37:34 np0005603621 systemd[1]: Started libpod-conmon-f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a.scope.
Jan 31 02:37:34 np0005603621 podman[226064]: 2026-01-31 07:37:34.517014226 +0000 UTC m=+0.020414468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:37:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:37:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ec7ebc5aee5b9c87106d03ef382b7bfdd2b87c07f4489ec45ecc3186137fc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ec7ebc5aee5b9c87106d03ef382b7bfdd2b87c07f4489ec45ecc3186137fc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ec7ebc5aee5b9c87106d03ef382b7bfdd2b87c07f4489ec45ecc3186137fc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5ec7ebc5aee5b9c87106d03ef382b7bfdd2b87c07f4489ec45ecc3186137fc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:34 np0005603621 podman[226064]: 2026-01-31 07:37:34.647990788 +0000 UTC m=+0.151391010 container init f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:37:34 np0005603621 podman[226064]: 2026-01-31 07:37:34.654943304 +0000 UTC m=+0.158343526 container start f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:37:34 np0005603621 podman[226064]: 2026-01-31 07:37:34.659140935 +0000 UTC m=+0.162541197 container attach f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:37:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:34.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:35 np0005603621 python3.9[226184]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845053.963393-662-194714514228500/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:35 np0005603621 cool_pike[226108]: {
Jan 31 02:37:35 np0005603621 cool_pike[226108]:    "0": [
Jan 31 02:37:35 np0005603621 cool_pike[226108]:        {
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "devices": [
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "/dev/loop3"
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            ],
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "lv_name": "ceph_lv0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "lv_size": "7511998464",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "name": "ceph_lv0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "tags": {
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.cluster_name": "ceph",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.crush_device_class": "",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.encrypted": "0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.osd_id": "0",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.type": "block",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:                "ceph.vdo": "0"
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            },
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "type": "block",
Jan 31 02:37:35 np0005603621 cool_pike[226108]:            "vg_name": "ceph_vg0"
Jan 31 02:37:35 np0005603621 cool_pike[226108]:        }
Jan 31 02:37:35 np0005603621 cool_pike[226108]:    ]
Jan 31 02:37:35 np0005603621 cool_pike[226108]: }
Jan 31 02:37:35 np0005603621 systemd[1]: libpod-f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a.scope: Deactivated successfully.
Jan 31 02:37:35 np0005603621 podman[226064]: 2026-01-31 07:37:35.541787725 +0000 UTC m=+1.045187947 container died f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:37:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c5ec7ebc5aee5b9c87106d03ef382b7bfdd2b87c07f4489ec45ecc3186137fc6-merged.mount: Deactivated successfully.
Jan 31 02:37:35 np0005603621 podman[226064]: 2026-01-31 07:37:35.657574833 +0000 UTC m=+1.160975115 container remove f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_pike, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:37:35 np0005603621 systemd[1]: libpod-conmon-f498a9916125314ddbd2dbe78d89d0e0f8713b328da55c10509131ccb55fe28a.scope: Deactivated successfully.
Jan 31 02:37:35 np0005603621 python3.9[226341]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:37:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.317726458 +0000 UTC m=+0.061718065 container create 0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:37:36 np0005603621 systemd[1]: Started libpod-conmon-0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51.scope.
Jan 31 02:37:36 np0005603621 python3.9[226638]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.289233121 +0000 UTC m=+0.033224788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:37:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.416000801 +0000 UTC m=+0.159992458 container init 0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.426670894 +0000 UTC m=+0.170662471 container start 0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.430967948 +0000 UTC m=+0.174959615 container attach 0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:37:36 np0005603621 sweet_joliot[226662]: 167 167
Jan 31 02:37:36 np0005603621 systemd[1]: libpod-0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51.scope: Deactivated successfully.
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.432718652 +0000 UTC m=+0.176710229 container died 0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:37:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f00369f611c60ba1e84fdd913e58eee78529cf8fbf2c5b77525470b2f2dbd2aa-merged.mount: Deactivated successfully.
Jan 31 02:37:36 np0005603621 podman[226646]: 2026-01-31 07:37:36.473319988 +0000 UTC m=+0.217311555 container remove 0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:37:36 np0005603621 systemd[1]: libpod-conmon-0d7f1540a273e3aeda558f426b1d0596f68ba92c8ec318ba9d8521c9d69a3c51.scope: Deactivated successfully.
Jan 31 02:37:36 np0005603621 podman[226739]: 2026-01-31 07:37:36.66716844 +0000 UTC m=+0.071623784 container create cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:37:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:36.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:36 np0005603621 podman[226739]: 2026-01-31 07:37:36.619074731 +0000 UTC m=+0.023530065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:37:36 np0005603621 systemd[1]: Started libpod-conmon-cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48.scope.
Jan 31 02:37:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:37:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead679b2cf09e2fdbb5102d89e574298d9ff3ffda226786e4dbc21ee42a8475a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead679b2cf09e2fdbb5102d89e574298d9ff3ffda226786e4dbc21ee42a8475a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead679b2cf09e2fdbb5102d89e574298d9ff3ffda226786e4dbc21ee42a8475a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ead679b2cf09e2fdbb5102d89e574298d9ff3ffda226786e4dbc21ee42a8475a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:37:36 np0005603621 podman[226739]: 2026-01-31 07:37:36.787633355 +0000 UTC m=+0.192088689 container init cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:37:36 np0005603621 podman[226739]: 2026-01-31 07:37:36.795668715 +0000 UTC m=+0.200124029 container start cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:37:36 np0005603621 podman[226739]: 2026-01-31 07:37:36.799121642 +0000 UTC m=+0.203576956 container attach cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:37:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]: {
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:        "osd_id": 0,
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:        "type": "bluestore"
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]:    }
Jan 31 02:37:37 np0005603621 xenodochial_kapitsa[226779]: }
Jan 31 02:37:37 np0005603621 systemd[1]: libpod-cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48.scope: Deactivated successfully.
Jan 31 02:37:37 np0005603621 podman[226824]: 2026-01-31 07:37:37.754912642 +0000 UTC m=+0.038761689 container died cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:37:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ead679b2cf09e2fdbb5102d89e574298d9ff3ffda226786e4dbc21ee42a8475a-merged.mount: Deactivated successfully.
Jan 31 02:37:37 np0005603621 podman[226824]: 2026-01-31 07:37:37.819431933 +0000 UTC m=+0.103280880 container remove cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:37:37 np0005603621 systemd[1]: libpod-conmon-cc65d8b147c5469a79fe2a2496262944ea578188e8790244e859cc9931cafe48.scope: Deactivated successfully.
Jan 31 02:37:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:37:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:37:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:37:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:37:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 681cf675-87ad-4011-b743-ac58d0c6b639 does not exist
Jan 31 02:37:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 377ef3b4-2e88-4b33-af44-05bc95917145 does not exist
Jan 31 02:37:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6a24424f-b39c-492a-a308-7490abce330a does not exist
Jan 31 02:37:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:37:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:38 np0005603621 python3.9[226941]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:37:38
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', '.mgr']
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:37:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:38.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:38 np0005603621 python3.9[227093]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:39 np0005603621 python3.9[227246]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:40.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:40 np0005603621 python3.9[227398]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:41 np0005603621 python3.9[227550]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:42 np0005603621 python3.9[227703]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:42.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:42.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:43 np0005603621 python3.9[227855]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:37:43 np0005603621 python3.9[228010]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:37:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:44.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:44.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:45 np0005603621 python3.9[228163]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:37:45 np0005603621 systemd[1]: Listening on multipathd control socket.
Jan 31 02:37:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:46.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:46.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:46 np0005603621 python3.9[228320]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:37:47 np0005603621 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 31 02:37:47 np0005603621 udevadm[228325]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 31 02:37:47 np0005603621 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 31 02:37:47 np0005603621 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 02:37:47 np0005603621 multipathd[228329]: --------start up--------
Jan 31 02:37:47 np0005603621 multipathd[228329]: read /etc/multipath.conf
Jan 31 02:37:47 np0005603621 multipathd[228329]: path checkers start up
Jan 31 02:37:47 np0005603621 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:48 np0005603621 python3.9[228539]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:37:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:37:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:37:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:37:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:48.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:49 np0005603621 python3.9[228691]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 31 02:37:49 np0005603621 kernel: Key type psk registered
Jan 31 02:37:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:50 np0005603621 python3.9[228855]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:37:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:50 np0005603621 python3.9[228978]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769845069.51671-1052-20452647401550/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:50.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:51 np0005603621 python3.9[229130]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:37:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:52 np0005603621 python3.9[229283]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:37:52 np0005603621 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 31 02:37:52 np0005603621 systemd[1]: Stopped Load Kernel Modules.
Jan 31 02:37:52 np0005603621 systemd[1]: Stopping Load Kernel Modules...
Jan 31 02:37:52 np0005603621 systemd[1]: Starting Load Kernel Modules...
Jan 31 02:37:52 np0005603621 systemd[1]: Finished Load Kernel Modules.
Jan 31 02:37:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:52.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.845665) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845074845855, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 501, "total_data_size": 2681218, "memory_usage": 2718168, "flush_reason": "Manual Compaction"}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845074869161, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1530405, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14044, "largest_seqno": 15742, "table_properties": {"data_size": 1524843, "index_size": 2317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16629, "raw_average_key_size": 19, "raw_value_size": 1510835, "raw_average_value_size": 1744, "num_data_blocks": 107, "num_entries": 866, "num_filter_entries": 866, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769844920, "oldest_key_time": 1769844920, "file_creation_time": 1769845074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 23658 microseconds, and 4031 cpu microseconds.
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.869219) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1530405 bytes OK
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.869344) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.873044) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.873082) EVENT_LOG_v1 {"time_micros": 1769845074873073, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.873102) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2673132, prev total WAL file size 2673132, number of live WAL files 2.
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.873672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1494KB)], [32(10MB)]
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845074873794, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 12491260, "oldest_snapshot_seqno": -1}
Jan 31 02:37:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4189 keys, 8056565 bytes, temperature: kUnknown
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845074986174, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 8056565, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8027177, "index_size": 17822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103394, "raw_average_key_size": 24, "raw_value_size": 7950076, "raw_average_value_size": 1897, "num_data_blocks": 752, "num_entries": 4189, "num_filter_entries": 4189, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845074, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.986479) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8056565 bytes
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.988544) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.1 rd, 71.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 10.5 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.4) write-amplify(5.3) OK, records in: 5133, records dropped: 944 output_compression: NoCompression
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.988566) EVENT_LOG_v1 {"time_micros": 1769845074988555, "job": 14, "event": "compaction_finished", "compaction_time_micros": 112473, "compaction_time_cpu_micros": 22839, "output_level": 6, "num_output_files": 1, "total_output_size": 8056565, "num_input_records": 5133, "num_output_records": 4189, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845074988823, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845074989922, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.873596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.990038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.990045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.990047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.990048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:37:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:37:54.990050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:37:55 np0005603621 python3.9[229440]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 31 02:37:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:37:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:37:57 np0005603621 systemd[1]: Reloading.
Jan 31 02:37:57 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:37:57 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:37:57 np0005603621 podman[229448]: 2026-01-31 07:37:57.377368302 +0000 UTC m=+0.104238840 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 02:37:57 np0005603621 systemd[1]: Reloading.
Jan 31 02:37:57 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:37:57 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:37:57 np0005603621 systemd-logind[818]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 31 02:37:57 np0005603621 systemd-logind[818]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 31 02:37:57 np0005603621 lvm[229581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 02:37:57 np0005603621 lvm[229581]: VG ceph_vg0 finished
Jan 31 02:37:58 np0005603621 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 31 02:37:58 np0005603621 systemd[1]: Starting man-db-cache-update.service...
Jan 31 02:37:58 np0005603621 systemd[1]: Reloading.
Jan 31 02:37:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:37:58 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:37:58 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:37:58 np0005603621 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 31 02:37:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:37:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:37:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:37:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:37:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:37:59 np0005603621 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 31 02:37:59 np0005603621 systemd[1]: Finished man-db-cache-update.service.
Jan 31 02:37:59 np0005603621 systemd[1]: man-db-cache-update.service: Consumed 1.206s CPU time.
Jan 31 02:37:59 np0005603621 systemd[1]: run-r54c1ec537cd944bfbf943b1e0756786a.service: Deactivated successfully.
Jan 31 02:37:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:00 np0005603621 python3.9[230936]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:38:00 np0005603621 systemd[1]: Stopping Open-iSCSI...
Jan 31 02:38:00 np0005603621 iscsid[223437]: iscsid shutting down.
Jan 31 02:38:00 np0005603621 systemd[1]: iscsid.service: Deactivated successfully.
Jan 31 02:38:00 np0005603621 systemd[1]: Stopped Open-iSCSI.
Jan 31 02:38:00 np0005603621 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 31 02:38:00 np0005603621 systemd[1]: Starting Open-iSCSI...
Jan 31 02:38:00 np0005603621 systemd[1]: Started Open-iSCSI.
Jan 31 02:38:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:00.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:01 np0005603621 python3.9[231093]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:38:01 np0005603621 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 31 02:38:01 np0005603621 multipathd[228329]: exit (signal)
Jan 31 02:38:01 np0005603621 multipathd[228329]: --------shut down-------
Jan 31 02:38:01 np0005603621 systemd[1]: multipathd.service: Deactivated successfully.
Jan 31 02:38:01 np0005603621 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 31 02:38:01 np0005603621 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 31 02:38:01 np0005603621 multipathd[231100]: --------start up--------
Jan 31 02:38:01 np0005603621 multipathd[231100]: read /etc/multipath.conf
Jan 31 02:38:01 np0005603621 multipathd[231100]: path checkers start up
Jan 31 02:38:01 np0005603621 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 31 02:38:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:02 np0005603621 python3.9[231257]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 31 02:38:02 np0005603621 podman[231262]: 2026-01-31 07:38:02.553906831 +0000 UTC m=+0.093540687 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 02:38:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:38:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:02.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:38:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:38:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:38:03 np0005603621 python3.9[231432]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:04 np0005603621 python3.9[231585]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:38:04 np0005603621 systemd[1]: Reloading.
Jan 31 02:38:04 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:38:04 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:38:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:04.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:04.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:05 np0005603621 python3.9[231771]: ansible-ansible.builtin.service_facts Invoked
Jan 31 02:38:05 np0005603621 network[231788]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 31 02:38:05 np0005603621 network[231789]: 'network-scripts' will be removed from distribution in near future.
Jan 31 02:38:05 np0005603621 network[231790]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 31 02:38:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:06.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:06.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:38:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:08.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:08.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:10 np0005603621 python3.9[232115]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:10.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:10.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:11 np0005603621 python3.9[232268]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:11 np0005603621 python3.9[232422]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:12 np0005603621 python3.9[232575]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:12.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:12.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:13 np0005603621 python3.9[232728]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:14 np0005603621 python3.9[232882]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:14 np0005603621 python3.9[233035]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:14 np0005603621 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 31 02:38:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:14.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:15 np0005603621 python3.9[233189]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:38:15 np0005603621 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 31 02:38:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:38:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:16.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:38:16 np0005603621 python3.9[233344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:16.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:17 np0005603621 python3.9[233496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:17 np0005603621 python3.9[233649]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:18 np0005603621 python3.9[233801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:18.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:38:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:38:19 np0005603621 python3.9[233953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:19 np0005603621 python3.9[234106]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:20 np0005603621 python3.9[234258]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:20.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:20.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:21 np0005603621 python3.9[234410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:21 np0005603621 python3.9[234563]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:22.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:22 np0005603621 python3.9[234715]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:22.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:23 np0005603621 python3.9[234867]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:23 np0005603621 python3.9[235020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:24.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:24 np0005603621 python3.9[235172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:25 np0005603621 python3.9[235324]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:25 np0005603621 python3.9[235477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:26 np0005603621 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 31 02:38:26 np0005603621 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 31 02:38:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:26 np0005603621 python3.9[235631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:26.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:27 np0005603621 podman[235756]: 2026-01-31 07:38:27.650395742 +0000 UTC m=+0.108488673 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:38:27 np0005603621 python3.9[235829]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:28 np0005603621 python3.9[236012]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 31 02:38:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:28.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:29 np0005603621 python3.9[236164]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:38:29 np0005603621 systemd[1]: Reloading.
Jan 31 02:38:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:29 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:38:29 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:38:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:38:30.455 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:38:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:38:30.456 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:38:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:38:30.457 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:38:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 31 02:38:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:30.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 31 02:38:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:31 np0005603621 python3.9[236352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:31 np0005603621 python3.9[236506]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:32 np0005603621 python3.9[236659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:32.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:32 np0005603621 podman[236784]: 2026-01-31 07:38:32.742888162 +0000 UTC m=+0.072995926 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:38:32 np0005603621 python3.9[236818]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:32.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:33 np0005603621 python3.9[236982]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:34 np0005603621 python3.9[237136]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:34 np0005603621 python3.9[237289]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:34.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:34.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:35 np0005603621 python3.9[237442]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 31 02:38:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:38:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:38:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:37 np0005603621 python3.9[237596]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:37 np0005603621 python3.9[237749]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:38 np0005603621 python3.9[237901]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:38:38
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:38:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:38.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:39 np0005603621 python3.9[238184]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:38:39 np0005603621 python3.9[238337]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:40 np0005603621 python3.9[238489]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:38:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fdb55efb-9a7f-4c0d-807f-80c42447990c does not exist
Jan 31 02:38:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 203809ff-9aa1-42c7-b6e4-c8f40169c839 does not exist
Jan 31 02:38:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 95e0ba25-0204-48b6-874f-3179f42bcfc3 does not exist
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:38:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.741110634 +0000 UTC m=+0.031387349 container create 5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:38:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:40 np0005603621 systemd[1]: Started libpod-conmon-5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c.scope.
Jan 31 02:38:40 np0005603621 python3.9[238741]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.804797079 +0000 UTC m=+0.095073804 container init 5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.812431727 +0000 UTC m=+0.102708442 container start 5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.817143843 +0000 UTC m=+0.107420588 container attach 5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:38:40 np0005603621 interesting_fermi[238798]: 167 167
Jan 31 02:38:40 np0005603621 systemd[1]: libpod-5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c.scope: Deactivated successfully.
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.818557678 +0000 UTC m=+0.108834403 container died 5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.726071835 +0000 UTC m=+0.016348550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:38:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a1362d3a49b5e557c5d99c59a090b8084c95180412de382cf90bbeba12a127d9-merged.mount: Deactivated successfully.
Jan 31 02:38:40 np0005603621 podman[238781]: 2026-01-31 07:38:40.860872847 +0000 UTC m=+0.151149562 container remove 5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:38:40 np0005603621 systemd[1]: libpod-conmon-5535560d5ee4f88702e5b86fe08b9a601c67b2d16f8c287d4a79ea8b7c39ea5c.scope: Deactivated successfully.
Jan 31 02:38:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:40.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:40 np0005603621 podman[238890]: 2026-01-31 07:38:40.997363921 +0000 UTC m=+0.042061072 container create 2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:38:41 np0005603621 systemd[1]: Started libpod-conmon-2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34.scope.
Jan 31 02:38:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:38:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:38:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:38:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d919023165b9fedbc44f3cb6102b5d56e596fd04016be051e9babc4808406f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d919023165b9fedbc44f3cb6102b5d56e596fd04016be051e9babc4808406f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d919023165b9fedbc44f3cb6102b5d56e596fd04016be051e9babc4808406f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:41 np0005603621 podman[238890]: 2026-01-31 07:38:40.978186203 +0000 UTC m=+0.022883394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:38:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d919023165b9fedbc44f3cb6102b5d56e596fd04016be051e9babc4808406f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d919023165b9fedbc44f3cb6102b5d56e596fd04016be051e9babc4808406f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:41 np0005603621 podman[238890]: 2026-01-31 07:38:41.087996926 +0000 UTC m=+0.132694097 container init 2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:38:41 np0005603621 podman[238890]: 2026-01-31 07:38:41.097829332 +0000 UTC m=+0.142526503 container start 2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:38:41 np0005603621 podman[238890]: 2026-01-31 07:38:41.101961681 +0000 UTC m=+0.146658832 container attach 2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:38:41 np0005603621 python3.9[238995]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:41 np0005603621 determined_ride[238938]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:38:41 np0005603621 determined_ride[238938]: --> relative data size: 1.0
Jan 31 02:38:41 np0005603621 determined_ride[238938]: --> All data devices are unavailable
Jan 31 02:38:41 np0005603621 systemd[1]: libpod-2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34.scope: Deactivated successfully.
Jan 31 02:38:41 np0005603621 conmon[238938]: conmon 2aacb87df4721c3c263c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34.scope/container/memory.events
Jan 31 02:38:41 np0005603621 podman[238890]: 2026-01-31 07:38:41.88096514 +0000 UTC m=+0.925662331 container died 2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ride, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:38:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-02d919023165b9fedbc44f3cb6102b5d56e596fd04016be051e9babc4808406f-merged.mount: Deactivated successfully.
Jan 31 02:38:41 np0005603621 python3.9[239153]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:42 np0005603621 podman[238890]: 2026-01-31 07:38:42.037053585 +0000 UTC m=+1.081750766 container remove 2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:38:42 np0005603621 systemd[1]: libpod-conmon-2aacb87df4721c3c263ca1078e0072af422ef3f378f225dfe4951c5524ff8c34.scope: Deactivated successfully.
Jan 31 02:38:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:42 np0005603621 python3.9[239424]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.54126235 +0000 UTC m=+0.042962980 container create 30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poitras, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:38:42 np0005603621 systemd[1]: Started libpod-conmon-30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec.scope.
Jan 31 02:38:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.617106533 +0000 UTC m=+0.118807173 container init 30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poitras, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.523508067 +0000 UTC m=+0.025208717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.621509361 +0000 UTC m=+0.123209981 container start 30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:38:42 np0005603621 suspicious_poitras[239501]: 167 167
Jan 31 02:38:42 np0005603621 systemd[1]: libpod-30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec.scope: Deactivated successfully.
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.624588637 +0000 UTC m=+0.126289277 container attach 30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poitras, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.624839965 +0000 UTC m=+0.126540575 container died 30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:38:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6e7be2722ce99998c125089d9fd287583f9608b51d698c528e27b7e326a84596-merged.mount: Deactivated successfully.
Jan 31 02:38:42 np0005603621 podman[239462]: 2026-01-31 07:38:42.700380739 +0000 UTC m=+0.202081349 container remove 30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poitras, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:38:42 np0005603621 systemd[1]: libpod-conmon-30d312f751753f1592619bfa80c6e5410e81c35b10a1c002b98ba998016bbbec.scope: Deactivated successfully.
Jan 31 02:38:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:42.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:42 np0005603621 podman[239525]: 2026-01-31 07:38:42.836309795 +0000 UTC m=+0.050927108 container create 1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:38:42 np0005603621 systemd[1]: Started libpod-conmon-1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6.scope.
Jan 31 02:38:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:38:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109cc171d26140660002b4c22074d3495f1ec299903815d53cae66d7a2c7a99e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109cc171d26140660002b4c22074d3495f1ec299903815d53cae66d7a2c7a99e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109cc171d26140660002b4c22074d3495f1ec299903815d53cae66d7a2c7a99e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/109cc171d26140660002b4c22074d3495f1ec299903815d53cae66d7a2c7a99e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:42 np0005603621 podman[239525]: 2026-01-31 07:38:42.811541664 +0000 UTC m=+0.026159057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:38:42 np0005603621 podman[239525]: 2026-01-31 07:38:42.914052198 +0000 UTC m=+0.128669531 container init 1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:38:42 np0005603621 podman[239525]: 2026-01-31 07:38:42.920386666 +0000 UTC m=+0.135003979 container start 1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:38:42 np0005603621 podman[239525]: 2026-01-31 07:38:42.928347474 +0000 UTC m=+0.142964787 container attach 1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:38:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:42.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:43 np0005603621 sad_turing[239542]: {
Jan 31 02:38:43 np0005603621 sad_turing[239542]:    "0": [
Jan 31 02:38:43 np0005603621 sad_turing[239542]:        {
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "devices": [
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "/dev/loop3"
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            ],
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "lv_name": "ceph_lv0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "lv_size": "7511998464",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "name": "ceph_lv0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "tags": {
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.cluster_name": "ceph",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.crush_device_class": "",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.encrypted": "0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.osd_id": "0",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.type": "block",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:                "ceph.vdo": "0"
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            },
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "type": "block",
Jan 31 02:38:43 np0005603621 sad_turing[239542]:            "vg_name": "ceph_vg0"
Jan 31 02:38:43 np0005603621 sad_turing[239542]:        }
Jan 31 02:38:43 np0005603621 sad_turing[239542]:    ]
Jan 31 02:38:43 np0005603621 sad_turing[239542]: }
Jan 31 02:38:43 np0005603621 systemd[1]: libpod-1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6.scope: Deactivated successfully.
Jan 31 02:38:43 np0005603621 podman[239525]: 2026-01-31 07:38:43.627152224 +0000 UTC m=+0.841769537 container died 1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:38:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-109cc171d26140660002b4c22074d3495f1ec299903815d53cae66d7a2c7a99e-merged.mount: Deactivated successfully.
Jan 31 02:38:43 np0005603621 podman[239525]: 2026-01-31 07:38:43.667543863 +0000 UTC m=+0.882161186 container remove 1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:38:43 np0005603621 systemd[1]: libpod-conmon-1461c1ff8a44a1593092fc8703a6d9c513afa7e45f5e8f35b2e53d88df67f8d6.scope: Deactivated successfully.
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.175285018 +0000 UTC m=+0.030744069 container create 4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:38:44 np0005603621 systemd[1]: Started libpod-conmon-4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0.scope.
Jan 31 02:38:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.229658382 +0000 UTC m=+0.085117483 container init 4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.240677886 +0000 UTC m=+0.096136937 container start 4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:38:44 np0005603621 elastic_ardinghelli[239720]: 167 167
Jan 31 02:38:44 np0005603621 systemd[1]: libpod-4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0.scope: Deactivated successfully.
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.245147075 +0000 UTC m=+0.100606176 container attach 4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.245645581 +0000 UTC m=+0.101104642 container died 4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.16092482 +0000 UTC m=+0.016383911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:38:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-30939dfdba0c8e274e92eefdaccabeb2fb806061f730acd4b0b6df9356d456cf-merged.mount: Deactivated successfully.
Jan 31 02:38:44 np0005603621 podman[239704]: 2026-01-31 07:38:44.284849403 +0000 UTC m=+0.140308484 container remove 4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:38:44 np0005603621 systemd[1]: libpod-conmon-4c8f54ad7d5e7bfdcd55daa3df52aa9fbf6b99ccbbdd7c82bbf7d976cd6d62a0.scope: Deactivated successfully.
Jan 31 02:38:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:44 np0005603621 podman[239742]: 2026-01-31 07:38:44.417532518 +0000 UTC m=+0.040770992 container create 6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_joliot, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:38:44 np0005603621 systemd[1]: Started libpod-conmon-6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea.scope.
Jan 31 02:38:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:38:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eeffafaacb8dbf92cf3f4ff1c48144362a05d23d87199d34eaa716babed6761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eeffafaacb8dbf92cf3f4ff1c48144362a05d23d87199d34eaa716babed6761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eeffafaacb8dbf92cf3f4ff1c48144362a05d23d87199d34eaa716babed6761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eeffafaacb8dbf92cf3f4ff1c48144362a05d23d87199d34eaa716babed6761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:38:44 np0005603621 podman[239742]: 2026-01-31 07:38:44.400813427 +0000 UTC m=+0.024051881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:38:44 np0005603621 podman[239742]: 2026-01-31 07:38:44.504620862 +0000 UTC m=+0.127859316 container init 6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:38:44 np0005603621 podman[239742]: 2026-01-31 07:38:44.520989822 +0000 UTC m=+0.144228256 container start 6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_joliot, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:38:44 np0005603621 podman[239742]: 2026-01-31 07:38:44.524665707 +0000 UTC m=+0.147904151 container attach 6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:38:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:44.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:44.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:45 np0005603621 funny_joliot[239758]: {
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:        "osd_id": 0,
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:        "type": "bluestore"
Jan 31 02:38:45 np0005603621 funny_joliot[239758]:    }
Jan 31 02:38:45 np0005603621 funny_joliot[239758]: }
Jan 31 02:38:45 np0005603621 systemd[1]: libpod-6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea.scope: Deactivated successfully.
Jan 31 02:38:45 np0005603621 podman[239742]: 2026-01-31 07:38:45.338204582 +0000 UTC m=+0.961443016 container died 6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_joliot, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:38:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9eeffafaacb8dbf92cf3f4ff1c48144362a05d23d87199d34eaa716babed6761-merged.mount: Deactivated successfully.
Jan 31 02:38:45 np0005603621 podman[239742]: 2026-01-31 07:38:45.393720104 +0000 UTC m=+1.016958578 container remove 6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:38:45 np0005603621 systemd[1]: libpod-conmon-6f1b1bbd86ed2c4d0d635e3fcb9c7d1d2d6a991ba5b0bb8ea46e262fdd1472ea.scope: Deactivated successfully.
Jan 31 02:38:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:38:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:38:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c72592c3-aacd-42b9-9f63-5d76df01cf4c does not exist
Jan 31 02:38:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d300b261-be99-42fa-be8f-0d3ac035ad4b does not exist
Jan 31 02:38:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d5e63b6a-e646-4f67-83be-16f15966901f does not exist
Jan 31 02:38:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:38:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:46.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:46.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:38:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:38:48 np0005603621 python3.9[240022]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 31 02:38:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:48.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:48.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:49 np0005603621 python3.9[240175]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 31 02:38:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:50 np0005603621 python3.9[240334]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 31 02:38:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:50.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:38:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:50.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:38:52 np0005603621 systemd-logind[818]: New session 51 of user zuul.
Jan 31 02:38:52 np0005603621 systemd[1]: Started Session 51 of User zuul.
Jan 31 02:38:52 np0005603621 systemd[1]: session-51.scope: Deactivated successfully.
Jan 31 02:38:52 np0005603621 systemd-logind[818]: Session 51 logged out. Waiting for processes to exit.
Jan 31 02:38:52 np0005603621 systemd-logind[818]: Removed session 51.
Jan 31 02:38:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:52.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:52 np0005603621 python3.9[240521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:38:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:52.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:53 np0005603621 python3.9[240642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845132.3827975-2659-229388350662167/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:53 np0005603621 python3.9[240793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:38:54 np0005603621 python3.9[240869]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:54 np0005603621 python3.9[241019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:38:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:38:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:54.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:55 np0005603621 python3.9[241140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845134.3179593-2659-231542668805185/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:55 np0005603621 python3.9[241291]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:38:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:56 np0005603621 python3.9[241412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845135.3352842-2659-90500147196140/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:56.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:56 np0005603621 python3.9[241562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:38:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:56.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:57 np0005603621 python3.9[241683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845136.5515714-2659-122442215149250/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:57 np0005603621 podman[241808]: 2026-01-31 07:38:57.790913586 +0000 UTC m=+0.074619193 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:38:57 np0005603621 python3.9[241847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:38:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:38:58 np0005603621 python3.9[241981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845137.5183725-2659-91748848924267/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:38:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:38:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:38:58.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:38:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:38:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:38:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:38:58.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:38:59 np0005603621 python3.9[242133]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:38:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:00 np0005603621 python3.9[242286]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:39:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:00.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:00 np0005603621 python3.9[242438]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:01 np0005603621 python3.9[242590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:39:02 np0005603621 python3.9[242714]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769845141.106667-2980-79972519386876/.source _original_basename=.iptz41m8 follow=False checksum=86e11375b232c54aa1a10f687c578068aabc2c6e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 31 02:39:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:02.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:02 np0005603621 python3.9[242866]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:39:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:02.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:39:03 np0005603621 podman[242992]: 2026-01-31 07:39:03.417698713 +0000 UTC m=+0.052773308 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 02:39:03 np0005603621 python3.9[243030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:39:04 np0005603621 python3.9[243158]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845143.1232927-3058-24417379128809/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:39:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:04.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:04 np0005603621 python3.9[243308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 31 02:39:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:04.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:05 np0005603621 python3.9[243429]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769845144.3024077-3103-26688418329968/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 31 02:39:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:06 np0005603621 python3.9[243582]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 31 02:39:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:06.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:07 np0005603621 python3.9[243734]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:39:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:08 np0005603621 python3[243937]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:39:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:08.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:39:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:10.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:39:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:10.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:12.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:12.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:14.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:39:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:14.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:39:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:16.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:16.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:18.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:39:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:18.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:39:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:20.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:20.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:22.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:22.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:24.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:39:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:24.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:39:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:26.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:27.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).paxos(paxos updating c 1005..1710) accept timeout, calling fresh election
Jan 31 02:39:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 02:39:27 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(14) init, last seen epoch 14
Jan 31 02:39:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:28 np0005603621 ceph-mds[94918]: mds.beacon.cephfs.compute-0.jroeqh missed beacon ack from the monitors
Jan 31 02:39:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:28.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:29.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:39:30.456 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:39:30.456 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:39:30.456 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:39:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:30.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:39:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:39:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:31.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:32 np0005603621 ceph-mds[94918]: mds.beacon.cephfs.compute-0.jroeqh missed beacon ack from the monitors
Jan 31 02:39:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:32.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:33.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:34.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:35.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:36 np0005603621 ceph-mds[94918]: mds.beacon.cephfs.compute-0.jroeqh missed beacon ack from the monitors
Jan 31 02:39:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:36.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:37.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:39:38
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'vms', 'backups', 'images']
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:39:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:38.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:39.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: paxos.0).electionLogic(17) init, last seen epoch 17, mid-election, bumping
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:39:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:3300/0
Jan 31 02:39:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:3300/0
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 31 02:39:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 31 02:39:40 np0005603621 podman[244046]: 2026-01-31 07:39:39.999619289 +0000 UTC m=+11.929778364 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:39:40 np0005603621 podman[244084]: 2026-01-31 07:39:40.0184555 +0000 UTC m=+5.581661579 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 02:39:40 np0005603621 podman[243951]: 2026-01-31 07:39:40.119850863 +0000 UTC m=+31.252708532 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:39:40 np0005603621 podman[244247]: 2026-01-31 07:39:40.211182869 +0000 UTC m=+0.019919677 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:39:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:40 np0005603621 podman[244247]: 2026-01-31 07:39:40.420490878 +0000 UTC m=+0.229227656 container create d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:39:40 np0005603621 python3[243937]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 31 02:39:40 np0005603621 ceph-mds[94918]: mds.beacon.cephfs.compute-0.jroeqh missed beacon ack from the monitors
Jan 31 02:39:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:39:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:39:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:39:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:40.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:41.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.asgtzy=up:active} 2 up:standby
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ddmhwk(active, since 19m), standbys: compute-2.cdjvtw, compute-1.gxjgok
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:39:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:39:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:39:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 34c4ebfc-b30c-4bff-ac66-028f35e556d1 does not exist
Jan 31 02:39:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 808794c0-c2bb-44a9-890e-98069d725175 does not exist
Jan 31 02:39:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 68a57b42-00df-420d-a37e-86d5ce8658e2 does not exist
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:39:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:39:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:39:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:42.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:39:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:43.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:42.932948456 +0000 UTC m=+0.019167482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:43.053406637 +0000 UTC m=+0.139625613 container create 78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: mon.compute-0 calling monitor election
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: mon.compute-1 calling monitor election
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: mon.compute-2 calling monitor election
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: mon.compute-0 calling monitor election
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:39:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:39:43 np0005603621 systemd[1]: Started libpod-conmon-78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a.scope.
Jan 31 02:39:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:43.423648588 +0000 UTC m=+0.509867544 container init 78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:43.430135201 +0000 UTC m=+0.516354137 container start 78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:39:43 np0005603621 serene_meninsky[244498]: 167 167
Jan 31 02:39:43 np0005603621 systemd[1]: libpod-78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a.scope: Deactivated successfully.
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:43.437506363 +0000 UTC m=+0.523725309 container attach 78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:43.43870535 +0000 UTC m=+0.524924326 container died 78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:39:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1b511e00045e928926c5ebe829127a49ac77c034611def4eadd20ae8eaca205d-merged.mount: Deactivated successfully.
Jan 31 02:39:43 np0005603621 podman[244482]: 2026-01-31 07:39:43.483891778 +0000 UTC m=+0.570110724 container remove 78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:39:43 np0005603621 systemd[1]: libpod-conmon-78017c4d05417ac944ba9195aa79c41ba84016a5d6dee47cfbee5147278b2a3a.scope: Deactivated successfully.
Jan 31 02:39:43 np0005603621 podman[244523]: 2026-01-31 07:39:43.604151513 +0000 UTC m=+0.033536774 container create 6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kepler, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:39:43 np0005603621 systemd[1]: Started libpod-conmon-6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47.scope.
Jan 31 02:39:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8248b4d88955a1f5d48e9e222daaccd358317d9862b4ecb937f50c78f059b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8248b4d88955a1f5d48e9e222daaccd358317d9862b4ecb937f50c78f059b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8248b4d88955a1f5d48e9e222daaccd358317d9862b4ecb937f50c78f059b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8248b4d88955a1f5d48e9e222daaccd358317d9862b4ecb937f50c78f059b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8248b4d88955a1f5d48e9e222daaccd358317d9862b4ecb937f50c78f059b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:43 np0005603621 podman[244523]: 2026-01-31 07:39:43.679505548 +0000 UTC m=+0.108890839 container init 6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:39:43 np0005603621 podman[244523]: 2026-01-31 07:39:43.686525659 +0000 UTC m=+0.115910930 container start 6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:39:43 np0005603621 podman[244523]: 2026-01-31 07:39:43.589866554 +0000 UTC m=+0.019251835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:39:43 np0005603621 podman[244523]: 2026-01-31 07:39:43.690854164 +0000 UTC m=+0.120239445 container attach 6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:39:44 np0005603621 python3.9[244672]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:44 np0005603621 strange_kepler[244539]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:39:44 np0005603621 strange_kepler[244539]: --> relative data size: 1.0
Jan 31 02:39:44 np0005603621 strange_kepler[244539]: --> All data devices are unavailable
Jan 31 02:39:44 np0005603621 systemd[1]: libpod-6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47.scope: Deactivated successfully.
Jan 31 02:39:44 np0005603621 podman[244523]: 2026-01-31 07:39:44.402560242 +0000 UTC m=+0.831945573 container died 6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:39:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-afd8248b4d88955a1f5d48e9e222daaccd358317d9862b4ecb937f50c78f059b-merged.mount: Deactivated successfully.
Jan 31 02:39:44 np0005603621 podman[244523]: 2026-01-31 07:39:44.446578673 +0000 UTC m=+0.875963954 container remove 6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kepler, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:39:44 np0005603621 systemd[1]: libpod-conmon-6dabcd9c02502ed4cf906307d85ddcd30dd03d6ee30d96d5ee99927ed7491d47.scope: Deactivated successfully.
Jan 31 02:39:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:39:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:44.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:39:44 np0005603621 podman[244859]: 2026-01-31 07:39:44.886751079 +0000 UTC m=+0.033987588 container create 62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:39:44 np0005603621 systemd[1]: Started libpod-conmon-62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36.scope.
Jan 31 02:39:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:44 np0005603621 podman[244859]: 2026-01-31 07:39:44.949636712 +0000 UTC m=+0.096873211 container init 62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_liskov, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:39:44 np0005603621 podman[244859]: 2026-01-31 07:39:44.954235747 +0000 UTC m=+0.101472246 container start 62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_liskov, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:39:44 np0005603621 kind_liskov[244877]: 167 167
Jan 31 02:39:44 np0005603621 systemd[1]: libpod-62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36.scope: Deactivated successfully.
Jan 31 02:39:44 np0005603621 podman[244859]: 2026-01-31 07:39:44.959985438 +0000 UTC m=+0.107221947 container attach 62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_liskov, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:39:44 np0005603621 podman[244859]: 2026-01-31 07:39:44.960316708 +0000 UTC m=+0.107553207 container died 62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:39:44 np0005603621 podman[244859]: 2026-01-31 07:39:44.874589567 +0000 UTC m=+0.021826086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:39:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a42f2f56504bbfa685be6345f562d052a9340d310794d047b077348e0ddb2c3e-merged.mount: Deactivated successfully.
Jan 31 02:39:45 np0005603621 podman[244859]: 2026-01-31 07:39:45.006970752 +0000 UTC m=+0.154207261 container remove 62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:39:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:45.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:45 np0005603621 systemd[1]: libpod-conmon-62e0cebd2b16ae5766dd291cd74c56835c1284143768fdbd36e87d3531eb8f36.scope: Deactivated successfully.
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.117549563 +0000 UTC m=+0.037041593 container create cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:39:45 np0005603621 systemd[1]: Started libpod-conmon-cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069.scope.
Jan 31 02:39:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1757b8aaf88ca35771b0d5528469b9aa9e884f843ece3aee3c392eafc312799f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1757b8aaf88ca35771b0d5528469b9aa9e884f843ece3aee3c392eafc312799f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1757b8aaf88ca35771b0d5528469b9aa9e884f843ece3aee3c392eafc312799f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1757b8aaf88ca35771b0d5528469b9aa9e884f843ece3aee3c392eafc312799f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.194494478 +0000 UTC m=+0.113986528 container init cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_blackburn, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.102533112 +0000 UTC m=+0.022025162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.198885655 +0000 UTC m=+0.118377685 container start cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_blackburn, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.203138879 +0000 UTC m=+0.122630939 container attach cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:39:45 np0005603621 python3.9[245049]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]: {
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:    "0": [
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:        {
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "devices": [
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "/dev/loop3"
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            ],
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "lv_name": "ceph_lv0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "lv_size": "7511998464",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "name": "ceph_lv0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "tags": {
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.cluster_name": "ceph",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.crush_device_class": "",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.encrypted": "0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.osd_id": "0",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.type": "block",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:                "ceph.vdo": "0"
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            },
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "type": "block",
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:            "vg_name": "ceph_vg0"
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:        }
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]:    ]
Jan 31 02:39:45 np0005603621 busy_blackburn[244920]: }
Jan 31 02:39:45 np0005603621 systemd[1]: libpod-cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069.scope: Deactivated successfully.
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.907560978 +0000 UTC m=+0.827053018 container died cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:39:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1757b8aaf88ca35771b0d5528469b9aa9e884f843ece3aee3c392eafc312799f-merged.mount: Deactivated successfully.
Jan 31 02:39:45 np0005603621 podman[244901]: 2026-01-31 07:39:45.957102604 +0000 UTC m=+0.876594634 container remove cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_blackburn, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:39:45 np0005603621 systemd[1]: libpod-conmon-cdbbb25b21ebd4228b8d81c2118a4a11b4027fa956401843d27861e78e366069.scope: Deactivated successfully.
Jan 31 02:39:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.431163303 +0000 UTC m=+0.031698486 container create 055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:39:46 np0005603621 systemd[1]: Started libpod-conmon-055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5.scope.
Jan 31 02:39:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.493475769 +0000 UTC m=+0.094010972 container init 055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.49831232 +0000 UTC m=+0.098847503 container start 055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 31 02:39:46 np0005603621 nostalgic_vaughan[245376]: 167 167
Jan 31 02:39:46 np0005603621 systemd[1]: libpod-055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5.scope: Deactivated successfully.
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.502076579 +0000 UTC m=+0.102611802 container attach 055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.504140833 +0000 UTC m=+0.104676056 container died 055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.417017118 +0000 UTC m=+0.017552321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:39:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-238b92c5e89f497dd9215900694d18b9d188998cf2db05baa2fe2f99545b3d53-merged.mount: Deactivated successfully.
Jan 31 02:39:46 np0005603621 python3.9[245344]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 31 02:39:46 np0005603621 podman[245360]: 2026-01-31 07:39:46.550393445 +0000 UTC m=+0.150928658 container remove 055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_vaughan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:39:46 np0005603621 systemd[1]: libpod-conmon-055fceccd7d6bd626d7e5a9f51e45fba979afb0c859e7044c775d75347b18fe5.scope: Deactivated successfully.
Jan 31 02:39:46 np0005603621 podman[245426]: 2026-01-31 07:39:46.668969426 +0000 UTC m=+0.034438301 container create a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:39:46 np0005603621 systemd[1]: Started libpod-conmon-a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03.scope.
Jan 31 02:39:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9a706906b92b367290c8423bf5b70fdd8fc3056e606b5bbcfac71299bf7a2ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9a706906b92b367290c8423bf5b70fdd8fc3056e606b5bbcfac71299bf7a2ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9a706906b92b367290c8423bf5b70fdd8fc3056e606b5bbcfac71299bf7a2ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9a706906b92b367290c8423bf5b70fdd8fc3056e606b5bbcfac71299bf7a2ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:46 np0005603621 podman[245426]: 2026-01-31 07:39:46.743660011 +0000 UTC m=+0.109128886 container init a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:39:46 np0005603621 podman[245426]: 2026-01-31 07:39:46.654708289 +0000 UTC m=+0.020177184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:39:46 np0005603621 podman[245426]: 2026-01-31 07:39:46.75223178 +0000 UTC m=+0.117700655 container start a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:39:46 np0005603621 podman[245426]: 2026-01-31 07:39:46.754931425 +0000 UTC m=+0.120400300 container attach a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:39:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:46.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:47.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:47 np0005603621 python3[245574]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 31 02:39:47 np0005603621 condescending_napier[245442]: {
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:        "osd_id": 0,
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:        "type": "bluestore"
Jan 31 02:39:47 np0005603621 condescending_napier[245442]:    }
Jan 31 02:39:47 np0005603621 condescending_napier[245442]: }
Jan 31 02:39:47 np0005603621 systemd[1]: libpod-a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03.scope: Deactivated successfully.
Jan 31 02:39:47 np0005603621 podman[245426]: 2026-01-31 07:39:47.597450609 +0000 UTC m=+0.962919484 container died a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:39:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a9a706906b92b367290c8423bf5b70fdd8fc3056e606b5bbcfac71299bf7a2ac-merged.mount: Deactivated successfully.
Jan 31 02:39:47 np0005603621 podman[245426]: 2026-01-31 07:39:47.646922151 +0000 UTC m=+1.012391046 container remove a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:39:47 np0005603621 systemd[1]: libpod-conmon-a589c6ab06732f3ac075d6241b4902219a825512d8eb6f7f40b0cac7514ecb03.scope: Deactivated successfully.
Jan 31 02:39:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:39:47 np0005603621 podman[245639]: 2026-01-31 07:39:47.707238274 +0000 UTC m=+0.036653201 container create 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, container_name=nova_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 02:39:47 np0005603621 podman[245639]: 2026-01-31 07:39:47.687260698 +0000 UTC m=+0.016675615 image pull f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 31 02:39:47 np0005603621 python3[245574]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 31 02:39:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:39:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:39:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:39:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 49884c03-a1a1-451f-9ddb-fd7422f4c3b0 does not exist
Jan 31 02:39:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bf3e35db-e8b7-454d-9d87-612deb84e538 does not exist
Jan 31 02:39:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7d0dd602-6882-4f09-9009-2a1cd8d419e8 does not exist
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:39:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3782 writes, 16K keys, 3781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3782 writes, 3781 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1374 writes, 5618 keys, 1373 commit groups, 1.0 writes per commit group, ingest: 9.57 MB, 0.02 MB/s#012Interval WAL: 1374 writes, 1373 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     84.3      0.22              0.05         7    0.031       0      0       0.0       0.0#012  L6      1/0    7.68 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    101.2     83.2      0.60              0.14         6    0.100     26K   3280       0.0       0.0#012 Sum      1/0    7.68 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     74.1     83.5      0.82              0.18        13    0.063     26K   3280       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7     68.9     69.5      0.48              0.07         6    0.079     14K   1981       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    101.2     83.2      0.60              0.14         6    0.100     26K   3280       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     85.7      0.21              0.05         6    0.036       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.018, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 0.8 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 2.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(105,1.80 MB,0.593336%) FilterBlock(14,82.80 KB,0.0265975%) IndexBlock(14,168.55 KB,0.0541436%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:39:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:39:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:39:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:39:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:39:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:48.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:39:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:49.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:49 np0005603621 python3.9[245930]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:50 np0005603621 python3.9[246085]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:39:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:50.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:51 np0005603621 python3.9[246237]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769845190.3953528-3391-162224472768229/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 31 02:39:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:39:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:51.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:39:51 np0005603621 python3.9[246313]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 31 02:39:51 np0005603621 systemd[1]: Reloading.
Jan 31 02:39:51 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:39:51 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:39:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:52 np0005603621 python3.9[246425]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 31 02:39:52 np0005603621 systemd[1]: Reloading.
Jan 31 02:39:52 np0005603621 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 31 02:39:52 np0005603621 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 31 02:39:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:52 np0005603621 systemd[1]: Starting nova_compute container...
Jan 31 02:39:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:39:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:39:53 np0005603621 podman[246465]: 2026-01-31 07:39:53.020357525 +0000 UTC m=+0.114561387 container init 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 02:39:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:53.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:53 np0005603621 podman[246465]: 2026-01-31 07:39:53.031403022 +0000 UTC m=+0.125606874 container start 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + sudo -E kolla_set_configs
Jan 31 02:39:53 np0005603621 podman[246465]: nova_compute
Jan 31 02:39:53 np0005603621 systemd[1]: Started nova_compute container.
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Validating config file
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying service configuration files
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Deleting /etc/ceph
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Creating directory /etc/ceph
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Writing out command to execute
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:39:53 np0005603621 nova_compute[246480]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:39:53 np0005603621 nova_compute[246480]: ++ cat /run_command
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + CMD=nova-compute
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + ARGS=
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + sudo kolla_copy_cacerts
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + [[ ! -n '' ]]
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + . kolla_extend_start
Jan 31 02:39:53 np0005603621 nova_compute[246480]: Running command: 'nova-compute'
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + umask 0022
Jan 31 02:39:53 np0005603621 nova_compute[246480]: + exec nova-compute
Jan 31 02:39:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:54 np0005603621 python3.9[246643]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:54.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:55.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:55 np0005603621 python3.9[246793]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.181 246484 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.182 246484 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.182 246484 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.182 246484 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.335 246484 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.353 246484 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:39:56 np0005603621 nova_compute[246480]: 2026-01-31 07:39:56.354 246484 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:39:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:56 np0005603621 python3.9[246947]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 31 02:39:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:56.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:57.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:57 np0005603621 nova_compute[246480]: 2026-01-31 07:39:57.564 246484 INFO nova.virt.driver [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 02:39:57 np0005603621 nova_compute[246480]: 2026-01-31 07:39:57.981 246484 INFO nova.compute.provider_config [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.029 246484 DEBUG oslo_concurrency.lockutils [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.030 246484 DEBUG oslo_concurrency.lockutils [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.030 246484 DEBUG oslo_concurrency.lockutils [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.031 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.032 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.032 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.032 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.032 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.032 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.032 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.033 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.034 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.035 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.036 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.037 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.037 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.037 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.037 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.037 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.037 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.038 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.039 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.040 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.041 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.042 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.043 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.044 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.045 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.046 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.047 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.048 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.048 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.048 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.048 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.048 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.048 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.049 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.050 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.051 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.052 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.053 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.053 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.053 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.053 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.053 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.053 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.054 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.055 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.056 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.056 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.056 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.056 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.056 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.056 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.057 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.057 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.057 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.057 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.057 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.057 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.058 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.059 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.060 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.061 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.062 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.063 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.064 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.065 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.066 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.067 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.068 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.069 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.070 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.071 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.072 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.073 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.074 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.075 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.076 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.077 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.078 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.079 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.080 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.081 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.081 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.081 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.081 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.081 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.081 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.082 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.083 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.084 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.085 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.086 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.087 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.088 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.089 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.090 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.091 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.092 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.092 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.092 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.092 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.092 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.092 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.093 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.094 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.095 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.096 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.097 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.098 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.099 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.100 246484 WARNING oslo_config.cfg [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 02:39:58 np0005603621 nova_compute[246480]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 02:39:58 np0005603621 nova_compute[246480]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 02:39:58 np0005603621 nova_compute[246480]: and ``live_migration_inbound_addr`` respectively.
Jan 31 02:39:58 np0005603621 nova_compute[246480]: ).  Its value may be silently ignored in the future.#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.100 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.100 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.100 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.100 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.100 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.101 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rbd_secret_uuid        = 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.102 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.103 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.104 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.104 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.104 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.104 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.104 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.104 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.105 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.106 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.106 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.106 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.106 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.106 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.106 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.107 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.108 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.109 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.110 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.111 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.112 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.113 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.114 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.115 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.116 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.117 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.118 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.119 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.119 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.119 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.119 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.119 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.119 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.120 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.121 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.122 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.123 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.124 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.124 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.124 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.124 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.124 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.124 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.125 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.126 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.127 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.128 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.128 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.128 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.128 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.128 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.128 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.129 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.130 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.131 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.132 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.133 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.134 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.134 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.134 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.134 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.134 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.134 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.135 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.135 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.135 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.135 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.135 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.135 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.136 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.137 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.138 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.139 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.140 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.141 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.142 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.143 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.144 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.145 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.145 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.145 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.145 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.145 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.145 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.146 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.147 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.148 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.148 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.148 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.148 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.148 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.149 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.149 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.149 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.150 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.150 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.150 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.151 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.151 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.151 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.152 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.152 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.153 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.153 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.153 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.153 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.153 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.154 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.155 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.156 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.157 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.158 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.159 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.160 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.161 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.162 246484 DEBUG oslo_service.service [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.163 246484 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.193 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.194 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.194 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.195 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 31 02:39:58 np0005603621 systemd[1]: Starting libvirt QEMU daemon...
Jan 31 02:39:58 np0005603621 python3.9[247101]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 02:39:58 np0005603621 systemd[1]: Started libvirt QEMU daemon.
Jan 31 02:39:58 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:39:58 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.252 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd13f90c220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.255 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd13f90c220> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.256 246484 INFO nova.virt.libvirt.driver [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.285 246484 WARNING nova.virt.libvirt.driver [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 31 02:39:58 np0005603621 nova_compute[246480]: 2026-01-31 07:39:58.286 246484 DEBUG nova.virt.libvirt.volume.mount [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 31 02:39:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:39:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:39:58.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.006 246484 INFO nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <host>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <uuid>4e415482-4f51-40cd-acc4-a0d3058a31bb</uuid>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <arch>x86_64</arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model>EPYC-Rome-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <vendor>AMD</vendor>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <microcode version='16777317'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <signature family='23' model='49' stepping='0'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='x2apic'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='tsc-deadline'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='osxsave'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='hypervisor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='tsc_adjust'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='spec-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='stibp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='arch-capabilities'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='cmp_legacy'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='topoext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='virt-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='lbrv'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='tsc-scale'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='vmcb-clean'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='pause-filter'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='pfthreshold'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='svme-addr-chk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='rdctl-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='skip-l1dfl-vmentry'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='mds-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature name='pschange-mc-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <pages unit='KiB' size='4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <pages unit='KiB' size='2048'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <pages unit='KiB' size='1048576'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <power_management>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <suspend_mem/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </power_management>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <iommu support='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <migration_features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <live/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <uri_transports>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <uri_transport>tcp</uri_transport>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <uri_transport>rdma</uri_transport>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </uri_transports>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </migration_features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <topology>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <cells num='1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <cell id='0'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          <memory unit='KiB'>7864292</memory>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          <pages unit='KiB' size='4'>1966073</pages>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          <pages unit='KiB' size='2048'>0</pages>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          <distances>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <sibling id='0' value='10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          </distances>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          <cpus num='8'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:          </cpus>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        </cell>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </cells>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </topology>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <cache>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </cache>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <secmodel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model>selinux</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <doi>0</doi>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </secmodel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <secmodel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model>dac</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <doi>0</doi>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </secmodel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </host>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <guest>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <os_type>hvm</os_type>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <arch name='i686'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <wordsize>32</wordsize>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <domain type='qemu'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <domain type='kvm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <pae/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <nonpae/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <acpi default='on' toggle='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <apic default='on' toggle='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <cpuselection/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <deviceboot/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <externalSnapshot/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </guest>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <guest>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <os_type>hvm</os_type>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <arch name='x86_64'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <wordsize>64</wordsize>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <domain type='qemu'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <domain type='kvm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <acpi default='on' toggle='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <apic default='on' toggle='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <cpuselection/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <deviceboot/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <externalSnapshot/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </guest>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 
Jan 31 02:39:59 np0005603621 nova_compute[246480]: </capabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.012 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.026 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 31 02:39:59 np0005603621 nova_compute[246480]: <domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <domain>kvm</domain>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <arch>i686</arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <vcpu max='4096'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <iothreads supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <os supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='firmware'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <loader supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>rom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pflash</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='readonly'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>yes</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='secure'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </loader>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </os>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='maximum' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='maximumMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-model' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <vendor>AMD</vendor>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='x2apic'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='stibp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='succor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lbrv'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='custom' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Dhyana-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:39:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 python3.9[247334]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <memoryBacking supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='sourceType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>anonymous</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>memfd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </memoryBacking>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <disk supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='diskDevice'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>disk</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cdrom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>floppy</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>lun</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>fdc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>sata</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </disk>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <graphics supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vnc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egl-headless</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </graphics>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <video supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='modelType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vga</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cirrus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>none</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>bochs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ramfb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </video>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hostdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='mode'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>subsystem</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='startupPolicy'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>mandatory</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>requisite</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>optional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='subsysType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pci</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='capsType'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='pciBackend'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hostdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <rng supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>random</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </rng>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <filesystem supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='driverType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>path</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>handle</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtiofs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </filesystem>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tpm supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-tis</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-crb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emulator</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>external</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendVersion'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>2.0</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </tpm>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <redirdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </redirdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <channel supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </channel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <crypto supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </crypto>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <interface supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>passt</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </interface>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <panic supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>isa</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>hyperv</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </panic>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <console supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>null</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dev</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pipe</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stdio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>udp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tcp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu-vdagent</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </console>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <gic supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <vmcoreinfo supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <genid supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backingStoreInput supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backup supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <async-teardown supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <s390-pv supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <ps2 supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tdx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sev supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sgx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hyperv supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='features'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>relaxed</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vapic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>spinlocks</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vpindex</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>runtime</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>synic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stimer</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reset</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vendor_id</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>frequencies</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reenlightenment</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tlbflush</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ipi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>avic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emsr_bitmap</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>xmm_input</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <spinlocks>4095</spinlocks>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <stimer_direct>on</stimer_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hyperv>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <launchSecurity supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: </domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.033 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 02:39:59 np0005603621 nova_compute[246480]: <domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <domain>kvm</domain>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <arch>i686</arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <vcpu max='240'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <iothreads supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <os supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='firmware'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <loader supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>rom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pflash</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='readonly'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>yes</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='secure'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </loader>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </os>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='maximum' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='maximumMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-model' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <vendor>AMD</vendor>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='x2apic'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='stibp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='succor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lbrv'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='custom' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Dhyana-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 systemd[1]: Stopping nova_compute container...
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <memoryBacking supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='sourceType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>anonymous</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>memfd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </memoryBacking>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <disk supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='diskDevice'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>disk</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cdrom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>floppy</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>lun</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ide</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>fdc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>sata</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </disk>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <graphics supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vnc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egl-headless</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </graphics>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <video supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='modelType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vga</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cirrus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>none</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>bochs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ramfb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </video>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hostdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='mode'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>subsystem</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='startupPolicy'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>mandatory</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>requisite</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>optional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='subsysType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pci</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='capsType'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='pciBackend'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hostdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <rng supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>random</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </rng>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <filesystem supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='driverType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>path</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>handle</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtiofs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </filesystem>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tpm supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-tis</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-crb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emulator</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>external</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendVersion'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>2.0</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </tpm>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <redirdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </redirdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <channel supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </channel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <crypto supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </crypto>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <interface supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>passt</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </interface>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <panic supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>isa</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>hyperv</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </panic>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <console supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>null</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dev</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pipe</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stdio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>udp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tcp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu-vdagent</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </console>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <gic supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <vmcoreinfo supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <genid supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backingStoreInput supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backup supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <async-teardown supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <s390-pv supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <ps2 supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tdx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sev supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sgx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hyperv supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='features'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>relaxed</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vapic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>spinlocks</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vpindex</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>runtime</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>synic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stimer</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reset</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vendor_id</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>frequencies</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reenlightenment</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tlbflush</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ipi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>avic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emsr_bitmap</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>xmm_input</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <spinlocks>4095</spinlocks>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <stimer_direct>on</stimer_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hyperv>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <launchSecurity supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: </domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.076 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.080 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 31 02:39:59 np0005603621 nova_compute[246480]: <domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <domain>kvm</domain>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <arch>x86_64</arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <vcpu max='4096'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <iothreads supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <os supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='firmware'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>efi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <loader supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>rom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pflash</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='readonly'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>yes</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='secure'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>yes</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </loader>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </os>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='maximum' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='maximumMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-model' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <vendor>AMD</vendor>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='x2apic'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='stibp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='succor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lbrv'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='custom' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Dhyana-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <memoryBacking supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='sourceType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>anonymous</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>memfd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </memoryBacking>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <disk supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='diskDevice'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>disk</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cdrom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>floppy</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>lun</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>fdc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>sata</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </disk>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <graphics supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vnc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egl-headless</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </graphics>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <video supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='modelType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vga</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cirrus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>none</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>bochs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ramfb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </video>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hostdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='mode'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>subsystem</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='startupPolicy'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>mandatory</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>requisite</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>optional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='subsysType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pci</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='capsType'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='pciBackend'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hostdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <rng supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>random</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </rng>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <filesystem supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='driverType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>path</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>handle</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtiofs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </filesystem>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tpm supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-tis</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-crb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emulator</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>external</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendVersion'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>2.0</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </tpm>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <redirdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </redirdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <channel supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </channel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <crypto supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </crypto>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <interface supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>passt</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </interface>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <panic supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>isa</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>hyperv</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </panic>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <console supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>null</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dev</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pipe</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stdio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>udp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tcp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu-vdagent</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </console>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <gic supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <vmcoreinfo supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <genid supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backingStoreInput supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backup supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <async-teardown supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <s390-pv supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <ps2 supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tdx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sev supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sgx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hyperv supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='features'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>relaxed</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vapic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>spinlocks</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vpindex</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>runtime</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>synic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stimer</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reset</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vendor_id</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>frequencies</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reenlightenment</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tlbflush</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ipi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>avic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emsr_bitmap</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>xmm_input</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <spinlocks>4095</spinlocks>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <stimer_direct>on</stimer_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hyperv>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <launchSecurity supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: </domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.149 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 31 02:39:59 np0005603621 nova_compute[246480]: <domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <domain>kvm</domain>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <arch>x86_64</arch>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <vcpu max='240'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <iothreads supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <os supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='firmware'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <loader supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>rom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pflash</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='readonly'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>yes</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='secure'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>no</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </loader>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </os>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='maximum' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='maximumMigratable'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>on</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>off</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='host-model' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <vendor>AMD</vendor>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='x2apic'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='stibp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='succor'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lbrv'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <mode name='custom' supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Broadwell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ddpd-u'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sha512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm3'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sm4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Cooperlake-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Denverton-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Dhyana-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amd-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='auto-ibrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibpb-brtype'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='no-nested-data-bp'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='null-sel-clr-base'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='perfmon-v2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbpb'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='stibp-always-on'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='EPYC-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-128'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-256'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx10-512'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='prefetchiti'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Haswell-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='IvyBridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='KnightsMill-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4fmaps'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-4vnniw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512er'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512pf'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fma4'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tbm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xop'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='amx-tile'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-bf16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-fp16'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bitalg'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vbmi2'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrc'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fzrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='la57'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='taa-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='tsx-ldtrk'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='SierraForest-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ifma'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-ne-convert'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx-vnni-int8'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bhi-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='bus-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cmpccxadd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fbsdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='fsrs'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ibrs-all'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='intel-psfd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ipred-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='lam'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mcdt-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pbrsb-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='psdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rrsba-ctrl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='serialize'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vaes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='vpclmulqdq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Client-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='hle'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='rtm'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Skylake-Server-v5'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512bw'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512cd'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512dq'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512f'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='avx512vl'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='invpcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pcid'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='pku'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='mpx'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v2'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v3'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='core-capability'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='split-lock-detect'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='Snowridge-v4'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='cldemote'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='erms'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='gfni'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdir64b'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='movdiri'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='xsaves'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='athlon-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='core2duo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='coreduo-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='n270-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='ss'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <blockers model='phenom-v1'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnow'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <feature name='3dnowext'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </blockers>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </mode>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <memoryBacking supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <enum name='sourceType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>anonymous</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <value>memfd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </memoryBacking>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <disk supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='diskDevice'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>disk</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cdrom</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>floppy</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>lun</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ide</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>fdc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>sata</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </disk>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <graphics supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vnc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egl-headless</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </graphics>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <video supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='modelType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vga</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>cirrus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>none</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>bochs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ramfb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </video>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hostdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='mode'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>subsystem</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='startupPolicy'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>mandatory</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>requisite</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>optional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='subsysType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pci</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>scsi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='capsType'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='pciBackend'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hostdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <rng supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtio-non-transitional</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>random</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>egd</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </rng>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <filesystem supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='driverType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>path</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>handle</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>virtiofs</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </filesystem>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tpm supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-tis</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tpm-crb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emulator</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>external</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendVersion'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>2.0</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </tpm>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <redirdev supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='bus'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>usb</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </redirdev>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <channel supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </channel>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <crypto supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendModel'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>builtin</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </crypto>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <interface supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='backendType'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>default</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>passt</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </interface>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <panic supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='model'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>isa</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>hyperv</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </panic>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <console supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='type'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>null</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vc</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pty</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dev</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>file</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>pipe</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stdio</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>udp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tcp</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>unix</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>qemu-vdagent</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>dbus</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </console>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </devices>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <gic supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <vmcoreinfo supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <genid supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backingStoreInput supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <backup supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <async-teardown supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <s390-pv supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <ps2 supported='yes'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <tdx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sev supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <sgx supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <hyperv supported='yes'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <enum name='features'>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>relaxed</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vapic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>spinlocks</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vpindex</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>runtime</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>synic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>stimer</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reset</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>vendor_id</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>frequencies</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>reenlightenment</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>tlbflush</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>ipi</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>avic</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>emsr_bitmap</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <value>xmm_input</value>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </enum>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      <defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <spinlocks>4095</spinlocks>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <stimer_direct>on</stimer_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_direct>on</tlbflush_direct>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <tlbflush_extended>on</tlbflush_extended>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:      </defaults>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    </hyperv>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:    <launchSecurity supported='no'/>
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  </features>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: </domainCapabilities>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.201 246484 DEBUG nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.202 246484 INFO nova.virt.libvirt.host [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Secure Boot support detected#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.203 246484 INFO nova.virt.libvirt.driver [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.213 246484 DEBUG nova.virt.libvirt.driver [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] cpu compare xml: <cpu match="exact">
Jan 31 02:39:59 np0005603621 nova_compute[246480]:  <model>Nehalem</model>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: </cpu>
Jan 31 02:39:59 np0005603621 nova_compute[246480]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.216 246484 DEBUG nova.virt.libvirt.driver [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.247 246484 INFO nova.virt.node [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Determined node identity d7116329-87c2-469a-b33a-1e01daf74ceb from /var/lib/nova/compute_id#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.266 246484 WARNING nova.compute.manager [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Compute nodes ['d7116329-87c2-469a-b33a-1e01daf74ceb'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.314 246484 INFO nova.compute.manager [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.347 246484 WARNING nova.compute.manager [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.348 246484 DEBUG oslo_concurrency.lockutils [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.348 246484 DEBUG oslo_concurrency.lockutils [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.348 246484 DEBUG oslo_concurrency.lockutils [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.348 246484 DEBUG nova.compute.resource_tracker [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.349 246484 DEBUG oslo_concurrency.processutils [None req-fedb6bf1-d64e-47db-9244-fe957f7b161b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.371 246484 DEBUG oslo_concurrency.lockutils [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.372 246484 DEBUG oslo_concurrency.lockutils [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:39:59 np0005603621 nova_compute[246480]: 2026-01-31 07:39:59.372 246484 DEBUG oslo_concurrency.lockutils [None req-8aa912db-135a-47c8-83c6-2a3bee5b2d55 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:39:59 np0005603621 virtqemud[247123]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 02:39:59 np0005603621 virtqemud[247123]: hostname: compute-0
Jan 31 02:39:59 np0005603621 virtqemud[247123]: End of file while reading data: Input/output error
Jan 31 02:39:59 np0005603621 systemd[1]: libpod-16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87.scope: Deactivated successfully.
Jan 31 02:39:59 np0005603621 systemd[1]: libpod-16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87.scope: Consumed 3.135s CPU time.
Jan 31 02:39:59 np0005603621 podman[247342]: 2026-01-31 07:39:59.829282691 +0000 UTC m=+0.726324337 container died 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 02:39:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87-userdata-shm.mount: Deactivated successfully.
Jan 31 02:39:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c-merged.mount: Deactivated successfully.
Jan 31 02:39:59 np0005603621 podman[247342]: 2026-01-31 07:39:59.887462288 +0000 UTC m=+0.784503934 container cleanup 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:39:59 np0005603621 podman[247342]: nova_compute
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.946985) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845199947029, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1112, "num_deletes": 255, "total_data_size": 1815984, "memory_usage": 1845792, "flush_reason": "Manual Compaction"}
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 31 02:39:59 np0005603621 podman[247373]: nova_compute
Jan 31 02:39:59 np0005603621 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 31 02:39:59 np0005603621 systemd[1]: Stopped nova_compute container.
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845199966449, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1767994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15743, "largest_seqno": 16854, "table_properties": {"data_size": 1762781, "index_size": 2673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11135, "raw_average_key_size": 18, "raw_value_size": 1752061, "raw_average_value_size": 2984, "num_data_blocks": 121, "num_entries": 587, "num_filter_entries": 587, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845075, "oldest_key_time": 1769845075, "file_creation_time": 1769845199, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 19533 microseconds, and 3887 cpu microseconds.
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:39:59 np0005603621 systemd[1]: Starting nova_compute container...
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.966517) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1767994 bytes OK
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.966534) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.971381) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.971412) EVENT_LOG_v1 {"time_micros": 1769845199971406, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.971434) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1810981, prev total WAL file size 1810981, number of live WAL files 2.
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.972127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1726KB)], [35(7867KB)]
Jan 31 02:39:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845199972161, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9824559, "oldest_snapshot_seqno": -1}
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4248 keys, 9463355 bytes, temperature: kUnknown
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845200048765, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9463355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9432504, "index_size": 19157, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 105928, "raw_average_key_size": 24, "raw_value_size": 9353051, "raw_average_value_size": 2201, "num_data_blocks": 801, "num_entries": 4248, "num_filter_entries": 4248, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845199, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.049031) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9463355 bytes
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.062596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.1 rd, 123.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(10.9) write-amplify(5.4) OK, records in: 4776, records dropped: 528 output_compression: NoCompression
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.062628) EVENT_LOG_v1 {"time_micros": 1769845200062612, "job": 16, "event": "compaction_finished", "compaction_time_micros": 76707, "compaction_time_cpu_micros": 15459, "output_level": 6, "num_output_files": 1, "total_output_size": 9463355, "num_input_records": 4776, "num_output_records": 4248, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845200063008, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845200064032, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:39:59.971898) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.064152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.064163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.064167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.064170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:40:00 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:40:00.064174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:40:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5700937b5df4b0ada5a5a64acd02557c30327e3c17146608882ad64e4c9145c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:00 np0005603621 podman[247384]: 2026-01-31 07:40:00.117166678 +0000 UTC m=+0.140481001 container init 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible)
Jan 31 02:40:00 np0005603621 podman[247384]: 2026-01-31 07:40:00.120940076 +0000 UTC m=+0.144254369 container start 16f3cf77eee09ab7709c67d820482fbb1b64354510093143a021a8f1ec2ccd87 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 02:40:00 np0005603621 podman[247384]: nova_compute
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + sudo -E kolla_set_configs
Jan 31 02:40:00 np0005603621 systemd[1]: Started nova_compute container.
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Validating config file
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying service configuration files
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /etc/ceph
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Creating directory /etc/ceph
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/ceph
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Writing out command to execute
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:40:00 np0005603621 nova_compute[247399]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 31 02:40:00 np0005603621 nova_compute[247399]: ++ cat /run_command
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + CMD=nova-compute
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + ARGS=
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + sudo kolla_copy_cacerts
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + [[ ! -n '' ]]
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + . kolla_extend_start
Jan 31 02:40:00 np0005603621 nova_compute[247399]: Running command: 'nova-compute'
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + echo 'Running command: '\''nova-compute'\'''
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + umask 0022
Jan 31 02:40:00 np0005603621 nova_compute[247399]: + exec nova-compute
Jan 31 02:40:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:00.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:01 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 02:40:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:01 np0005603621 python3.9[247562]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 31 02:40:01 np0005603621 systemd[1]: Started libpod-conmon-d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f.scope.
Jan 31 02:40:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ec11cdbcb40b771b81b61440ed9cd40aede5303662182c175c7663a47d31a/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ec11cdbcb40b771b81b61440ed9cd40aede5303662182c175c7663a47d31a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d0ec11cdbcb40b771b81b61440ed9cd40aede5303662182c175c7663a47d31a/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:01 np0005603621 podman[247589]: 2026-01-31 07:40:01.459947943 +0000 UTC m=+0.266381232 container init d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 31 02:40:01 np0005603621 podman[247589]: 2026-01-31 07:40:01.469964157 +0000 UTC m=+0.276397396 container start d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm)
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Applying nova statedir ownership
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 31 02:40:01 np0005603621 nova_compute_init[247610]: INFO:nova_statedir:Nova statedir ownership complete
Jan 31 02:40:01 np0005603621 systemd[1]: libpod-d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f.scope: Deactivated successfully.
Jan 31 02:40:01 np0005603621 python3.9[247562]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 31 02:40:01 np0005603621 podman[247611]: 2026-01-31 07:40:01.576899003 +0000 UTC m=+0.030805578 container died d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:40:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f-userdata-shm.mount: Deactivated successfully.
Jan 31 02:40:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6d0ec11cdbcb40b771b81b61440ed9cd40aede5303662182c175c7663a47d31a-merged.mount: Deactivated successfully.
Jan 31 02:40:01 np0005603621 podman[247611]: 2026-01-31 07:40:01.884129316 +0000 UTC m=+0.338035841 container cleanup d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Jan 31 02:40:01 np0005603621 systemd[1]: libpod-conmon-d831b7a5c844f5c0f6dcbb27e6cebb99677ec9ff9eb039872a437b5a2008a89f.scope: Deactivated successfully.
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.049 247403 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.050 247403 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.050 247403 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.050 247403 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.180 247403 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.189 247403 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.189 247403 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:40:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:02 np0005603621 systemd[1]: session-50.scope: Deactivated successfully.
Jan 31 02:40:02 np0005603621 systemd[1]: session-50.scope: Consumed 1min 48.598s CPU time.
Jan 31 02:40:02 np0005603621 systemd-logind[818]: Session 50 logged out. Waiting for processes to exit.
Jan 31 02:40:02 np0005603621 systemd-logind[818]: Removed session 50.
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.664 247403 INFO nova.virt.driver [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.783 247403 INFO nova.compute.provider_config [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.795 247403 DEBUG oslo_concurrency.lockutils [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.795 247403 DEBUG oslo_concurrency.lockutils [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.796 247403 DEBUG oslo_concurrency.lockutils [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.796 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.796 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.796 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.797 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.797 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.797 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.797 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.797 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.797 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.798 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.798 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.798 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.798 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.798 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.799 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.799 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.799 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.799 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.799 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.800 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.800 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.800 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.800 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.800 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.801 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.801 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.801 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.801 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.801 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.801 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.802 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.802 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.802 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.802 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.802 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.802 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.803 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.803 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.803 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.803 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.803 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.803 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.804 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.804 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.804 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.804 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.804 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.805 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.805 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.805 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.805 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.805 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.806 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.806 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.806 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.806 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.806 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.806 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.807 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.807 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.807 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.807 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.807 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.807 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.808 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.808 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.808 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.808 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.808 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.808 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.809 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.809 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.809 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.809 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.809 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.810 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.810 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.810 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.810 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.810 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.811 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.811 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.811 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.811 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.811 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.811 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.812 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.812 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.812 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.812 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.812 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.812 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.813 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.813 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.813 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.813 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.813 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.813 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.814 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.814 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.814 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.814 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.814 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.815 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.815 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.815 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.815 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.815 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.815 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.816 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.817 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.818 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.818 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.818 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.818 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.818 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.818 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.819 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.820 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.820 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.820 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.820 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.820 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.820 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.821 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.821 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.821 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.821 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.821 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.821 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.822 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.822 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.822 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.822 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.822 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.823 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.824 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.825 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.826 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.826 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.826 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.826 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.826 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.826 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.827 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.828 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.829 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.829 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.829 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.829 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.829 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.829 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.830 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.830 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.830 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.830 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.830 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.830 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.831 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.831 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.831 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.831 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.831 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.831 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.832 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:02.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.833 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.834 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.835 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.835 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.835 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.835 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.835 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.835 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.836 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.836 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.836 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.836 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.836 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.837 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.837 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.837 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.837 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.837 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.838 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.838 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.838 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.838 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.838 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.839 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.839 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.839 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.839 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.839 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.840 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.840 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.840 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.840 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.840 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.841 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.841 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.841 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.841 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.841 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.842 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.842 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.842 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.842 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.842 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.843 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.843 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.843 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.843 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.843 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.844 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.844 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.844 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.844 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.844 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.844 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.845 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.845 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.845 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.845 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.845 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.846 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.846 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.846 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.846 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.846 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.847 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.847 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.847 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.847 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.847 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.848 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.848 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.848 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.848 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.848 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.849 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.849 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.849 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.849 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.849 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.850 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.850 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.850 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.850 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.850 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.851 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.851 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.851 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.851 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.851 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.851 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.852 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.852 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.852 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.852 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.852 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.853 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.853 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.853 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.853 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.854 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.854 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.854 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.854 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.854 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.855 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.855 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.855 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.855 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.855 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.856 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.856 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.856 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.856 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.857 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.857 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.857 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.857 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.857 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.858 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.858 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.858 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.858 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.858 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.859 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.859 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.859 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.859 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.860 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.860 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.860 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.860 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.860 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.861 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.861 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.861 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.861 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.861 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.861 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.862 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.862 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.862 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.862 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.862 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.863 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.863 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.863 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.863 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.863 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.864 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.864 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.864 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.864 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.864 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.864 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.865 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.865 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.865 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.865 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.865 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.866 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.866 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.866 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.866 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.866 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.867 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.867 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.867 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.867 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.867 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.868 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.868 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.868 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.868 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.868 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.869 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.869 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.869 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.869 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.869 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.869 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.870 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.870 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.870 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.870 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.870 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.871 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.871 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.871 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.871 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.871 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.872 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.872 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.872 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.872 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.872 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.872 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.873 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.873 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.873 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.873 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.873 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.874 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.874 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.874 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.874 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.874 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.875 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.875 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.875 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.875 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.875 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.875 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.876 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.876 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.876 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.876 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.876 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.877 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.877 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.877 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.877 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.877 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.878 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.878 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.878 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.878 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.878 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.879 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.879 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.879 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.879 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.879 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.880 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.880 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.880 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.880 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.880 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.880 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.881 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.881 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.881 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.881 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.881 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.882 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.882 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.882 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.882 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.882 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.883 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.883 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.883 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.883 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.883 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.884 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.884 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.884 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.884 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.884 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.885 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.885 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.885 247403 WARNING oslo_config.cfg [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 31 02:40:02 np0005603621 nova_compute[247399]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 31 02:40:02 np0005603621 nova_compute[247399]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 31 02:40:02 np0005603621 nova_compute[247399]: and ``live_migration_inbound_addr`` respectively.
Jan 31 02:40:02 np0005603621 nova_compute[247399]: ).  Its value may be silently ignored in the future.#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.886 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.886 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.886 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.886 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.886 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.887 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.887 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.887 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.887 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.887 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.888 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.888 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.888 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.888 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.888 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.889 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.889 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.889 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.889 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rbd_secret_uuid        = 2f5ab832-5f2e-5a84-bd93-cf8bab960ee2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.889 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.890 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.890 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.890 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.890 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.890 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.891 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.891 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.891 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.891 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.891 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.892 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.892 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.892 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.892 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.893 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.893 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.893 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.893 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.893 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.893 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.894 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.894 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.894 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.894 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.895 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.895 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.895 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.895 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.895 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.896 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.896 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.896 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.896 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.896 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.897 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.897 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.897 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.897 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.897 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.898 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.898 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.898 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.898 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.898 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.899 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.899 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.899 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.899 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.899 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.900 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.900 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.900 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.900 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.900 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.901 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.901 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.901 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.901 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.901 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.902 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.902 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.902 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.902 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.902 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.903 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.903 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.903 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.903 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.903 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.904 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.904 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.904 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.904 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.904 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.905 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.905 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.905 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.905 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.905 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.905 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.906 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.906 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.906 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.906 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.906 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.907 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.907 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.907 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.907 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.907 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.908 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.908 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.908 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.908 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.908 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.909 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.909 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.909 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.909 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.909 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.909 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.910 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.910 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.910 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.910 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.910 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.911 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.911 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.911 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.911 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.911 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.912 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.912 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.912 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.912 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.912 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.913 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.913 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.913 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.913 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.914 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.914 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.914 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.914 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.914 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.914 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.915 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.915 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.915 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.915 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.916 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.916 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.916 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.916 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.917 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.917 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.917 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.918 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.918 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.919 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.919 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.919 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.920 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.920 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.921 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.921 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.922 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.922 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.923 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.923 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.923 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.924 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.924 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.925 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.925 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.926 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.926 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.926 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.927 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.927 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.928 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.928 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.929 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.929 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.929 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.930 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.930 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.931 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.931 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.931 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.932 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.932 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.933 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.933 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.934 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.934 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.934 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.935 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.936 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.936 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.937 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.937 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.937 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.938 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.938 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.939 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.939 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.939 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.939 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.940 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.940 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.941 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.941 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.941 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.942 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.942 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.943 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.943 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.944 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.944 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.944 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.945 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.945 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.946 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.946 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.947 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.947 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.947 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.948 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.948 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.948 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.949 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.949 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.950 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.950 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.950 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.951 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.951 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.952 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.952 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.952 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.953 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.953 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.954 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.954 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.954 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.955 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.955 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.956 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.956 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.957 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.957 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.958 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.958 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.958 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.959 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.959 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.959 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.960 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.960 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.960 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.961 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.961 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.962 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.962 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.962 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.963 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.963 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.963 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.964 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.964 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.964 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.965 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.965 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.966 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.966 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.967 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.967 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.968 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.968 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.968 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.969 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.969 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.970 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.970 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.971 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.971 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.971 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.972 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.972 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.972 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.972 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.973 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.973 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.973 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.973 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.973 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.974 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.974 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.974 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.974 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.974 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.975 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.975 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.975 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.975 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.975 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.976 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.976 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.976 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.976 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.976 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.977 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.977 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.977 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.977 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.977 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.978 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.978 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.978 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.978 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.978 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.979 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.979 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.979 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.979 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.979 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.980 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.980 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.980 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.980 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.980 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.981 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.981 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.981 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.981 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.981 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.982 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.982 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.982 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.982 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.982 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.983 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.983 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.983 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.983 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.983 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.984 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.984 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.984 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.984 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.985 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.985 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.985 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.985 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.985 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.985 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.986 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.986 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.986 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.986 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.986 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.987 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.987 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.987 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.987 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.987 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.987 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.988 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.989 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.990 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.991 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.991 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.991 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.991 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.991 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.991 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.992 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.993 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.994 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.995 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.995 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.995 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.995 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.995 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.995 247403 DEBUG oslo_service.service [None req-03d42464-0258-4822-a719-f9a0679fa8be - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 31 02:40:02 np0005603621 nova_compute[247399]: 2026-01-31 07:40:02.996 247403 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.021 247403 INFO nova.virt.node [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Determined node identity d7116329-87c2-469a-b33a-1e01daf74ceb from /var/lib/nova/compute_id#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.021 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.022 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.022 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.022 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.031 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbfae9c2580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.033 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbfae9c2580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.034 247403 INFO nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.037 247403 INFO nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Libvirt host capabilities <capabilities>
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <host>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <uuid>4e415482-4f51-40cd-acc4-a0d3058a31bb</uuid>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <cpu>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <arch>x86_64</arch>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model>EPYC-Rome-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <vendor>AMD</vendor>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <microcode version='16777317'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <signature family='23' model='49' stepping='0'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='x2apic'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='tsc-deadline'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='osxsave'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='hypervisor'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='tsc_adjust'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='spec-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='stibp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='arch-capabilities'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='ssbd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='cmp_legacy'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='topoext'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='virt-ssbd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='lbrv'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='tsc-scale'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='vmcb-clean'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='pause-filter'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='pfthreshold'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='svme-addr-chk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='rdctl-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='skip-l1dfl-vmentry'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='mds-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature name='pschange-mc-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <pages unit='KiB' size='4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <pages unit='KiB' size='2048'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <pages unit='KiB' size='1048576'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </cpu>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <power_management>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <suspend_mem/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </power_management>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <iommu support='no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <migration_features>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <live/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <uri_transports>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <uri_transport>tcp</uri_transport>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <uri_transport>rdma</uri_transport>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </uri_transports>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </migration_features>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <topology>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <cells num='1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <cell id='0'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          <memory unit='KiB'>7864292</memory>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          <pages unit='KiB' size='4'>1966073</pages>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          <pages unit='KiB' size='2048'>0</pages>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          <distances>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <sibling id='0' value='10'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          </distances>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          <cpus num='8'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:          </cpus>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        </cell>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </cells>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </topology>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <cache>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </cache>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <secmodel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model>selinux</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <doi>0</doi>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </secmodel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <secmodel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model>dac</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <doi>0</doi>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </secmodel>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  </host>
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <guest>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <os_type>hvm</os_type>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <arch name='i686'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <wordsize>32</wordsize>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <domain type='qemu'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <domain type='kvm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </arch>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <features>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <pae/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <nonpae/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <acpi default='on' toggle='yes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <apic default='on' toggle='no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <cpuselection/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <deviceboot/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <externalSnapshot/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </features>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  </guest>
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <guest>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <os_type>hvm</os_type>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <arch name='x86_64'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <wordsize>64</wordsize>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <domain type='qemu'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <domain type='kvm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </arch>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <features>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <acpi default='on' toggle='yes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <apic default='on' toggle='no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <cpuselection/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <deviceboot/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <disksnapshot default='on' toggle='no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <externalSnapshot/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </features>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  </guest>
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 
Jan 31 02:40:03 np0005603621 nova_compute[247399]: </capabilities>
Jan 31 02:40:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:03.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.044 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 31 02:40:03 np0005603621 nova_compute[247399]: 2026-01-31 07:40:03.047 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 31 02:40:03 np0005603621 nova_compute[247399]: <domainCapabilities>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <path>/usr/libexec/qemu-kvm</path>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <domain>kvm</domain>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <arch>i686</arch>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <vcpu max='240'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <iothreads supported='yes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <os supported='yes'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <enum name='firmware'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <loader supported='yes'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <enum name='type'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>rom</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>pflash</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </enum>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <enum name='readonly'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>yes</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>no</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </enum>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <enum name='secure'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>no</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </enum>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </loader>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:  <cpu>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <mode name='host-passthrough' supported='yes'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <enum name='hostPassthroughMigratable'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>on</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>off</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </enum>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </mode>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <mode name='maximum' supported='yes'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <enum name='maximumMigratable'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>on</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <value>off</value>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </enum>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </mode>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <mode name='host-model' supported='yes'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <vendor>AMD</vendor>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='x2apic'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='tsc-deadline'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='hypervisor'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='tsc_adjust'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='spec-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='stibp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='ssbd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='cmp_legacy'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='overflow-recov'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='succor'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='ibrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='amd-ssbd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='virt-ssbd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='lbrv'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='tsc-scale'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='vmcb-clean'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='flushbyasid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='pause-filter'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='pfthreshold'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='svme-addr-chk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <feature policy='disable' name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    </mode>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:    <mode name='custom' supported='yes'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-noTSX'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Broadwell-v4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server-v4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cascadelake-Server-v5'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='ClearwaterForest'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ne-convert'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bhi-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bhi-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cmpccxadd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ddpd-u'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='intel-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ipred-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='lam'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchiti'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rrsba-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sha512'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sm3'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sm4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='ClearwaterForest-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ne-convert'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bhi-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bhi-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cmpccxadd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ddpd-u'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='intel-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ipred-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='lam'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchiti'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rrsba-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sha512'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sm3'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sm4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cooperlake'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cooperlake-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Cooperlake-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Denverton'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mpx'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Denverton-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mpx'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Denverton-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Denverton-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Dhyana-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Genoa'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='auto-ibrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Genoa-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='auto-ibrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Genoa-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='auto-ibrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='perfmon-v2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Milan'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Milan-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Milan-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Milan-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Rome'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Rome-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Rome-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Rome-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Turin'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='auto-ibrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibpb-brtype'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='perfmon-v2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbpb'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-Turin-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amd-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='auto-ibrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vp2intersect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fs-gs-base-ns'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibpb-brtype'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='no-nested-data-bp'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='null-sel-clr-base'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='perfmon-v2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbpb'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='srso-user-kernel-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='stibp-always-on'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-v4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='EPYC-v5'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='GraniteRapids'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchiti'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='GraniteRapids-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchiti'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='GraniteRapids-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10-128'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10-256'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10-512'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchiti'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='GraniteRapids-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10-128'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10-256'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx10-512'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='prefetchiti'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-noTSX'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Haswell-v4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-noTSX'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v5'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v6'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Icelake-Server-v7'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='IvyBridge'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='IvyBridge-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='IvyBridge-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='IvyBridge-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='KnightsMill'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-4fmaps'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-4vnniw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512er'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512pf'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='KnightsMill-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-4fmaps'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-4vnniw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512er'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512pf'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Opteron_G4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fma4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xop'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Opteron_G4-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fma4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xop'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Opteron_G5'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fma4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tbm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xop'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Opteron_G5-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fma4'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tbm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xop'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SapphireRapids'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SapphireRapids-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SapphireRapids-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SapphireRapids-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SapphireRapids-v4'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='amx-tile'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-bf16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-fp16'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512-vpopcntdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bitalg'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512bw'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512cd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512dq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512f'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vbmi2'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx512vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrc'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fzrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='la57'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='taa-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='tsx-ldtrk'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SierraForest'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ne-convert'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cmpccxadd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SierraForest-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ne-convert'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cmpccxadd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SierraForest-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ne-convert'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bhi-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cmpccxadd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='intel-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ipred-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='lam'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rrsba-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='SierraForest-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ifma'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-ne-convert'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='avx-vnni-int8'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bhi-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='bus-lock-detect'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cldemote'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='cmpccxadd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fbsdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='fsrs'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='gfni'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ibrs-all'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='intel-psfd'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ipred-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='lam'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='mcdt-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdir64b'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='movdiri'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pbrsb-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pku'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='psdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rrsba-ctrl'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='sbdr-ssdp-no'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='serialize'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='ss'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vaes'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='vpclmulqdq'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='xsaves'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Skylake-Client'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Skylake-Client-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Skylake-Client-v1'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Skylake-Client-v2'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='hle'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='rtm'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      </blockers>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:      <blockers model='Skylake-Client-v3'>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='erms'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='invpcid'/>
Jan 31 02:40:03 np0005603621 nova_compute[247399]:        <feature name='pcid'/>
Jan 31 02:40:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:40:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:46.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:40:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:47 np0005603621 rsyslogd[998]: imjournal: 6776 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 31 02:40:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:40:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260439235' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:40:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:40:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260439235' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:40:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:40:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:40:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:48.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:40:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:40:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 233b8590-3a8d-4835-bdcc-657af6b71763 does not exist
Jan 31 02:40:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 08567d9e-8019-4e8b-a726-f5cbfb105397 does not exist
Jan 31 02:40:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d0a90e8b-d67b-4af9-a956-6a6fba15c98c does not exist
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.683406888 +0000 UTC m=+0.058835618 container create 172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:40:49 np0005603621 systemd[1]: Started libpod-conmon-172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574.scope.
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.647661345 +0000 UTC m=+0.023090135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:40:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.772623388 +0000 UTC m=+0.148052118 container init 172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.783596362 +0000 UTC m=+0.159025062 container start 172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:40:49 np0005603621 suspicious_allen[248317]: 167 167
Jan 31 02:40:49 np0005603621 systemd[1]: libpod-172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574.scope: Deactivated successfully.
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.793505963 +0000 UTC m=+0.168934683 container attach 172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.794171974 +0000 UTC m=+0.169600674 container died 172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:40:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d3d00d6e9e29e1f641c749a1b8baed7fdba9b6905318ad96ef9ed1a296cefac1-merged.mount: Deactivated successfully.
Jan 31 02:40:49 np0005603621 podman[248301]: 2026-01-31 07:40:49.851538625 +0000 UTC m=+0.226967355 container remove 172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:40:49 np0005603621 systemd[1]: libpod-conmon-172ecffd7bad924afcdeb9dc7ba9a03a0f8c8e6767c485a70fe45b20e577e574.scope: Deactivated successfully.
Jan 31 02:40:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:49 np0005603621 podman[248341]: 2026-01-31 07:40:49.970472347 +0000 UTC m=+0.037458776 container create 7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:40:50 np0005603621 systemd[1]: Started libpod-conmon-7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751.scope.
Jan 31 02:40:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2a0a8854f3942470e4edb84b52b0814163c6f4d36887180e074fa365edcc8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2a0a8854f3942470e4edb84b52b0814163c6f4d36887180e074fa365edcc8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2a0a8854f3942470e4edb84b52b0814163c6f4d36887180e074fa365edcc8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:50 np0005603621 podman[248341]: 2026-01-31 07:40:49.951559954 +0000 UTC m=+0.018546423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:40:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2a0a8854f3942470e4edb84b52b0814163c6f4d36887180e074fa365edcc8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2a0a8854f3942470e4edb84b52b0814163c6f4d36887180e074fa365edcc8b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:50 np0005603621 podman[248341]: 2026-01-31 07:40:50.086283452 +0000 UTC m=+0.153270001 container init 7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:40:50 np0005603621 podman[248341]: 2026-01-31 07:40:50.094676856 +0000 UTC m=+0.161663295 container start 7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:40:50 np0005603621 podman[248341]: 2026-01-31 07:40:50.098933099 +0000 UTC m=+0.165919568 container attach 7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:40:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:50 np0005603621 optimistic_cerf[248357]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:40:50 np0005603621 optimistic_cerf[248357]: --> relative data size: 1.0
Jan 31 02:40:50 np0005603621 optimistic_cerf[248357]: --> All data devices are unavailable
Jan 31 02:40:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:50.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:50 np0005603621 systemd[1]: libpod-7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751.scope: Deactivated successfully.
Jan 31 02:40:50 np0005603621 podman[248341]: 2026-01-31 07:40:50.93753073 +0000 UTC m=+1.004517209 container died 7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:40:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ed2a0a8854f3942470e4edb84b52b0814163c6f4d36887180e074fa365edcc8b-merged.mount: Deactivated successfully.
Jan 31 02:40:50 np0005603621 podman[248341]: 2026-01-31 07:40:50.991760922 +0000 UTC m=+1.058747371 container remove 7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:40:50 np0005603621 systemd[1]: libpod-conmon-7f6363af07ddc9102b5874f4e1ed29d75477fa39968e222f88f7e0ec79049751.scope: Deactivated successfully.
Jan 31 02:40:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:51.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.672996304 +0000 UTC m=+0.049703931 container create 28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:40:51 np0005603621 systemd[1]: Started libpod-conmon-28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22.scope.
Jan 31 02:40:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.65219012 +0000 UTC m=+0.028897797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.748001838 +0000 UTC m=+0.124709535 container init 28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.754397419 +0000 UTC m=+0.131105036 container start 28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.758443765 +0000 UTC m=+0.135151402 container attach 28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:40:51 np0005603621 elegant_hodgkin[248541]: 167 167
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.760035246 +0000 UTC m=+0.136742873 container died 28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:40:51 np0005603621 systemd[1]: libpod-28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22.scope: Deactivated successfully.
Jan 31 02:40:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cc3dc996dac5a6a81e2c7b086ec353f796b2677a1f71e4aea20f385a1d1e1ec8-merged.mount: Deactivated successfully.
Jan 31 02:40:51 np0005603621 podman[248525]: 2026-01-31 07:40:51.808363323 +0000 UTC m=+0.185070970 container remove 28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:40:51 np0005603621 systemd[1]: libpod-conmon-28f58f170c0f87840dc5faad06c4f61d6f9b78a99e4d884d3df860f8ea5eec22.scope: Deactivated successfully.
Jan 31 02:40:51 np0005603621 podman[248565]: 2026-01-31 07:40:51.949227703 +0000 UTC m=+0.053655104 container create 60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:40:51 np0005603621 systemd[1]: Started libpod-conmon-60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314.scope.
Jan 31 02:40:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e677fe9d8ec7024cdbb151be10bcd3cf1f104756c60ac87bda754b1666667720/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:52 np0005603621 podman[248565]: 2026-01-31 07:40:51.926620424 +0000 UTC m=+0.031047885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:40:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e677fe9d8ec7024cdbb151be10bcd3cf1f104756c60ac87bda754b1666667720/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e677fe9d8ec7024cdbb151be10bcd3cf1f104756c60ac87bda754b1666667720/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e677fe9d8ec7024cdbb151be10bcd3cf1f104756c60ac87bda754b1666667720/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:52 np0005603621 podman[248565]: 2026-01-31 07:40:52.058776042 +0000 UTC m=+0.163203423 container init 60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:40:52 np0005603621 podman[248565]: 2026-01-31 07:40:52.064734029 +0000 UTC m=+0.169161450 container start 60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:40:52 np0005603621 podman[248565]: 2026-01-31 07:40:52.119770626 +0000 UTC m=+0.224198007 container attach 60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:40:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]: {
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:    "0": [
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:        {
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "devices": [
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "/dev/loop3"
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            ],
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "lv_name": "ceph_lv0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "lv_size": "7511998464",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "name": "ceph_lv0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "tags": {
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.cluster_name": "ceph",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.crush_device_class": "",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.encrypted": "0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.osd_id": "0",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.type": "block",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:                "ceph.vdo": "0"
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            },
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "type": "block",
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:            "vg_name": "ceph_vg0"
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:        }
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]:    ]
Jan 31 02:40:52 np0005603621 vigorous_mccarthy[248581]: }
Jan 31 02:40:52 np0005603621 systemd[1]: libpod-60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314.scope: Deactivated successfully.
Jan 31 02:40:52 np0005603621 podman[248565]: 2026-01-31 07:40:52.788204586 +0000 UTC m=+0.892632007 container died 60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:40:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e677fe9d8ec7024cdbb151be10bcd3cf1f104756c60ac87bda754b1666667720-merged.mount: Deactivated successfully.
Jan 31 02:40:52 np0005603621 podman[248565]: 2026-01-31 07:40:52.844721139 +0000 UTC m=+0.949148540 container remove 60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:40:52 np0005603621 systemd[1]: libpod-conmon-60c8275e57afbf87e41bab6c34e9ce864c841cfb7dfa6adeadb4e0894bcf9314.scope: Deactivated successfully.
Jan 31 02:40:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:40:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:52.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:40:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:53.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.385360268 +0000 UTC m=+0.051694983 container create 04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shockley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:40:53 np0005603621 systemd[1]: Started libpod-conmon-04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c.scope.
Jan 31 02:40:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.365097543 +0000 UTC m=+0.031432338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.590075944 +0000 UTC m=+0.256410709 container init 04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.595720291 +0000 UTC m=+0.262055016 container start 04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shockley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:40:53 np0005603621 peaceful_shockley[248760]: 167 167
Jan 31 02:40:53 np0005603621 systemd[1]: libpod-04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c.scope: Deactivated successfully.
Jan 31 02:40:53 np0005603621 conmon[248760]: conmon 04f911ed3d3d6b365b66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c.scope/container/memory.events
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.614040376 +0000 UTC m=+0.280375171 container attach 04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.614586123 +0000 UTC m=+0.280920868 container died 04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:40:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dbb05f90114b0557d201b968a4b89bb9c35dd88e773693fb783ec00efa197483-merged.mount: Deactivated successfully.
Jan 31 02:40:53 np0005603621 podman[248743]: 2026-01-31 07:40:53.66895759 +0000 UTC m=+0.335292335 container remove 04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shockley, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:40:53 np0005603621 systemd[1]: libpod-conmon-04f911ed3d3d6b365b669767abaf60843d8b0f5b0215e45e20e4a8e5167b049c.scope: Deactivated successfully.
Jan 31 02:40:53 np0005603621 podman[248786]: 2026-01-31 07:40:53.785853678 +0000 UTC m=+0.038976754 container create 27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:40:53 np0005603621 systemd[1]: Started libpod-conmon-27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24.scope.
Jan 31 02:40:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:40:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d9a95fea2f5cd99715fefcb286bd8360e51b88b3748d1ea72e4398e3512ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d9a95fea2f5cd99715fefcb286bd8360e51b88b3748d1ea72e4398e3512ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d9a95fea2f5cd99715fefcb286bd8360e51b88b3748d1ea72e4398e3512ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997d9a95fea2f5cd99715fefcb286bd8360e51b88b3748d1ea72e4398e3512ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:40:53 np0005603621 podman[248786]: 2026-01-31 07:40:53.770504147 +0000 UTC m=+0.023627263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:40:53 np0005603621 podman[248786]: 2026-01-31 07:40:53.893483847 +0000 UTC m=+0.146606923 container init 27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:40:53 np0005603621 podman[248786]: 2026-01-31 07:40:53.901195879 +0000 UTC m=+0.154318955 container start 27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:40:53 np0005603621 podman[248786]: 2026-01-31 07:40:53.904247215 +0000 UTC m=+0.157370301 container attach 27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:40:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:54 np0005603621 busy_carson[248803]: {
Jan 31 02:40:54 np0005603621 busy_carson[248803]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:40:54 np0005603621 busy_carson[248803]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:40:54 np0005603621 busy_carson[248803]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:40:54 np0005603621 busy_carson[248803]:        "osd_id": 0,
Jan 31 02:40:54 np0005603621 busy_carson[248803]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:40:54 np0005603621 busy_carson[248803]:        "type": "bluestore"
Jan 31 02:40:54 np0005603621 busy_carson[248803]:    }
Jan 31 02:40:54 np0005603621 busy_carson[248803]: }
Jan 31 02:40:54 np0005603621 systemd[1]: libpod-27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24.scope: Deactivated successfully.
Jan 31 02:40:54 np0005603621 podman[248786]: 2026-01-31 07:40:54.618324207 +0000 UTC m=+0.871447283 container died 27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:40:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-997d9a95fea2f5cd99715fefcb286bd8360e51b88b3748d1ea72e4398e3512ac-merged.mount: Deactivated successfully.
Jan 31 02:40:54 np0005603621 podman[248786]: 2026-01-31 07:40:54.847534351 +0000 UTC m=+1.100657437 container remove 27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:40:54 np0005603621 systemd[1]: libpod-conmon-27233d17a6c55dce7f43610d84a36d06c5be0e766824136afac55e05a3813c24.scope: Deactivated successfully.
Jan 31 02:40:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:40:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:40:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:54.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:40:54 np0005603621 nova_compute[247399]: 2026-01-31 07:40:54.951 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:40:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:40:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:40:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:40:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:40:55 np0005603621 nova_compute[247399]: 2026-01-31 07:40:55.382 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:40:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:40:55 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8b4fa856-4ce9-4c78-b58f-56332717dac2 does not exist
Jan 31 02:40:55 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fd6a91a4-5131-4a26-af1f-fbfd98917ef8 does not exist
Jan 31 02:40:55 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9eae5597-3233-4a29-b34c-a4711c18b38e does not exist
Jan 31 02:40:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:40:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:40:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:56.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:57.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:40:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:40:58.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:40:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:40:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:40:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:40:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:00.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:01.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.201 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.201 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.224 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.225 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.226 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.226 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.227 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.227 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.228 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.228 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.228 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.265 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.266 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.266 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.267 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.267 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:41:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:41:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651718301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.677 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.842 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.844 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5168MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.844 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:02 np0005603621 nova_compute[247399]: 2026-01-31 07:41:02.845 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:02.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:03 np0005603621 nova_compute[247399]: 2026-01-31 07:41:03.618 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:41:03 np0005603621 nova_compute[247399]: 2026-01-31 07:41:03.619 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:41:03 np0005603621 nova_compute[247399]: 2026-01-31 07:41:03.678 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:41:04 np0005603621 nova_compute[247399]: 2026-01-31 07:41:04.104 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:41:04 np0005603621 nova_compute[247399]: 2026-01-31 07:41:04.109 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:41:04 np0005603621 nova_compute[247399]: 2026-01-31 07:41:04.137 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:41:04 np0005603621 nova_compute[247399]: 2026-01-31 07:41:04.139 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:41:04 np0005603621 nova_compute[247399]: 2026-01-31 07:41:04.139 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:05.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:06.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:41:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:08.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:10.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:12 np0005603621 podman[248990]: 2026-01-31 07:41:12.503979743 +0000 UTC m=+0.054861381 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 31 02:41:12 np0005603621 podman[248991]: 2026-01-31 07:41:12.527802781 +0000 UTC m=+0.077684119 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 02:41:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:12.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:13.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 02:41:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:14.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:15.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:41:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:41:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:41:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:18.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:41:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:19.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:20.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:21.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:22.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:23.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:24.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:26.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:27.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:28.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:41:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:29.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:41:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:41:30.460 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:41:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:41:30.460 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:41:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:41:30.460 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:41:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:30.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:31.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:41:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:32.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:41:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:35.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:36.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:41:38
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', 'backups']
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:41:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:40.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:42.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:43.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:43 np0005603621 podman[249099]: 2026-01-31 07:41:43.515912124 +0000 UTC m=+0.066757095 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:43 np0005603621 podman[249100]: 2026-01-31 07:41:43.556313661 +0000 UTC m=+0.104015223 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 02:41:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:44.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:45.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:41:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:41:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:48.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:49.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:41:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:50.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:41:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:52.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:41:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:54.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:55.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:56.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:41:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2fbde9b6-60b5-444c-bf3a-a94015cbf4f8 does not exist
Jan 31 02:41:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 69ca8e85-2db0-45a4-adcf-5ce181c8a4c3 does not exist
Jan 31 02:41:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 48e0b6a3-6868-4156-a937-84e76788e6b5 does not exist
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:41:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:41:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.58957754 +0000 UTC m=+0.059384194 container create e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:41:57 np0005603621 systemd[1]: Started libpod-conmon-e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2.scope.
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.548058288 +0000 UTC m=+0.017864972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:41:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.735977371 +0000 UTC m=+0.205784045 container init e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_zhukovsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.740877786 +0000 UTC m=+0.210684450 container start e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:57 np0005603621 mystifying_zhukovsky[249607]: 167 167
Jan 31 02:41:57 np0005603621 systemd[1]: libpod-e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2.scope: Deactivated successfully.
Jan 31 02:41:57 np0005603621 conmon[249607]: conmon e62c0bfa59112eebd9af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2.scope/container/memory.events
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.779213378 +0000 UTC m=+0.249020052 container attach e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.782646696 +0000 UTC m=+0.252453360 container died e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:41:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a8c74ffb74e803fab9fc3d78cdc9c79b6e97c39d185d319fb1de1f7ced2634ef-merged.mount: Deactivated successfully.
Jan 31 02:41:57 np0005603621 podman[249592]: 2026-01-31 07:41:57.832064356 +0000 UTC m=+0.301871010 container remove e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:41:57 np0005603621 systemd[1]: libpod-conmon-e62c0bfa59112eebd9af4e6b15165350d1e1cd4a3a52559d41828174819e88e2.scope: Deactivated successfully.
Jan 31 02:41:57 np0005603621 podman[249632]: 2026-01-31 07:41:57.970805078 +0000 UTC m=+0.037445186 container create 47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:41:58 np0005603621 systemd[1]: Started libpod-conmon-47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15.scope.
Jan 31 02:41:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:41:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782aa0868a90d5e6689dd9bb197811c640f915e289f034d36597f930ffc35eed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782aa0868a90d5e6689dd9bb197811c640f915e289f034d36597f930ffc35eed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782aa0868a90d5e6689dd9bb197811c640f915e289f034d36597f930ffc35eed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782aa0868a90d5e6689dd9bb197811c640f915e289f034d36597f930ffc35eed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/782aa0868a90d5e6689dd9bb197811c640f915e289f034d36597f930ffc35eed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:58 np0005603621 podman[249632]: 2026-01-31 07:41:58.035150886 +0000 UTC m=+0.101790984 container init 47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:41:58 np0005603621 podman[249632]: 2026-01-31 07:41:58.042814976 +0000 UTC m=+0.109455064 container start 47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:41:58 np0005603621 podman[249632]: 2026-01-31 07:41:58.047460962 +0000 UTC m=+0.114101060 container attach 47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:41:58 np0005603621 podman[249632]: 2026-01-31 07:41:57.95398477 +0000 UTC m=+0.020624888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:41:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:41:58 np0005603621 cool_matsumoto[249647]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:41:58 np0005603621 cool_matsumoto[249647]: --> relative data size: 1.0
Jan 31 02:41:58 np0005603621 cool_matsumoto[249647]: --> All data devices are unavailable
Jan 31 02:41:58 np0005603621 systemd[1]: libpod-47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15.scope: Deactivated successfully.
Jan 31 02:41:58 np0005603621 podman[249632]: 2026-01-31 07:41:58.785808101 +0000 UTC m=+0.852448199 container died 47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:41:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-782aa0868a90d5e6689dd9bb197811c640f915e289f034d36597f930ffc35eed-merged.mount: Deactivated successfully.
Jan 31 02:41:58 np0005603621 podman[249632]: 2026-01-31 07:41:58.832397303 +0000 UTC m=+0.899037401 container remove 47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:41:58 np0005603621 systemd[1]: libpod-conmon-47ab6ddb17116b47dcf585f20cbde1d404ffba4239cc53dbe523f54eff437a15.scope: Deactivated successfully.
Jan 31 02:41:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:41:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:41:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:41:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:41:59.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.376792158 +0000 UTC m=+0.047192471 container create f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goldwasser, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:59 np0005603621 systemd[1]: Started libpod-conmon-f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0.scope.
Jan 31 02:41:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.351371701 +0000 UTC m=+0.021772024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.455812807 +0000 UTC m=+0.126213110 container init f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goldwasser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.463002923 +0000 UTC m=+0.133403206 container start f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.466450391 +0000 UTC m=+0.136850704 container attach f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:41:59 np0005603621 dazzling_goldwasser[249830]: 167 167
Jan 31 02:41:59 np0005603621 systemd[1]: libpod-f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0.scope: Deactivated successfully.
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.468759114 +0000 UTC m=+0.139159407 container died f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goldwasser, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:41:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-66d781b3bb6084eb4a5ca5c3213f7ee7834b42c7d44829c22fe8e11cd285e5b6-merged.mount: Deactivated successfully.
Jan 31 02:41:59 np0005603621 podman[249812]: 2026-01-31 07:41:59.504395911 +0000 UTC m=+0.174796214 container remove f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:41:59 np0005603621 systemd[1]: libpod-conmon-f7e496a2157a997858e3109502b54e979575056ed12660a66245dca06dbc56c0.scope: Deactivated successfully.
Jan 31 02:41:59 np0005603621 podman[249854]: 2026-01-31 07:41:59.643144073 +0000 UTC m=+0.037974922 container create 9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:41:59 np0005603621 systemd[1]: Started libpod-conmon-9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b.scope.
Jan 31 02:41:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:41:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12b02c8cf529667996e8056277785bc2ba301e35236be10bd1578175d3d8da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12b02c8cf529667996e8056277785bc2ba301e35236be10bd1578175d3d8da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12b02c8cf529667996e8056277785bc2ba301e35236be10bd1578175d3d8da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12b02c8cf529667996e8056277785bc2ba301e35236be10bd1578175d3d8da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:41:59 np0005603621 podman[249854]: 2026-01-31 07:41:59.629411702 +0000 UTC m=+0.024242571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:41:59 np0005603621 podman[249854]: 2026-01-31 07:41:59.747243129 +0000 UTC m=+0.142073998 container init 9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:41:59 np0005603621 podman[249854]: 2026-01-31 07:41:59.752894826 +0000 UTC m=+0.147725695 container start 9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_roentgen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:41:59 np0005603621 podman[249854]: 2026-01-31 07:41:59.756296193 +0000 UTC m=+0.151127042 container attach 9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:41:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]: {
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:    "0": [
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:        {
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "devices": [
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "/dev/loop3"
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            ],
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "lv_name": "ceph_lv0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "lv_size": "7511998464",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "name": "ceph_lv0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "tags": {
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.cluster_name": "ceph",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.crush_device_class": "",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.encrypted": "0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.osd_id": "0",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.type": "block",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:                "ceph.vdo": "0"
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            },
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "type": "block",
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:            "vg_name": "ceph_vg0"
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:        }
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]:    ]
Jan 31 02:42:00 np0005603621 nostalgic_roentgen[249871]: }
Jan 31 02:42:00 np0005603621 systemd[1]: libpod-9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b.scope: Deactivated successfully.
Jan 31 02:42:00 np0005603621 podman[249854]: 2026-01-31 07:42:00.452650755 +0000 UTC m=+0.847481614 container died 9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_roentgen, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 02:42:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7c12b02c8cf529667996e8056277785bc2ba301e35236be10bd1578175d3d8da-merged.mount: Deactivated successfully.
Jan 31 02:42:00 np0005603621 podman[249854]: 2026-01-31 07:42:00.515381263 +0000 UTC m=+0.910212112 container remove 9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:42:00 np0005603621 systemd[1]: libpod-conmon-9ba736d262e2de7fd3e9c2091f618ec7d8d86f6aaec57f27c94b625a375dc48b.scope: Deactivated successfully.
Jan 31 02:42:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:01.017067928 +0000 UTC m=+0.035992439 container create 0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:42:01 np0005603621 systemd[1]: Started libpod-conmon-0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a.scope.
Jan 31 02:42:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:01.079052663 +0000 UTC m=+0.097977194 container init 0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:01.084760842 +0000 UTC m=+0.103685373 container start 0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:01.087668383 +0000 UTC m=+0.106592914 container attach 0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:42:01 np0005603621 dazzling_williamson[250050]: 167 167
Jan 31 02:42:01 np0005603621 systemd[1]: libpod-0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a.scope: Deactivated successfully.
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:01.089849481 +0000 UTC m=+0.108774032 container died 0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:00.999823978 +0000 UTC m=+0.018748509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:42:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a9ed62929bddf0ddc3faed45ba9f9da7c7b340e6215ef2b6036095461dab4ee6-merged.mount: Deactivated successfully.
Jan 31 02:42:01 np0005603621 podman[250034]: 2026-01-31 07:42:01.123898919 +0000 UTC m=+0.142823430 container remove 0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_williamson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:42:01 np0005603621 systemd[1]: libpod-conmon-0eca36d0139a2dbf73a0da90aa1bd69674b483db7a20df6b1541306863ea258a.scope: Deactivated successfully.
Jan 31 02:42:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:01.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:42:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8138 writes, 32K keys, 8138 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8138 writes, 1753 syncs, 4.64 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 655 writes, 1008 keys, 655 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s#012Interval WAL: 655 writes, 316 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 31 02:42:01 np0005603621 podman[250074]: 2026-01-31 07:42:01.252017209 +0000 UTC m=+0.038808199 container create 006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:42:01 np0005603621 systemd[1]: Started libpod-conmon-006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28.scope.
Jan 31 02:42:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:42:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622e4bbb59fbf9d974743f398a2f52c7fa24fb28a648ea9a721b447fe6623b44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:42:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622e4bbb59fbf9d974743f398a2f52c7fa24fb28a648ea9a721b447fe6623b44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:42:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622e4bbb59fbf9d974743f398a2f52c7fa24fb28a648ea9a721b447fe6623b44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:42:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/622e4bbb59fbf9d974743f398a2f52c7fa24fb28a648ea9a721b447fe6623b44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:42:01 np0005603621 podman[250074]: 2026-01-31 07:42:01.234138298 +0000 UTC m=+0.020929338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:42:01 np0005603621 podman[250074]: 2026-01-31 07:42:01.330105038 +0000 UTC m=+0.116896048 container init 006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euclid, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:42:01 np0005603621 podman[250074]: 2026-01-31 07:42:01.336146018 +0000 UTC m=+0.122937018 container start 006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euclid, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 02:42:01 np0005603621 podman[250074]: 2026-01-31 07:42:01.339225193 +0000 UTC m=+0.126016213 container attach 006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]: {
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:        "osd_id": 0,
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:        "type": "bluestore"
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]:    }
Jan 31 02:42:02 np0005603621 beautiful_euclid[250090]: }
Jan 31 02:42:02 np0005603621 systemd[1]: libpod-006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28.scope: Deactivated successfully.
Jan 31 02:42:02 np0005603621 podman[250074]: 2026-01-31 07:42:02.116385331 +0000 UTC m=+0.903176331 container died 006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euclid, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:42:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-622e4bbb59fbf9d974743f398a2f52c7fa24fb28a648ea9a721b447fe6623b44-merged.mount: Deactivated successfully.
Jan 31 02:42:02 np0005603621 podman[250074]: 2026-01-31 07:42:02.172320875 +0000 UTC m=+0.959111905 container remove 006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_euclid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:42:02 np0005603621 systemd[1]: libpod-conmon-006d8036790f625827398cfdcbed377ba846320426a77de1a2bfba9661fabf28.scope: Deactivated successfully.
Jan 31 02:42:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:42:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:42:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:42:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:42:02 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 918d0d3c-4074-4f94-bee8-3fca04fb50ac does not exist
Jan 31 02:42:02 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1d4a0089-c294-44cc-a98d-6f5f3e83c55a does not exist
Jan 31 02:42:02 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9b0423fe-5e0b-40b7-b281-5be5578bfd6d does not exist
Jan 31 02:42:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:02.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:03.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:42:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.132 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.133 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.152 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.152 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.153 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.153 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.153 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.153 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.195 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.196 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.196 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.196 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.196 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:42:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700864852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.665 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.778 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.779 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5155MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.780 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.780 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.835 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.835 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:42:04 np0005603621 nova_compute[247399]: 2026-01-31 07:42:04.852 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:42:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:04.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:05.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:42:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450961335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.266 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.271 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.294 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.295 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.296 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.341 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.341 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.341 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.366 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.366 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:42:05 np0005603621 nova_compute[247399]: 2026-01-31 07:42:05.366 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:42:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:42:05.445 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:42:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:42:05.446 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:42:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:42:05.447 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:42:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:42:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:09.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 02:42:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:12.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:13.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:14 np0005603621 podman[250273]: 2026-01-31 07:42:14.487712768 +0000 UTC m=+0.047325275 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 02:42:14 np0005603621 podman[250274]: 2026-01-31 07:42:14.53944329 +0000 UTC m=+0.099086848 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 02:42:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:15.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:17.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:17.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:42:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:19.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:42:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:19.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:21.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:21.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:23.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:23.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:25.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:25.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:27.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:27.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:29.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:42:30.461 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:42:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:42:30.462 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:42:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:42:30.462 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:42:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:31.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:33.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:35.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:35.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:37.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:37.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:42:38
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'images']
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:42:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:42:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:39.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:39.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:41.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:41.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:43.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:43.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:45.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:45.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:45 np0005603621 podman[250379]: 2026-01-31 07:42:45.489279425 +0000 UTC m=+0.046272162 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:42:45 np0005603621 podman[250380]: 2026-01-31 07:42:45.54553689 +0000 UTC m=+0.100934837 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 02:42:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:47.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:47.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:42:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:42:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:49.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:49.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:51.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:42:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:51.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:42:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:53.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:53.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:42:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:55.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:55.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:57.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:42:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:57.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:42:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:42:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:42:59.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:42:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:42:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:42:59.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:42:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:01.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:01.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:43:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:43:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:03.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:43:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:43:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:03.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:43:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 02:43:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:43:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 02:43:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.827 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.827 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.827 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.827 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:43:03 np0005603621 nova_compute[247399]: 2026-01-31 07:43:03.828 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:43:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:43:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467356832' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.221 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.335 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.336 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5188MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.336 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.336 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:43:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.447 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.447 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.475 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:43:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:43:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324551320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.887 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.892 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.946 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.948 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 02:43:04 np0005603621 nova_compute[247399]: 2026-01-31 07:43:04.949 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:43:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:43:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:05.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:43:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:05 np0005603621 nova_compute[247399]: 2026-01-31 07:43:05.944 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:43:05 np0005603621 nova_compute[247399]: 2026-01-31 07:43:05.945 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:43:05 np0005603621 nova_compute[247399]: 2026-01-31 07:43:05.945 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:43:05 np0005603621 nova_compute[247399]: 2026-01-31 07:43:05.945 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:43:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:06 np0005603621 nova_compute[247399]: 2026-01-31 07:43:06.814 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 02:43:06 np0005603621 nova_compute[247399]: 2026-01-31 07:43:06.814 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:43:06 np0005603621 nova_compute[247399]: 2026-01-31 07:43:06.815 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:43:06 np0005603621 nova_compute[247399]: 2026-01-31 07:43:06.815 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:43:06 np0005603621 nova_compute[247399]: 2026-01-31 07:43:06.815 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:43:06 np0005603621 nova_compute[247399]: 2026-01-31 07:43:06.815 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:43:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:07.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:43:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 01d7e9da-6b6e-4d78-aa85-3d1bd0318bcb does not exist
Jan 31 02:43:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 45377b1a-3f8e-4b6c-b28e-3b192b2f6f53 does not exist
Jan 31 02:43:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 73df02ad-53c8-42e5-87d9-49838ee34677 does not exist
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:43:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:43:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:08 np0005603621 podman[250800]: 2026-01-31 07:43:08.440382714 +0000 UTC m=+0.023170178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:43:08 np0005603621 podman[250800]: 2026-01-31 07:43:08.68640457 +0000 UTC m=+0.269191984 container create c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:43:08 np0005603621 systemd[1]: Started libpod-conmon-c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404.scope.
Jan 31 02:43:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:43:08 np0005603621 podman[250800]: 2026-01-31 07:43:08.974421733 +0000 UTC m=+0.557209177 container init c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hamilton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:43:08 np0005603621 podman[250800]: 2026-01-31 07:43:08.981196366 +0000 UTC m=+0.563983780 container start c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:43:08 np0005603621 vigorous_hamilton[250816]: 167 167
Jan 31 02:43:08 np0005603621 systemd[1]: libpod-c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404.scope: Deactivated successfully.
Jan 31 02:43:08 np0005603621 conmon[250816]: conmon c79e260055126f91ac10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404.scope/container/memory.events
Jan 31 02:43:09 np0005603621 podman[250800]: 2026-01-31 07:43:09.068524795 +0000 UTC m=+0.651312239 container attach c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:43:09 np0005603621 podman[250800]: 2026-01-31 07:43:09.069576727 +0000 UTC m=+0.652364141 container died c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hamilton, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:43:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:43:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:43:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:43:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-06c41e75ad774be2de69f2396177e132ef98a6bf0e20c622937c200b1e63745f-merged.mount: Deactivated successfully.
Jan 31 02:43:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:10 np0005603621 podman[250800]: 2026-01-31 07:43:10.57818551 +0000 UTC m=+2.160972924 container remove c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hamilton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:43:10 np0005603621 systemd[1]: libpod-conmon-c79e260055126f91ac104cc22054e4269b323b65782ef1ca045b3af4919bd404.scope: Deactivated successfully.
Jan 31 02:43:10 np0005603621 podman[250890]: 2026-01-31 07:43:10.66999364 +0000 UTC m=+0.019103521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:43:10 np0005603621 podman[250890]: 2026-01-31 07:43:10.801526995 +0000 UTC m=+0.150636846 container create 26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:43:10 np0005603621 systemd[1]: Started libpod-conmon-26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6.scope.
Jan 31 02:43:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:43:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b2917cedf943ae53d74ca86f6b17a32a2bfb0e309370b7e7759f707d997328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b2917cedf943ae53d74ca86f6b17a32a2bfb0e309370b7e7759f707d997328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b2917cedf943ae53d74ca86f6b17a32a2bfb0e309370b7e7759f707d997328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b2917cedf943ae53d74ca86f6b17a32a2bfb0e309370b7e7759f707d997328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b2917cedf943ae53d74ca86f6b17a32a2bfb0e309370b7e7759f707d997328/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:11 np0005603621 podman[250890]: 2026-01-31 07:43:11.07239072 +0000 UTC m=+0.421500631 container init 26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:43:11 np0005603621 podman[250890]: 2026-01-31 07:43:11.078708578 +0000 UTC m=+0.427818439 container start 26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:43:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:11.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:11 np0005603621 podman[250890]: 2026-01-31 07:43:11.248032818 +0000 UTC m=+0.597142679 container attach 26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pike, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:43:11 np0005603621 lucid_pike[250906]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:43:11 np0005603621 lucid_pike[250906]: --> relative data size: 1.0
Jan 31 02:43:11 np0005603621 lucid_pike[250906]: --> All data devices are unavailable
Jan 31 02:43:11 np0005603621 systemd[1]: libpod-26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6.scope: Deactivated successfully.
Jan 31 02:43:11 np0005603621 podman[250890]: 2026-01-31 07:43:11.828334218 +0000 UTC m=+1.177444069 container died 26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pike, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:43:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d9b2917cedf943ae53d74ca86f6b17a32a2bfb0e309370b7e7759f707d997328-merged.mount: Deactivated successfully.
Jan 31 02:43:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:12 np0005603621 podman[250890]: 2026-01-31 07:43:12.46358298 +0000 UTC m=+1.812692831 container remove 26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_pike, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:43:12 np0005603621 systemd[1]: libpod-conmon-26473aa76e4eb34ee1df4b11a60c422e9caf6dfb432aeb48282ad50e18cacaf6.scope: Deactivated successfully.
Jan 31 02:43:13 np0005603621 podman[251075]: 2026-01-31 07:43:12.923582997 +0000 UTC m=+0.017462539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:43:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:13.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:13 np0005603621 podman[251075]: 2026-01-31 07:43:13.171641406 +0000 UTC m=+0.265520928 container create 5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:43:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:13 np0005603621 systemd[1]: Started libpod-conmon-5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3.scope.
Jan 31 02:43:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:43:13 np0005603621 podman[251075]: 2026-01-31 07:43:13.576238806 +0000 UTC m=+0.670118408 container init 5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:43:13 np0005603621 podman[251075]: 2026-01-31 07:43:13.584000329 +0000 UTC m=+0.677879851 container start 5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:43:13 np0005603621 charming_ptolemy[251092]: 167 167
Jan 31 02:43:13 np0005603621 systemd[1]: libpod-5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3.scope: Deactivated successfully.
Jan 31 02:43:13 np0005603621 podman[251075]: 2026-01-31 07:43:13.702813535 +0000 UTC m=+0.796693137 container attach 5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:43:13 np0005603621 podman[251075]: 2026-01-31 07:43:13.703517837 +0000 UTC m=+0.797397399 container died 5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:43:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-abf9a020b6a9326bd597cdf4811538ecd7b37aaff8c29d68d1d02bbc6cff174a-merged.mount: Deactivated successfully.
Jan 31 02:43:14 np0005603621 podman[251075]: 2026-01-31 07:43:14.375684928 +0000 UTC m=+1.469564480 container remove 5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:43:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:14 np0005603621 systemd[1]: libpod-conmon-5b1af7f9b30146eb30c3e14839975683de6bfb7769b3473ccba4e8b27dff03a3.scope: Deactivated successfully.
Jan 31 02:43:14 np0005603621 podman[251117]: 2026-01-31 07:43:14.498091426 +0000 UTC m=+0.049051079 container create 98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:43:14 np0005603621 systemd[1]: Started libpod-conmon-98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70.scope.
Jan 31 02:43:14 np0005603621 podman[251117]: 2026-01-31 07:43:14.467130025 +0000 UTC m=+0.018089698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:43:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:43:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687d8c616f6438fdbca2643212e723d06d75e4bc740cab81ca5a61541123b235/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687d8c616f6438fdbca2643212e723d06d75e4bc740cab81ca5a61541123b235/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687d8c616f6438fdbca2643212e723d06d75e4bc740cab81ca5a61541123b235/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687d8c616f6438fdbca2643212e723d06d75e4bc740cab81ca5a61541123b235/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:14 np0005603621 podman[251117]: 2026-01-31 07:43:14.652497529 +0000 UTC m=+0.203457192 container init 98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:43:14 np0005603621 podman[251117]: 2026-01-31 07:43:14.658755745 +0000 UTC m=+0.209715398 container start 98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 02:43:14 np0005603621 podman[251117]: 2026-01-31 07:43:14.668294404 +0000 UTC m=+0.219254067 container attach 98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:43:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:15.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:15 np0005603621 eager_germain[251132]: {
Jan 31 02:43:15 np0005603621 eager_germain[251132]:    "0": [
Jan 31 02:43:15 np0005603621 eager_germain[251132]:        {
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "devices": [
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "/dev/loop3"
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            ],
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "lv_name": "ceph_lv0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "lv_size": "7511998464",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "name": "ceph_lv0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "tags": {
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.cluster_name": "ceph",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.crush_device_class": "",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.encrypted": "0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.osd_id": "0",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.type": "block",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:                "ceph.vdo": "0"
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            },
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "type": "block",
Jan 31 02:43:15 np0005603621 eager_germain[251132]:            "vg_name": "ceph_vg0"
Jan 31 02:43:15 np0005603621 eager_germain[251132]:        }
Jan 31 02:43:15 np0005603621 eager_germain[251132]:    ]
Jan 31 02:43:15 np0005603621 eager_germain[251132]: }
Jan 31 02:43:15 np0005603621 systemd[1]: libpod-98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70.scope: Deactivated successfully.
Jan 31 02:43:15 np0005603621 podman[251117]: 2026-01-31 07:43:15.347349561 +0000 UTC m=+0.898309214 container died 98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 31 02:43:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-687d8c616f6438fdbca2643212e723d06d75e4bc740cab81ca5a61541123b235-merged.mount: Deactivated successfully.
Jan 31 02:43:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:16 np0005603621 podman[251117]: 2026-01-31 07:43:16.657803999 +0000 UTC m=+2.208763682 container remove 98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:43:16 np0005603621 systemd[1]: libpod-conmon-98160e87652cdb666159e926ae9cfe1e72e11ff16e37b778689db5e32402bd70.scope: Deactivated successfully.
Jan 31 02:43:16 np0005603621 podman[251155]: 2026-01-31 07:43:16.746981466 +0000 UTC m=+1.121025079 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 02:43:16 np0005603621 podman[251156]: 2026-01-31 07:43:16.775883843 +0000 UTC m=+1.148873422 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:43:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:17.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.208871752 +0000 UTC m=+0.088864078 container create 9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:43:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:17.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.140504587 +0000 UTC m=+0.020496873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:43:17 np0005603621 systemd[1]: Started libpod-conmon-9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749.scope.
Jan 31 02:43:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.415375669 +0000 UTC m=+0.295368005 container init 9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.424736103 +0000 UTC m=+0.304762220 container start 9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:43:17 np0005603621 keen_franklin[251358]: 167 167
Jan 31 02:43:17 np0005603621 systemd[1]: libpod-9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749.scope: Deactivated successfully.
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.433089194 +0000 UTC m=+0.313081510 container attach 9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.433450216 +0000 UTC m=+0.313442502 container died 9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:43:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3d418362a66aba87e16d46af1e1566052fcb5e35480e2aaa0156fa91d2c0c87a-merged.mount: Deactivated successfully.
Jan 31 02:43:17 np0005603621 podman[251342]: 2026-01-31 07:43:17.490796743 +0000 UTC m=+0.370789039 container remove 9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:43:17 np0005603621 systemd[1]: libpod-conmon-9c0ddcc7fb76510a8f7bbaf0b44c6ce914860cfe01ed004cdaaac403f675e749.scope: Deactivated successfully.
Jan 31 02:43:17 np0005603621 podman[251384]: 2026-01-31 07:43:17.646390373 +0000 UTC m=+0.040878353 container create 4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:43:17 np0005603621 systemd[1]: Started libpod-conmon-4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10.scope.
Jan 31 02:43:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:43:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b801e27a1ade2da2b55a7023f75c93371b1526efc06dab7273340f1c94438a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b801e27a1ade2da2b55a7023f75c93371b1526efc06dab7273340f1c94438a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b801e27a1ade2da2b55a7023f75c93371b1526efc06dab7273340f1c94438a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b801e27a1ade2da2b55a7023f75c93371b1526efc06dab7273340f1c94438a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:43:17 np0005603621 podman[251384]: 2026-01-31 07:43:17.628551364 +0000 UTC m=+0.023039374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:43:17 np0005603621 podman[251384]: 2026-01-31 07:43:17.740896937 +0000 UTC m=+0.135384937 container init 4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:43:17 np0005603621 podman[251384]: 2026-01-31 07:43:17.750469927 +0000 UTC m=+0.144957907 container start 4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:43:17 np0005603621 podman[251384]: 2026-01-31 07:43:17.754880736 +0000 UTC m=+0.149368756 container attach 4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:43:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]: {
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:        "osd_id": 0,
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:        "type": "bluestore"
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]:    }
Jan 31 02:43:18 np0005603621 vigorous_hermann[251401]: }
Jan 31 02:43:18 np0005603621 systemd[1]: libpod-4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10.scope: Deactivated successfully.
Jan 31 02:43:18 np0005603621 conmon[251401]: conmon 4e14d8d7ec34c0077e38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10.scope/container/memory.events
Jan 31 02:43:18 np0005603621 podman[251384]: 2026-01-31 07:43:18.51358172 +0000 UTC m=+0.908069700 container died 4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:43:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-82b801e27a1ade2da2b55a7023f75c93371b1526efc06dab7273340f1c94438a-merged.mount: Deactivated successfully.
Jan 31 02:43:18 np0005603621 podman[251384]: 2026-01-31 07:43:18.55629839 +0000 UTC m=+0.950786380 container remove 4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:43:18 np0005603621 systemd[1]: libpod-conmon-4e14d8d7ec34c0077e382f134b68c4f91b8cacc425f267347edf6132dc901b10.scope: Deactivated successfully.
Jan 31 02:43:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:43:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:19.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:19.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:43:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 93956342-abd4-482a-8b74-e105d8063af8 does not exist
Jan 31 02:43:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev df2ee0b3-d1fa-467b-a62f-5c9674703c76 does not exist
Jan 31 02:43:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a389efee-f0b8-426a-924c-05361db47219 does not exist
Jan 31 02:43:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:43:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:43:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:21.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:43:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:23.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:23 np0005603621 nova_compute[247399]: 2026-01-31 07:43:23.471 247403 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.49 sec#033[00m
Jan 31 02:43:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:27.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:43:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:27.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:43:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:43:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:29.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:29.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Jan 31 02:43:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:43:30.462 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:43:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:43:30.463 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:43:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:43:30.463 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:43:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:43:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:31.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:43:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 93 op/s
Jan 31 02:43:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=404 latency=0.002000063s ======
Jan 31 02:43:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:34.128 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.002000063s
Jan 31 02:43:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - - [31/Jan/2026:07:43:34.236 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 31 02:43:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 31 02:43:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:35.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:43:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:35.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:43:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 31 02:43:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:43:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:43:38
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', 'images', 'volumes', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.meta']
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:43:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:43:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:39.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 177 op/s
Jan 31 02:43:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:41.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 31 02:43:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 31 02:43:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 31 02:43:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 458 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 0 B/s wr, 101 op/s
Jan 31 02:43:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 31 02:43:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 31 02:43:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 31 02:43:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:43.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:43:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:43:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 31 02:43:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 31 02:43:44 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 31 02:43:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 1.3 MiB/s wr, 1 op/s
Jan 31 02:43:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:45.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 8.4 MiB data, 161 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 1.3 MiB/s wr, 1 op/s
Jan 31 02:43:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 31 02:43:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 31 02:43:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 31 02:43:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:47.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:47 np0005603621 podman[251551]: 2026-01-31 07:43:47.548161563 +0000 UTC m=+0.098837831 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:43:47 np0005603621 podman[251552]: 2026-01-31 07:43:47.558928531 +0000 UTC m=+0.104238500 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 29 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 4.7 MiB/s wr, 48 op/s
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0013240755281034803 of space, bias 1.0, pg target 0.3972226584310441 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:43:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:43:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 31 02:43:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 31 02:43:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 31 02:43:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 41 MiB data, 186 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 6.5 MiB/s wr, 54 op/s
Jan 31 02:43:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:51.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Jan 31 02:43:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:53.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 4.1 MiB/s wr, 46 op/s
Jan 31 02:43:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:43:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:55.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 7.3 KiB/s rd, 1.3 MiB/s wr, 11 op/s
Jan 31 02:43:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:57.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:57.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.6 KiB/s rd, 1.2 MiB/s wr, 9 op/s
Jan 31 02:43:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:43:59.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:43:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:43:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:43:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:43:59.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.7 KiB/s rd, 299 B/s wr, 3 op/s
Jan 31 02:44:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:01.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Jan 31 02:44:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:03.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.238 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.238 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.239 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.239 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.239 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:03.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085363270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.661 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.794 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.795 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5204MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.795 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.795 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.873 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.874 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:44:03 np0005603621 nova_compute[247399]: 2026-01-31 07:44:03.920 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956212644' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:04 np0005603621 nova_compute[247399]: 2026-01-31 07:44:04.420 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:04 np0005603621 nova_compute[247399]: 2026-01-31 07:44:04.424 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:44:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:04 np0005603621 nova_compute[247399]: 2026-01-31 07:44:04.461 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:44:04 np0005603621 nova_compute[247399]: 2026-01-31 07:44:04.463 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:44:04 np0005603621 nova_compute[247399]: 2026-01-31 07:44:04.464 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:05.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.463 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.464 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.464 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.488 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.489 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.489 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.490 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:05 np0005603621 nova_compute[247399]: 2026-01-31 07:44:05.490 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:44:06 np0005603621 nova_compute[247399]: 2026-01-31 07:44:06.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:06 np0005603621 nova_compute[247399]: 2026-01-31 07:44:06.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:06 np0005603621 nova_compute[247399]: 2026-01-31 07:44:06.351 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:44:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:07.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:44:07 np0005603621 nova_compute[247399]: 2026-01-31 07:44:07.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:44:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:07.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:44:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:08.733 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:44:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:08.734 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:44:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:09.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:09.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:11.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:11.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:13.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:13.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:14.737 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.101091) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845455101119, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2158, "num_deletes": 251, "total_data_size": 4116524, "memory_usage": 4163968, "flush_reason": "Manual Compaction"}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845455169642, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 4037929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17259, "largest_seqno": 19416, "table_properties": {"data_size": 4028036, "index_size": 6323, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19870, "raw_average_key_size": 20, "raw_value_size": 4008304, "raw_average_value_size": 4094, "num_data_blocks": 281, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845218, "oldest_key_time": 1769845218, "file_creation_time": 1769845455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 68731 microseconds, and 5156 cpu microseconds.
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.169818) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 4037929 bytes OK
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.169859) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.172553) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.172826) EVENT_LOG_v1 {"time_micros": 1769845455172815, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.172851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4107815, prev total WAL file size 4107815, number of live WAL files 2.
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.174132) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3943KB)], [41(7586KB)]
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845455174198, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 11806882, "oldest_snapshot_seqno": -1}
Jan 31 02:44:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:15.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4510 keys, 9755778 bytes, temperature: kUnknown
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845455298952, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9755778, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9723053, "index_size": 20327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 112671, "raw_average_key_size": 24, "raw_value_size": 9638771, "raw_average_value_size": 2137, "num_data_blocks": 844, "num_entries": 4510, "num_filter_entries": 4510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845455, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.299227) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9755778 bytes
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.304025) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.6 rd, 78.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 7.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 5033, records dropped: 523 output_compression: NoCompression
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.304062) EVENT_LOG_v1 {"time_micros": 1769845455304045, "job": 20, "event": "compaction_finished", "compaction_time_micros": 124830, "compaction_time_cpu_micros": 19135, "output_level": 6, "num_output_files": 1, "total_output_size": 9755778, "num_input_records": 5033, "num_output_records": 4510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845455304703, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845455305831, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.173996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.305868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.305874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.305877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.305880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:44:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:44:15.305883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:44:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 31 02:44:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:17.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 31 02:44:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 31 02:44:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 31 02:44:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 6 op/s
Jan 31 02:44:18 np0005603621 podman[251751]: 2026-01-31 07:44:18.497859027 +0000 UTC m=+0.052375084 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 02:44:18 np0005603621 podman[251752]: 2026-01-31 07:44:18.525615578 +0000 UTC m=+0.080240028 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:44:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 31 02:44:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 31 02:44:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 31 02:44:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:19.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:44:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 56 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 773 KiB/s wr, 15 op/s
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:44:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b5f40f5b-ce39-459e-8462-d847e5c9fb9e does not exist
Jan 31 02:44:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 097fb881-1f6f-42e8-9cbc-dc33f34de3d6 does not exist
Jan 31 02:44:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f6606c4e-7987-45db-bf23-bf44c61d0bb9 does not exist
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:44:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:44:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:44:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:21.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:44:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:44:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:44:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:44:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:21.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.34313339 +0000 UTC m=+0.034445101 container create 1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_fermi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 02:44:21 np0005603621 systemd[1]: Started libpod-conmon-1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef.scope.
Jan 31 02:44:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.327467359 +0000 UTC m=+0.018779090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.428906651 +0000 UTC m=+0.120218392 container init 1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_fermi, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.434491736 +0000 UTC m=+0.125803457 container start 1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_fermi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:44:21 np0005603621 crazy_fermi[252082]: 167 167
Jan 31 02:44:21 np0005603621 systemd[1]: libpod-1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef.scope: Deactivated successfully.
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.438842532 +0000 UTC m=+0.130154273 container attach 1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.439481982 +0000 UTC m=+0.130793703 container died 1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:44:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c4bae76ba003e5f72f72dfffc6d82bf7e567dc2d1aec5ed80adb92fdd8e00625-merged.mount: Deactivated successfully.
Jan 31 02:44:21 np0005603621 podman[252066]: 2026-01-31 07:44:21.474022186 +0000 UTC m=+0.165333897 container remove 1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:44:21 np0005603621 systemd[1]: libpod-conmon-1f877d8caf5a4488618d91e6312369ca7cbd2eb535df122e63eeb2d55064baef.scope: Deactivated successfully.
Jan 31 02:44:21 np0005603621 podman[252108]: 2026-01-31 07:44:21.573731942 +0000 UTC m=+0.033664166 container create 58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:44:21 np0005603621 systemd[1]: Started libpod-conmon-58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906.scope.
Jan 31 02:44:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe33f979240d907090728200d5b77e648c7ca25e230dc90d070ea8d1d1f45d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe33f979240d907090728200d5b77e648c7ca25e230dc90d070ea8d1d1f45d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe33f979240d907090728200d5b77e648c7ca25e230dc90d070ea8d1d1f45d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe33f979240d907090728200d5b77e648c7ca25e230dc90d070ea8d1d1f45d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe33f979240d907090728200d5b77e648c7ca25e230dc90d070ea8d1d1f45d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:21 np0005603621 podman[252108]: 2026-01-31 07:44:21.642042175 +0000 UTC m=+0.101974419 container init 58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:44:21 np0005603621 podman[252108]: 2026-01-31 07:44:21.646851566 +0000 UTC m=+0.106783810 container start 58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:44:21 np0005603621 podman[252108]: 2026-01-31 07:44:21.651034876 +0000 UTC m=+0.110967120 container attach 58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:44:21 np0005603621 podman[252108]: 2026-01-31 07:44:21.560767106 +0000 UTC m=+0.020699350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:44:22 np0005603621 flamboyant_shockley[252125]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:44:22 np0005603621 flamboyant_shockley[252125]: --> relative data size: 1.0
Jan 31 02:44:22 np0005603621 flamboyant_shockley[252125]: --> All data devices are unavailable
Jan 31 02:44:22 np0005603621 systemd[1]: libpod-58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906.scope: Deactivated successfully.
Jan 31 02:44:22 np0005603621 podman[252108]: 2026-01-31 07:44:22.342578845 +0000 UTC m=+0.802511079 container died 58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:44:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-abe33f979240d907090728200d5b77e648c7ca25e230dc90d070ea8d1d1f45d0-merged.mount: Deactivated successfully.
Jan 31 02:44:22 np0005603621 podman[252108]: 2026-01-31 07:44:22.390337562 +0000 UTC m=+0.850269786 container remove 58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:44:22 np0005603621 systemd[1]: libpod-conmon-58dca06b8fec63a46e3c833da1deba3007ee96e9f1c2584e3b9689e66eeda906.scope: Deactivated successfully.
Jan 31 02:44:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 62 MiB data, 199 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.4 MiB/s wr, 35 op/s
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.827599616 +0000 UTC m=+0.028683361 container create e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:44:22 np0005603621 systemd[1]: Started libpod-conmon-e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4.scope.
Jan 31 02:44:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.883628943 +0000 UTC m=+0.084712718 container init e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tharp, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.887533086 +0000 UTC m=+0.088616831 container start e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.89021193 +0000 UTC m=+0.091295705 container attach e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tharp, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:44:22 np0005603621 ecstatic_tharp[252310]: 167 167
Jan 31 02:44:22 np0005603621 systemd[1]: libpod-e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4.scope: Deactivated successfully.
Jan 31 02:44:22 np0005603621 conmon[252310]: conmon e7b9b1aea4b932c6ab3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4.scope/container/memory.events
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.891795199 +0000 UTC m=+0.092878944 container died e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:44:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c2017b20ec4fbbb2f6a85fe6d8a0e00b0c555cdfb63a3d6b6cb6edd955d05b75-merged.mount: Deactivated successfully.
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.815085444 +0000 UTC m=+0.016169209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:44:22 np0005603621 podman[252294]: 2026-01-31 07:44:22.929515703 +0000 UTC m=+0.130599448 container remove e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:44:22 np0005603621 systemd[1]: libpod-conmon-e7b9b1aea4b932c6ab3dd66fc9477829e5adb6267776c54a43dffafad35c04f4.scope: Deactivated successfully.
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.025651287 +0000 UTC m=+0.028461123 container create 23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:44:23 np0005603621 systemd[1]: Started libpod-conmon-23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50.scope.
Jan 31 02:44:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9533cd88a3f0608ee80e9f40515d9fa2f559e219c16c6d10ca9b010a525e4551/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9533cd88a3f0608ee80e9f40515d9fa2f559e219c16c6d10ca9b010a525e4551/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9533cd88a3f0608ee80e9f40515d9fa2f559e219c16c6d10ca9b010a525e4551/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9533cd88a3f0608ee80e9f40515d9fa2f559e219c16c6d10ca9b010a525e4551/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.081254711 +0000 UTC m=+0.084064547 container init 23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.086829377 +0000 UTC m=+0.089639223 container start 23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.089981165 +0000 UTC m=+0.092791011 container attach 23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.01298732 +0000 UTC m=+0.015797186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:44:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:23.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:23 np0005603621 keen_tu[252350]: {
Jan 31 02:44:23 np0005603621 keen_tu[252350]:    "0": [
Jan 31 02:44:23 np0005603621 keen_tu[252350]:        {
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "devices": [
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "/dev/loop3"
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            ],
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "lv_name": "ceph_lv0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "lv_size": "7511998464",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "name": "ceph_lv0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "tags": {
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.cluster_name": "ceph",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.crush_device_class": "",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.encrypted": "0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.osd_id": "0",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.type": "block",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:                "ceph.vdo": "0"
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            },
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "type": "block",
Jan 31 02:44:23 np0005603621 keen_tu[252350]:            "vg_name": "ceph_vg0"
Jan 31 02:44:23 np0005603621 keen_tu[252350]:        }
Jan 31 02:44:23 np0005603621 keen_tu[252350]:    ]
Jan 31 02:44:23 np0005603621 keen_tu[252350]: }
Jan 31 02:44:23 np0005603621 systemd[1]: libpod-23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50.scope: Deactivated successfully.
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.810501832 +0000 UTC m=+0.813311678 container died 23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:44:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9533cd88a3f0608ee80e9f40515d9fa2f559e219c16c6d10ca9b010a525e4551-merged.mount: Deactivated successfully.
Jan 31 02:44:23 np0005603621 podman[252336]: 2026-01-31 07:44:23.905041777 +0000 UTC m=+0.907851623 container remove 23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tu, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:44:23 np0005603621 systemd[1]: libpod-conmon-23c7b5566be97a3b9f48f1b918c490849afab5525ce65cfb3a3fceaf1530ff50.scope: Deactivated successfully.
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.333217475 +0000 UTC m=+0.040622725 container create 58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 02:44:24 np0005603621 systemd[1]: Started libpod-conmon-58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba.scope.
Jan 31 02:44:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.312345381 +0000 UTC m=+0.019750621 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.407928319 +0000 UTC m=+0.115333599 container init 58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_engelbart, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.412272645 +0000 UTC m=+0.119677885 container start 58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_engelbart, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:44:24 np0005603621 hopeful_engelbart[252530]: 167 167
Jan 31 02:44:24 np0005603621 systemd[1]: libpod-58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba.scope: Deactivated successfully.
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.416972972 +0000 UTC m=+0.124378222 container attach 58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_engelbart, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.417310792 +0000 UTC m=+0.124716022 container died 58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:44:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bfdeb0a5cc151170dac42baaa304b03bf4deca55b248c9123fccce8df32709c2-merged.mount: Deactivated successfully.
Jan 31 02:44:24 np0005603621 podman[252514]: 2026-01-31 07:44:24.459796825 +0000 UTC m=+0.167202045 container remove 58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_engelbart, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 02:44:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.7 MiB/s wr, 109 op/s
Jan 31 02:44:24 np0005603621 systemd[1]: libpod-conmon-58aef98049acfb739546624f87cb63cfdad35cdb0dedb299d27bc05c14b7f7ba.scope: Deactivated successfully.
Jan 31 02:44:24 np0005603621 podman[252553]: 2026-01-31 07:44:24.564018294 +0000 UTC m=+0.035000339 container create 1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:44:24 np0005603621 systemd[1]: Started libpod-conmon-1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998.scope.
Jan 31 02:44:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91a5c0a68b93fcd4622eff3d547f90b850d14eee35de7a8a032e690e50c254d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91a5c0a68b93fcd4622eff3d547f90b850d14eee35de7a8a032e690e50c254d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91a5c0a68b93fcd4622eff3d547f90b850d14eee35de7a8a032e690e50c254d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91a5c0a68b93fcd4622eff3d547f90b850d14eee35de7a8a032e690e50c254d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:24 np0005603621 podman[252553]: 2026-01-31 07:44:24.636578219 +0000 UTC m=+0.107560294 container init 1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:44:24 np0005603621 podman[252553]: 2026-01-31 07:44:24.641361599 +0000 UTC m=+0.112343644 container start 1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 31 02:44:24 np0005603621 podman[252553]: 2026-01-31 07:44:24.549524229 +0000 UTC m=+0.020506324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:44:24 np0005603621 podman[252553]: 2026-01-31 07:44:24.653396847 +0000 UTC m=+0.124378882 container attach 1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 02:44:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:25 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:44:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:44:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:44:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:25 np0005603621 kind_golick[252569]: {
Jan 31 02:44:25 np0005603621 kind_golick[252569]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:44:25 np0005603621 kind_golick[252569]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:44:25 np0005603621 kind_golick[252569]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:44:25 np0005603621 kind_golick[252569]:        "osd_id": 0,
Jan 31 02:44:25 np0005603621 kind_golick[252569]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:44:25 np0005603621 kind_golick[252569]:        "type": "bluestore"
Jan 31 02:44:25 np0005603621 kind_golick[252569]:    }
Jan 31 02:44:25 np0005603621 kind_golick[252569]: }
Jan 31 02:44:25 np0005603621 systemd[1]: libpod-1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998.scope: Deactivated successfully.
Jan 31 02:44:25 np0005603621 podman[252553]: 2026-01-31 07:44:25.422993653 +0000 UTC m=+0.893975718 container died 1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:44:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d91a5c0a68b93fcd4622eff3d547f90b850d14eee35de7a8a032e690e50c254d-merged.mount: Deactivated successfully.
Jan 31 02:44:25 np0005603621 podman[252553]: 2026-01-31 07:44:25.958042234 +0000 UTC m=+1.429024319 container remove 1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_golick, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:44:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:44:25 np0005603621 systemd[1]: libpod-conmon-1847017e2e0463c8df6ab7e074924db3bba442593bc65208f4070e2993cbd998.scope: Deactivated successfully.
Jan 31 02:44:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:44:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:44:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:44:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2d20f8ae-b6cc-4075-85ea-eca59aba9ccd does not exist
Jan 31 02:44:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b482f2f9-0601-4bb4-8eea-eb038c899e5b does not exist
Jan 31 02:44:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 350f26c3-e7a7-4c50-bd61-787c96b21b92 does not exist
Jan 31 02:44:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 97 op/s
Jan 31 02:44:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:44:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:44:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:27.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:44:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:27.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:44:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Jan 31 02:44:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:44:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:29.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:44:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:29.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:29 np0005603621 nova_compute[247399]: 2026-01-31 07:44:29.991 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:44:29 np0005603621 nova_compute[247399]: 2026-01-31 07:44:29.993 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.012 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.109 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.110 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.117 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.117 247403 INFO nova.compute.claims [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Claim successful on node compute-0.ctlplane.example.com
Jan 31 02:44:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 31 02:44:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 31 02:44:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.300 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:44:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:30.463 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:44:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:30.464 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:44:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:30.464 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:44:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 116 op/s
Jan 31 02:44:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528162526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.707 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.712 247403 DEBUG nova.compute.provider_tree [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.731 247403 DEBUG nova.scheduler.client.report [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.761 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.762 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.848 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.849 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.897 247403 INFO nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:44:30 np0005603621 nova_compute[247399]: 2026-01-31 07:44:30.925 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.110 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.111 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.112 247403 INFO nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Creating image(s)
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.134 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.156 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.175 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.177 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.177 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:44:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:31.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:31.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:31 np0005603621 nova_compute[247399]: 2026-01-31 07:44:31.769 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Automatically allocating a network for project dc2f6584d8b64364b13683f53c58617f. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460
Jan 31 02:44:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.0 MiB/s wr, 100 op/s
Jan 31 02:44:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:33.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:33.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:33 np0005603621 nova_compute[247399]: 2026-01-31 07:44:33.752 247403 DEBUG nova.virt.libvirt.imagebackend [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 02:44:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 109 MiB data, 225 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1016 KiB/s wr, 45 op/s
Jan 31 02:44:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:35.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:35.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.704 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.749 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.part --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.749 247403 DEBUG nova.virt.images [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.750 247403 DEBUG nova.privsep.utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.751 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.part /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.941 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.part /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.converted" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.943 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.983 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6.converted --force-share --output=json" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:44:35 np0005603621 nova_compute[247399]: 2026-01-31 07:44:35.984 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.002 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.005 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c27f4296-ddb0-4185-b980-255e2f05e479_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.321 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c27f4296-ddb0-4185-b980-255e2f05e479_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.410 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] resizing rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:44:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 109 MiB data, 225 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1016 KiB/s wr, 45 op/s
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.547 247403 DEBUG nova.objects.instance [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lazy-loading 'migration_context' on Instance uuid c27f4296-ddb0-4185-b980-255e2f05e479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.569 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.570 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Ensure instance console log exists: /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.571 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.571 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:44:36 np0005603621 nova_compute[247399]: 2026-01-31 07:44:36.571 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:44:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:37.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:37.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:44:38
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images']
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 227 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.9 MiB/s wr, 114 op/s
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:44:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:44:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:39.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 263 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 8.7 MiB/s wr, 185 op/s
Jan 31 02:44:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:41.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:41.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 279 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.9 MiB/s wr, 161 op/s
Jan 31 02:44:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:44:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:43.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:44:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:43.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:44 np0005603621 nova_compute[247399]: 2026-01-31 07:44:44.380 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Automatically allocated network: {'id': '992dcec1-3019-47a1-a14c-defd99a80f3d', 'name': 'auto_allocated_network', 'tenant_id': 'dc2f6584d8b64364b13683f53c58617f', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['84a7580e-2eb6-465f-aef5-8fe041bdab03', 'c97780ea-8676-4982-b5b9-dfb934b09fd9'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-31T07:44:32Z', 'updated_at': '2026-01-31T07:44:43Z', 'revision_number': 4, 'project_id': 'dc2f6584d8b64364b13683f53c58617f'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 31 02:44:44 np0005603621 nova_compute[247399]: 2026-01-31 07:44:44.390 247403 WARNING oslo_policy.policy [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 02:44:44 np0005603621 nova_compute[247399]: 2026-01-31 07:44:44.390 247403 WARNING oslo_policy.policy [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 31 02:44:44 np0005603621 nova_compute[247399]: 2026-01-31 07:44:44.392 247403 DEBUG nova.policy [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0eb58e8663574849b17616075ce5c43e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc2f6584d8b64364b13683f53c58617f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:44:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.2 MiB/s wr, 185 op/s
Jan 31 02:44:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 31 02:44:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 31 02:44:44 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 31 02:44:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:44:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:45.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:44:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:45.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:45 np0005603621 nova_compute[247399]: 2026-01-31 07:44:45.504 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Successfully created port: 5a3be558-d27f-4c4f-85e6-50d454bcc9ee _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 02:44:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 31 02:44:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 31 02:44:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 31 02:44:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.2 MiB/s wr, 135 op/s
Jan 31 02:44:46 np0005603621 nova_compute[247399]: 2026-01-31 07:44:46.870 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Successfully updated port: 5a3be558-d27f-4c4f-85e6-50d454bcc9ee _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 02:44:46 np0005603621 nova_compute[247399]: 2026-01-31 07:44:46.898 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:44:46 np0005603621 nova_compute[247399]: 2026-01-31 07:44:46.899 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquired lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:44:46 np0005603621 nova_compute[247399]: 2026-01-31 07:44:46.899 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:44:47 np0005603621 nova_compute[247399]: 2026-01-31 07:44:47.118 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:44:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:47.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:47.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:47 np0005603621 nova_compute[247399]: 2026-01-31 07:44:47.373 247403 DEBUG nova.compute.manager [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-changed-5a3be558-d27f-4c4f-85e6-50d454bcc9ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:44:47 np0005603621 nova_compute[247399]: 2026-01-31 07:44:47.373 247403 DEBUG nova.compute.manager [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Refreshing instance network info cache due to event network-changed-5a3be558-d27f-4c4f-85e6-50d454bcc9ee. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:44:47 np0005603621 nova_compute[247399]: 2026-01-31 07:44:47.374 247403 DEBUG oslo_concurrency.lockutils [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.6 MiB/s wr, 41 op/s
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006121645554159687 of space, bias 1.0, pg target 1.836493666247906 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.635073515726782e-07 of space, bias 1.0, pg target 0.00010868869812023079 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:44:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.775 247403 DEBUG nova.network.neutron [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Updating instance_info_cache with network_info: [{"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.807 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Releasing lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.808 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Instance network_info: |[{"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.808 247403 DEBUG oslo_concurrency.lockutils [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.808 247403 DEBUG nova.network.neutron [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Refreshing network info cache for port 5a3be558-d27f-4c4f-85e6-50d454bcc9ee _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.811 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Start _get_guest_xml network_info=[{"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.816 247403 WARNING nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.820 247403 DEBUG nova.virt.libvirt.host [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.821 247403 DEBUG nova.virt.libvirt.host [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.829 247403 DEBUG nova.virt.libvirt.host [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.829 247403 DEBUG nova.virt.libvirt.host [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.830 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.830 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.831 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.831 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.831 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.831 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.831 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.832 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.832 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.832 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.832 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.833 247403 DEBUG nova.virt.hardware [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.836 247403 DEBUG nova.privsep.utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 31 02:44:48 np0005603621 nova_compute[247399]: 2026-01-31 07:44:48.837 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:49.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432316604' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.473 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.502 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:49 np0005603621 podman[252939]: 2026-01-31 07:44:49.505491591 +0000 UTC m=+0.060022284 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.505 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:49 np0005603621 podman[252940]: 2026-01-31 07:44:49.530312359 +0000 UTC m=+0.079589847 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 02:44:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230134747' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.918 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.920 247403 DEBUG nova.virt.libvirt.vif [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1079069239-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1079069239-1',id=2,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc2f6584d8b64364b13683f53c58617f',ramdisk_id='',reservation_id='r-jd4rofhy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-2135409609',owner_user_name='tempest-AutoAllocateNetworkTest-2135409609-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:30Z,user_data=None,user_id='0eb58e8663574849b17616075ce5c43e',uuid=c27f4296-ddb0-4185-b980-255e2f05e479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.920 247403 DEBUG nova.network.os_vif_util [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Converting VIF {"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.921 247403 DEBUG nova.network.os_vif_util [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.923 247403 DEBUG nova.objects.instance [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lazy-loading 'pci_devices' on Instance uuid c27f4296-ddb0-4185-b980-255e2f05e479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.948 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <uuid>c27f4296-ddb0-4185-b980-255e2f05e479</uuid>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <name>instance-00000002</name>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:name>tempest-tempest.common.compute-instance-1079069239-1</nova:name>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:44:48</nova:creationTime>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:user uuid="0eb58e8663574849b17616075ce5c43e">tempest-AutoAllocateNetworkTest-2135409609-project-member</nova:user>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:project uuid="dc2f6584d8b64364b13683f53c58617f">tempest-AutoAllocateNetworkTest-2135409609</nova:project>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <nova:port uuid="5a3be558-d27f-4c4f-85e6-50d454bcc9ee">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.1.0.98" ipVersion="4"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="fdfe:381f:8400::376" ipVersion="6"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <entry name="serial">c27f4296-ddb0-4185-b980-255e2f05e479</entry>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <entry name="uuid">c27f4296-ddb0-4185-b980-255e2f05e479</entry>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c27f4296-ddb0-4185-b980-255e2f05e479_disk">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c27f4296-ddb0-4185-b980-255e2f05e479_disk.config">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:8f:36:2a"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <target dev="tap5a3be558-d2"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/console.log" append="off"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:44:49 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:44:49 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:44:49 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:44:49 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.949 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Preparing to wait for external event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.950 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.950 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.950 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.951 247403 DEBUG nova.virt.libvirt.vif [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1079069239-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1079069239-1',id=2,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc2f6584d8b64364b13683f53c58617f',ramdisk_id='',reservation_id='r-jd4rofhy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-2135409609',owner_user_name='tempest-AutoAllocateNetworkTest-2135409609-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:30Z,user_data=None,user_id='0eb58e8663574849b17616075ce5c43e',uuid=c27f4296-ddb0-4185-b980-255e2f05e479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.951 247403 DEBUG nova.network.os_vif_util [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Converting VIF {"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.952 247403 DEBUG nova.network.os_vif_util [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.952 247403 DEBUG os_vif [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.980 247403 DEBUG ovsdbapp.backend.ovs_idl [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.981 247403 DEBUG ovsdbapp.backend.ovs_idl [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.981 247403 DEBUG ovsdbapp.backend.ovs_idl [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.981 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.983 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.997 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.997 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:44:49 np0005603621 nova_compute[247399]: 2026-01-31 07:44:49.998 247403 INFO oslo.privsep.daemon [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpkn8xotgf/privsep.sock']#033[00m
Jan 31 02:44:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.615 247403 INFO oslo.privsep.daemon [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.520 253078 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.525 253078 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.527 253078 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.528 253078 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253078#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.900 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a3be558-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.901 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a3be558-d2, col_values=(('external_ids', {'iface-id': '5a3be558-d27f-4c4f-85e6-50d454bcc9ee', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:36:2a', 'vm-uuid': 'c27f4296-ddb0-4185-b980-255e2f05e479'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:50 np0005603621 NetworkManager[49013]: <info>  [1769845490.9039] manager: (tap5a3be558-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.905 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:50 np0005603621 nova_compute[247399]: 2026-01-31 07:44:50.909 247403 INFO os_vif [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2')#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.047 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.047 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.048 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] No VIF found with MAC fa:16:3e:8f:36:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.048 247403 INFO nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Using config drive#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.072 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.100 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:44:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:51.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:44:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:51.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.394 247403 DEBUG nova.network.neutron [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Updated VIF entry in instance network info cache for port 5a3be558-d27f-4c4f-85e6-50d454bcc9ee. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.395 247403 DEBUG nova.network.neutron [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Updating instance_info_cache with network_info: [{"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.417 247403 DEBUG oslo_concurrency.lockutils [req-12d98cbc-752c-4b16-ae47-e19a60b6de62 req-7df35a93-b10e-42eb-81bd-7500d253c09a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.676 247403 INFO nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Creating config drive at /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/disk.config#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.679 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx9a3dhje execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.813 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx9a3dhje" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.837 247403 DEBUG nova.storage.rbd_utils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] rbd image c27f4296-ddb0-4185-b980-255e2f05e479_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:51 np0005603621 nova_compute[247399]: 2026-01-31 07:44:51.840 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/disk.config c27f4296-ddb0-4185-b980-255e2f05e479_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.054 247403 DEBUG oslo_concurrency.processutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/disk.config c27f4296-ddb0-4185-b980-255e2f05e479_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.055 247403 INFO nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Deleting local config drive /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479/disk.config because it was imported into RBD.#033[00m
Jan 31 02:44:52 np0005603621 systemd[1]: Starting libvirt secret daemon...
Jan 31 02:44:52 np0005603621 systemd[1]: Started libvirt secret daemon.
Jan 31 02:44:52 np0005603621 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 31 02:44:52 np0005603621 NetworkManager[49013]: <info>  [1769845492.1485] manager: (tap5a3be558-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 31 02:44:52 np0005603621 kernel: tap5a3be558-d2: entered promiscuous mode
Jan 31 02:44:52 np0005603621 ovn_controller[149152]: 2026-01-31T07:44:52Z|00027|binding|INFO|Claiming lport 5a3be558-d27f-4c4f-85e6-50d454bcc9ee for this chassis.
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:52 np0005603621 ovn_controller[149152]: 2026-01-31T07:44:52Z|00028|binding|INFO|5a3be558-d27f-4c4f-85e6-50d454bcc9ee: Claiming fa:16:3e:8f:36:2a 10.1.0.98 fdfe:381f:8400::376
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.153 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:52 np0005603621 systemd-udevd[253178]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.168 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:36:2a 10.1.0.98 fdfe:381f:8400::376'], port_security=['fa:16:3e:8f:36:2a 10.1.0.98 fdfe:381f:8400::376'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.98/26 fdfe:381f:8400::376/64', 'neutron:device_id': 'c27f4296-ddb0-4185-b980-255e2f05e479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-992dcec1-3019-47a1-a14c-defd99a80f3d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc2f6584d8b64364b13683f53c58617f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f48a740a-df16-488d-83ce-01edcece1d5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d5eabbe-dd4d-4e48-a2ac-b48c29338142, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5a3be558-d27f-4c4f-85e6-50d454bcc9ee) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.169 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3be558-d27f-4c4f-85e6-50d454bcc9ee in datapath 992dcec1-3019-47a1-a14c-defd99a80f3d bound to our chassis#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.171 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 992dcec1-3019-47a1-a14c-defd99a80f3d#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.172 159734 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpnmaxlr7e/privsep.sock']#033[00m
Jan 31 02:44:52 np0005603621 NetworkManager[49013]: <info>  [1769845492.1811] device (tap5a3be558-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:44:52 np0005603621 NetworkManager[49013]: <info>  [1769845492.1820] device (tap5a3be558-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:44:52 np0005603621 systemd-machined[212769]: New machine qemu-1-instance-00000002.
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.214 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:52 np0005603621 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Jan 31 02:44:52 np0005603621 ovn_controller[149152]: 2026-01-31T07:44:52Z|00029|binding|INFO|Setting lport 5a3be558-d27f-4c4f-85e6-50d454bcc9ee ovn-installed in OVS
Jan 31 02:44:52 np0005603621 ovn_controller[149152]: 2026-01-31T07:44:52Z|00030|binding|INFO|Setting lport 5a3be558-d27f-4c4f-85e6-50d454bcc9ee up in Southbound
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.869 159734 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.870 159734 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpnmaxlr7e/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.740 253234 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.745 253234 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.749 253234 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.750 253234 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253234#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.872 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845492.8713918, c27f4296-ddb0-4185-b980-255e2f05e479 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.873 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] VM Started (Lifecycle Event)#033[00m
Jan 31 02:44:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:52.872 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7a252f80-9b9c-489f-9292-a3aeb794bb08]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.894 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.898 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845492.8718047, c27f4296-ddb0-4185-b980-255e2f05e479 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.898 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.920 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.924 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:44:52 np0005603621 nova_compute[247399]: 2026-01-31 07:44:52.970 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:44:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:44:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:53.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:44:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:53.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:53.697 253234 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:53.697 253234 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:53.697 253234 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:53 np0005603621 nova_compute[247399]: 2026-01-31 07:44:53.975 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "9d36f98c-d489-4c17-b997-24bacd1c9f58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:53 np0005603621 nova_compute[247399]: 2026-01-31 07:44:53.975 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "9d36f98c-d489-4c17-b997-24bacd1c9f58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.003 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.086 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.087 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.092 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.093 247403 INFO nova.compute.claims [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.113 247403 DEBUG nova.compute.manager [req-c1280e27-d9d2-4c23-9ea7-7e6ad238c723 req-faea7b7a-c600-497e-8402-55c198e748d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.114 247403 DEBUG oslo_concurrency.lockutils [req-c1280e27-d9d2-4c23-9ea7-7e6ad238c723 req-faea7b7a-c600-497e-8402-55c198e748d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.114 247403 DEBUG oslo_concurrency.lockutils [req-c1280e27-d9d2-4c23-9ea7-7e6ad238c723 req-faea7b7a-c600-497e-8402-55c198e748d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.114 247403 DEBUG oslo_concurrency.lockutils [req-c1280e27-d9d2-4c23-9ea7-7e6ad238c723 req-faea7b7a-c600-497e-8402-55c198e748d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.115 247403 DEBUG nova.compute.manager [req-c1280e27-d9d2-4c23-9ea7-7e6ad238c723 req-faea7b7a-c600-497e-8402-55c198e748d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Processing event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.115 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.118 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845494.117953, c27f4296-ddb0-4185-b980-255e2f05e479 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.118 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.127 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.131 247403 INFO nova.virt.libvirt.driver [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Instance spawned successfully.#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.131 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.217 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.227 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.231 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.232 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.232 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.233 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.233 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.234 247403 DEBUG nova.virt.libvirt.driver [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.261 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.306 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.338 247403 INFO nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Took 23.23 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.339 247403 DEBUG nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.396 247403 INFO nova.compute.manager [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Took 24.32 seconds to build instance.#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.415 247403 DEBUG oslo_concurrency.lockutils [None req-36ed46a6-e1c2-4d7b-8a82-f14719996b90 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 34 KiB/s wr, 21 op/s
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.711 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ec25b30a-9884-430b-ba8d-ae0671391c9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.712 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap992dcec1-31 in ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.714 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap992dcec1-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.714 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c015c748-f421-4334-9ffb-e97f39a82a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.721 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee4ecbb-0212-4552-8008-945535c0170d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.738 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[409bdf1b-7d25-48a3-a5c2-3effcc3cada4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804410021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.764 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.770 247403 DEBUG nova.compute.provider_tree [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.771 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[808c828e-2f2e-4215-9d74-8534b40bacab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:54.773 159734 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpyqrzsz_e/privsep.sock']#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.808 247403 ERROR nova.scheduler.client.report [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [req-687b6eb4-2613-4c31-9e8b-0fa70c08c423] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID d7116329-87c2-469a-b33a-1e01daf74ceb.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-687b6eb4-2613-4c31-9e8b-0fa70c08c423"}]}#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.831 247403 DEBUG nova.scheduler.client.report [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.860 247403 DEBUG nova.scheduler.client.report [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.861 247403 DEBUG nova.compute.provider_tree [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.884 247403 DEBUG nova.scheduler.client.report [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.905 247403 DEBUG nova.scheduler.client.report [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:44:54 np0005603621 nova_compute[247399]: 2026-01-31 07:44:54.975 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:44:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 31 02:44:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:55.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:44:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:55.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:44:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635768266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.437 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.442 247403 DEBUG nova.compute.provider_tree [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:44:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.475 159734 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.475 159734 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyqrzsz_e/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.330 253297 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.335 253297 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.336 253297 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.337 253297 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253297#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.478 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b44b02dc-7d0b-49fe-bf66-c58237c426bd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.609 247403 DEBUG nova.scheduler.client.report [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updated inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.610 247403 DEBUG nova.compute.provider_tree [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updating resource provider d7116329-87c2-469a-b33a-1e01daf74ceb generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.610 247403 DEBUG nova.compute.provider_tree [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.641 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.642 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.694 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.695 247403 DEBUG nova.network.neutron [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.720 247403 INFO nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.739 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.848 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.859 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.859 247403 INFO nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Creating image(s)#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.884 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.916 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.973 253297 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.973 253297 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:55.973 253297 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.974 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.979 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:55 np0005603621 nova_compute[247399]: 2026-01-31 07:44:55.994 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.001 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.001 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.027 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.039 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.040 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.040 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.041 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.083 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.086 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.127 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.128 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.134 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.134 247403 INFO nova.compute.claims [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.235 247403 DEBUG nova.compute.manager [req-9a680257-9b8d-4359-915f-01c0f38ee2a5 req-e7dbfee5-1095-4926-bd43-4298e4183fdb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.236 247403 DEBUG oslo_concurrency.lockutils [req-9a680257-9b8d-4359-915f-01c0f38ee2a5 req-e7dbfee5-1095-4926-bd43-4298e4183fdb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.236 247403 DEBUG oslo_concurrency.lockutils [req-9a680257-9b8d-4359-915f-01c0f38ee2a5 req-e7dbfee5-1095-4926-bd43-4298e4183fdb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.237 247403 DEBUG oslo_concurrency.lockutils [req-9a680257-9b8d-4359-915f-01c0f38ee2a5 req-e7dbfee5-1095-4926-bd43-4298e4183fdb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.237 247403 DEBUG nova.compute.manager [req-9a680257-9b8d-4359-915f-01c0f38ee2a5 req-e7dbfee5-1095-4926-bd43-4298e4183fdb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] No waiting events found dispatching network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.237 247403 WARNING nova.compute.manager [req-9a680257-9b8d-4359-915f-01c0f38ee2a5 req-e7dbfee5-1095-4926-bd43-4298e4183fdb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received unexpected event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee for instance with vm_state active and task_state None.#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.303 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.347 247403 DEBUG nova.network.neutron [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.348 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:44:56 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 31 02:44:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 33 KiB/s wr, 21 op/s
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.560 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[42ba1529-364f-40a7-a9e0-52c9168c4d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.578 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1f08fb75-8671-4cad-8375-9deea3c8675c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 NetworkManager[49013]: <info>  [1769845496.5800] manager: (tap992dcec1-30): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 31 02:44:56 np0005603621 systemd-udevd[253426]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.600 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[76150db8-07a5-4a02-832c-ea39610ce3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.604 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ebe2fc-2ea2-4bf6-b5ed-28063bbba598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 NetworkManager[49013]: <info>  [1769845496.6195] device (tap992dcec1-30): carrier: link connected
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.623 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d438ef0b-4e68-4b19-89bb-18a6fb537d72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.638 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[986e47ad-68b3-4f29-bc77-854200cff175]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap992dcec1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:60:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480213, 'reachable_time': 27383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253444, 'error': None, 'target': 'ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.653 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d3edaac3-12ee-4d25-ad45-82d9b4830479]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:6054'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480213, 'tstamp': 480213}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253445, 'error': None, 'target': 'ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.664 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11d2a0a6-57ed-406d-923c-d37bd3cea59a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap992dcec1-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:60:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480213, 'reachable_time': 27383, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253446, 'error': None, 'target': 'ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.686 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29fccc2e-4db0-49db-80c4-b83717357ea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.724 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f90544-1536-47a7-9337-14204aba24cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.726 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap992dcec1-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.726 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.727 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap992dcec1-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:56 np0005603621 NetworkManager[49013]: <info>  [1769845496.7292] manager: (tap992dcec1-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 31 02:44:56 np0005603621 kernel: tap992dcec1-30: entered promiscuous mode
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.728 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.733 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap992dcec1-30, col_values=(('external_ids', {'iface-id': '1443ed6f-926c-4e3e-8e41-280e0ddc0f64'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.735 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:56 np0005603621 ovn_controller[149152]: 2026-01-31T07:44:56Z|00031|binding|INFO|Releasing lport 1443ed6f-926c-4e3e-8e41-280e0ddc0f64 from this chassis (sb_readonly=0)
Jan 31 02:44:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.738 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/992dcec1-3019-47a1-a14c-defd99a80f3d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/992dcec1-3019-47a1-a14c-defd99a80f3d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:44:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320196083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.739 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.739 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c396b79f-3339-4239-82be-f4956de7c394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.740 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-992dcec1-3019-47a1-a14c-defd99a80f3d
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/992dcec1-3019-47a1-a14c-defd99a80f3d.pid.haproxy
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 992dcec1-3019-47a1-a14c-defd99a80f3d
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:44:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:44:56.741 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d', 'env', 'PROCESS_TAG=haproxy-992dcec1-3019-47a1-a14c-defd99a80f3d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/992dcec1-3019-47a1-a14c-defd99a80f3d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.756 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.769 247403 DEBUG nova.compute.provider_tree [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.791 247403 DEBUG nova.scheduler.client.report [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.819 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.820 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.864 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.865 247403 DEBUG nova.network.neutron [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.868 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.904 247403 INFO nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.948 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:44:56 np0005603621 nova_compute[247399]: 2026-01-31 07:44:56.957 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] resizing rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:44:57 np0005603621 podman[253535]: 2026-01-31 07:44:57.052218601 +0000 UTC m=+0.019171882 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.204 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.206 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.207 247403 INFO nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Creating image(s)#033[00m
Jan 31 02:44:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:44:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:57.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:44:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:57.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:57 np0005603621 podman[253535]: 2026-01-31 07:44:57.375348586 +0000 UTC m=+0.342301847 container create 9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.389 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.414 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.454 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:57 np0005603621 systemd[1]: Started libpod-conmon-9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5.scope.
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.459 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.477 247403 DEBUG nova.policy [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '090f32af51df4fddbcf003f38ed84a37', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f805a3fee7864f1bb92a97da17d9ed71', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:44:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:44:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09d0a308af1f85022189fce74433f6b296a779014bae5a522257626dc8557ded/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.638 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.639 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.640 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.640 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.662 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.664 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b2c254bb-3943-440e-8ca2-306e8777083f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:57 np0005603621 podman[253535]: 2026-01-31 07:44:57.675045445 +0000 UTC m=+0.641998736 container init 9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:44:57 np0005603621 podman[253535]: 2026-01-31 07:44:57.683337905 +0000 UTC m=+0.650291166 container start 9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.683 247403 DEBUG nova.objects.instance [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lazy-loading 'migration_context' on Instance uuid 9d36f98c-d489-4c17-b997-24bacd1c9f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.695 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.695 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Ensure instance console log exists: /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.696 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.696 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.696 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.698 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.703 247403 WARNING nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.708 247403 DEBUG nova.virt.libvirt.host [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:44:57 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [NOTICE]   (253647) : New worker (253651) forked
Jan 31 02:44:57 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [NOTICE]   (253647) : Loading success.
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.718 247403 DEBUG nova.virt.libvirt.host [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.724 247403 DEBUG nova.virt.libvirt.host [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.724 247403 DEBUG nova.virt.libvirt.host [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.726 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.726 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.727 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.727 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.727 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.727 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.727 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.727 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.728 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.728 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.728 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.728 247403 DEBUG nova.virt.hardware [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:44:57 np0005603621 nova_compute[247399]: 2026-01-31 07:44:57.730 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.147 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.179 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.184 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 331 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.3 MiB/s wr, 157 op/s
Jan 31 02:44:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:44:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772919142' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.627 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.629 247403 DEBUG nova.objects.instance [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9d36f98c-d489-4c17-b997-24bacd1c9f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.816 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <uuid>9d36f98c-d489-4c17-b997-24bacd1c9f58</uuid>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <name>instance-00000006</name>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:name>tempest-LiveMigrationNegativeTest-server-216617457</nova:name>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:44:57</nova:creationTime>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:user uuid="5dac6d92165448b3a1c60bea57f8e48d">tempest-LiveMigrationNegativeTest-402716808-project-member</nova:user>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <nova:project uuid="5047d468cca049c2891d27def49df57f">tempest-LiveMigrationNegativeTest-402716808</nova:project>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <entry name="serial">9d36f98c-d489-4c17-b997-24bacd1c9f58</entry>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <entry name="uuid">9d36f98c-d489-4c17-b997-24bacd1c9f58</entry>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/9d36f98c-d489-4c17-b997-24bacd1c9f58_disk">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/9d36f98c-d489-4c17-b997-24bacd1c9f58_disk.config">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/console.log" append="off"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:44:58 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:44:58 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:44:58 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:44:58 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.886 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.886 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.887 247403 INFO nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Using config drive#033[00m
Jan 31 02:44:58 np0005603621 nova_compute[247399]: 2026-01-31 07:44:58.911 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.082 247403 INFO nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Creating config drive at /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/disk.config#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.084 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpa5p34qum execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.101 247403 DEBUG nova.network.neutron [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Successfully created port: bdedece8-b56c-4a93-94a1-88106232511a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.198 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpa5p34qum" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.234 247403 DEBUG nova.storage.rbd_utils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] rbd image 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.237 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/disk.config 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:44:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:44:59.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:44:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:44:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:44:59.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.816 247403 DEBUG oslo_concurrency.processutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/disk.config 9d36f98c-d489-4c17-b997-24bacd1c9f58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.817 247403 INFO nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Deleting local config drive /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58/disk.config because it was imported into RBD.#033[00m
Jan 31 02:44:59 np0005603621 systemd-machined[212769]: New machine qemu-2-instance-00000006.
Jan 31 02:44:59 np0005603621 systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.976 247403 DEBUG nova.network.neutron [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Successfully updated port: bdedece8-b56c-4a93-94a1-88106232511a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.992 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.992 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquired lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:44:59 np0005603621 nova_compute[247399]: 2026-01-31 07:44:59.993 247403 DEBUG nova.network.neutron [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.064 247403 DEBUG nova.compute.manager [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-changed-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.064 247403 DEBUG nova.compute.manager [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Refreshing instance network info cache due to event network-changed-bdedece8-b56c-4a93-94a1-88106232511a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.064 247403 DEBUG oslo_concurrency.lockutils [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.185 247403 DEBUG nova.network.neutron [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:45:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.478 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b2c254bb-3943-440e-8ca2-306e8777083f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.814s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 360 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 219 op/s
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.582 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] resizing rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.644 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.645 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.645 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845500.6455007, 9d36f98c-d489-4c17-b997-24bacd1c9f58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.646 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.650 247403 INFO nova.virt.libvirt.driver [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Instance spawned successfully.#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.650 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.671 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.676 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.680 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.680 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.680 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.681 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.681 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.681 247403 DEBUG nova.virt.libvirt.driver [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.708 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.708 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845500.6455624, 9d36f98c-d489-4c17-b997-24bacd1c9f58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.709 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] VM Started (Lifecycle Event)#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.736 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.738 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.744 247403 INFO nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Took 4.89 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.744 247403 DEBUG nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.767 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.831 247403 INFO nova.compute.manager [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Took 6.77 seconds to build instance.#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.836 247403 DEBUG nova.objects.instance [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lazy-loading 'migration_context' on Instance uuid b2c254bb-3943-440e-8ca2-306e8777083f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.863 247403 DEBUG oslo_concurrency.lockutils [None req-9285aa23-f7c4-470e-adf1-dffbc0091c4c 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "9d36f98c-d489-4c17-b997-24bacd1c9f58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.867 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.868 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Ensure instance console log exists: /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.868 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.869 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.869 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:00 np0005603621 nova_compute[247399]: 2026-01-31 07:45:00.998 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.217 247403 DEBUG nova.network.neutron [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updating instance_info_cache with network_info: [{"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.249 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Releasing lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.250 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Instance network_info: |[{"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:45:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.250 247403 DEBUG oslo_concurrency.lockutils [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.251 247403 DEBUG nova.network.neutron [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Refreshing network info cache for port bdedece8-b56c-4a93-94a1-88106232511a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:45:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:01.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.253 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Start _get_guest_xml network_info=[{"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.259 247403 WARNING nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.267 247403 DEBUG nova.virt.libvirt.host [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.268 247403 DEBUG nova.virt.libvirt.host [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.274 247403 DEBUG nova.virt.libvirt.host [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.274 247403 DEBUG nova.virt.libvirt.host [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.275 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.276 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.276 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.276 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.276 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.277 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.277 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.277 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.277 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.278 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.278 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.278 247403 DEBUG nova.virt.hardware [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.280 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:01.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:45:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610530169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.760 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.784 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:45:01 np0005603621 nova_compute[247399]: 2026-01-31 07:45:01.787 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:45:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258311069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.211 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.212 247403 DEBUG nova.virt.libvirt.vif [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-794986223',display_name='tempest-VolumesAssistedSnapshotsTest-server-794986223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-794986223',id=7,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIRibUCXpaRvfruZc+JoMajzby6Ycr/5JBTPe80H+qeKg+/bA6hMwZSsQEJW9uZzBqw8Q9DMTxDbhMp/Ixx/VbTydQSpeQG+yyjehVq1dXjqrxBvW3ZUCIqDBfc6td0XQ==',key_name='tempest-keypair-602180198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f805a3fee7864f1bb92a97da17d9ed71',ramdisk_id='',reservation_id='r-dyja6oiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-932290615',owner_user_name='tempest-VolumesAssistedSnapshotsTest-932290615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090f32af51df4fddbcf003f38ed84a37',uuid=b2c254bb-3943-440e-8ca2-306e8777083f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.212 247403 DEBUG nova.network.os_vif_util [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Converting VIF {"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.213 247403 DEBUG nova.network.os_vif_util [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.214 247403 DEBUG nova.objects.instance [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lazy-loading 'pci_devices' on Instance uuid b2c254bb-3943-440e-8ca2-306e8777083f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.232 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.233 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <uuid>b2c254bb-3943-440e-8ca2-306e8777083f</uuid>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <name>instance-00000007</name>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:name>tempest-VolumesAssistedSnapshotsTest-server-794986223</nova:name>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:45:01</nova:creationTime>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:user uuid="090f32af51df4fddbcf003f38ed84a37">tempest-VolumesAssistedSnapshotsTest-932290615-project-member</nova:user>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:project uuid="f805a3fee7864f1bb92a97da17d9ed71">tempest-VolumesAssistedSnapshotsTest-932290615</nova:project>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <nova:port uuid="bdedece8-b56c-4a93-94a1-88106232511a">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <entry name="serial">b2c254bb-3943-440e-8ca2-306e8777083f</entry>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <entry name="uuid">b2c254bb-3943-440e-8ca2-306e8777083f</entry>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b2c254bb-3943-440e-8ca2-306e8777083f_disk">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b2c254bb-3943-440e-8ca2-306e8777083f_disk.config">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:7d:46:45"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <target dev="tapbdedece8-b5"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/console.log" append="off"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:45:02 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:45:02 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:45:02 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:45:02 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.234 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Preparing to wait for external event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.235 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.235 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.235 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.236 247403 DEBUG nova.virt.libvirt.vif [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-794986223',display_name='tempest-VolumesAssistedSnapshotsTest-server-794986223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-794986223',id=7,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIRibUCXpaRvfruZc+JoMajzby6Ycr/5JBTPe80H+qeKg+/bA6hMwZSsQEJW9uZzBqw8Q9DMTxDbhMp/Ixx/VbTydQSpeQG+yyjehVq1dXjqrxBvW3ZUCIqDBfc6td0XQ==',key_name='tempest-keypair-602180198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f805a3fee7864f1bb92a97da17d9ed71',ramdisk_id='',reservation_id='r-dyja6oiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAssistedSnapshotsTest-932290615',owner_user_name='tempest-VolumesAssistedSnapshotsTest-932290615-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:44:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090f32af51df4fddbcf003f38ed84a37',uuid=b2c254bb-3943-440e-8ca2-306e8777083f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.236 247403 DEBUG nova.network.os_vif_util [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Converting VIF {"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.237 247403 DEBUG nova.network.os_vif_util [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.237 247403 DEBUG os_vif [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.238 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.239 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.239 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.239 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.243 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.244 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbdedece8-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.244 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbdedece8-b5, col_values=(('external_ids', {'iface-id': 'bdedece8-b56c-4a93-94a1-88106232511a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:46:45', 'vm-uuid': 'b2c254bb-3943-440e-8ca2-306e8777083f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:02 np0005603621 NetworkManager[49013]: <info>  [1769845502.2469] manager: (tapbdedece8-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.253 247403 INFO os_vif [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5')#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.256 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.333 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.334 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.334 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] No VIF found with MAC fa:16:3e:7d:46:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.336 247403 INFO nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Using config drive#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.367 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:45:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 367 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.7 MiB/s wr, 250 op/s
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.637 247403 DEBUG nova.network.neutron [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updated VIF entry in instance network info cache for port bdedece8-b56c-4a93-94a1-88106232511a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.637 247403 DEBUG nova.network.neutron [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updating instance_info_cache with network_info: [{"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.660 247403 DEBUG oslo_concurrency.lockutils [req-06fd62ae-b9ab-465a-a30e-856cdb3492c1 req-e98bf439-2ddd-42e1-a332-4d229e0a39b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.767 247403 INFO nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Creating config drive at /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/disk.config#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.771 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8s85nr6y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.888 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8s85nr6y" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.916 247403 DEBUG nova.storage.rbd_utils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] rbd image b2c254bb-3943-440e-8ca2-306e8777083f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:45:02 np0005603621 nova_compute[247399]: 2026-01-31 07:45:02.919 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/disk.config b2c254bb-3943-440e-8ca2-306e8777083f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:03.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:03 np0005603621 nova_compute[247399]: 2026-01-31 07:45:03.270 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:03.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.221 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.222 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.223 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 399 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 4.3 MiB/s wr, 350 op/s
Jan 31 02:45:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350912556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.625 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.658 247403 DEBUG oslo_concurrency.processutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/disk.config b2c254bb-3943-440e-8ca2-306e8777083f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.659 247403 INFO nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Deleting local config drive /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f/disk.config because it was imported into RBD.#033[00m
Jan 31 02:45:04 np0005603621 kernel: tapbdedece8-b5: entered promiscuous mode
Jan 31 02:45:04 np0005603621 NetworkManager[49013]: <info>  [1769845504.6910] manager: (tapbdedece8-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 31 02:45:04 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:04Z|00032|binding|INFO|Claiming lport bdedece8-b56c-4a93-94a1-88106232511a for this chassis.
Jan 31 02:45:04 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:04Z|00033|binding|INFO|bdedece8-b56c-4a93-94a1-88106232511a: Claiming fa:16:3e:7d:46:45 10.100.0.7
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.698 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.703 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:46:45 10.100.0.7'], port_security=['fa:16:3e:7d:46:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b2c254bb-3943-440e-8ca2-306e8777083f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f805a3fee7864f1bb92a97da17d9ed71', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c25a702d-7879-4883-9632-171c3b820144', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d625932-0ad2-470e-81e7-116e9cfae134, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bdedece8-b56c-4a93-94a1-88106232511a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.705 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bdedece8-b56c-4a93-94a1-88106232511a in datapath 9702c86a-0c3b-47e6-b5f2-4220ef08a7ed bound to our chassis#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.707 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9702c86a-0c3b-47e6-b5f2-4220ef08a7ed#033[00m
Jan 31 02:45:04 np0005603621 systemd-machined[212769]: New machine qemu-3-instance-00000007.
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.716 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[438810a3-a3b3-42a9-99a4-45299e82f43a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.717 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9702c86a-01 in ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.718 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9702c86a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.718 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae9f809-4387-4bc0-9921-9b24408ec56c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:04Z|00034|binding|INFO|Setting lport bdedece8-b56c-4a93-94a1-88106232511a ovn-installed in OVS
Jan 31 02:45:04 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:04Z|00035|binding|INFO|Setting lport bdedece8-b56c-4a93-94a1-88106232511a up in Southbound
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.720 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6cb6c4-3c9b-42d9-b720-74edf025797a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:04 np0005603621 systemd[1]: Started Virtual Machine qemu-3-instance-00000007.
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.741 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4bc3c219-8695-4dac-999d-9a3dfebd67e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 systemd-udevd[254095]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.753 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[473a4882-d86c-4301-a986-4407c6afd06b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 NetworkManager[49013]: <info>  [1769845504.7545] device (tapbdedece8-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:45:04 np0005603621 NetworkManager[49013]: <info>  [1769845504.7550] device (tapbdedece8-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.759 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.759 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.767 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.767 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.771 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.772 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.775 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7cbf4419-38ad-4c36-9d7d-01279c6fb0c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 NetworkManager[49013]: <info>  [1769845504.7800] manager: (tap9702c86a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.779 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cc940f49-7fac-4671-9481-470b63da59c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.809 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2728c61d-9a41-4831-b768-d55c6562286f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.813 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed3a68c-d7fc-4d15-b06c-d2b83808214d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 NetworkManager[49013]: <info>  [1769845504.8382] device (tap9702c86a-00): carrier: link connected
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.842 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b0086096-35e8-4b13-9f1c-e5b6c22d57a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.860 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[461325af-f891-4bae-b517-0b088c9e8806]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9702c86a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6a:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481035, 'reachable_time': 27747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254126, 'error': None, 'target': 'ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.876 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6d9d6e73-581b-482c-9a0b-28ef63ed4f81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:6a52'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481035, 'tstamp': 481035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254128, 'error': None, 'target': 'ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.890 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[15e1a2c6-b223-4513-a788-22d182ff9f0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9702c86a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:6a:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481035, 'reachable_time': 27747, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254129, 'error': None, 'target': 'ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.918 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[234c651d-64b0-40ac-9994-328ba6bfb1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.948 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.949 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4610MB free_disk=20.833393096923828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.949 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:04 np0005603621 nova_compute[247399]: 2026-01-31 07:45:04.949 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.971 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b5525a51-cb85-42a7-bca9-de569d0111ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.972 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9702c86a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.972 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:45:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:04.973 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9702c86a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.018 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:05 np0005603621 kernel: tap9702c86a-00: entered promiscuous mode
Jan 31 02:45:05 np0005603621 NetworkManager[49013]: <info>  [1769845505.0230] manager: (tap9702c86a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:05.024 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9702c86a-00, col_values=(('external_ids', {'iface-id': '3569f852-28e0-418e-988e-a12d37b88814'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:05 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:05Z|00036|binding|INFO|Releasing lport 3569f852-28e0-418e-988e-a12d37b88814 from this chassis (sb_readonly=0)
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.030 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:05.033 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9702c86a-0c3b-47e6-b5f2-4220ef08a7ed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9702c86a-0c3b-47e6-b5f2-4220ef08a7ed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:05.034 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ca302588-37e1-4187-9038-04c35ba99ff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:05.035 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/9702c86a-0c3b-47e6-b5f2-4220ef08a7ed.pid.haproxy
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 9702c86a-0c3b-47e6-b5f2-4220ef08a7ed
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:45:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:05.035 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'env', 'PROCESS_TAG=haproxy-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9702c86a-0c3b-47e6-b5f2-4220ef08a7ed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.077 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c27f4296-ddb0-4185-b980-255e2f05e479 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.078 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 9d36f98c-d489-4c17-b997-24bacd1c9f58 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.078 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance b2c254bb-3943-440e-8ca2-306e8777083f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.078 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.079 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:45:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.258 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845505.2576768, b2c254bb-3943-440e-8ca2-306e8777083f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.258 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] VM Started (Lifecycle Event)#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.289 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.293 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845505.2579293, b2c254bb-3943-440e-8ca2-306e8777083f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.293 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.302 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.320 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.323 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:45:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:05.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.346 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.416 247403 DEBUG nova.compute.manager [req-969de38f-d658-44fd-951b-8d59f2c0d45f req-d93fdefa-d81d-4d0a-ad5d-fff8029770a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.417 247403 DEBUG oslo_concurrency.lockutils [req-969de38f-d658-44fd-951b-8d59f2c0d45f req-d93fdefa-d81d-4d0a-ad5d-fff8029770a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.417 247403 DEBUG oslo_concurrency.lockutils [req-969de38f-d658-44fd-951b-8d59f2c0d45f req-d93fdefa-d81d-4d0a-ad5d-fff8029770a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.417 247403 DEBUG oslo_concurrency.lockutils [req-969de38f-d658-44fd-951b-8d59f2c0d45f req-d93fdefa-d81d-4d0a-ad5d-fff8029770a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.418 247403 DEBUG nova.compute.manager [req-969de38f-d658-44fd-951b-8d59f2c0d45f req-d93fdefa-d81d-4d0a-ad5d-fff8029770a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Processing event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.419 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.423 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845505.4233902, b2c254bb-3943-440e-8ca2-306e8777083f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.424 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.425 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.429 247403 INFO nova.virt.libvirt.driver [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Instance spawned successfully.#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.429 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.446 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:05 np0005603621 podman[254203]: 2026-01-31 07:45:05.355934303 +0000 UTC m=+0.039343505 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.483 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.486 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.486 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.487 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.487 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.488 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.488 247403 DEBUG nova.virt.libvirt.driver [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.523 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.576 247403 INFO nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Took 8.37 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.576 247403 DEBUG nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.630 247403 INFO nova.compute.manager [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Took 9.52 seconds to build instance.#033[00m
Jan 31 02:45:05 np0005603621 podman[254203]: 2026-01-31 07:45:05.644347768 +0000 UTC m=+0.327756950 container create 5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.653 247403 DEBUG oslo_concurrency.lockutils [None req-9ae9d898-304a-4025-a510-fb7304178bbf 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:05 np0005603621 systemd[1]: Started libpod-conmon-5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148.scope.
Jan 31 02:45:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2358262564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.766 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.771 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:45:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beaf478a07f2f60b484a560718a3acf539a4e9f93270e24a205c36cb1f3aae98/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.796 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.820 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:45:05 np0005603621 nova_compute[247399]: 2026-01-31 07:45:05.820 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:05 np0005603621 podman[254203]: 2026-01-31 07:45:05.86984776 +0000 UTC m=+0.553256962 container init 5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:45:05 np0005603621 podman[254203]: 2026-01-31 07:45:05.874029331 +0000 UTC m=+0.557438503 container start 5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 02:45:05 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [NOTICE]   (254246) : New worker (254248) forked
Jan 31 02:45:05 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [NOTICE]   (254246) : Loading success.
Jan 31 02:45:06 np0005603621 nova_compute[247399]: 2026-01-31 07:45:06.142 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 399 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.9 MiB/s wr, 319 op/s
Jan 31 02:45:06 np0005603621 nova_compute[247399]: 2026-01-31 07:45:06.821 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:06 np0005603621 nova_compute[247399]: 2026-01-31 07:45:06.822 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:45:06 np0005603621 nova_compute[247399]: 2026-01-31 07:45:06.822 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.200 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.200 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c27f4296-ddb0-4185-b980-255e2f05e479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.248 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:07.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:07.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.510 247403 DEBUG nova.compute.manager [req-d2f2db60-0952-4854-a0ff-4e0a29c43208 req-3211838a-cc00-4f8c-85f5-94fb4c528774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.511 247403 DEBUG oslo_concurrency.lockutils [req-d2f2db60-0952-4854-a0ff-4e0a29c43208 req-3211838a-cc00-4f8c-85f5-94fb4c528774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.511 247403 DEBUG oslo_concurrency.lockutils [req-d2f2db60-0952-4854-a0ff-4e0a29c43208 req-3211838a-cc00-4f8c-85f5-94fb4c528774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.511 247403 DEBUG oslo_concurrency.lockutils [req-d2f2db60-0952-4854-a0ff-4e0a29c43208 req-3211838a-cc00-4f8c-85f5-94fb4c528774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.512 247403 DEBUG nova.compute.manager [req-d2f2db60-0952-4854-a0ff-4e0a29c43208 req-3211838a-cc00-4f8c-85f5-94fb4c528774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] No waiting events found dispatching network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:07 np0005603621 nova_compute[247399]: 2026-01-31 07:45:07.512 247403 WARNING nova.compute.manager [req-d2f2db60-0952-4854-a0ff-4e0a29c43208 req-3211838a-cc00-4f8c-85f5-94fb4c528774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received unexpected event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a for instance with vm_state active and task_state None.#033[00m
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:45:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 447 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 9.6 MiB/s rd, 6.9 MiB/s wr, 471 op/s
Jan 31 02:45:08 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:08Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:36:2a 10.1.0.98
Jan 31 02:45:08 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:08Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:36:2a 10.1.0.98
Jan 31 02:45:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:09.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:09.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:09 np0005603621 nova_compute[247399]: 2026-01-31 07:45:09.916 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9264] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/31)
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9271] device (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <warn>  [1769845509.9272] device (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9278] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/32)
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9409] device (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <warn>  [1769845509.9410] device (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9417] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9422] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9426] device (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:45:09 np0005603621 NetworkManager[49013]: <info>  [1769845509.9429] device (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 31 02:45:09 np0005603621 nova_compute[247399]: 2026-01-31 07:45:09.980 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:09 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:09Z|00037|binding|INFO|Releasing lport 3569f852-28e0-418e-988e-a12d37b88814 from this chassis (sb_readonly=0)
Jan 31 02:45:09 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:09Z|00038|binding|INFO|Releasing lport 1443ed6f-926c-4e3e-8e41-280e0ddc0f64 from this chassis (sb_readonly=0)
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.015 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:10.227 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:45:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:10.228 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:45:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.374 247403 DEBUG nova.compute.manager [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-changed-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.375 247403 DEBUG nova.compute.manager [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Refreshing instance network info cache due to event network-changed-bdedece8-b56c-4a93-94a1-88106232511a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.376 247403 DEBUG oslo_concurrency.lockutils [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.376 247403 DEBUG oslo_concurrency.lockutils [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:10 np0005603621 nova_compute[247399]: 2026-01-31 07:45:10.376 247403 DEBUG nova.network.neutron [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Refreshing network info cache for port bdedece8-b56c-4a93-94a1-88106232511a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:45:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 479 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.8 MiB/s wr, 395 op/s
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.210 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Updating instance_info_cache with network_info: [{"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.238 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c27f4296-ddb0-4185-b980-255e2f05e479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.239 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.239 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.239 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.240 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.240 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.240 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:11 np0005603621 nova_compute[247399]: 2026-01-31 07:45:11.240 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:45:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:11.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:12 np0005603621 nova_compute[247399]: 2026-01-31 07:45:12.252 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 493 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 7.6 MiB/s wr, 368 op/s
Jan 31 02:45:12 np0005603621 nova_compute[247399]: 2026-01-31 07:45:12.612 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:45:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:13.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:13.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.226 247403 DEBUG nova.network.neutron [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updated VIF entry in instance network info cache for port bdedece8-b56c-4a93-94a1-88106232511a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.227 247403 DEBUG nova.network.neutron [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updating instance_info_cache with network_info: [{"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.265 247403 DEBUG oslo_concurrency.lockutils [req-033c8dab-df39-4cfb-8b20-c0318b8e28ec req-0595abf2-245e-4a10-9edd-965e0d7f45f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b2c254bb-3943-440e-8ca2-306e8777083f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.415 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.416 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.416 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.416 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.416 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.417 247403 INFO nova.compute.manager [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Terminating instance#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.418 247403 DEBUG nova.compute.manager [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:45:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 509 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 8.4 MiB/s wr, 470 op/s
Jan 31 02:45:14 np0005603621 kernel: tap5a3be558-d2 (unregistering): left promiscuous mode
Jan 31 02:45:14 np0005603621 NetworkManager[49013]: <info>  [1769845514.9000] device (tap5a3be558-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:45:14 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:14Z|00039|binding|INFO|Releasing lport 5a3be558-d27f-4c4f-85e6-50d454bcc9ee from this chassis (sb_readonly=0)
Jan 31 02:45:14 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:14Z|00040|binding|INFO|Setting lport 5a3be558-d27f-4c4f-85e6-50d454bcc9ee down in Southbound
Jan 31 02:45:14 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:14Z|00041|binding|INFO|Removing iface tap5a3be558-d2 ovn-installed in OVS
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.945 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:14.951 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:36:2a 10.1.0.98 fdfe:381f:8400::376'], port_security=['fa:16:3e:8f:36:2a 10.1.0.98 fdfe:381f:8400::376'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.98/26 fdfe:381f:8400::376/64', 'neutron:device_id': 'c27f4296-ddb0-4185-b980-255e2f05e479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-992dcec1-3019-47a1-a14c-defd99a80f3d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc2f6584d8b64364b13683f53c58617f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f48a740a-df16-488d-83ce-01edcece1d5f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8d5eabbe-dd4d-4e48-a2ac-b48c29338142, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5a3be558-d27f-4c4f-85e6-50d454bcc9ee) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:45:14 np0005603621 nova_compute[247399]: 2026-01-31 07:45:14.951 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:14.953 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5a3be558-d27f-4c4f-85e6-50d454bcc9ee in datapath 992dcec1-3019-47a1-a14c-defd99a80f3d unbound from our chassis#033[00m
Jan 31 02:45:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:14.957 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 992dcec1-3019-47a1-a14c-defd99a80f3d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:45:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:14.959 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[68fabd62-c3c0-4de5-89da-3fb3689e4b57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:14.962 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d namespace which is not needed anymore#033[00m
Jan 31 02:45:15 np0005603621 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 31 02:45:15 np0005603621 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 13.840s CPU time.
Jan 31 02:45:15 np0005603621 systemd-machined[212769]: Machine qemu-1-instance-00000002 terminated.
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.038 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.042 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.051 247403 INFO nova.virt.libvirt.driver [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Instance destroyed successfully.#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.051 247403 DEBUG nova.objects.instance [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lazy-loading 'resources' on Instance uuid c27f4296-ddb0-4185-b980-255e2f05e479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.077 247403 DEBUG nova.virt.libvirt.vif [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:44:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1079069239-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1079069239-1',id=2,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:44:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc2f6584d8b64364b13683f53c58617f',ramdisk_id='',reservation_id='r-jd4rofhy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-AutoAllocateNetworkTest-2135409609',owner_user_name='tempest-AutoAllocateNetworkTest-2135409609-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:44:54Z,user_data=None,user_id='0eb58e8663574849b17616075ce5c43e',uuid=c27f4296-ddb0-4185-b980-255e2f05e479,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.078 247403 DEBUG nova.network.os_vif_util [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Converting VIF {"id": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "address": "fa:16:3e:8f:36:2a", "network": {"id": "992dcec1-3019-47a1-a14c-defd99a80f3d", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.64/26", "dns": [], "gateway": {"address": "10.1.0.65", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::376", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc2f6584d8b64364b13683f53c58617f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a3be558-d2", "ovs_interfaceid": "5a3be558-d27f-4c4f-85e6-50d454bcc9ee", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.079 247403 DEBUG nova.network.os_vif_util [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.080 247403 DEBUG os_vif [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.082 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.082 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a3be558-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.083 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.085 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.090 247403 INFO os_vif [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:36:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a3be558-d27f-4c4f-85e6-50d454bcc9ee,network=Network(992dcec1-3019-47a1-a14c-defd99a80f3d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a3be558-d2')#033[00m
Jan 31 02:45:15 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [NOTICE]   (253647) : haproxy version is 2.8.14-c23fe91
Jan 31 02:45:15 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [NOTICE]   (253647) : path to executable is /usr/sbin/haproxy
Jan 31 02:45:15 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [WARNING]  (253647) : Exiting Master process...
Jan 31 02:45:15 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [WARNING]  (253647) : Exiting Master process...
Jan 31 02:45:15 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [ALERT]    (253647) : Current worker (253651) exited with code 143 (Terminated)
Jan 31 02:45:15 np0005603621 neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d[253603]: [WARNING]  (253647) : All workers exited. Exiting... (0)
Jan 31 02:45:15 np0005603621 systemd[1]: libpod-9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5.scope: Deactivated successfully.
Jan 31 02:45:15 np0005603621 podman[254346]: 2026-01-31 07:45:15.137602539 +0000 UTC m=+0.055941943 container died 9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 02:45:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5-userdata-shm.mount: Deactivated successfully.
Jan 31 02:45:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-09d0a308af1f85022189fce74433f6b296a779014bae5a522257626dc8557ded-merged.mount: Deactivated successfully.
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.230 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:15.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:15.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:15 np0005603621 podman[254346]: 2026-01-31 07:45:15.367177313 +0000 UTC m=+0.285516727 container cleanup 9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 02:45:15 np0005603621 systemd[1]: libpod-conmon-9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5.scope: Deactivated successfully.
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.488 247403 DEBUG nova.compute.manager [req-12513ad5-2915-4c4c-91b8-7537d4b80d79 req-52ba2fa6-405e-41cd-81b6-cc1f872b2e91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-vif-unplugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.490 247403 DEBUG oslo_concurrency.lockutils [req-12513ad5-2915-4c4c-91b8-7537d4b80d79 req-52ba2fa6-405e-41cd-81b6-cc1f872b2e91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.490 247403 DEBUG oslo_concurrency.lockutils [req-12513ad5-2915-4c4c-91b8-7537d4b80d79 req-52ba2fa6-405e-41cd-81b6-cc1f872b2e91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.490 247403 DEBUG oslo_concurrency.lockutils [req-12513ad5-2915-4c4c-91b8-7537d4b80d79 req-52ba2fa6-405e-41cd-81b6-cc1f872b2e91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.490 247403 DEBUG nova.compute.manager [req-12513ad5-2915-4c4c-91b8-7537d4b80d79 req-52ba2fa6-405e-41cd-81b6-cc1f872b2e91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] No waiting events found dispatching network-vif-unplugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.491 247403 DEBUG nova.compute.manager [req-12513ad5-2915-4c4c-91b8-7537d4b80d79 req-52ba2fa6-405e-41cd-81b6-cc1f872b2e91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-vif-unplugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:45:15 np0005603621 podman[254394]: 2026-01-31 07:45:15.625047319 +0000 UTC m=+0.241058567 container remove 9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.628 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0b99600e-cf24-43f6-9b6a-a714b203f9ae]: (4, ('Sat Jan 31 07:45:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d (9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5)\n9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5\nSat Jan 31 07:45:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d (9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5)\n9c63927bc2a62b53b9e6a9ca6aefb88fb30617061b08915435fdd9ee94f7c5c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.630 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[63e096f0-e66c-4478-ac9b-381edd8bdc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.631 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap992dcec1-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:15 np0005603621 kernel: tap992dcec1-30: left promiscuous mode
Jan 31 02:45:15 np0005603621 nova_compute[247399]: 2026-01-31 07:45:15.640 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.643 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[94508060-9630-4fde-912d-b0971dce12ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.660 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d0eb58e5-c103-4e71-b103-45684475bb26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.662 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a67071a3-4056-499e-838e-09e5284fb444]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.683 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39f10055-c05c-4b86-8516-08d3e5df5876]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480207, 'reachable_time': 31659, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254410, 'error': None, 'target': 'ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:15 np0005603621 systemd[1]: run-netns-ovnmeta\x2d992dcec1\x2d3019\x2d47a1\x2da14c\x2ddefd99a80f3d.mount: Deactivated successfully.
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.690 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-992dcec1-3019-47a1-a14c-defd99a80f3d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:45:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:15.691 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e09800dc-3445-4638-9997-52f652342eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:16 np0005603621 nova_compute[247399]: 2026-01-31 07:45:16.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 509 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 7.1 MiB/s wr, 372 op/s
Jan 31 02:45:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:17.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:17.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:17 np0005603621 nova_compute[247399]: 2026-01-31 07:45:17.603 247403 DEBUG nova.compute.manager [req-90b9b8e4-7412-4c4e-b33a-68fd7fe72a6b req-6d7c4273-d1aa-4c94-8cc8-eae29d093cf9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:17 np0005603621 nova_compute[247399]: 2026-01-31 07:45:17.604 247403 DEBUG oslo_concurrency.lockutils [req-90b9b8e4-7412-4c4e-b33a-68fd7fe72a6b req-6d7c4273-d1aa-4c94-8cc8-eae29d093cf9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:17 np0005603621 nova_compute[247399]: 2026-01-31 07:45:17.604 247403 DEBUG oslo_concurrency.lockutils [req-90b9b8e4-7412-4c4e-b33a-68fd7fe72a6b req-6d7c4273-d1aa-4c94-8cc8-eae29d093cf9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:17 np0005603621 nova_compute[247399]: 2026-01-31 07:45:17.605 247403 DEBUG oslo_concurrency.lockutils [req-90b9b8e4-7412-4c4e-b33a-68fd7fe72a6b req-6d7c4273-d1aa-4c94-8cc8-eae29d093cf9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:17 np0005603621 nova_compute[247399]: 2026-01-31 07:45:17.605 247403 DEBUG nova.compute.manager [req-90b9b8e4-7412-4c4e-b33a-68fd7fe72a6b req-6d7c4273-d1aa-4c94-8cc8-eae29d093cf9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] No waiting events found dispatching network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:17 np0005603621 nova_compute[247399]: 2026-01-31 07:45:17.605 247403 WARNING nova.compute.manager [req-90b9b8e4-7412-4c4e-b33a-68fd7fe72a6b req-6d7c4273-d1aa-4c94-8cc8-eae29d093cf9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received unexpected event network-vif-plugged-5a3be558-d27f-4c4f-85e6-50d454bcc9ee for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:45:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 504 MiB data, 519 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 10 MiB/s wr, 510 op/s
Jan 31 02:45:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:19.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:19 np0005603621 nova_compute[247399]: 2026-01-31 07:45:19.927 247403 INFO nova.virt.libvirt.driver [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Deleting instance files /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479_del#033[00m
Jan 31 02:45:19 np0005603621 nova_compute[247399]: 2026-01-31 07:45:19.928 247403 INFO nova.virt.libvirt.driver [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Deletion of /var/lib/nova/instances/c27f4296-ddb0-4185-b980-255e2f05e479_del complete#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.085 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.206 247403 DEBUG nova.virt.libvirt.host [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.207 247403 INFO nova.virt.libvirt.host [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] UEFI support detected#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.210 247403 INFO nova.compute.manager [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Took 5.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.211 247403 DEBUG oslo.service.loopingcall [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.211 247403 DEBUG nova.compute.manager [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.211 247403 DEBUG nova.network.neutron [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:45:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 491 MiB data, 511 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 7.2 MiB/s wr, 379 op/s
Jan 31 02:45:20 np0005603621 podman[254415]: 2026-01-31 07:45:20.509683022 +0000 UTC m=+0.057001206 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:45:20 np0005603621 podman[254416]: 2026-01-31 07:45:20.541098262 +0000 UTC m=+0.087791288 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:45:20 np0005603621 nova_compute[247399]: 2026-01-31 07:45:20.964 247403 DEBUG nova.network.neutron [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.001 247403 INFO nova.compute.manager [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Took 0.79 seconds to deallocate network for instance.#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.068 247403 DEBUG nova.compute.manager [req-7dc670a4-ea4f-4a92-a9f8-9dcf78fd1df2 req-c56fcde0-d2f9-409d-9573-6457decc4745 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Received event network-vif-deleted-5a3be558-d27f-4c4f-85e6-50d454bcc9ee external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.114 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.114 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:21 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:21Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:46:45 10.100.0.7
Jan 31 02:45:21 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:21Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:46:45 10.100.0.7
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.201 247403 DEBUG oslo_concurrency.processutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:21.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:45:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:21.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:45:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262390628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.659 247403 DEBUG oslo_concurrency.processutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.663 247403 DEBUG nova.compute.provider_tree [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.754 247403 DEBUG nova.scheduler.client.report [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:45:21 np0005603621 nova_compute[247399]: 2026-01-31 07:45:21.970 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:22 np0005603621 nova_compute[247399]: 2026-01-31 07:45:22.145 247403 INFO nova.scheduler.client.report [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Deleted allocations for instance c27f4296-ddb0-4185-b980-255e2f05e479#033[00m
Jan 31 02:45:22 np0005603621 nova_compute[247399]: 2026-01-31 07:45:22.407 247403 DEBUG oslo_concurrency.lockutils [None req-3840dd17-9606-4660-bbed-249073f02cf3 0eb58e8663574849b17616075ce5c43e dc2f6584d8b64364b13683f53c58617f - - default default] Lock "c27f4296-ddb0-4185-b980-255e2f05e479" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 499 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.9 MiB/s wr, 362 op/s
Jan 31 02:45:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:23.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:23.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 503 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 8.6 MiB/s wr, 440 op/s
Jan 31 02:45:25 np0005603621 nova_compute[247399]: 2026-01-31 07:45:25.087 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:25.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:25.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.149 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.277 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.277 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.292 247403 DEBUG nova.objects.instance [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lazy-loading 'flavor' on Instance uuid b2c254bb-3943-440e-8ca2-306e8777083f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.341 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 503 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.4 MiB/s wr, 309 op/s
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.578 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.579 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.579 247403 INFO nova.compute.manager [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Attaching volume c3f8bc09-4bc5-4545-abcc-da259a7bb1ed to /dev/vdb#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.872 247403 DEBUG os_brick.utils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.873 247403 INFO oslo.privsep.daemon [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpuc93k029/privsep.sock']#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.991 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "9d36f98c-d489-4c17-b997-24bacd1c9f58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.991 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "9d36f98c-d489-4c17-b997-24bacd1c9f58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.991 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "9d36f98c-d489-4c17-b997-24bacd1c9f58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.991 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "9d36f98c-d489-4c17-b997-24bacd1c9f58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.992 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "9d36f98c-d489-4c17-b997-24bacd1c9f58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.993 247403 INFO nova.compute.manager [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Terminating instance#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.994 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "refresh_cache-9d36f98c-d489-4c17-b997-24bacd1c9f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.994 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquired lock "refresh_cache-9d36f98c-d489-4c17-b997-24bacd1c9f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:45:26 np0005603621 nova_compute[247399]: 2026-01-31 07:45:26.994 247403 DEBUG nova.network.neutron [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.163 247403 DEBUG nova.network.neutron [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:45:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:27.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:27.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.385 247403 DEBUG nova.network.neutron [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.416 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Releasing lock "refresh_cache-9d36f98c-d489-4c17-b997-24bacd1c9f58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.417 247403 DEBUG nova.compute.manager [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.506 247403 INFO oslo.privsep.daemon [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.396 254621 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.400 254621 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.402 254621 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.402 254621 INFO oslo.privsep.daemon [-] privsep daemon running as pid 254621#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.509 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc79a03-ef2f-47c2-b816-c2f8d2f5771e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.604 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.630 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.630 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[9b872833-6458-4141-a008-e530cf3359b4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.632 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.636 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.636 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[6614a547-9a5f-4b53-8bfd-fed6e131421a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.639 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.669 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.670 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[4315cbdd-cf12-4a3d-85a3-39e7723f70ef]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.673 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[f616b0db-bc77-4ecd-98d8-cd7d22ece4e3]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.673 247403 DEBUG oslo_concurrency.processutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.695 247403 DEBUG oslo_concurrency.processutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.699 247403 DEBUG os_brick.initiator.connectors.lightos [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.700 247403 DEBUG os_brick.initiator.connectors.lightos [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.700 247403 DEBUG os_brick.initiator.connectors.lightos [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.701 247403 DEBUG os_brick.utils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] <== get_connector_properties: return (827ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 02:45:27 np0005603621 nova_compute[247399]: 2026-01-31 07:45:27.702 247403 DEBUG nova.virt.block_device [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updating existing volume attachment record: da007980-b3ca-4562-bd1c-c04d4ec6098e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 02:45:27 np0005603621 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 31 02:45:27 np0005603621 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 13.143s CPU time.
Jan 31 02:45:27 np0005603621 systemd-machined[212769]: Machine qemu-2-instance-00000006 terminated.
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.037 247403 INFO nova.virt.libvirt.driver [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Instance destroyed successfully.#033[00m
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.038 247403 DEBUG nova.objects.instance [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lazy-loading 'resources' on Instance uuid 9d36f98c-d489-4c17-b997-24bacd1c9f58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:45:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 450 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 7.5 MiB/s wr, 356 op/s
Jan 31 02:45:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:45:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/726095670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.911 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.912 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.972 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.980 247403 DEBUG nova.objects.instance [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lazy-loading 'flavor' on Instance uuid b2c254bb-3943-440e-8ca2-306e8777083f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:28 np0005603621 nova_compute[247399]: 2026-01-31 07:45:28.998 247403 DEBUG nova.virt.libvirt.driver [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Attempting to attach volume c3f8bc09-4bc5-4545-abcc-da259a7bb1ed with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 02:45:29 np0005603621 nova_compute[247399]: 2026-01-31 07:45:29.000 247403 DEBUG nova.virt.libvirt.guest [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-c3f8bc09-4bc5-4545-abcc-da259a7bb1ed">
Jan 31 02:45:29 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  </source>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 02:45:29 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  </auth>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:45:29 np0005603621 nova_compute[247399]:  <serial>c3f8bc09-4bc5-4545-abcc-da259a7bb1ed</serial>
Jan 31 02:45:29 np0005603621 nova_compute[247399]: </disk>
Jan 31 02:45:29 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:45:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:29.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:45:29 np0005603621 nova_compute[247399]: 2026-01-31 07:45:29.550 247403 DEBUG nova.virt.libvirt.driver [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:29 np0005603621 nova_compute[247399]: 2026-01-31 07:45:29.551 247403 DEBUG nova.virt.libvirt.driver [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:29 np0005603621 nova_compute[247399]: 2026-01-31 07:45:29.551 247403 DEBUG nova.virt.libvirt.driver [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:45:29 np0005603621 nova_compute[247399]: 2026-01-31 07:45:29.551 247403 DEBUG nova.virt.libvirt.driver [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] No VIF found with MAC fa:16:3e:7d:46:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:45:29 np0005603621 nova_compute[247399]: 2026-01-31 07:45:29.865 247403 DEBUG oslo_concurrency.lockutils [None req-a18a6d95-7125-47bb-b0e1-960329a2ec49 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.049 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845515.0468583, c27f4296-ddb0-4185-b980-255e2f05e479 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.049 247403 INFO nova.compute.manager [-] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.074 247403 DEBUG nova.compute.manager [None req-1547131e-8310-465f-b30b-5c043af0161a - - - - - -] [instance: c27f4296-ddb0-4185-b980-255e2f05e479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.090 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.187 247403 DEBUG nova.virt.libvirt.driver [None req-cb6ad607-bac0-46c4-86a8-a36cada1f872 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] volume_snapshot_create: create_info: {'snapshot_id': '9a1890d3-66fb-4470-a088-232984de9e3e', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572#033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [None req-cb6ad607-bac0-46c4-86a8-a36cada1f872 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]     self._volume_snapshot_create(context, instance, guest,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]     raise exception.InternalError(msg)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f] nova.exception.InternalError: Found no disk to snapshot.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.192 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f] #033[00m
Jan 31 02:45:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 DEBUG nova.virt.libvirt.driver [None req-26acba0a-4445-4880-ae59-3a08581761c8 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] volume_snapshot_delete: delete_info: {'volume_id': 'c3f8bc09-4bc5-4545-abcc-da259a7bb1ed'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673#033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [None req-26acba0a-4445-4880-ae59-3a08581761c8 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]     self._volume_snapshot_delete(context, instance, volume_id,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f]     if delete_info['type'] != 'qcow2':
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f] KeyError: 'type'
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.298 247403 ERROR nova.virt.libvirt.driver [instance: b2c254bb-3943-440e-8ca2-306e8777083f] #033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver [None req-cb6ad607-bac0-46c4-86a8-a36cada1f872 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot 9a1890d3-66fb-4470-a088-232984de9e3e could not be found.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot 9a1890d3-66fb-4470-a088-232984de9e3e could not be found. (HTTP 404) (Request-ID: req-9d184c78-d503-4129-9e87-19b9c8beeecc)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot 9a1890d3-66fb-4470-a088-232984de9e3e could not be found.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.360 247403 ERROR nova.virt.libvirt.driver #033[00m
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server [None req-cb6ad607-bac0-46c4-86a8-a36cada1f872 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.366 247403 ERROR oslo_messaging.rpc.server
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver [None req-26acba0a-4445-4880-ae59-3a08581761c8 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver KeyError: 'type'
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-55d421e3-d3ec-4067-bf46-1ea4d740d444)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver 
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.462 247403 ERROR nova.virt.libvirt.driver
Jan 31 02:45:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:30.465 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:30.465 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:30.466 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server [None req-26acba0a-4445-4880-ae59-3a08581761c8 3558520a4eff4a8cb572020545f022c8 d416bb5cece6460faf2393820acb030a - - default default] Exception during message handling: KeyError: 'type'
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     raise self.value
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server KeyError: 'type'
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.466 247403 ERROR oslo_messaging.rpc.server
Jan 31 02:45:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 438 MiB data, 495 MiB used, 21 GiB / 21 GiB avail; 839 KiB/s rd, 4.5 MiB/s wr, 245 op/s
Jan 31 02:45:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:45:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:45:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:45:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:45:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.947 247403 DEBUG oslo_concurrency.lockutils [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.947 247403 DEBUG oslo_concurrency.lockutils [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:30 np0005603621 nova_compute[247399]: 2026-01-31 07:45:30.964 247403 INFO nova.compute.manager [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Detaching volume c3f8bc09-4bc5-4545-abcc-da259a7bb1ed
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.160 247403 INFO nova.virt.block_device [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Attempting to driver detach volume c3f8bc09-4bc5-4545-abcc-da259a7bb1ed from mountpoint /dev/vdb
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.202 247403 DEBUG nova.virt.libvirt.driver [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Attempting to detach device vdb from instance b2c254bb-3943-440e-8ca2-306e8777083f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.203 247403 DEBUG nova.virt.libvirt.guest [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-c3f8bc09-4bc5-4545-abcc-da259a7bb1ed">
Jan 31 02:45:31 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  </source>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <serial>c3f8bc09-4bc5-4545-abcc-da259a7bb1ed</serial>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]: </disk>
Jan 31 02:45:31 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.203 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:31.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:31.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.460 247403 INFO nova.virt.libvirt.driver [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Successfully detached device vdb from instance b2c254bb-3943-440e-8ca2-306e8777083f from the persistent domain config.
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.461 247403 DEBUG nova.virt.libvirt.driver [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b2c254bb-3943-440e-8ca2-306e8777083f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.461 247403 DEBUG nova.virt.libvirt.guest [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-c3f8bc09-4bc5-4545-abcc-da259a7bb1ed">
Jan 31 02:45:31 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  </source>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <serial>c3f8bc09-4bc5-4545-abcc-da259a7bb1ed</serial>
Jan 31 02:45:31 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 02:45:31 np0005603621 nova_compute[247399]: </disk>
Jan 31 02:45:31 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.735 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769845531.7352579, b2c254bb-3943-440e-8ca2-306e8777083f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.739 247403 DEBUG nova.virt.libvirt.driver [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b2c254bb-3943-440e-8ca2-306e8777083f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.742 247403 INFO nova.virt.libvirt.driver [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Successfully detached device vdb from instance b2c254bb-3943-440e-8ca2-306e8777083f from the live domain config.
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 37cb7209-f44e-4357-ac1d-225c380f544f does not exist
Jan 31 02:45:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1c366252-1ad8-4aba-a3d1-b8f8a8aa8eea does not exist
Jan 31 02:45:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4a167e88-4131-479e-92fc-97df24dfaa40 does not exist
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:45:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.925 247403 DEBUG nova.objects.instance [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lazy-loading 'flavor' on Instance uuid b2c254bb-3943-440e-8ca2-306e8777083f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:45:31 np0005603621 nova_compute[247399]: 2026-01-31 07:45:31.969 247403 DEBUG oslo_concurrency.lockutils [None req-af926286-3750-480f-87bd-7e0541af56cc 2134a4c3d333450692024d3cd8b40429 929444f150804704b1865bcbe068c936 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:32 np0005603621 podman[254866]: 2026-01-31 07:45:32.354880413 +0000 UTC m=+0.023000056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:45:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 438 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 735 KiB/s rd, 4.1 MiB/s wr, 200 op/s
Jan 31 02:45:32 np0005603621 podman[254866]: 2026-01-31 07:45:32.742386093 +0000 UTC m=+0.410505756 container create 03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:45:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:45:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:45:32 np0005603621 systemd[1]: Started libpod-conmon-03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310.scope.
Jan 31 02:45:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:33 np0005603621 podman[254866]: 2026-01-31 07:45:33.004637536 +0000 UTC m=+0.672757199 container init 03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:45:33 np0005603621 podman[254866]: 2026-01-31 07:45:33.012986029 +0000 UTC m=+0.681105662 container start 03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:45:33 np0005603621 admiring_pascal[254882]: 167 167
Jan 31 02:45:33 np0005603621 systemd[1]: libpod-03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310.scope: Deactivated successfully.
Jan 31 02:45:33 np0005603621 conmon[254882]: conmon 03449a35959352e4c7f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310.scope/container/memory.events
Jan 31 02:45:33 np0005603621 podman[254866]: 2026-01-31 07:45:33.126081023 +0000 UTC m=+0.794200656 container attach 03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:45:33 np0005603621 podman[254866]: 2026-01-31 07:45:33.126612769 +0000 UTC m=+0.794732402 container died 03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:45:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:33.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:45:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:33.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:45:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0ddd4b21a6301642279354c7789d2f26b2403ce3edc8c2038a52587b3aae180a-merged.mount: Deactivated successfully.
Jan 31 02:45:34 np0005603621 podman[254866]: 2026-01-31 07:45:34.035051315 +0000 UTC m=+1.703170938 container remove 03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:45:34 np0005603621 systemd[1]: libpod-conmon-03449a35959352e4c7f0c281bc3f3ad41d2ad077cbe689ccd8802500cd350310.scope: Deactivated successfully.
Jan 31 02:45:34 np0005603621 podman[254909]: 2026-01-31 07:45:34.178200815 +0000 UTC m=+0.037713699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:45:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 384 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 617 KiB/s rd, 3.4 MiB/s wr, 197 op/s
Jan 31 02:45:34 np0005603621 podman[254909]: 2026-01-31 07:45:34.526687856 +0000 UTC m=+0.386200690 container create c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:45:34 np0005603621 systemd[1]: Started libpod-conmon-c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb.scope.
Jan 31 02:45:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92540024f3e22a836c3e1ef577bfcbe59206804d516d632747535133f673a712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92540024f3e22a836c3e1ef577bfcbe59206804d516d632747535133f673a712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92540024f3e22a836c3e1ef577bfcbe59206804d516d632747535133f673a712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92540024f3e22a836c3e1ef577bfcbe59206804d516d632747535133f673a712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92540024f3e22a836c3e1ef577bfcbe59206804d516d632747535133f673a712/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:34 np0005603621 podman[254909]: 2026-01-31 07:45:34.826933956 +0000 UTC m=+0.686446830 container init c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banach, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:45:34 np0005603621 podman[254909]: 2026-01-31 07:45:34.835091543 +0000 UTC m=+0.694604367 container start c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:45:34 np0005603621 podman[254909]: 2026-01-31 07:45:34.851525001 +0000 UTC m=+0.711037835 container attach c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:45:35 np0005603621 nova_compute[247399]: 2026-01-31 07:45:35.094 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:35.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:35.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:35 np0005603621 priceless_banach[254925]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:45:35 np0005603621 priceless_banach[254925]: --> relative data size: 1.0
Jan 31 02:45:35 np0005603621 priceless_banach[254925]: --> All data devices are unavailable
Jan 31 02:45:35 np0005603621 systemd[1]: libpod-c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb.scope: Deactivated successfully.
Jan 31 02:45:35 np0005603621 podman[254909]: 2026-01-31 07:45:35.621920896 +0000 UTC m=+1.481433810 container died c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:45:36 np0005603621 nova_compute[247399]: 2026-01-31 07:45:36.204 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:45:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 384 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 320 KiB/s rd, 150 KiB/s wr, 94 op/s
Jan 31 02:45:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-92540024f3e22a836c3e1ef577bfcbe59206804d516d632747535133f673a712-merged.mount: Deactivated successfully.
Jan 31 02:45:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:37.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:37.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:37 np0005603621 podman[254909]: 2026-01-31 07:45:37.368065286 +0000 UTC m=+3.227578120 container remove c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_banach, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:45:37 np0005603621 systemd[1]: libpod-conmon-c225908c2c660a823b71a529c66d1fd3788e9b9a9a35ea277f0772c453a9a4cb.scope: Deactivated successfully.
Jan 31 02:45:37 np0005603621 podman[255094]: 2026-01-31 07:45:37.903733375 +0000 UTC m=+0.022571833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:45:38 np0005603621 podman[255094]: 2026-01-31 07:45:38.04511 +0000 UTC m=+0.163948408 container create 137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:45:38 np0005603621 systemd[1]: Started libpod-conmon-137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a.scope.
Jan 31 02:45:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:45:38
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.control', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 294 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 357 KiB/s rd, 1.2 MiB/s wr, 149 op/s
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:45:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:45:38 np0005603621 podman[255094]: 2026-01-31 07:45:38.607645876 +0000 UTC m=+0.726484264 container init 137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lewin, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:45:38 np0005603621 podman[255094]: 2026-01-31 07:45:38.61604697 +0000 UTC m=+0.734885378 container start 137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:45:38 np0005603621 strange_lewin[255110]: 167 167
Jan 31 02:45:38 np0005603621 systemd[1]: libpod-137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a.scope: Deactivated successfully.
Jan 31 02:45:38 np0005603621 conmon[255110]: conmon 137915de8e6119a98c38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a.scope/container/memory.events
Jan 31 02:45:38 np0005603621 podman[255094]: 2026-01-31 07:45:38.750409144 +0000 UTC m=+0.869247532 container attach 137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:45:38 np0005603621 podman[255094]: 2026-01-31 07:45:38.75344644 +0000 UTC m=+0.872284808 container died 137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:45:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9ca9103ee26e8b5c12b6c2b6644bc45eca542b4d7176ed897270ec5899542632-merged.mount: Deactivated successfully.
Jan 31 02:45:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:45:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468462329' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:45:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:45:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3468462329' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:45:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:39.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.303 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.305 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.306 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.306 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.307 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.309 247403 INFO nova.compute.manager [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Terminating instance
Jan 31 02:45:39 np0005603621 nova_compute[247399]: 2026-01-31 07:45:39.311 247403 DEBUG nova.compute.manager [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 02:45:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:39.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:39 np0005603621 podman[255094]: 2026-01-31 07:45:39.908541517 +0000 UTC m=+2.027379925 container remove 137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:45:39 np0005603621 systemd[1]: libpod-conmon-137915de8e6119a98c3872ad425ce4b106128c82aaf76e8f0601d62df1a8ce1a.scope: Deactivated successfully.
Jan 31 02:45:40 np0005603621 kernel: tapbdedece8-b5 (unregistering): left promiscuous mode
Jan 31 02:45:40 np0005603621 NetworkManager[49013]: <info>  [1769845540.0297] device (tapbdedece8-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:45:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:40Z|00042|binding|INFO|Releasing lport bdedece8-b56c-4a93-94a1-88106232511a from this chassis (sb_readonly=0)
Jan 31 02:45:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:40Z|00043|binding|INFO|Setting lport bdedece8-b56c-4a93-94a1-88106232511a down in Southbound
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:45:40Z|00044|binding|INFO|Removing iface tapbdedece8-b5 ovn-installed in OVS
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.040 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:40.043 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:46:45 10.100.0.7'], port_security=['fa:16:3e:7d:46:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b2c254bb-3943-440e-8ca2-306e8777083f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f805a3fee7864f1bb92a97da17d9ed71', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c25a702d-7879-4883-9632-171c3b820144', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d625932-0ad2-470e-81e7-116e9cfae134, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bdedece8-b56c-4a93-94a1-88106232511a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:45:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:40.045 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bdedece8-b56c-4a93-94a1-88106232511a in datapath 9702c86a-0c3b-47e6-b5f2-4220ef08a7ed unbound from our chassis#033[00m
Jan 31 02:45:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:40.047 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:45:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:40.048 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cc6259-55f0-480a-a0d1-2fd33e6626e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:40.049 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed namespace which is not needed anymore#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 31 02:45:40 np0005603621 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 14.712s CPU time.
Jan 31 02:45:40 np0005603621 systemd-machined[212769]: Machine qemu-3-instance-00000007 terminated.
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.136 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 podman[255137]: 2026-01-31 07:45:40.066431382 +0000 UTC m=+0.033366822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.170 247403 INFO nova.virt.libvirt.driver [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Instance destroyed successfully.#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.170 247403 DEBUG nova.objects.instance [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lazy-loading 'resources' on Instance uuid b2c254bb-3943-440e-8ca2-306e8777083f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.206 247403 DEBUG nova.virt.libvirt.vif [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-794986223',display_name='tempest-VolumesAssistedSnapshotsTest-server-794986223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-794986223',id=7,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBIRibUCXpaRvfruZc+JoMajzby6Ycr/5JBTPe80H+qeKg+/bA6hMwZSsQEJW9uZzBqw8Q9DMTxDbhMp/Ixx/VbTydQSpeQG+yyjehVq1dXjqrxBvW3ZUCIqDBfc6td0XQ==',key_name='tempest-keypair-602180198',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:45:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f805a3fee7864f1bb92a97da17d9ed71',ramdisk_id='',reservation_id='r-dyja6oiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-932290615',owner_user_name='tempest-VolumesAssistedSnapshotsTest-932290615-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:45:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='090f32af51df4fddbcf003f38ed84a37',uuid=b2c254bb-3943-440e-8ca2-306e8777083f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.206 247403 DEBUG nova.network.os_vif_util [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Converting VIF {"id": "bdedece8-b56c-4a93-94a1-88106232511a", "address": "fa:16:3e:7d:46:45", "network": {"id": "9702c86a-0c3b-47e6-b5f2-4220ef08a7ed", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-780910429-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f805a3fee7864f1bb92a97da17d9ed71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbdedece8-b5", "ovs_interfaceid": "bdedece8-b56c-4a93-94a1-88106232511a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.207 247403 DEBUG nova.network.os_vif_util [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.207 247403 DEBUG os_vif [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.209 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.210 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbdedece8-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.215 247403 INFO os_vif [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:46:45,bridge_name='br-int',has_traffic_filtering=True,id=bdedece8-b56c-4a93-94a1-88106232511a,network=Network(9702c86a-0c3b-47e6-b5f2-4220ef08a7ed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbdedece8-b5')#033[00m
Jan 31 02:45:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.375 247403 DEBUG nova.compute.manager [req-1314778f-4966-4f6c-90ec-ec7302abd96b req-ace23437-8832-420a-b72c-92a88d262f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-vif-unplugged-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.376 247403 DEBUG oslo_concurrency.lockutils [req-1314778f-4966-4f6c-90ec-ec7302abd96b req-ace23437-8832-420a-b72c-92a88d262f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.376 247403 DEBUG oslo_concurrency.lockutils [req-1314778f-4966-4f6c-90ec-ec7302abd96b req-ace23437-8832-420a-b72c-92a88d262f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.377 247403 DEBUG oslo_concurrency.lockutils [req-1314778f-4966-4f6c-90ec-ec7302abd96b req-ace23437-8832-420a-b72c-92a88d262f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.377 247403 DEBUG nova.compute.manager [req-1314778f-4966-4f6c-90ec-ec7302abd96b req-ace23437-8832-420a-b72c-92a88d262f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] No waiting events found dispatching network-vif-unplugged-bdedece8-b56c-4a93-94a1-88106232511a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:40 np0005603621 nova_compute[247399]: 2026-01-31 07:45:40.377 247403 DEBUG nova.compute.manager [req-1314778f-4966-4f6c-90ec-ec7302abd96b req-ace23437-8832-420a-b72c-92a88d262f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-vif-unplugged-bdedece8-b56c-4a93-94a1-88106232511a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:45:40 np0005603621 podman[255137]: 2026-01-31 07:45:40.496133261 +0000 UTC m=+0.463068691 container create 5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:45:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 260 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 31 02:45:40 np0005603621 systemd[1]: Started libpod-conmon-5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157.scope.
Jan 31 02:45:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865ec59322f617ae6c19a07c9a6e4b1ce217788cb0da2be79eba260f5876b069/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865ec59322f617ae6c19a07c9a6e4b1ce217788cb0da2be79eba260f5876b069/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865ec59322f617ae6c19a07c9a6e4b1ce217788cb0da2be79eba260f5876b069/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/865ec59322f617ae6c19a07c9a6e4b1ce217788cb0da2be79eba260f5876b069/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:40 np0005603621 podman[255137]: 2026-01-31 07:45:40.970363355 +0000 UTC m=+0.937298815 container init 5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:45:40 np0005603621 podman[255137]: 2026-01-31 07:45:40.980421611 +0000 UTC m=+0.947357041 container start 5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.209 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:41 np0005603621 podman[255137]: 2026-01-31 07:45:41.288965103 +0000 UTC m=+1.255900533 container attach 5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.298 247403 INFO nova.virt.libvirt.driver [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Deleting instance files /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58_del#033[00m
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.299 247403 INFO nova.virt.libvirt.driver [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Deletion of /var/lib/nova/instances/9d36f98c-d489-4c17-b997-24bacd1c9f58_del complete#033[00m
Jan 31 02:45:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:41.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:41.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.403 247403 INFO nova.compute.manager [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Took 13.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.403 247403 DEBUG oslo.service.loopingcall [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.404 247403 DEBUG nova.compute.manager [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:45:41 np0005603621 nova_compute[247399]: 2026-01-31 07:45:41.404 247403 DEBUG nova.network.neutron [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:45:41 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [NOTICE]   (254246) : haproxy version is 2.8.14-c23fe91
Jan 31 02:45:41 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [NOTICE]   (254246) : path to executable is /usr/sbin/haproxy
Jan 31 02:45:41 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [WARNING]  (254246) : Exiting Master process...
Jan 31 02:45:41 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [ALERT]    (254246) : Current worker (254248) exited with code 143 (Terminated)
Jan 31 02:45:41 np0005603621 neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed[254240]: [WARNING]  (254246) : All workers exited. Exiting... (0)
Jan 31 02:45:41 np0005603621 systemd[1]: libpod-5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148.scope: Deactivated successfully.
Jan 31 02:45:41 np0005603621 podman[255210]: 2026-01-31 07:45:41.590391371 +0000 UTC m=+0.822504068 container died 5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]: {
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:    "0": [
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:        {
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "devices": [
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "/dev/loop3"
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            ],
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "lv_name": "ceph_lv0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "lv_size": "7511998464",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "name": "ceph_lv0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "tags": {
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.cluster_name": "ceph",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.crush_device_class": "",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.encrypted": "0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.osd_id": "0",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.type": "block",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:                "ceph.vdo": "0"
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            },
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "type": "block",
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:            "vg_name": "ceph_vg0"
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:        }
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]:    ]
Jan 31 02:45:41 np0005603621 admiring_bouman[255206]: }
Jan 31 02:45:41 np0005603621 systemd[1]: libpod-5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157.scope: Deactivated successfully.
Jan 31 02:45:41 np0005603621 conmon[255206]: conmon 5deb4be74be19421caab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157.scope/container/memory.events
Jan 31 02:45:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148-userdata-shm.mount: Deactivated successfully.
Jan 31 02:45:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-beaf478a07f2f60b484a560718a3acf539a4e9f93270e24a205c36cb1f3aae98-merged.mount: Deactivated successfully.
Jan 31 02:45:42 np0005603621 podman[255137]: 2026-01-31 07:45:42.142413555 +0000 UTC m=+2.109348975 container died 5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.177 247403 DEBUG nova.network.neutron [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.196 247403 DEBUG nova.network.neutron [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.216 247403 INFO nova.compute.manager [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Took 0.81 seconds to deallocate network for instance.#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.269 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.270 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.345 247403 DEBUG oslo_concurrency.processutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 247 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.516 247403 DEBUG nova.compute.manager [req-14349e49-e939-47c2-8ad9-4da129d57d4c req-e3452d48-a256-462c-9e87-1d4d83dc7cdc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.517 247403 DEBUG oslo_concurrency.lockutils [req-14349e49-e939-47c2-8ad9-4da129d57d4c req-e3452d48-a256-462c-9e87-1d4d83dc7cdc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.517 247403 DEBUG oslo_concurrency.lockutils [req-14349e49-e939-47c2-8ad9-4da129d57d4c req-e3452d48-a256-462c-9e87-1d4d83dc7cdc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.518 247403 DEBUG oslo_concurrency.lockutils [req-14349e49-e939-47c2-8ad9-4da129d57d4c req-e3452d48-a256-462c-9e87-1d4d83dc7cdc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.518 247403 DEBUG nova.compute.manager [req-14349e49-e939-47c2-8ad9-4da129d57d4c req-e3452d48-a256-462c-9e87-1d4d83dc7cdc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] No waiting events found dispatching network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.518 247403 WARNING nova.compute.manager [req-14349e49-e939-47c2-8ad9-4da129d57d4c req-e3452d48-a256-462c-9e87-1d4d83dc7cdc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received unexpected event network-vif-plugged-bdedece8-b56c-4a93-94a1-88106232511a for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:45:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3282200754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.988 247403 DEBUG oslo_concurrency.processutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:42 np0005603621 nova_compute[247399]: 2026-01-31 07:45:42.992 247403 DEBUG nova.compute.provider_tree [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.005 247403 DEBUG nova.scheduler.client.report [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:45:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-865ec59322f617ae6c19a07c9a6e4b1ce217788cb0da2be79eba260f5876b069-merged.mount: Deactivated successfully.
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.029 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.034 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845528.0333183, 9d36f98c-d489-4c17-b997-24bacd1c9f58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.034 247403 INFO nova.compute.manager [-] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.057 247403 DEBUG nova.compute.manager [None req-2f79e5a1-a631-4add-bb89-8635bb5f1dd7 - - - - - -] [instance: 9d36f98c-d489-4c17-b997-24bacd1c9f58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.065 247403 INFO nova.scheduler.client.report [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Deleted allocations for instance 9d36f98c-d489-4c17-b997-24bacd1c9f58#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.129 247403 DEBUG oslo_concurrency.lockutils [None req-448f55a5-78fa-4784-a546-4f363e192fb2 5dac6d92165448b3a1c60bea57f8e48d 5047d468cca049c2891d27def49df57f - - default default] Lock "9d36f98c-d489-4c17-b997-24bacd1c9f58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:43 np0005603621 podman[255242]: 2026-01-31 07:45:43.22765026 +0000 UTC m=+1.501594235 container remove 5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bouman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:45:43 np0005603621 systemd[1]: libpod-conmon-5deb4be74be19421caab47ad4abaa0c96c6cc27ae1349efa052c8023bc636157.scope: Deactivated successfully.
Jan 31 02:45:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:43.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:43 np0005603621 podman[255210]: 2026-01-31 07:45:43.362674355 +0000 UTC m=+2.594787052 container cleanup 5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:45:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:43.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:43 np0005603621 systemd[1]: libpod-conmon-5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148.scope: Deactivated successfully.
Jan 31 02:45:43 np0005603621 podman[255356]: 2026-01-31 07:45:43.584316649 +0000 UTC m=+0.205722274 container remove 5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.588 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[62a62fdf-5c76-440e-bd77-304715b8b40a]: (4, ('Sat Jan 31 07:45:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed (5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148)\n5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148\nSat Jan 31 07:45:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed (5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148)\n5d1c046a3a46bf11afeab051363f92dcb406d3d13d8898f95ec922e5092c8148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.591 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[01297889-f7e6-4f84-84f4-9d5b33c02c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.591 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9702c86a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:45:43 np0005603621 kernel: tap9702c86a-00: left promiscuous mode
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.594 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:43 np0005603621 nova_compute[247399]: 2026-01-31 07:45:43.603 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.606 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdd31ef-cb84-4063-a9bd-86ee13c98904]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.619 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4c95ffd3-5ea8-4b5c-a1d6-fbf775f9c6aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.621 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[199e08f1-d21e-46d8-9d97-6127f920dae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.641 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[89d71336-780a-4609-be6e-4abce685f08b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481028, 'reachable_time': 30577, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255417, 'error': None, 'target': 'ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 systemd[1]: run-netns-ovnmeta\x2d9702c86a\x2d0c3b\x2d47e6\x2db5f2\x2d4220ef08a7ed.mount: Deactivated successfully.
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.648 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9702c86a-0c3b-47e6-b5f2-4220ef08a7ed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:45:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:45:43.649 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ccbe03-8194-4bd9-a549-3d2fc4806861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:45:43 np0005603621 podman[255441]: 2026-01-31 07:45:43.828415821 +0000 UTC m=+0.092397873 container create 328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:45:43 np0005603621 podman[255441]: 2026-01-31 07:45:43.755879475 +0000 UTC m=+0.019861517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:45:43 np0005603621 systemd[1]: Started libpod-conmon-328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861.scope.
Jan 31 02:45:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:44 np0005603621 podman[255441]: 2026-01-31 07:45:44.057246751 +0000 UTC m=+0.321228753 container init 328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:45:44 np0005603621 podman[255441]: 2026-01-31 07:45:44.063009773 +0000 UTC m=+0.326991785 container start 328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:45:44 np0005603621 mystifying_fermat[255458]: 167 167
Jan 31 02:45:44 np0005603621 systemd[1]: libpod-328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861.scope: Deactivated successfully.
Jan 31 02:45:44 np0005603621 conmon[255458]: conmon 328866c83250dc4a487f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861.scope/container/memory.events
Jan 31 02:45:44 np0005603621 podman[255441]: 2026-01-31 07:45:44.095775235 +0000 UTC m=+0.359757247 container attach 328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:45:44 np0005603621 podman[255441]: 2026-01-31 07:45:44.096210729 +0000 UTC m=+0.360192771 container died 328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:45:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-411418ce30739ca2f6b5e22d2ec5ec8c02f96648f89a2fceae905b7487a80418-merged.mount: Deactivated successfully.
Jan 31 02:45:44 np0005603621 podman[255441]: 2026-01-31 07:45:44.406592659 +0000 UTC m=+0.670574671 container remove 328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_fermat, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:45:44 np0005603621 systemd[1]: libpod-conmon-328866c83250dc4a487faf963ac945af22adac97df0ed858a7674a01ed273861.scope: Deactivated successfully.
Jan 31 02:45:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 233 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 1.8 MiB/s wr, 136 op/s
Jan 31 02:45:44 np0005603621 podman[255482]: 2026-01-31 07:45:44.580369814 +0000 UTC m=+0.087776976 container create 568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:45:44 np0005603621 podman[255482]: 2026-01-31 07:45:44.511715932 +0000 UTC m=+0.019123124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:45:44 np0005603621 systemd[1]: Started libpod-conmon-568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c.scope.
Jan 31 02:45:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:45:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cca8657fa64f6716b2455731c427aba5ca38da3c002c19dbcce9a98a1001139/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cca8657fa64f6716b2455731c427aba5ca38da3c002c19dbcce9a98a1001139/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cca8657fa64f6716b2455731c427aba5ca38da3c002c19dbcce9a98a1001139/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cca8657fa64f6716b2455731c427aba5ca38da3c002c19dbcce9a98a1001139/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:45:44 np0005603621 podman[255482]: 2026-01-31 07:45:44.744235338 +0000 UTC m=+0.251642530 container init 568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:45:44 np0005603621 podman[255482]: 2026-01-31 07:45:44.75189757 +0000 UTC m=+0.259304752 container start 568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:45:44 np0005603621 podman[255482]: 2026-01-31 07:45:44.790562468 +0000 UTC m=+0.297969650 container attach 568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.145 247403 INFO nova.virt.libvirt.driver [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Deleting instance files /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f_del#033[00m
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.147 247403 INFO nova.virt.libvirt.driver [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Deletion of /var/lib/nova/instances/b2c254bb-3943-440e-8ca2-306e8777083f_del complete#033[00m
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.232 247403 INFO nova.compute.manager [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Took 5.92 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.232 247403 DEBUG oslo.service.loopingcall [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.233 247403 DEBUG nova.compute.manager [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:45:45 np0005603621 nova_compute[247399]: 2026-01-31 07:45:45.233 247403 DEBUG nova.network.neutron [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:45.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:45.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]: {
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:        "osd_id": 0,
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:        "type": "bluestore"
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]:    }
Jan 31 02:45:45 np0005603621 vibrant_herschel[255499]: }
Jan 31 02:45:45 np0005603621 systemd[1]: libpod-568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c.scope: Deactivated successfully.
Jan 31 02:45:45 np0005603621 podman[255482]: 2026-01-31 07:45:45.552424724 +0000 UTC m=+1.059831896 container died 568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:45:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0cca8657fa64f6716b2455731c427aba5ca38da3c002c19dbcce9a98a1001139-merged.mount: Deactivated successfully.
Jan 31 02:45:45 np0005603621 podman[255482]: 2026-01-31 07:45:45.836047301 +0000 UTC m=+1.343454473 container remove 568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:45:45 np0005603621 systemd[1]: libpod-conmon-568656fce460273d5e40e17ade52dd33d2c755f84e2b952a9bcef2d9b083630c.scope: Deactivated successfully.
Jan 31 02:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:45:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.402 247403 DEBUG nova.network.neutron [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.420 247403 INFO nova.compute.manager [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Took 1.19 seconds to deallocate network for instance.#033[00m
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.451 247403 DEBUG nova.compute.manager [req-ed76469e-e073-422a-bf39-dac1abb25bde req-586f5494-eb5a-47c4-8b85-fe17471c6289 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Received event network-vif-deleted-bdedece8-b56c-4a93-94a1-88106232511a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.462 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.462 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:45:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:46 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0e07234c-62b8-4ea1-be2d-a0da62180cc4 does not exist
Jan 31 02:45:46 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7fd46890-bbe6-4b63-8785-aa28f2a16004 does not exist
Jan 31 02:45:46 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3fcd1a1e-acf4-44a1-accb-3428d5639c46 does not exist
Jan 31 02:45:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 233 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.507 247403 DEBUG oslo_concurrency.processutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:45:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:45:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1578449903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.935 247403 DEBUG oslo_concurrency.processutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:45:46 np0005603621 nova_compute[247399]: 2026-01-31 07:45:46.942 247403 DEBUG nova.compute.provider_tree [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:45:47 np0005603621 nova_compute[247399]: 2026-01-31 07:45:47.067 247403 DEBUG nova.scheduler.client.report [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:45:47 np0005603621 nova_compute[247399]: 2026-01-31 07:45:47.226 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:45:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:47.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:47 np0005603621 nova_compute[247399]: 2026-01-31 07:45:47.390 247403 INFO nova.scheduler.client.report [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Deleted allocations for instance b2c254bb-3943-440e-8ca2-306e8777083f#033[00m
Jan 31 02:45:47 np0005603621 nova_compute[247399]: 2026-01-31 07:45:47.650 247403 DEBUG oslo_concurrency.lockutils [None req-df10c81c-9936-4376-baa2-f069ddbb0ca5 090f32af51df4fddbcf003f38ed84a37 f805a3fee7864f1bb92a97da17d9ed71 - - default default] Lock "b2c254bb-3943-440e-8ca2-306e8777083f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00316451324911595 of space, bias 1.0, pg target 0.949353974734785 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:45:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:45:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:45:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:49.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:45:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:49.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:50 np0005603621 nova_compute[247399]: 2026-01-31 07:45:50.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 75 KiB/s rd, 790 KiB/s wr, 99 op/s
Jan 31 02:45:50 np0005603621 podman[255658]: 2026-01-31 07:45:50.60662873 +0000 UTC m=+0.071087361 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 02:45:50 np0005603621 podman[255677]: 2026-01-31 07:45:50.667396285 +0000 UTC m=+0.072933529 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:45:51 np0005603621 nova_compute[247399]: 2026-01-31 07:45:51.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:51.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:51.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 519 KiB/s rd, 19 KiB/s wr, 109 op/s
Jan 31 02:45:52 np0005603621 nova_compute[247399]: 2026-01-31 07:45:52.852 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:52 np0005603621 nova_compute[247399]: 2026-01-31 07:45:52.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:53.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:53.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 139 op/s
Jan 31 02:45:55 np0005603621 nova_compute[247399]: 2026-01-31 07:45:55.168 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845540.1660073, b2c254bb-3943-440e-8ca2-306e8777083f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:45:55 np0005603621 nova_compute[247399]: 2026-01-31 07:45:55.168 247403 INFO nova.compute.manager [-] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:45:55 np0005603621 nova_compute[247399]: 2026-01-31 07:45:55.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:55 np0005603621 nova_compute[247399]: 2026-01-31 07:45:55.255 247403 DEBUG nova.compute.manager [None req-51a535e9-2c0e-4e6d-ba36-312a6e4e958d - - - - - -] [instance: b2c254bb-3943-440e-8ca2-306e8777083f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:45:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:55.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:45:56 np0005603621 nova_compute[247399]: 2026-01-31 07:45:56.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:45:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 102 op/s
Jan 31 02:45:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:57.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:45:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 102 op/s
Jan 31 02:45:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:45:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:45:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:45:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:45:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:45:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:45:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:00 np0005603621 nova_compute[247399]: 2026-01-31 07:46:00.220 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 142 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 852 B/s wr, 81 op/s
Jan 31 02:46:01 np0005603621 nova_compute[247399]: 2026-01-31 07:46:01.231 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:01.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:01.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 127 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 392 KiB/s wr, 86 op/s
Jan 31 02:46:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:03.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:03.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:04 np0005603621 nova_compute[247399]: 2026-01-31 07:46:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 111 MiB data, 300 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 109 op/s
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.223 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.229 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.230 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.230 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.230 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.231 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:05.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:05.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002739633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.661 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.803 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.805 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=20.943607330322266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.885 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.886 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:46:05 np0005603621 nova_compute[247399]: 2026-01-31 07:46:05.912 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:06 np0005603621 nova_compute[247399]: 2026-01-31 07:46:06.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631439874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:06 np0005603621 nova_compute[247399]: 2026-01-31 07:46:06.333 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:06 np0005603621 nova_compute[247399]: 2026-01-31 07:46:06.339 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 111 MiB data, 300 MiB used, 21 GiB / 21 GiB avail; 152 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 31 02:46:06 np0005603621 nova_compute[247399]: 2026-01-31 07:46:06.562 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:06 np0005603621 nova_compute[247399]: 2026-01-31 07:46:06.592 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:46:06 np0005603621 nova_compute[247399]: 2026-01-31 07:46:06.593 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:07.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.588 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.614 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.614 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.667 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.668 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.669 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:07 np0005603621 nova_compute[247399]: 2026-01-31 07:46:07.669 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:46:08 np0005603621 nova_compute[247399]: 2026-01-31 07:46:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:08 np0005603621 nova_compute[247399]: 2026-01-31 07:46:08.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:46:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 02:46:09 np0005603621 nova_compute[247399]: 2026-01-31 07:46:09.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:09 np0005603621 nova_compute[247399]: 2026-01-31 07:46:09.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:46:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:09.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:09.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:10 np0005603621 nova_compute[247399]: 2026-01-31 07:46:10.225 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 434 KiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 31 02:46:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:10.556 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:46:10 np0005603621 nova_compute[247399]: 2026-01-31 07:46:10.557 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:10.557 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:46:11 np0005603621 nova_compute[247399]: 2026-01-31 07:46:11.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:11.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:11.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 426 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 31 02:46:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:13.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:13.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:13.559 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:46:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701457007' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:46:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:46:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701457007' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:46:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 418 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 31 02:46:15 np0005603621 nova_compute[247399]: 2026-01-31 07:46:15.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:15.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:15.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:16 np0005603621 nova_compute[247399]: 2026-01-31 07:46:16.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 282 KiB/s rd, 109 KiB/s wr, 42 op/s
Jan 31 02:46:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:17.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:17.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 151 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 298 KiB/s rd, 1.2 MiB/s wr, 68 op/s
Jan 31 02:46:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:19.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.864 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Acquiring lock "c0446e55-ef2c-48fb-9d05-6552ae91782a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.864 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "c0446e55-ef2c-48fb-9d05-6552ae91782a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.891 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.965 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.966 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.973 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:46:19 np0005603621 nova_compute[247399]: 2026-01-31 07:46:19.973 247403 INFO nova.compute.claims [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.078 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.231 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034864776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 91 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 02:46:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.520 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.524 247403 DEBUG nova.compute.provider_tree [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.543 247403 DEBUG nova.scheduler.client.report [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.580 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.582 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.670 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.671 247403 DEBUG nova.network.neutron [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.696 247403 INFO nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.724 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.858 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.860 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.860 247403 INFO nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Creating image(s)#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.883 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.909 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.940 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.945 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.989 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.991 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.991 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:20 np0005603621 nova_compute[247399]: 2026-01-31 07:46:20.992 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:21 np0005603621 nova_compute[247399]: 2026-01-31 07:46:21.017 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:46:21 np0005603621 nova_compute[247399]: 2026-01-31 07:46:21.025 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c0446e55-ef2c-48fb-9d05-6552ae91782a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:21 np0005603621 nova_compute[247399]: 2026-01-31 07:46:21.237 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:21 np0005603621 nova_compute[247399]: 2026-01-31 07:46:21.258 247403 DEBUG nova.network.neutron [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:46:21 np0005603621 nova_compute[247399]: 2026-01-31 07:46:21.258 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:46:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:21.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:21 np0005603621 podman[255930]: 2026-01-31 07:46:21.532645569 +0000 UTC m=+0.077959217 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:46:21 np0005603621 podman[255934]: 2026-01-31 07:46:21.551436631 +0000 UTC m=+0.099231067 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:46:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 169 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 443 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 31 02:46:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:23.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:23.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:23 np0005603621 nova_compute[247399]: 2026-01-31 07:46:23.775 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c0446e55-ef2c-48fb-9d05-6552ae91782a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.750s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:23 np0005603621 nova_compute[247399]: 2026-01-31 07:46:23.861 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] resizing rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:46:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 185 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 116 op/s
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:25.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:25.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.944 247403 DEBUG nova.objects.instance [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lazy-loading 'migration_context' on Instance uuid c0446e55-ef2c-48fb-9d05-6552ae91782a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.961 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.962 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Ensure instance console log exists: /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.963 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.963 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.964 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.966 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.973 247403 WARNING nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.978 247403 DEBUG nova.virt.libvirt.host [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.979 247403 DEBUG nova.virt.libvirt.host [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.982 247403 DEBUG nova.virt.libvirt.host [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.983 247403 DEBUG nova.virt.libvirt.host [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.984 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.985 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.986 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.986 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.986 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.987 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.987 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.987 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.988 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.988 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.988 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.989 247403 DEBUG nova.virt.hardware [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:46:25 np0005603621 nova_compute[247399]: 2026-01-31 07:46:25.992 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:46:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:46:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070040961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.445 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.475 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.479 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:46:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 185 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 115 op/s
Jan 31 02:46:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:46:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020306167' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.874 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.876 247403 DEBUG nova.objects.instance [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lazy-loading 'pci_devices' on Instance uuid c0446e55-ef2c-48fb-9d05-6552ae91782a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:46:26 np0005603621 nova_compute[247399]: 2026-01-31 07:46:26.890 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <uuid>c0446e55-ef2c-48fb-9d05-6552ae91782a</uuid>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <name>instance-0000000b</name>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1887538187</nova:name>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:46:25</nova:creationTime>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:user uuid="af3dc76e8c644930baa58a646b54b535">tempest-DeleteServersAdminTestJSON-1768399059-project-member</nova:user>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <nova:project uuid="f5246170c80a4be2beb961122f12fcaf">tempest-DeleteServersAdminTestJSON-1768399059</nova:project>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <entry name="serial">c0446e55-ef2c-48fb-9d05-6552ae91782a</entry>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <entry name="uuid">c0446e55-ef2c-48fb-9d05-6552ae91782a</entry>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c0446e55-ef2c-48fb-9d05-6552ae91782a_disk">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c0446e55-ef2c-48fb-9d05-6552ae91782a_disk.config">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/console.log" append="off"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:46:26 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:46:26 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:46:26 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:46:26 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:46:27 np0005603621 nova_compute[247399]: 2026-01-31 07:46:27.099 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:46:27 np0005603621 nova_compute[247399]: 2026-01-31 07:46:27.100 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:46:27 np0005603621 nova_compute[247399]: 2026-01-31 07:46:27.101 247403 INFO nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Using config drive
Jan 31 02:46:27 np0005603621 nova_compute[247399]: 2026-01-31 07:46:27.138 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:27.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:28 np0005603621 nova_compute[247399]: 2026-01-31 07:46:28.210 247403 INFO nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Creating config drive at /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/disk.config
Jan 31 02:46:28 np0005603621 nova_compute[247399]: 2026-01-31 07:46:28.213 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmkaymy18 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:46:28 np0005603621 nova_compute[247399]: 2026-01-31 07:46:28.332 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmkaymy18" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:46:28 np0005603621 nova_compute[247399]: 2026-01-31 07:46:28.361 247403 DEBUG nova.storage.rbd_utils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] rbd image c0446e55-ef2c-48fb-9d05-6552ae91782a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:46:28 np0005603621 nova_compute[247399]: 2026-01-31 07:46:28.365 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/disk.config c0446e55-ef2c-48fb-9d05-6552ae91782a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:46:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 215 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 137 op/s
Jan 31 02:46:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:29.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:30 np0005603621 nova_compute[247399]: 2026-01-31 07:46:30.239 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:46:30 np0005603621 nova_compute[247399]: 2026-01-31 07:46:30.329 247403 DEBUG oslo_concurrency.processutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/disk.config c0446e55-ef2c-48fb-9d05-6552ae91782a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.964s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:46:30 np0005603621 nova_compute[247399]: 2026-01-31 07:46:30.330 247403 INFO nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Deleting local config drive /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a/disk.config because it was imported into RBD.
Jan 31 02:46:30 np0005603621 systemd-machined[212769]: New machine qemu-4-instance-0000000b.
Jan 31 02:46:30 np0005603621 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Jan 31 02:46:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:30.466 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:46:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:30.467 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:46:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:30.467 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:46:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 197 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Jan 31 02:46:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.060 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845591.0598588, c0446e55-ef2c-48fb-9d05-6552ae91782a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.061 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] VM Resumed (Lifecycle Event)
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.064 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.064 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.068 247403 INFO nova.virt.libvirt.driver [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Instance spawned successfully.
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.068 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.081 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.087 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.091 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.092 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.092 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.093 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.094 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.094 247403 DEBUG nova.virt.libvirt.driver [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.132 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.133 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845591.064355, c0446e55-ef2c-48fb-9d05-6552ae91782a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.133 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] VM Started (Lifecycle Event)
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.166 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.169 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.174 247403 INFO nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Took 10.31 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.174 247403 DEBUG nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.195 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.232 247403 INFO nova.compute.manager [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Took 11.30 seconds to build instance.#033[00m
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:31 np0005603621 nova_compute[247399]: 2026-01-31 07:46:31.251 247403 DEBUG oslo_concurrency.lockutils [None req-c9f526a2-2c20-4c9c-a92b-33363068e5c3 af3dc76e8c644930baa58a646b54b535 f5246170c80a4be2beb961122f12fcaf - - default default] Lock "c0446e55-ef2c-48fb-9d05-6552ae91782a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:31.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:31.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 174 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.369 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Acquiring lock "c0446e55-ef2c-48fb-9d05-6552ae91782a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.370 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lock "c0446e55-ef2c-48fb-9d05-6552ae91782a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.370 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Acquiring lock "c0446e55-ef2c-48fb-9d05-6552ae91782a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.370 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lock "c0446e55-ef2c-48fb-9d05-6552ae91782a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.370 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lock "c0446e55-ef2c-48fb-9d05-6552ae91782a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.371 247403 INFO nova.compute.manager [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Terminating instance#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.372 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Acquiring lock "refresh_cache-c0446e55-ef2c-48fb-9d05-6552ae91782a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.372 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Acquired lock "refresh_cache-c0446e55-ef2c-48fb-9d05-6552ae91782a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.372 247403 DEBUG nova.network.neutron [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:46:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:33.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:33 np0005603621 nova_compute[247399]: 2026-01-31 07:46:33.598 247403 DEBUG nova.network.neutron [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:46:34 np0005603621 nova_compute[247399]: 2026-01-31 07:46:34.333 247403 DEBUG nova.network.neutron [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:46:34 np0005603621 nova_compute[247399]: 2026-01-31 07:46:34.350 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Releasing lock "refresh_cache-c0446e55-ef2c-48fb-9d05-6552ae91782a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:46:34 np0005603621 nova_compute[247399]: 2026-01-31 07:46:34.351 247403 DEBUG nova.compute.manager [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:46:34 np0005603621 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 31 02:46:34 np0005603621 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 3.904s CPU time.
Jan 31 02:46:34 np0005603621 systemd-machined[212769]: Machine qemu-4-instance-0000000b terminated.
Jan 31 02:46:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 90 MiB data, 287 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Jan 31 02:46:34 np0005603621 nova_compute[247399]: 2026-01-31 07:46:34.571 247403 INFO nova.virt.libvirt.driver [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Instance destroyed successfully.#033[00m
Jan 31 02:46:34 np0005603621 nova_compute[247399]: 2026-01-31 07:46:34.572 247403 DEBUG nova.objects.instance [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lazy-loading 'resources' on Instance uuid c0446e55-ef2c-48fb-9d05-6552ae91782a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:46:35 np0005603621 nova_compute[247399]: 2026-01-31 07:46:35.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:35.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:35 np0005603621 ovn_controller[149152]: 2026-01-31T07:46:35Z|00045|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 02:46:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:35.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:36 np0005603621 nova_compute[247399]: 2026-01-31 07:46:36.242 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 90 MiB data, 287 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 115 op/s
Jan 31 02:46:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:37.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:37.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:46:38
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', '.mgr', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 90 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 142 op/s
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:46:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:46:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:39.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:40 np0005603621 nova_compute[247399]: 2026-01-31 07:46:40.244 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 84 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 127 op/s
Jan 31 02:46:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:41 np0005603621 nova_compute[247399]: 2026-01-31 07:46:41.243 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:41.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 58 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 125 op/s
Jan 31 02:46:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:43.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:43.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 16 KiB/s wr, 118 op/s
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:45.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:45.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.876 247403 INFO nova.virt.libvirt.driver [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Deleting instance files /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a_del#033[00m
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.877 247403 INFO nova.virt.libvirt.driver [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Deletion of /var/lib/nova/instances/c0446e55-ef2c-48fb-9d05-6552ae91782a_del complete#033[00m
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.945 247403 INFO nova.compute.manager [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Took 11.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.946 247403 DEBUG oslo.service.loopingcall [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.946 247403 DEBUG nova.compute.manager [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:46:45 np0005603621 nova_compute[247399]: 2026-01-31 07:46:45.946 247403 DEBUG nova.network.neutron [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.245 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.349 247403 DEBUG nova.network.neutron [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.365 247403 DEBUG nova.network.neutron [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.380 247403 INFO nova.compute.manager [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Took 0.43 seconds to deallocate network for instance.#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.433 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.434 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.494 247403 DEBUG oslo_concurrency.processutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:46:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 222 KiB/s rd, 2.6 KiB/s wr, 61 op/s
Jan 31 02:46:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:46:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/505216019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.957 247403 DEBUG oslo_concurrency.processutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.963 247403 DEBUG nova.compute.provider_tree [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.978 247403 DEBUG nova.scheduler.client.report [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:46:46 np0005603621 nova_compute[247399]: 2026-01-31 07:46:46.998 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:47 np0005603621 nova_compute[247399]: 2026-01-31 07:46:47.031 247403 INFO nova.scheduler.client.report [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Deleted allocations for instance c0446e55-ef2c-48fb-9d05-6552ae91782a#033[00m
Jan 31 02:46:47 np0005603621 nova_compute[247399]: 2026-01-31 07:46:47.108 247403 DEBUG oslo_concurrency.lockutils [None req-48d03c80-3dea-4f75-bd5f-f3a238499f97 a966cfa75a094d8a98219fa35d4e238f ecaef3c4421e4c888299ab59d8f3e262 - - default default] Lock "c0446e55-ef2c-48fb-9d05-6552ae91782a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:46:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:47.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:47 np0005603621 podman[256502]: 2026-01-31 07:46:47.410964706 +0000 UTC m=+0.055813120 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:46:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:47.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:47 np0005603621 podman[256502]: 2026-01-31 07:46:47.490065188 +0000 UTC m=+0.134913572 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:46:47 np0005603621 podman[256659]: 2026-01-31 07:46:47.984187529 +0000 UTC m=+0.100950432 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:46:48 np0005603621 podman[256659]: 2026-01-31 07:46:48.009076043 +0000 UTC m=+0.125838846 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:46:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:48.236 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:46:48 np0005603621 nova_compute[247399]: 2026-01-31 07:46:48.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:48.237 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:46:48 np0005603621 podman[256727]: 2026-01-31 07:46:48.307113183 +0000 UTC m=+0.134508329 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, release=1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2)
Jan 31 02:46:48 np0005603621 podman[256749]: 2026-01-31 07:46:48.45806866 +0000 UTC m=+0.136828823 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, version=2.2.4, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793)
Jan 31 02:46:48 np0005603621 podman[256727]: 2026-01-31 07:46:48.481163158 +0000 UTC m=+0.308558284 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 223 KiB/s rd, 2.9 KiB/s wr, 63 op/s
Jan 31 02:46:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:46:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:46:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:46:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:46:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:46:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:49.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:46:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c3a5aec7-1a5d-42c4-8a24-bb3e637e988e does not exist
Jan 31 02:46:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a582f279-9aaf-4b3b-bff7-31346e690af8 does not exist
Jan 31 02:46:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b1046a02-dd35-4a3b-a794-ab27f6c413c6 does not exist
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:46:49 np0005603621 nova_compute[247399]: 2026-01-31 07:46:49.570 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845594.568299, c0446e55-ef2c-48fb-9d05-6552ae91782a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:46:49 np0005603621 nova_compute[247399]: 2026-01-31 07:46:49.570 247403 INFO nova.compute.manager [-] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:46:49 np0005603621 nova_compute[247399]: 2026-01-31 07:46:49.589 247403 DEBUG nova.compute.manager [None req-517511da-2cb5-4dfd-a5fd-a4ae6f9616de - - - - - -] [instance: c0446e55-ef2c-48fb-9d05-6552ae91782a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:46:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:46:50 np0005603621 podman[257034]: 2026-01-31 07:46:50.116720463 +0000 UTC m=+0.028484208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:46:50 np0005603621 nova_compute[247399]: 2026-01-31 07:46:50.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:50 np0005603621 podman[257034]: 2026-01-31 07:46:50.373818544 +0000 UTC m=+0.285582299 container create 5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:46:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 41 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.4 KiB/s wr, 36 op/s
Jan 31 02:46:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:50 np0005603621 systemd[1]: Started libpod-conmon-5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d.scope.
Jan 31 02:46:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:46:51 np0005603621 podman[257034]: 2026-01-31 07:46:51.075551956 +0000 UTC m=+0.987315721 container init 5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 02:46:51 np0005603621 podman[257034]: 2026-01-31 07:46:51.085351285 +0000 UTC m=+0.997115040 container start 5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 02:46:51 np0005603621 objective_mclaren[257050]: 167 167
Jan 31 02:46:51 np0005603621 systemd[1]: libpod-5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d.scope: Deactivated successfully.
Jan 31 02:46:51 np0005603621 nova_compute[247399]: 2026-01-31 07:46:51.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:51 np0005603621 podman[257034]: 2026-01-31 07:46:51.292339187 +0000 UTC m=+1.204102992 container attach 5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:46:51 np0005603621 podman[257034]: 2026-01-31 07:46:51.29306479 +0000 UTC m=+1.204828545 container died 5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:46:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:51.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:46:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:51.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:46:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 53 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 161 KiB/s wr, 43 op/s
Jan 31 02:46:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-491e8f2e76bc9dd789a74e7808e7519db5c6e8c284b92d81ae7265a5f771c1bf-merged.mount: Deactivated successfully.
Jan 31 02:46:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:53.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:53.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:53 np0005603621 podman[257034]: 2026-01-31 07:46:53.796166143 +0000 UTC m=+3.707929848 container remove 5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mclaren, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:46:53 np0005603621 systemd[1]: libpod-conmon-5d63e737c41c4475b011538c678cd6c93e87a96f3e8439fd7e5a34bb2ff0637d.scope: Deactivated successfully.
Jan 31 02:46:53 np0005603621 podman[257119]: 2026-01-31 07:46:53.95508716 +0000 UTC m=+2.012434212 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 02:46:53 np0005603621 podman[257120]: 2026-01-31 07:46:53.969573746 +0000 UTC m=+2.027406824 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:46:54 np0005603621 podman[257161]: 2026-01-31 07:46:53.961382778 +0000 UTC m=+0.026032211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:46:54 np0005603621 podman[257161]: 2026-01-31 07:46:54.095933487 +0000 UTC m=+0.160582860 container create 8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bartik, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:46:54 np0005603621 systemd[1]: Started libpod-conmon-8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd.scope.
Jan 31 02:46:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e91e3ac4f8520dc35b6ae4ad8061f51df6c6c48c28a83da6471e058665974/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e91e3ac4f8520dc35b6ae4ad8061f51df6c6c48c28a83da6471e058665974/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e91e3ac4f8520dc35b6ae4ad8061f51df6c6c48c28a83da6471e058665974/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e91e3ac4f8520dc35b6ae4ad8061f51df6c6c48c28a83da6471e058665974/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914e91e3ac4f8520dc35b6ae4ad8061f51df6c6c48c28a83da6471e058665974/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:46:54 np0005603621 podman[257161]: 2026-01-31 07:46:54.483000224 +0000 UTC m=+0.547649627 container init 8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 02:46:54 np0005603621 podman[257161]: 2026-01-31 07:46:54.490996846 +0000 UTC m=+0.555646229 container start 8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 02:46:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 108 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 31 02:46:54 np0005603621 podman[257161]: 2026-01-31 07:46:54.750557035 +0000 UTC m=+0.815206508 container attach 8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bartik, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:46:55 np0005603621 angry_bartik[257185]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:46:55 np0005603621 angry_bartik[257185]: --> relative data size: 1.0
Jan 31 02:46:55 np0005603621 angry_bartik[257185]: --> All data devices are unavailable
Jan 31 02:46:55 np0005603621 nova_compute[247399]: 2026-01-31 07:46:55.252 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:55 np0005603621 systemd[1]: libpod-8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd.scope: Deactivated successfully.
Jan 31 02:46:55 np0005603621 podman[257161]: 2026-01-31 07:46:55.259461441 +0000 UTC m=+1.324110814 container died 8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:46:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:55.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:46:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-914e91e3ac4f8520dc35b6ae4ad8061f51df6c6c48c28a83da6471e058665974-merged.mount: Deactivated successfully.
Jan 31 02:46:56 np0005603621 nova_compute[247399]: 2026-01-31 07:46:56.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:46:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 108 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 2.1 MiB/s wr, 23 op/s
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.758184) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845616758239, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1780, "num_deletes": 250, "total_data_size": 3033091, "memory_usage": 3093400, "flush_reason": "Manual Compaction"}
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845616835085, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1817482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19417, "largest_seqno": 21196, "table_properties": {"data_size": 1811257, "index_size": 3172, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16364, "raw_average_key_size": 20, "raw_value_size": 1797426, "raw_average_value_size": 2289, "num_data_blocks": 140, "num_entries": 785, "num_filter_entries": 785, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845456, "oldest_key_time": 1769845456, "file_creation_time": 1769845616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 76980 microseconds, and 4189 cpu microseconds.
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.835164) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1817482 bytes OK
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.835185) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.900982) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.901040) EVENT_LOG_v1 {"time_micros": 1769845616901031, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.901061) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 3025589, prev total WAL file size 3042537, number of live WAL files 2.
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.902265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1774KB)], [44(9527KB)]
Jan 31 02:46:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845616902341, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11573260, "oldest_snapshot_seqno": -1}
Jan 31 02:46:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:57.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:57.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:57 np0005603621 podman[257161]: 2026-01-31 07:46:57.436992294 +0000 UTC m=+3.501641707 container remove 8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:46:57 np0005603621 systemd[1]: libpod-conmon-8e93a28aca4fba27df7d78ef48530ddf8729e278b22d7dd6be0d8ab1d5f398bd.scope: Deactivated successfully.
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4850 keys, 8804204 bytes, temperature: kUnknown
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845617586371, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8804204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8771545, "index_size": 19429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 120297, "raw_average_key_size": 24, "raw_value_size": 8683677, "raw_average_value_size": 1790, "num_data_blocks": 804, "num_entries": 4850, "num_filter_entries": 4850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845616, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.586641) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8804204 bytes
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.771326) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 16.9 rd, 12.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.3 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(11.2) write-amplify(4.8) OK, records in: 5295, records dropped: 445 output_compression: NoCompression
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.771359) EVENT_LOG_v1 {"time_micros": 1769845617771348, "job": 22, "event": "compaction_finished", "compaction_time_micros": 684125, "compaction_time_cpu_micros": 20601, "output_level": 6, "num_output_files": 1, "total_output_size": 8804204, "num_input_records": 5295, "num_output_records": 4850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845617771606, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845617772233, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:56.902105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.772287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.772292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.772293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.772295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:46:57 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:46:57.772296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:46:58 np0005603621 podman[257358]: 2026-01-31 07:46:58.085086366 +0000 UTC m=+0.028077456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:46:58 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:46:58.239 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:46:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 3.5 MiB/s wr, 48 op/s
Jan 31 02:46:59 np0005603621 podman[257358]: 2026-01-31 07:46:59.061551044 +0000 UTC m=+1.004542114 container create 3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_diffie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:46:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:46:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:46:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:46:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:46:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:46:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:46:59.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:46:59 np0005603621 systemd[1]: Started libpod-conmon-3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57.scope.
Jan 31 02:46:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:46:59 np0005603621 podman[257358]: 2026-01-31 07:46:59.812097213 +0000 UTC m=+1.755088363 container init 3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_diffie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:46:59 np0005603621 podman[257358]: 2026-01-31 07:46:59.818882906 +0000 UTC m=+1.761873986 container start 3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_diffie, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:46:59 np0005603621 elegant_diffie[257376]: 167 167
Jan 31 02:46:59 np0005603621 systemd[1]: libpod-3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57.scope: Deactivated successfully.
Jan 31 02:47:00 np0005603621 podman[257358]: 2026-01-31 07:47:00.226037487 +0000 UTC m=+2.169028557 container attach 3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:47:00 np0005603621 podman[257358]: 2026-01-31 07:47:00.227205423 +0000 UTC m=+2.170196483 container died 3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_diffie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:47:00 np0005603621 nova_compute[247399]: 2026-01-31 07:47:00.257 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 3.5 MiB/s wr, 47 op/s
Jan 31 02:47:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f375e112258b8012e37a7ca8cfb999a0401888f7e65c16a662751b34ea41edc6-merged.mount: Deactivated successfully.
Jan 31 02:47:01 np0005603621 nova_compute[247399]: 2026-01-31 07:47:01.255 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:01.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:02 np0005603621 podman[257358]: 2026-01-31 07:47:02.168948967 +0000 UTC m=+4.111940047 container remove 3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_diffie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:47:02 np0005603621 systemd[1]: libpod-conmon-3fbbd51130ce673d832c2ba86603b57fefe9fb61ac83ea118442e4e4b98b9c57.scope: Deactivated successfully.
Jan 31 02:47:02 np0005603621 podman[257402]: 2026-01-31 07:47:02.267818912 +0000 UTC m=+0.022702516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:47:02 np0005603621 podman[257402]: 2026-01-31 07:47:02.637401647 +0000 UTC m=+0.392285231 container create d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_nobel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:47:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 157 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 4.3 MiB/s wr, 52 op/s
Jan 31 02:47:03 np0005603621 systemd[1]: Started libpod-conmon-d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408.scope.
Jan 31 02:47:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:47:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0d59150a05d71d15e901f4ed1cd7560ff9aec449ac1644e0172e9cf0029514/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0d59150a05d71d15e901f4ed1cd7560ff9aec449ac1644e0172e9cf0029514/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0d59150a05d71d15e901f4ed1cd7560ff9aec449ac1644e0172e9cf0029514/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b0d59150a05d71d15e901f4ed1cd7560ff9aec449ac1644e0172e9cf0029514/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:03 np0005603621 podman[257402]: 2026-01-31 07:47:03.177886638 +0000 UTC m=+0.932770212 container init d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:47:03 np0005603621 podman[257402]: 2026-01-31 07:47:03.183098713 +0000 UTC m=+0.937982277 container start d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:47:03 np0005603621 podman[257402]: 2026-01-31 07:47:03.319920054 +0000 UTC m=+1.074803638 container attach d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:47:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:03.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:03 np0005603621 bold_nobel[257418]: {
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:    "0": [
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:        {
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "devices": [
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "/dev/loop3"
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            ],
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "lv_name": "ceph_lv0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "lv_size": "7511998464",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "name": "ceph_lv0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "tags": {
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.cluster_name": "ceph",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.crush_device_class": "",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.encrypted": "0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.osd_id": "0",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.type": "block",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:                "ceph.vdo": "0"
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            },
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "type": "block",
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:            "vg_name": "ceph_vg0"
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:        }
Jan 31 02:47:03 np0005603621 bold_nobel[257418]:    ]
Jan 31 02:47:03 np0005603621 bold_nobel[257418]: }
Jan 31 02:47:03 np0005603621 systemd[1]: libpod-d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408.scope: Deactivated successfully.
Jan 31 02:47:03 np0005603621 podman[257402]: 2026-01-31 07:47:03.914573881 +0000 UTC m=+1.669457475 container died d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:47:04 np0005603621 nova_compute[247399]: 2026-01-31 07:47:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 5.0 MiB/s wr, 60 op/s
Jan 31 02:47:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0b0d59150a05d71d15e901f4ed1cd7560ff9aec449ac1644e0172e9cf0029514-merged.mount: Deactivated successfully.
Jan 31 02:47:05 np0005603621 nova_compute[247399]: 2026-01-31 07:47:05.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:05.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:05.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:05 np0005603621 podman[257402]: 2026-01-31 07:47:05.846877807 +0000 UTC m=+3.601761411 container remove d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:47:05 np0005603621 systemd[1]: libpod-conmon-d58406a3c147e13cafe6edbeb2e46d0c5f9ebae25c848509d065d40f43b0f408.scope: Deactivated successfully.
Jan 31 02:47:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.226 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.226 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.255 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.256 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.256 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.256 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.256 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.268 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:06 np0005603621 podman[257601]: 2026-01-31 07:47:06.408963498 +0000 UTC m=+0.017484902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:47:06 np0005603621 podman[257601]: 2026-01-31 07:47:06.537902231 +0000 UTC m=+0.146423645 container create 4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:47:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2068109887' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.662 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:06 np0005603621 systemd[1]: Started libpod-conmon-4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266.scope.
Jan 31 02:47:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.832 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.834 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4808MB free_disk=20.926021575927734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.835 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.835 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 3.2 MiB/s wr, 52 op/s
Jan 31 02:47:06 np0005603621 podman[257601]: 2026-01-31 07:47:06.929688696 +0000 UTC m=+0.538210100 container init 4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:47:06 np0005603621 podman[257601]: 2026-01-31 07:47:06.936420448 +0000 UTC m=+0.544941832 container start 4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:47:06 np0005603621 gifted_panini[257620]: 167 167
Jan 31 02:47:06 np0005603621 systemd[1]: libpod-4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266.scope: Deactivated successfully.
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.947 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.947 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:47:06 np0005603621 nova_compute[247399]: 2026-01-31 07:47:06.962 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:07 np0005603621 podman[257601]: 2026-01-31 07:47:07.115965446 +0000 UTC m=+0.724486850 container attach 4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:47:07 np0005603621 podman[257601]: 2026-01-31 07:47:07.116504043 +0000 UTC m=+0.725025457 container died 4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:47:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451551088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:07 np0005603621 nova_compute[247399]: 2026-01-31 07:47:07.379 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:07 np0005603621 nova_compute[247399]: 2026-01-31 07:47:07.383 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:47:07 np0005603621 nova_compute[247399]: 2026-01-31 07:47:07.403 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:47:07 np0005603621 nova_compute[247399]: 2026-01-31 07:47:07.425 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:47:07 np0005603621 nova_compute[247399]: 2026-01-31 07:47:07.426 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:07.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:07.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:08 np0005603621 nova_compute[247399]: 2026-01-31 07:47:08.398 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:08 np0005603621 nova_compute[247399]: 2026-01-31 07:47:08.398 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:08 np0005603621 nova_compute[247399]: 2026-01-31 07:47:08.398 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:08 np0005603621 nova_compute[247399]: 2026-01-31 07:47:08.398 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:47:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 174 KiB/s rd, 3.2 MiB/s wr, 70 op/s
Jan 31 02:47:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a749381f330689e2bcebe15a7ca3bf32555e6238ea02e533cf8d1723f568bf28-merged.mount: Deactivated successfully.
Jan 31 02:47:09 np0005603621 nova_compute[247399]: 2026-01-31 07:47:09.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:47:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:09.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:47:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:09.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:09 np0005603621 podman[257601]: 2026-01-31 07:47:09.660312418 +0000 UTC m=+3.268833792 container remove 4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:47:09 np0005603621 systemd[1]: libpod-conmon-4c6895ded3a919da7a3960f8a692ebc314651df99ed4f001567407004e6a9266.scope: Deactivated successfully.
Jan 31 02:47:09 np0005603621 podman[257668]: 2026-01-31 07:47:09.761089613 +0000 UTC m=+0.019756053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:47:10 np0005603621 podman[257668]: 2026-01-31 07:47:10.140525839 +0000 UTC m=+0.399192259 container create a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:47:10 np0005603621 nova_compute[247399]: 2026-01-31 07:47:10.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:10 np0005603621 nova_compute[247399]: 2026-01-31 07:47:10.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:47:10 np0005603621 nova_compute[247399]: 2026-01-31 07:47:10.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:10 np0005603621 systemd[1]: Started libpod-conmon-a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a.scope.
Jan 31 02:47:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:47:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05903918bcf49134a0dfcf04ac2df624c228b77e9bcb306b8db0b40ed945832/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05903918bcf49134a0dfcf04ac2df624c228b77e9bcb306b8db0b40ed945832/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05903918bcf49134a0dfcf04ac2df624c228b77e9bcb306b8db0b40ed945832/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c05903918bcf49134a0dfcf04ac2df624c228b77e9bcb306b8db0b40ed945832/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:47:10 np0005603621 podman[257668]: 2026-01-31 07:47:10.641589017 +0000 UTC m=+0.900255457 container init a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:47:10 np0005603621 podman[257668]: 2026-01-31 07:47:10.650111486 +0000 UTC m=+0.908777906 container start a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:47:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 31 02:47:10 np0005603621 podman[257668]: 2026-01-31 07:47:10.876421896 +0000 UTC m=+1.135088316 container attach a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 02:47:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:11 np0005603621 nova_compute[247399]: 2026-01-31 07:47:11.258 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]: {
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:        "osd_id": 0,
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:        "type": "bluestore"
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]:    }
Jan 31 02:47:11 np0005603621 priceless_swartz[257685]: }
Jan 31 02:47:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:11.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:11 np0005603621 systemd[1]: libpod-a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a.scope: Deactivated successfully.
Jan 31 02:47:11 np0005603621 podman[257668]: 2026-01-31 07:47:11.443284779 +0000 UTC m=+1.701951199 container died a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:47:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:11.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c05903918bcf49134a0dfcf04ac2df624c228b77e9bcb306b8db0b40ed945832-merged.mount: Deactivated successfully.
Jan 31 02:47:11 np0005603621 podman[257668]: 2026-01-31 07:47:11.990644505 +0000 UTC m=+2.249310925 container remove a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:47:12 np0005603621 systemd[1]: libpod-conmon-a2a6c121d60baaf143b53f17c833888573b3dfcb61d072096a69adfd4c86734a.scope: Deactivated successfully.
Jan 31 02:47:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:47:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:47:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:47:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:47:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 456f2d6f-ab7c-4c45-8043-283b26326641 does not exist
Jan 31 02:47:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6e167eb9-fd8a-455f-a55b-9583482ed52b does not exist
Jan 31 02:47:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f4b47e02-e137-49bc-a345-d9744f9be86f does not exist
Jan 31 02:47:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Jan 31 02:47:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:47:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:47:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:13.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:13.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 144 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 935 KiB/s wr, 235 op/s
Jan 31 02:47:15 np0005603621 nova_compute[247399]: 2026-01-31 07:47:15.266 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:15.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:47:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:15.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:47:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:16 np0005603621 nova_compute[247399]: 2026-01-31 07:47:16.259 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 144 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 38 KiB/s wr, 214 op/s
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.050 247403 DEBUG nova.compute.manager [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.156 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.157 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.177 247403 DEBUG nova.objects.instance [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'pci_requests' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.192 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.193 247403 INFO nova.compute.claims [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.193 247403 DEBUG nova.objects.instance [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'resources' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.207 247403 DEBUG nova.objects.instance [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'numa_topology' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.221 247403 DEBUG nova.objects.instance [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'pci_devices' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.274 247403 INFO nova.compute.resource_tracker [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating resource usage from migration 23b20670-ad55-4ea4-8c1f-01ea26d14cc3#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.275 247403 DEBUG nova.compute.resource_tracker [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Starting to track incoming migration 23b20670-ad55-4ea4-8c1f-01ea26d14cc3 with flavor a01eb4f0-fd80-416b-a750-75de320394d8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.329 247403 DEBUG oslo_concurrency.processutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:17.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:47:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368391343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.726 247403 DEBUG oslo_concurrency.processutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.732 247403 DEBUG nova.compute.provider_tree [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.749 247403 DEBUG nova.scheduler.client.report [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.792 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.792 247403 INFO nova.compute.manager [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Migrating#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.792 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.792 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.802 247403 INFO nova.compute.rpcapi [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Jan 31 02:47:17 np0005603621 nova_compute[247399]: 2026-01-31 07:47:17.802 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:47:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 39 KiB/s wr, 254 op/s
Jan 31 02:47:18 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 02:47:18 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 02:47:18 np0005603621 systemd-logind[818]: New session 52 of user nova.
Jan 31 02:47:18 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 02:47:19 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 02:47:19 np0005603621 systemd[257850]: Queued start job for default target Main User Target.
Jan 31 02:47:19 np0005603621 systemd[257850]: Created slice User Application Slice.
Jan 31 02:47:19 np0005603621 systemd[257850]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:47:19 np0005603621 systemd[257850]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:47:19 np0005603621 systemd[257850]: Reached target Paths.
Jan 31 02:47:19 np0005603621 systemd[257850]: Reached target Timers.
Jan 31 02:47:19 np0005603621 systemd[257850]: Starting D-Bus User Message Bus Socket...
Jan 31 02:47:19 np0005603621 systemd[257850]: Starting Create User's Volatile Files and Directories...
Jan 31 02:47:19 np0005603621 systemd[257850]: Finished Create User's Volatile Files and Directories.
Jan 31 02:47:19 np0005603621 systemd[257850]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:47:19 np0005603621 systemd[257850]: Reached target Sockets.
Jan 31 02:47:19 np0005603621 systemd[257850]: Reached target Basic System.
Jan 31 02:47:19 np0005603621 systemd[257850]: Reached target Main User Target.
Jan 31 02:47:19 np0005603621 systemd[257850]: Startup finished in 137ms.
Jan 31 02:47:19 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 02:47:19 np0005603621 systemd[1]: Started Session 52 of User nova.
Jan 31 02:47:19 np0005603621 systemd[1]: session-52.scope: Deactivated successfully.
Jan 31 02:47:19 np0005603621 systemd-logind[818]: Session 52 logged out. Waiting for processes to exit.
Jan 31 02:47:19 np0005603621 systemd-logind[818]: Removed session 52.
Jan 31 02:47:19 np0005603621 systemd-logind[818]: New session 54 of user nova.
Jan 31 02:47:19 np0005603621 systemd[1]: Started Session 54 of User nova.
Jan 31 02:47:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:19.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:19.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:19 np0005603621 systemd[1]: session-54.scope: Deactivated successfully.
Jan 31 02:47:19 np0005603621 systemd-logind[818]: Session 54 logged out. Waiting for processes to exit.
Jan 31 02:47:19 np0005603621 systemd-logind[818]: Removed session 54.
Jan 31 02:47:20 np0005603621 nova_compute[247399]: 2026-01-31 07:47:20.270 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 139 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 424 KiB/s wr, 238 op/s
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:21 np0005603621 nova_compute[247399]: 2026-01-31 07:47:21.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 02:47:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:21.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.648183) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845641648305, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 488, "num_deletes": 256, "total_data_size": 475704, "memory_usage": 486648, "flush_reason": "Manual Compaction"}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845641698806, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 471342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21197, "largest_seqno": 21684, "table_properties": {"data_size": 468543, "index_size": 771, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6417, "raw_average_key_size": 17, "raw_value_size": 462919, "raw_average_value_size": 1289, "num_data_blocks": 34, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845616, "oldest_key_time": 1769845616, "file_creation_time": 1769845641, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 50643 microseconds, and 2089 cpu microseconds.
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.698847) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 471342 bytes OK
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.698868) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.712690) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.712784) EVENT_LOG_v1 {"time_micros": 1769845641712773, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.712815) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 472812, prev total WAL file size 489207, number of live WAL files 2.
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.713482) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(460KB)], [47(8597KB)]
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845641713558, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9275546, "oldest_snapshot_seqno": -1}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4686 keys, 9133406 bytes, temperature: kUnknown
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845641827285, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 9133406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9100987, "index_size": 19573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 118122, "raw_average_key_size": 25, "raw_value_size": 9015197, "raw_average_value_size": 1923, "num_data_blocks": 805, "num_entries": 4686, "num_filter_entries": 4686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845641, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.827518) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 9133406 bytes
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.864771) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.5 rd, 80.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.4 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(39.1) write-amplify(19.4) OK, records in: 5209, records dropped: 523 output_compression: NoCompression
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.864809) EVENT_LOG_v1 {"time_micros": 1769845641864795, "job": 24, "event": "compaction_finished", "compaction_time_micros": 113795, "compaction_time_cpu_micros": 16042, "output_level": 6, "num_output_files": 1, "total_output_size": 9133406, "num_input_records": 5209, "num_output_records": 4686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845641865003, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845641865798, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.713301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.865918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.865924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.865926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.865928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:21.865930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 146 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.2 MiB/s wr, 191 op/s
Jan 31 02:47:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:23.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:23.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:24 np0005603621 podman[257875]: 2026-01-31 07:47:24.502660381 +0000 UTC m=+0.049406161 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:47:24 np0005603621 podman[257876]: 2026-01-31 07:47:24.528451446 +0000 UTC m=+0.075324080 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:47:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 160 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.5 MiB/s wr, 188 op/s
Jan 31 02:47:25 np0005603621 nova_compute[247399]: 2026-01-31 07:47:25.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:25.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 02:47:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:25.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 02:47:26 np0005603621 nova_compute[247399]: 2026-01-31 07:47:26.265 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 160 MiB data, 325 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 85 op/s
Jan 31 02:47:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:27.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 183 MiB data, 337 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 113 op/s
Jan 31 02:47:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:47:29.193 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:47:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:47:29.193 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:47:29 np0005603621 nova_compute[247399]: 2026-01-31 07:47:29.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:29.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:29 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 02:47:29 np0005603621 systemd[257850]: Activating special unit Exit the Session...
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped target Main User Target.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped target Basic System.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped target Paths.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped target Sockets.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped target Timers.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:47:29 np0005603621 systemd[257850]: Closed D-Bus User Message Bus Socket.
Jan 31 02:47:29 np0005603621 systemd[257850]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:47:29 np0005603621 systemd[257850]: Removed slice User Application Slice.
Jan 31 02:47:29 np0005603621 systemd[257850]: Reached target Shutdown.
Jan 31 02:47:29 np0005603621 systemd[257850]: Finished Exit the Session.
Jan 31 02:47:29 np0005603621 systemd[257850]: Reached target Exit the Session.
Jan 31 02:47:29 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 02:47:29 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 02:47:29 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 02:47:29 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 02:47:29 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 02:47:29 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 02:47:29 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 02:47:30 np0005603621 nova_compute[247399]: 2026-01-31 07:47:30.274 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:47:30.468 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:47:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:47:30.468 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:47:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:47:30.468 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:47:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 186 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 488 KiB/s rd, 4.2 MiB/s wr, 93 op/s
Jan 31 02:47:31 np0005603621 ovn_controller[149152]: 2026-01-31T07:47:31Z|00046|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 02:47:31 np0005603621 nova_compute[247399]: 2026-01-31 07:47:31.267 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 190 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 607 KiB/s rd, 3.9 MiB/s wr, 109 op/s
Jan 31 02:47:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:33 np0005603621 nova_compute[247399]: 2026-01-31 07:47:33.731 247403 DEBUG oslo_concurrency.processutils [None req-e5db7c70-ea16-4063-a08c-c9de2c83ff2b fe769c83f9824637a52eff3bb4dc544b 2e3cb68417e2460f9b3679a682be4838 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:47:33 np0005603621 nova_compute[247399]: 2026-01-31 07:47:33.750 247403 DEBUG oslo_concurrency.processutils [None req-e5db7c70-ea16-4063-a08c-c9de2c83ff2b fe769c83f9824637a52eff3bb4dc544b 2e3cb68417e2460f9b3679a682be4838 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:47:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 632 KiB/s rd, 3.1 MiB/s wr, 116 op/s
Jan 31 02:47:34 np0005603621 nova_compute[247399]: 2026-01-31 07:47:34.961 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:47:34 np0005603621 nova_compute[247399]: 2026-01-31 07:47:34.962 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquired lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:47:34 np0005603621 nova_compute[247399]: 2026-01-31 07:47:34.962 247403 DEBUG nova.network.neutron [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.142 247403 DEBUG nova.network.neutron [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.391 247403 DEBUG nova.network.neutron [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.407 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Releasing lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:47:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:35.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.499 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.501 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.501 247403 INFO nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Creating image(s)
Jan 31 02:47:35 np0005603621 nova_compute[247399]: 2026-01-31 07:47:35.545 247403 DEBUG nova.storage.rbd_utils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] creating snapshot(nova-resize) on rbd image(37069dd7-a48f-42ca-8238-bf5baa1fa605_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 02:47:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:47:36.195 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:47:36 np0005603621 nova_compute[247399]: 2026-01-31 07:47:36.273 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:47:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 31 02:47:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 31 02:47:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 31 02:47:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 571 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.010 247403 DEBUG nova.objects.instance [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.119 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.119 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Ensure instance console log exists: /var/lib/nova/instances/37069dd7-a48f-42ca-8238-bf5baa1fa605/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.120 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.120 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.121 247403 DEBUG oslo_concurrency.lockutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.122 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.127 247403 WARNING nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.134 247403 DEBUG nova.virt.libvirt.host [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.135 247403 DEBUG nova.virt.libvirt.host [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.137 247403 DEBUG nova.virt.libvirt.host [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.137 247403 DEBUG nova.virt.libvirt.host [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.138 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.139 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.139 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.139 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.140 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.140 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.140 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.140 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.141 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.141 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.141 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.141 247403 DEBUG nova.virt.hardware [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.142 247403 DEBUG nova.objects.instance [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.159 247403 DEBUG oslo_concurrency.processutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:47:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:37 np0005603621 nova_compute[247399]: 2026-01-31 07:47:37.958 247403 DEBUG oslo_concurrency.processutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.798s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:47:38 np0005603621 nova_compute[247399]: 2026-01-31 07:47:38.055 247403 DEBUG oslo_concurrency.processutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:47:38
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:47:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:47:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/32661668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:47:38 np0005603621 nova_compute[247399]: 2026-01-31 07:47:38.527 247403 DEBUG oslo_concurrency.processutils [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:47:38 np0005603621 nova_compute[247399]: 2026-01-31 07:47:38.531 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <uuid>37069dd7-a48f-42ca-8238-bf5baa1fa605</uuid>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <name>instance-0000000e</name>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:name>tempest-MigrationsAdminTest-server-1143047912</nova:name>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:47:37</nova:creationTime>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:user uuid="8a59efd78e244f44a1c70650f82a2c50">tempest-MigrationsAdminTest-1820348317-project-member</nova:user>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <nova:project uuid="1627a71b855b4032b51e234e44a9d570">tempest-MigrationsAdminTest-1820348317</nova:project>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <entry name="serial">37069dd7-a48f-42ca-8238-bf5baa1fa605</entry>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <entry name="uuid">37069dd7-a48f-42ca-8238-bf5baa1fa605</entry>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/37069dd7-a48f-42ca-8238-bf5baa1fa605_disk">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/37069dd7-a48f-42ca-8238-bf5baa1fa605_disk.config">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/37069dd7-a48f-42ca-8238-bf5baa1fa605/console.log" append="off"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:47:38 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:47:38 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:47:38 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:47:38 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:47:38 np0005603621 nova_compute[247399]: 2026-01-31 07:47:38.617 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:47:38 np0005603621 nova_compute[247399]: 2026-01-31 07:47:38.618 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:47:38 np0005603621 nova_compute[247399]: 2026-01-31 07:47:38.618 247403 INFO nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Using config drive
Jan 31 02:47:38 np0005603621 systemd-machined[212769]: New machine qemu-5-instance-0000000e.
Jan 31 02:47:38 np0005603621 systemd[1]: Started Virtual Machine qemu-5-instance-0000000e.
Jan 31 02:47:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 392 KiB/s rd, 578 KiB/s wr, 83 op/s
Jan 31 02:47:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:39.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.058 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845660.0577712, 37069dd7-a48f-42ca-8238-bf5baa1fa605 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.058 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] VM Resumed (Lifecycle Event)
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.061 247403 DEBUG nova.compute.manager [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.063 247403 INFO nova.virt.libvirt.driver [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance running successfully.
Jan 31 02:47:40 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.066 247403 DEBUG nova.virt.libvirt.guest [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.066 247403 DEBUG nova.virt.libvirt.driver [None req-e45f7415-caeb-40ce-a0d7-77b67d913463 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.084 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.090 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.136 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.136 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845660.0607464, 37069dd7-a48f-42ca-8238-bf5baa1fa605 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.136 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] VM Started (Lifecycle Event)#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.165 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.169 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:47:40 np0005603621 nova_compute[247399]: 2026-01-31 07:47:40.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 196 KiB/s rd, 105 KiB/s wr, 62 op/s
Jan 31 02:47:41 np0005603621 nova_compute[247399]: 2026-01-31 07:47:41.274 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:41.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.739359) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845662739404, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 425, "num_deletes": 251, "total_data_size": 394341, "memory_usage": 403552, "flush_reason": "Manual Compaction"}
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845662830147, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 390742, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21685, "largest_seqno": 22109, "table_properties": {"data_size": 388238, "index_size": 604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6114, "raw_average_key_size": 18, "raw_value_size": 383229, "raw_average_value_size": 1179, "num_data_blocks": 27, "num_entries": 325, "num_filter_entries": 325, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845641, "oldest_key_time": 1769845641, "file_creation_time": 1769845662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 91145 microseconds, and 1929 cpu microseconds.
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:47:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 178 MiB data, 342 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 79 KiB/s wr, 85 op/s
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.830477) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 390742 bytes OK
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.830536) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.952027) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.952108) EVENT_LOG_v1 {"time_micros": 1769845662952100, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.952131) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 391727, prev total WAL file size 391727, number of live WAL files 2.
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.952685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(381KB)], [50(8919KB)]
Jan 31 02:47:42 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845662952719, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 9524148, "oldest_snapshot_seqno": -1}
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4497 keys, 7495754 bytes, temperature: kUnknown
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845663456674, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7495754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7465977, "index_size": 17443, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 114874, "raw_average_key_size": 25, "raw_value_size": 7384796, "raw_average_value_size": 1642, "num_data_blocks": 707, "num_entries": 4497, "num_filter_entries": 4497, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:47:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:43.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.457087) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7495754 bytes
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.932070) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.9 rd, 14.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 8.7 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(43.6) write-amplify(19.2) OK, records in: 5011, records dropped: 514 output_compression: NoCompression
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.932115) EVENT_LOG_v1 {"time_micros": 1769845663932099, "job": 26, "event": "compaction_finished", "compaction_time_micros": 504119, "compaction_time_cpu_micros": 13366, "output_level": 6, "num_output_files": 1, "total_output_size": 7495754, "num_input_records": 5011, "num_output_records": 4497, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845663932340, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845663933316, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:42.952615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.933356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.933362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.933364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.933366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:47:43.933368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:47:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 121 MiB data, 309 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 KiB/s wr, 125 op/s
Jan 31 02:47:45 np0005603621 nova_compute[247399]: 2026-01-31 07:47:45.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:45.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:45.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:46 np0005603621 nova_compute[247399]: 2026-01-31 07:47:46.275 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 121 MiB data, 309 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.7 KiB/s wr, 119 op/s
Jan 31 02:47:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:47.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021775907895961286 of space, bias 1.0, pg target 0.6532772368788385 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:47:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 KiB/s wr, 106 op/s
Jan 31 02:47:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:49.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:50 np0005603621 nova_compute[247399]: 2026-01-31 07:47:50.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 853 B/s wr, 97 op/s
Jan 31 02:47:51 np0005603621 nova_compute[247399]: 2026-01-31 07:47:51.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:51.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 KiB/s wr, 100 op/s
Jan 31 02:47:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 31 02:47:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:53.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 31 02:47:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.1 KiB/s wr, 79 op/s
Jan 31 02:47:54 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 31 02:47:55 np0005603621 nova_compute[247399]: 2026-01-31 07:47:55.327 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:55.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:47:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:55.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:55 np0005603621 podman[258247]: 2026-01-31 07:47:55.523651271 +0000 UTC m=+0.069029532 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 02:47:55 np0005603621 podman[258248]: 2026-01-31 07:47:55.57843349 +0000 UTC m=+0.124273386 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:47:56 np0005603621 nova_compute[247399]: 2026-01-31 07:47:56.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:47:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:47:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 112 KiB/s rd, 1.1 KiB/s wr, 35 op/s
Jan 31 02:47:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:57.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 938 KiB/s rd, 1.3 KiB/s wr, 43 op/s
Jan 31 02:47:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:47:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:47:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:47:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:47:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:47:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:47:59.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:00 np0005603621 nova_compute[247399]: 2026-01-31 07:48:00.330 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 12 KiB/s wr, 38 op/s
Jan 31 02:48:01 np0005603621 nova_compute[247399]: 2026-01-31 07:48:01.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:01.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 11 KiB/s wr, 33 op/s
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.035 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.035 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.075 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.149 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.149 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.156 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.156 247403 INFO nova.compute.claims [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.260 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:03.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:03.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:48:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701442324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.683 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.688 247403 DEBUG nova.compute.provider_tree [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.703 247403 DEBUG nova.scheduler.client.report [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.729 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.730 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.789 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.789 247403 DEBUG nova.network.neutron [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.808 247403 INFO nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.822 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.904 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.905 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.905 247403 INFO nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Creating image(s)#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.927 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.953 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.980 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:03 np0005603621 nova_compute[247399]: 2026-01-31 07:48:03.985 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.028 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.029 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.029 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.030 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.052 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.055 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.504 247403 DEBUG nova.network.neutron [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:48:04 np0005603621 nova_compute[247399]: 2026-01-31 07:48:04.504 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:48:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 12 KiB/s wr, 18 op/s
Jan 31 02:48:05 np0005603621 nova_compute[247399]: 2026-01-31 07:48:05.333 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:05.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:48:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:05.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:48:06 np0005603621 nova_compute[247399]: 2026-01-31 07:48:06.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:06 np0005603621 nova_compute[247399]: 2026-01-31 07:48:06.324 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.9 KiB/s wr, 15 op/s
Jan 31 02:48:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:07.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:07.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.217 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.492 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.493 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.494 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:48:08 np0005603621 nova_compute[247399]: 2026-01-31 07:48:08.494 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 11 KiB/s wr, 17 op/s
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.247 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:48:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:09.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.514 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:48:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:09.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.537 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.537 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.538 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.538 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.538 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.539 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.558 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.559 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.559 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.559 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:48:09 np0005603621 nova_compute[247399]: 2026-01-31 07:48:09.560 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:48:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1273520257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.031 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.091 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.092 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.242 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.243 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4593MB free_disk=20.942729949951172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.335 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.339 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 37069dd7-a48f-42ca-8238-bf5baa1fa605 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.339 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 39aa59fc-0e1c-4a01-860c-a7ff643e442f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.339 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.340 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:48:10 np0005603621 nova_compute[247399]: 2026-01-31 07:48:10.392 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 11 KiB/s wr, 9 op/s
Jan 31 02:48:11 np0005603621 nova_compute[247399]: 2026-01-31 07:48:11.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:11.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:12 np0005603621 nova_compute[247399]: 2026-01-31 07:48:12.109 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.717s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:12 np0005603621 nova_compute[247399]: 2026-01-31 07:48:12.117 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:48:12 np0005603621 nova_compute[247399]: 2026-01-31 07:48:12.137 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:48:12 np0005603621 nova_compute[247399]: 2026-01-31 07:48:12.163 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:48:12 np0005603621 nova_compute[247399]: 2026-01-31 07:48:12.164 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:12 np0005603621 nova_compute[247399]: 2026-01-31 07:48:12.767 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 8.712s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 138 MiB data, 312 MiB used, 21 GiB / 21 GiB avail; 263 KiB/s rd, 901 KiB/s wr, 19 op/s
Jan 31 02:48:13 np0005603621 nova_compute[247399]: 2026-01-31 07:48:13.018 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:13 np0005603621 nova_compute[247399]: 2026-01-31 07:48:13.018 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:13 np0005603621 nova_compute[247399]: 2026-01-31 07:48:13.025 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] resizing rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:48:13 np0005603621 nova_compute[247399]: 2026-01-31 07:48:13.373 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:13 np0005603621 nova_compute[247399]: 2026-01-31 07:48:13.374 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:13 np0005603621 nova_compute[247399]: 2026-01-31 07:48:13.375 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:48:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:13.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:13.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:48:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 19e431c6-8741-4a94-8626-e48a62c666a4 does not exist
Jan 31 02:48:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0f4624fe-3521-445c-84f9-ae50fb433c4d does not exist
Jan 31 02:48:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0db9000a-270e-4637-b639-e8cc5f2881a3 does not exist
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.132 247403 DEBUG nova.objects.instance [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'migration_context' on Instance uuid 39aa59fc-0e1c-4a01-860c-a7ff643e442f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.145 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.146 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Ensure instance console log exists: /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.146 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.146 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.146 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.148 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.152 247403 WARNING nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.156 247403 DEBUG nova.virt.libvirt.host [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.156 247403 DEBUG nova.virt.libvirt.host [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.160 247403 DEBUG nova.virt.libvirt.host [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.161 247403 DEBUG nova.virt.libvirt.host [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.161 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.162 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.162 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.162 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.162 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.163 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.163 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.163 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.163 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.163 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.164 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.164 247403 DEBUG nova.virt.hardware [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.166 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081450842' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4081450842' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:48:14 np0005603621 podman[258874]: 2026-01-31 07:48:14.419647053 +0000 UTC m=+0.096927881 container create 387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:48:14 np0005603621 podman[258874]: 2026-01-31 07:48:14.345533992 +0000 UTC m=+0.022814850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:48:14 np0005603621 systemd[1]: Started libpod-conmon-387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890.scope.
Jan 31 02:48:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3487857338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.620 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:14 np0005603621 podman[258874]: 2026-01-31 07:48:14.633876859 +0000 UTC m=+0.311157687 container init 387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:48:14 np0005603621 podman[258874]: 2026-01-31 07:48:14.639310781 +0000 UTC m=+0.316591609 container start 387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:48:14 np0005603621 gifted_lovelace[258890]: 167 167
Jan 31 02:48:14 np0005603621 systemd[1]: libpod-387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890.scope: Deactivated successfully.
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.648 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:14 np0005603621 nova_compute[247399]: 2026-01-31 07:48:14.654 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:14 np0005603621 podman[258874]: 2026-01-31 07:48:14.747480957 +0000 UTC m=+0.424761815 container attach 387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:48:14 np0005603621 podman[258874]: 2026-01-31 07:48:14.748626033 +0000 UTC m=+0.425906891 container died 387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:48:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:48:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 174 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 461 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Jan 31 02:48:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3fe7629e259dcf157093c23f6e78290e8d355df7aacfaf8d9ff7864aaee3b91b-merged.mount: Deactivated successfully.
Jan 31 02:48:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:48:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120295207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.072 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.074 247403 DEBUG nova.objects.instance [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39aa59fc-0e1c-4a01-860c-a7ff643e442f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.088 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <uuid>39aa59fc-0e1c-4a01-860c-a7ff643e442f</uuid>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <name>instance-0000000f</name>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:name>tempest-MigrationsAdminTest-server-426382963</nova:name>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:48:14</nova:creationTime>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:user uuid="8a59efd78e244f44a1c70650f82a2c50">tempest-MigrationsAdminTest-1820348317-project-member</nova:user>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <nova:project uuid="1627a71b855b4032b51e234e44a9d570">tempest-MigrationsAdminTest-1820348317</nova:project>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <entry name="serial">39aa59fc-0e1c-4a01-860c-a7ff643e442f</entry>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <entry name="uuid">39aa59fc-0e1c-4a01-860c-a7ff643e442f</entry>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk.config">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/console.log" append="off"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:48:15 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:48:15 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:48:15 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:48:15 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.188 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.188 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.189 247403 INFO nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Using config drive#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.211 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.339 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:15 np0005603621 podman[258874]: 2026-01-31 07:48:15.437229231 +0000 UTC m=+1.114510059 container remove 387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:48:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:15.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:15 np0005603621 systemd[1]: libpod-conmon-387152ceced7f95ebdaacd8afc8d58b067ab3f41bc125c9b6249f79f6ee71890.scope: Deactivated successfully.
Jan 31 02:48:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.569 247403 INFO nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Creating config drive at /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/disk.config#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.572 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpk13_fuzm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:15 np0005603621 podman[258975]: 2026-01-31 07:48:15.566307066 +0000 UTC m=+0.018696941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:48:15 np0005603621 podman[258975]: 2026-01-31 07:48:15.676208657 +0000 UTC m=+0.128598512 container create ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.689 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpk13_fuzm" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.712 247403 DEBUG nova.storage.rbd_utils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:48:15 np0005603621 nova_compute[247399]: 2026-01-31 07:48:15.715 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/disk.config 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:15 np0005603621 systemd[1]: Started libpod-conmon-ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2.scope.
Jan 31 02:48:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:48:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527e4f2b9665d26f526aa6b7ce3465ffde67e0c7e7a654f2bbb9dc914b6f943c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527e4f2b9665d26f526aa6b7ce3465ffde67e0c7e7a654f2bbb9dc914b6f943c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527e4f2b9665d26f526aa6b7ce3465ffde67e0c7e7a654f2bbb9dc914b6f943c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527e4f2b9665d26f526aa6b7ce3465ffde67e0c7e7a654f2bbb9dc914b6f943c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/527e4f2b9665d26f526aa6b7ce3465ffde67e0c7e7a654f2bbb9dc914b6f943c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:15 np0005603621 podman[258975]: 2026-01-31 07:48:15.924516619 +0000 UTC m=+0.376906504 container init ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:48:15 np0005603621 podman[258975]: 2026-01-31 07:48:15.931849521 +0000 UTC m=+0.384239376 container start ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:48:16 np0005603621 podman[258975]: 2026-01-31 07:48:16.041325709 +0000 UTC m=+0.493715584 container attach ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:48:16 np0005603621 nova_compute[247399]: 2026-01-31 07:48:16.327 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 31 02:48:16 np0005603621 competent_merkle[259028]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:48:16 np0005603621 competent_merkle[259028]: --> relative data size: 1.0
Jan 31 02:48:16 np0005603621 competent_merkle[259028]: --> All data devices are unavailable
Jan 31 02:48:16 np0005603621 systemd[1]: libpod-ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2.scope: Deactivated successfully.
Jan 31 02:48:16 np0005603621 podman[258975]: 2026-01-31 07:48:16.707123155 +0000 UTC m=+1.159513010 container died ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 02:48:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 174 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 369 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 31 02:48:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 31 02:48:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 31 02:48:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-527e4f2b9665d26f526aa6b7ce3465ffde67e0c7e7a654f2bbb9dc914b6f943c-merged.mount: Deactivated successfully.
Jan 31 02:48:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:17.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:17 np0005603621 nova_compute[247399]: 2026-01-31 07:48:17.576 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:48:17.576 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:48:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:48:17.578 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:48:17 np0005603621 nova_compute[247399]: 2026-01-31 07:48:17.848 247403 DEBUG oslo_concurrency.processutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/disk.config 39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:17 np0005603621 nova_compute[247399]: 2026-01-31 07:48:17.849 247403 INFO nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Deleting local config drive /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/disk.config because it was imported into RBD.#033[00m
Jan 31 02:48:17 np0005603621 systemd-machined[212769]: New machine qemu-6-instance-0000000f.
Jan 31 02:48:17 np0005603621 podman[258975]: 2026-01-31 07:48:17.909260671 +0000 UTC m=+2.361650526 container remove ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:48:17 np0005603621 systemd[1]: Started Virtual Machine qemu-6-instance-0000000f.
Jan 31 02:48:17 np0005603621 systemd[1]: libpod-conmon-ab529df1a3f46adae81601c83ffaa612854ce09c96cada650104034644b7caa2.scope: Deactivated successfully.
Jan 31 02:48:18 np0005603621 podman[259214]: 2026-01-31 07:48:18.409788268 +0000 UTC m=+0.022373758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:48:18 np0005603621 podman[259214]: 2026-01-31 07:48:18.541930601 +0000 UTC m=+0.154516091 container create 7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:48:18 np0005603621 systemd[1]: Started libpod-conmon-7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a.scope.
Jan 31 02:48:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:48:18 np0005603621 podman[259214]: 2026-01-31 07:48:18.714873403 +0000 UTC m=+0.327458903 container init 7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:48:18 np0005603621 podman[259214]: 2026-01-31 07:48:18.720872462 +0000 UTC m=+0.333457932 container start 7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_raman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:48:18 np0005603621 vigorous_raman[259263]: 167 167
Jan 31 02:48:18 np0005603621 systemd[1]: libpod-7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a.scope: Deactivated successfully.
Jan 31 02:48:18 np0005603621 conmon[259263]: conmon 7396c22e028a2de79c37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a.scope/container/memory.events
Jan 31 02:48:18 np0005603621 podman[259214]: 2026-01-31 07:48:18.777376836 +0000 UTC m=+0.389962306 container attach 7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 31 02:48:18 np0005603621 podman[259214]: 2026-01-31 07:48:18.778185902 +0000 UTC m=+0.390771372 container died 7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_raman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.813 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845698.8130095, 39aa59fc-0e1c-4a01-860c-a7ff643e442f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.815 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.817 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.817 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.820 247403 INFO nova.virt.libvirt.driver [-] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance spawned successfully.#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.821 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.857 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.864 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.864 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.865 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.865 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:48:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 215 MiB data, 343 MiB used, 21 GiB / 21 GiB avail; 433 KiB/s rd, 4.3 MiB/s wr, 104 op/s
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.866 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.866 247403 DEBUG nova.virt.libvirt.driver [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.870 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.894 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.895 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845698.8138375, 39aa59fc-0e1c-4a01-860c-a7ff643e442f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.895 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] VM Started (Lifecycle Event)#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.938 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.942 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.956 247403 INFO nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Took 15.05 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.957 247403 DEBUG nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:48:18 np0005603621 nova_compute[247399]: 2026-01-31 07:48:18.991 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:48:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bfe3bbf1c77ac4727a9445cc8be4b8499d524f4ff452b0c91c48db23d6acb4f7-merged.mount: Deactivated successfully.
Jan 31 02:48:19 np0005603621 nova_compute[247399]: 2026-01-31 07:48:19.070 247403 INFO nova.compute.manager [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Took 15.95 seconds to build instance.#033[00m
Jan 31 02:48:19 np0005603621 nova_compute[247399]: 2026-01-31 07:48:19.105 247403 DEBUG oslo_concurrency.lockutils [None req-645bb197-fb2d-4c98-81de-1894469a5a13 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:19 np0005603621 podman[259214]: 2026-01-31 07:48:19.459288052 +0000 UTC m=+1.071873532 container remove 7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:48:19 np0005603621 systemd[1]: libpod-conmon-7396c22e028a2de79c371b402ffeee55f1b6a1e219b4f3e7bf7c3d6ea4704a6a.scope: Deactivated successfully.
Jan 31 02:48:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:19.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:19.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:19 np0005603621 podman[259297]: 2026-01-31 07:48:19.60046113 +0000 UTC m=+0.027644694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:48:19 np0005603621 podman[259297]: 2026-01-31 07:48:19.777954246 +0000 UTC m=+0.205137790 container create 940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_saha, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:48:20 np0005603621 systemd[1]: Started libpod-conmon-940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8.scope.
Jan 31 02:48:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:48:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b5a0752ef3fe2c2a65a8d3a3edec72a7d9153c5ba6baebdf28809c291c87906/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b5a0752ef3fe2c2a65a8d3a3edec72a7d9153c5ba6baebdf28809c291c87906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b5a0752ef3fe2c2a65a8d3a3edec72a7d9153c5ba6baebdf28809c291c87906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b5a0752ef3fe2c2a65a8d3a3edec72a7d9153c5ba6baebdf28809c291c87906/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:20 np0005603621 podman[259297]: 2026-01-31 07:48:20.249851539 +0000 UTC m=+0.677035133 container init 940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_saha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:48:20 np0005603621 podman[259297]: 2026-01-31 07:48:20.257059547 +0000 UTC m=+0.684243091 container start 940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:48:20 np0005603621 nova_compute[247399]: 2026-01-31 07:48:20.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:20 np0005603621 podman[259297]: 2026-01-31 07:48:20.457322171 +0000 UTC m=+0.884505735 container attach 940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:48:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 215 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 497 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Jan 31 02:48:21 np0005603621 goofy_saha[259314]: {
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:    "0": [
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:        {
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "devices": [
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "/dev/loop3"
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            ],
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "lv_name": "ceph_lv0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "lv_size": "7511998464",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "name": "ceph_lv0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "tags": {
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.cluster_name": "ceph",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.crush_device_class": "",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.encrypted": "0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.osd_id": "0",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.type": "block",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:                "ceph.vdo": "0"
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            },
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "type": "block",
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:            "vg_name": "ceph_vg0"
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:        }
Jan 31 02:48:21 np0005603621 goofy_saha[259314]:    ]
Jan 31 02:48:21 np0005603621 goofy_saha[259314]: }
Jan 31 02:48:21 np0005603621 systemd[1]: libpod-940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8.scope: Deactivated successfully.
Jan 31 02:48:21 np0005603621 podman[259297]: 2026-01-31 07:48:21.072004705 +0000 UTC m=+1.499188249 container died 940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_saha, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:48:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4b5a0752ef3fe2c2a65a8d3a3edec72a7d9153c5ba6baebdf28809c291c87906-merged.mount: Deactivated successfully.
Jan 31 02:48:21 np0005603621 podman[259297]: 2026-01-31 07:48:21.301775381 +0000 UTC m=+1.728958925 container remove 940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:48:21 np0005603621 systemd[1]: libpod-conmon-940e702c2a4f9b66afe71169d42c93b83b2bdcc85b3ad0294a2e2207c13f0fe8.scope: Deactivated successfully.
Jan 31 02:48:21 np0005603621 nova_compute[247399]: 2026-01-31 07:48:21.327 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:21.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:48:21.581 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:48:21 np0005603621 podman[259473]: 2026-01-31 07:48:21.824633804 +0000 UTC m=+0.070218869 container create 15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermat, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 02:48:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:21 np0005603621 podman[259473]: 2026-01-31 07:48:21.776185513 +0000 UTC m=+0.021770598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:48:21 np0005603621 systemd[1]: Started libpod-conmon-15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b.scope.
Jan 31 02:48:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:48:22 np0005603621 podman[259473]: 2026-01-31 07:48:22.02937362 +0000 UTC m=+0.274958695 container init 15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:48:22 np0005603621 podman[259473]: 2026-01-31 07:48:22.034097479 +0000 UTC m=+0.279682554 container start 15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:48:22 np0005603621 loving_fermat[259490]: 167 167
Jan 31 02:48:22 np0005603621 systemd[1]: libpod-15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b.scope: Deactivated successfully.
Jan 31 02:48:22 np0005603621 podman[259473]: 2026-01-31 07:48:22.16426279 +0000 UTC m=+0.409847875 container attach 15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:48:22 np0005603621 podman[259473]: 2026-01-31 07:48:22.164702703 +0000 UTC m=+0.410287758 container died 15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:48:22 np0005603621 nova_compute[247399]: 2026-01-31 07:48:22.673 247403 DEBUG oslo_concurrency.lockutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-39aa59fc-0e1c-4a01-860c-a7ff643e442f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:48:22 np0005603621 nova_compute[247399]: 2026-01-31 07:48:22.674 247403 DEBUG oslo_concurrency.lockutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-39aa59fc-0e1c-4a01-860c-a7ff643e442f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:48:22 np0005603621 nova_compute[247399]: 2026-01-31 07:48:22.674 247403 DEBUG nova.network.neutron [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:48:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 215 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 172 op/s
Jan 31 02:48:22 np0005603621 nova_compute[247399]: 2026-01-31 07:48:22.889 247403 DEBUG nova.network.neutron [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:48:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d76167debabe89df47a4cde8867b8204c0dda604e50f0c7d26637170686078b1-merged.mount: Deactivated successfully.
Jan 31 02:48:23 np0005603621 nova_compute[247399]: 2026-01-31 07:48:23.475 247403 DEBUG nova.network.neutron [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:48:23 np0005603621 nova_compute[247399]: 2026-01-31 07:48:23.488 247403 DEBUG oslo_concurrency.lockutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-39aa59fc-0e1c-4a01-860c-a7ff643e442f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:48:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:23.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:23.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:23 np0005603621 nova_compute[247399]: 2026-01-31 07:48:23.625 247403 DEBUG nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 02:48:23 np0005603621 nova_compute[247399]: 2026-01-31 07:48:23.626 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Creating file /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/bfdd7a5e99324e609c56872d6cfb4051.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 02:48:23 np0005603621 nova_compute[247399]: 2026-01-31 07:48:23.627 247403 DEBUG oslo_concurrency.processutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/bfdd7a5e99324e609c56872d6cfb4051.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:24 np0005603621 nova_compute[247399]: 2026-01-31 07:48:24.104 247403 DEBUG oslo_concurrency.processutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/bfdd7a5e99324e609c56872d6cfb4051.tmp" returned: 1 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:24 np0005603621 nova_compute[247399]: 2026-01-31 07:48:24.105 247403 DEBUG oslo_concurrency.processutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f/bfdd7a5e99324e609c56872d6cfb4051.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:48:24 np0005603621 nova_compute[247399]: 2026-01-31 07:48:24.105 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Creating directory /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 02:48:24 np0005603621 nova_compute[247399]: 2026-01-31 07:48:24.106 247403 DEBUG oslo_concurrency.processutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:48:24 np0005603621 podman[259473]: 2026-01-31 07:48:24.224716532 +0000 UTC m=+2.470301587 container remove 15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:48:24 np0005603621 systemd[1]: libpod-conmon-15fcc5fde4ba287ae8f4c329bcf372db31b2d2076f31460ac6202aeaad7b238b.scope: Deactivated successfully.
Jan 31 02:48:24 np0005603621 nova_compute[247399]: 2026-01-31 07:48:24.329 247403 DEBUG oslo_concurrency.processutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/39aa59fc-0e1c-4a01-860c-a7ff643e442f" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:48:24 np0005603621 nova_compute[247399]: 2026-01-31 07:48:24.334 247403 DEBUG nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:48:24 np0005603621 podman[259517]: 2026-01-31 07:48:24.323699798 +0000 UTC m=+0.024150174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:48:24 np0005603621 podman[259517]: 2026-01-31 07:48:24.687975542 +0000 UTC m=+0.388425928 container create 2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:48:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 215 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 221 op/s
Jan 31 02:48:25 np0005603621 nova_compute[247399]: 2026-01-31 07:48:25.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:25 np0005603621 systemd[1]: Started libpod-conmon-2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9.scope.
Jan 31 02:48:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:48:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcce523c2c5a777cf3f8c20fa6748bc364597eda794b2db64fe0368cbf76303a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcce523c2c5a777cf3f8c20fa6748bc364597eda794b2db64fe0368cbf76303a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcce523c2c5a777cf3f8c20fa6748bc364597eda794b2db64fe0368cbf76303a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcce523c2c5a777cf3f8c20fa6748bc364597eda794b2db64fe0368cbf76303a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:48:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:25.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:25.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:25 np0005603621 podman[259517]: 2026-01-31 07:48:25.682003855 +0000 UTC m=+1.382454241 container init 2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hamilton, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:48:25 np0005603621 podman[259517]: 2026-01-31 07:48:25.688724888 +0000 UTC m=+1.389175264 container start 2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:48:26 np0005603621 podman[259517]: 2026-01-31 07:48:26.005051218 +0000 UTC m=+1.705501684 container attach 2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hamilton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:48:26 np0005603621 nova_compute[247399]: 2026-01-31 07:48:26.329 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]: {
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:        "osd_id": 0,
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:        "type": "bluestore"
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]:    }
Jan 31 02:48:26 np0005603621 suspicious_hamilton[259534]: }
Jan 31 02:48:26 np0005603621 systemd[1]: libpod-2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9.scope: Deactivated successfully.
Jan 31 02:48:26 np0005603621 podman[259517]: 2026-01-31 07:48:26.48516442 +0000 UTC m=+2.185615016 container died 2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hamilton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 02:48:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 215 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 1.9 MiB/s wr, 221 op/s
Jan 31 02:48:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:27.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dcce523c2c5a777cf3f8c20fa6748bc364597eda794b2db64fe0368cbf76303a-merged.mount: Deactivated successfully.
Jan 31 02:48:28 np0005603621 podman[259517]: 2026-01-31 07:48:28.41671709 +0000 UTC m=+4.117167456 container remove 2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:48:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:48:28 np0005603621 podman[259555]: 2026-01-31 07:48:28.500944551 +0000 UTC m=+2.052543643 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:48:28 np0005603621 podman[259553]: 2026-01-31 07:48:28.50409166 +0000 UTC m=+2.055693192 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 02:48:28 np0005603621 systemd[1]: libpod-conmon-2312e5e99c46bb01ca8aeb2e3bb81c37acef730032a0069b0d010ed6d7ce24b9.scope: Deactivated successfully.
Jan 31 02:48:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:48:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:48:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:48:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 99586251-ece2-4d89-9c77-a1c334adabd8 does not exist
Jan 31 02:48:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d0a9e8bc-d1fc-435f-b489-f298a6ca26bb does not exist
Jan 31 02:48:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fd471a8a-8fb9-4cc3-80f8-35d9098ea151 does not exist
Jan 31 02:48:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 215 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 164 op/s
Jan 31 02:48:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:29.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:48:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:48:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:29.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:30 np0005603621 nova_compute[247399]: 2026-01-31 07:48:30.348 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:48:30.469 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:48:30.469 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:48:30.469 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 215 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 145 op/s
Jan 31 02:48:31 np0005603621 nova_compute[247399]: 2026-01-31 07:48:31.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:31.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:31.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 239 MiB data, 371 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Jan 31 02:48:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:33.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:33.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:34 np0005603621 nova_compute[247399]: 2026-01-31 07:48:34.495 247403 DEBUG nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:48:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 304 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.8 MiB/s wr, 158 op/s
Jan 31 02:48:35 np0005603621 ovn_controller[149152]: 2026-01-31T07:48:35Z|00047|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 31 02:48:35 np0005603621 nova_compute[247399]: 2026-01-31 07:48:35.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:35.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:35.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:36 np0005603621 nova_compute[247399]: 2026-01-31 07:48:36.332 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 304 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 150 KiB/s rd, 5.8 MiB/s wr, 84 op/s
Jan 31 02:48:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:48:38
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log']
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:48:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 309 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 500 KiB/s rd, 5.9 MiB/s wr, 118 op/s
Jan 31 02:48:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:39.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:40 np0005603621 nova_compute[247399]: 2026-01-31 07:48:40.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 313 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 516 KiB/s rd, 5.9 MiB/s wr, 124 op/s
Jan 31 02:48:41 np0005603621 nova_compute[247399]: 2026-01-31 07:48:41.334 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:41.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:41.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 314 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 518 KiB/s rd, 6.0 MiB/s wr, 131 op/s
Jan 31 02:48:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:43.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 349 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 518 KiB/s rd, 5.2 MiB/s wr, 138 op/s
Jan 31 02:48:45 np0005603621 nova_compute[247399]: 2026-01-31 07:48:45.357 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:45.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:45 np0005603621 nova_compute[247399]: 2026-01-31 07:48:45.544 247403 DEBUG nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:48:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:46 np0005603621 nova_compute[247399]: 2026-01-31 07:48:46.336 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 349 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 393 KiB/s rd, 1.6 MiB/s wr, 74 op/s
Jan 31 02:48:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:47.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:47.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008311232086357714 of space, bias 1.0, pg target 2.4933696259073144 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 02:48:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 374 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 453 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 31 02:48:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:49.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:48:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:49.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:48:50 np0005603621 nova_compute[247399]: 2026-01-31 07:48:50.358 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 374 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 180 KiB/s rd, 1.9 MiB/s wr, 64 op/s
Jan 31 02:48:51 np0005603621 nova_compute[247399]: 2026-01-31 07:48:51.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:51.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:51.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 374 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 917 KiB/s rd, 1.9 MiB/s wr, 84 op/s
Jan 31 02:48:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:53.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 374 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 129 op/s
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:55 np0005603621 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 31 02:48:55 np0005603621 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Consumed 13.414s CPU time.
Jan 31 02:48:55 np0005603621 systemd-machined[212769]: Machine qemu-6-instance-0000000f terminated.
Jan 31 02:48:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.583 247403 INFO nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance shutdown successfully after 31 seconds.#033[00m
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.589 247403 INFO nova.virt.libvirt.driver [-] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance destroyed successfully.#033[00m
Jan 31 02:48:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.592 247403 DEBUG nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.593 247403 DEBUG nova.virt.libvirt.driver [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.795 247403 DEBUG oslo_concurrency.lockutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.795 247403 DEBUG oslo_concurrency.lockutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:48:55 np0005603621 nova_compute[247399]: 2026-01-31 07:48:55.795 247403 DEBUG oslo_concurrency.lockutils [None req-1b02303c-227a-4379-a54f-519c12231a78 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:48:56 np0005603621 nova_compute[247399]: 2026-01-31 07:48:56.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:48:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 374 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 498 KiB/s wr, 102 op/s
Jan 31 02:48:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:57.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:48:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 316 MiB data, 426 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 503 KiB/s wr, 162 op/s
Jan 31 02:48:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:48:59.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:48:59 np0005603621 podman[259776]: 2026-01-31 07:48:59.547782954 +0000 UTC m=+0.094593728 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:48:59 np0005603621 podman[259777]: 2026-01-31 07:48:59.551086299 +0000 UTC m=+0.097937764 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:48:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:48:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:48:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:48:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 31 02:49:00 np0005603621 nova_compute[247399]: 2026-01-31 07:49:00.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 31 02:49:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 31 02:49:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 295 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 4.5 MiB/s rd, 47 KiB/s wr, 194 op/s
Jan 31 02:49:01 np0005603621 nova_compute[247399]: 2026-01-31 07:49:01.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:01.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:01.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 295 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 66 KiB/s wr, 182 op/s
Jan 31 02:49:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:03.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:04.592 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:49:04 np0005603621 nova_compute[247399]: 2026-01-31 07:49:04.593 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:04.595 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:49:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 295 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 154 KiB/s wr, 125 op/s
Jan 31 02:49:05 np0005603621 nova_compute[247399]: 2026-01-31 07:49:05.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:05.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:06 np0005603621 nova_compute[247399]: 2026-01-31 07:49:06.344 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 298 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 386 KiB/s wr, 144 op/s
Jan 31 02:49:07 np0005603621 nova_compute[247399]: 2026-01-31 07:49:07.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:07.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:07.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.676 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.676 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.677 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:49:08 np0005603621 nova_compute[247399]: 2026-01-31 07:49:08.677 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 300 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 838 KiB/s rd, 1.0 MiB/s wr, 99 op/s
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.243 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:49:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:09.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.916 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.945 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.946 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.946 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.947 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.947 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.990 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.991 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.991 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.991 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:49:09 np0005603621 nova_compute[247399]: 2026-01-31 07:49:09.991 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.108 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.109 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.109 247403 DEBUG nova.compute.manager [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Going to confirm migration 2 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.345 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-39aa59fc-0e1c-4a01-860c-a7ff643e442f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.345 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-39aa59fc-0e1c-4a01-860c-a7ff643e442f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.346 247403 DEBUG nova.network.neutron [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.346 247403 DEBUG nova.objects.instance [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'info_cache' on Instance uuid 39aa59fc-0e1c-4a01-860c-a7ff643e442f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.369 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.545 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845735.5447261, 39aa59fc-0e1c-4a01-860c-a7ff643e442f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.546 247403 INFO nova.compute.manager [-] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.564 247403 DEBUG nova.compute.manager [None req-01c94336-295a-43ef-860f-d1242a3b04e1 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.569 247403 DEBUG nova.compute.manager [None req-01c94336-295a-43ef-860f-d1242a3b04e1 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.595 247403 INFO nova.compute.manager [None req-01c94336-295a-43ef-860f-d1242a3b04e1 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 31 02:49:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:10.597 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542705085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.734 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.743s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.819 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.820 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.822 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.823 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.975 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.977 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4633MB free_disk=20.84790802001953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.977 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:10 np0005603621 nova_compute[247399]: 2026-01-31 07:49:10.978 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.020 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration for instance 39aa59fc-0e1c-4a01-860c-a7ff643e442f refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.051 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Updating resource usage from migration 73e0184c-1180-4ca4-8c3b-7550a25526aa#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.051 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Starting to track outgoing migration 73e0184c-1180-4ca4-8c3b-7550a25526aa with flavor a01eb4f0-fd80-416b-a750-75de320394d8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 31 02:49:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 305 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 206 KiB/s rd, 1.4 MiB/s wr, 79 op/s
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.087 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 37069dd7-a48f-42ca-8238-bf5baa1fa605 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.088 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration 73e0184c-1180-4ca4-8c3b-7550a25526aa is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.088 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.088 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.156 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.346 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.413 247403 DEBUG nova.network.neutron [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:49:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:11.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2291695959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.594 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.601 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:49:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.619 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.644 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.645 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.896 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:11 np0005603621 nova_compute[247399]: 2026-01-31 07:49:11.897 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:12 np0005603621 nova_compute[247399]: 2026-01-31 07:49:12.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 329 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 145 op/s
Jan 31 02:49:13 np0005603621 nova_compute[247399]: 2026-01-31 07:49:13.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:13 np0005603621 nova_compute[247399]: 2026-01-31 07:49:13.300 247403 DEBUG nova.network.neutron [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 39aa59fc-0e1c-4a01-860c-a7ff643e442f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:13 np0005603621 nova_compute[247399]: 2026-01-31 07:49:13.314 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-39aa59fc-0e1c-4a01-860c-a7ff643e442f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:13 np0005603621 nova_compute[247399]: 2026-01-31 07:49:13.315 247403 DEBUG nova.objects.instance [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'migration_context' on Instance uuid 39aa59fc-0e1c-4a01-860c-a7ff643e442f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:13.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:13 np0005603621 nova_compute[247399]: 2026-01-31 07:49:13.573 247403 DEBUG nova.storage.rbd_utils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] removing snapshot(nova-resize) on rbd image(39aa59fc-0e1c-4a01-860c-a7ff643e442f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:49:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:14 np0005603621 nova_compute[247399]: 2026-01-31 07:49:14.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:49:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:49:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842026651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:49:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:49:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1842026651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:49:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/805675973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 333 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 159 op/s
Jan 31 02:49:15 np0005603621 nova_compute[247399]: 2026-01-31 07:49:15.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:15.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:15.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:16 np0005603621 nova_compute[247399]: 2026-01-31 07:49:16.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 31 02:49:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 31 02:49:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 31 02:49:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 346 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 196 op/s
Jan 31 02:49:17 np0005603621 nova_compute[247399]: 2026-01-31 07:49:17.486 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:17 np0005603621 nova_compute[247399]: 2026-01-31 07:49:17.486 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:17.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:17 np0005603621 nova_compute[247399]: 2026-01-31 07:49:17.941 247403 DEBUG oslo_concurrency.processutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3652818402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:18 np0005603621 nova_compute[247399]: 2026-01-31 07:49:18.386 247403 DEBUG oslo_concurrency.processutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:18 np0005603621 nova_compute[247399]: 2026-01-31 07:49:18.391 247403 DEBUG nova.compute.provider_tree [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:49:18 np0005603621 nova_compute[247399]: 2026-01-31 07:49:18.425 247403 DEBUG nova.scheduler.client.report [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:49:18 np0005603621 nova_compute[247399]: 2026-01-31 07:49:18.579 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:18 np0005603621 nova_compute[247399]: 2026-01-31 07:49:18.777 247403 INFO nova.scheduler.client.report [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Deleted allocation for migration 73e0184c-1180-4ca4-8c3b-7550a25526aa#033[00m
Jan 31 02:49:18 np0005603621 nova_compute[247399]: 2026-01-31 07:49:18.891 247403 DEBUG oslo_concurrency.lockutils [None req-76f59b9a-4f34-416f-94da-3513075ff88f 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "39aa59fc-0e1c-4a01-860c-a7ff643e442f" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 369 MiB data, 492 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.2 MiB/s wr, 207 op/s
Jan 31 02:49:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:19.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:19.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:20 np0005603621 nova_compute[247399]: 2026-01-31 07:49:20.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 390 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 5.6 MiB/s wr, 214 op/s
Jan 31 02:49:21 np0005603621 nova_compute[247399]: 2026-01-31 07:49:21.350 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:21.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:21.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 399 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.2 MiB/s wr, 174 op/s
Jan 31 02:49:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:23.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:23.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 400 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 166 op/s
Jan 31 02:49:25 np0005603621 nova_compute[247399]: 2026-01-31 07:49:25.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:25.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:25.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:26 np0005603621 nova_compute[247399]: 2026-01-31 07:49:26.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 403 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 196 op/s
Jan 31 02:49:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:27.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 31 02:49:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 31 02:49:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 31 02:49:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 409 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.0 MiB/s wr, 172 op/s
Jan 31 02:49:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:29.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:29.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:30 np0005603621 nova_compute[247399]: 2026-01-31 07:49:30.434 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:30.469 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:30.470 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:30.470 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:30 np0005603621 podman[260122]: 2026-01-31 07:49:30.51966791 +0000 UTC m=+0.077782967 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 02:49:30 np0005603621 podman[260121]: 2026-01-31 07:49:30.51996539 +0000 UTC m=+0.078262093 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:49:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 409 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 161 KiB/s wr, 185 op/s
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.432 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "620b3405-251d-4545-b523-faa35768224b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.432 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.568 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:49:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:31.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:31.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.745 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.747 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.758 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:49:31 np0005603621 nova_compute[247399]: 2026-01-31 07:49:31.759 247403 INFO nova.compute.claims [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 15f22ad0-d255-479b-8639-08395f290e47 does not exist
Jan 31 02:49:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 10097992-3e9c-4631-a2c5-063554e1c97e does not exist
Jan 31 02:49:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 107b3ff4-5db0-45b2-bcf2-a99d7de02a67 does not exist
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:49:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.154 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:32 np0005603621 podman[260365]: 2026-01-31 07:49:32.30421748 +0000 UTC m=+0.018144845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:49:32 np0005603621 podman[260365]: 2026-01-31 07:49:32.560066699 +0000 UTC m=+0.273994084 container create f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:49:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/195065631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.642 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.647 247403 DEBUG nova.compute.provider_tree [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:49:32 np0005603621 systemd[1]: Started libpod-conmon-f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44.scope.
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.677 247403 DEBUG nova.scheduler.client.report [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:49:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.743 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.744 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:49:32 np0005603621 podman[260365]: 2026-01-31 07:49:32.872126615 +0000 UTC m=+0.586054010 container init f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:49:32 np0005603621 podman[260365]: 2026-01-31 07:49:32.877874316 +0000 UTC m=+0.591801661 container start f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:49:32 np0005603621 systemd[1]: libpod-f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44.scope: Deactivated successfully.
Jan 31 02:49:32 np0005603621 kind_antonelli[260392]: 167 167
Jan 31 02:49:32 np0005603621 conmon[260392]: conmon f1b50263365e8076beac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44.scope/container/memory.events
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.899 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.901 247403 DEBUG nova.network.neutron [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.936 247403 INFO nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:49:32 np0005603621 nova_compute[247399]: 2026-01-31 07:49:32.967 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:49:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 382 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 68 KiB/s wr, 150 op/s
Jan 31 02:49:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.131 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.133 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.133 247403 INFO nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Creating image(s)#033[00m
Jan 31 02:49:33 np0005603621 podman[260365]: 2026-01-31 07:49:33.139587641 +0000 UTC m=+0.853514996 container attach f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:49:33 np0005603621 podman[260365]: 2026-01-31 07:49:33.140630564 +0000 UTC m=+0.854557929 container died f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.155 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.174 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.193 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.196 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.240 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.240 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.241 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.241 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.268 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.272 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 620b3405-251d-4545-b523-faa35768224b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:33.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:33.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.786 247403 DEBUG nova.network.neutron [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:49:33 np0005603621 nova_compute[247399]: 2026-01-31 07:49:33.786 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:49:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-320aac77baab86ca76538cef2bec9fce477d32b9478df448203ff6cd9a2e58f9-merged.mount: Deactivated successfully.
Jan 31 02:49:34 np0005603621 podman[260365]: 2026-01-31 07:49:34.29877061 +0000 UTC m=+2.012697945 container remove f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_antonelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:49:34 np0005603621 systemd[1]: libpod-conmon-f1b50263365e8076beacf0b853c6c75c5cc0539b7015fcb5f57d76d939c53e44.scope: Deactivated successfully.
Jan 31 02:49:34 np0005603621 podman[260510]: 2026-01-31 07:49:34.414516506 +0000 UTC m=+0.036490384 container create b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goodall, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:49:34 np0005603621 systemd[1]: Started libpod-conmon-b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5.scope.
Jan 31 02:49:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:49:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3f08cbbc01760ad76e6cb021a57168b691c9771a3e86c14e51719ce2e38bca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3f08cbbc01760ad76e6cb021a57168b691c9771a3e86c14e51719ce2e38bca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3f08cbbc01760ad76e6cb021a57168b691c9771a3e86c14e51719ce2e38bca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3f08cbbc01760ad76e6cb021a57168b691c9771a3e86c14e51719ce2e38bca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a3f08cbbc01760ad76e6cb021a57168b691c9771a3e86c14e51719ce2e38bca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:34 np0005603621 podman[260510]: 2026-01-31 07:49:34.39786672 +0000 UTC m=+0.019840608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:49:34 np0005603621 podman[260510]: 2026-01-31 07:49:34.529463536 +0000 UTC m=+0.151437414 container init b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:49:34 np0005603621 podman[260510]: 2026-01-31 07:49:34.535651612 +0000 UTC m=+0.157625480 container start b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:49:34 np0005603621 podman[260510]: 2026-01-31 07:49:34.546717341 +0000 UTC m=+0.168691229 container attach b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goodall, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:49:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 362 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 64 KiB/s wr, 134 op/s
Jan 31 02:49:35 np0005603621 dazzling_goodall[260527]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:49:35 np0005603621 dazzling_goodall[260527]: --> relative data size: 1.0
Jan 31 02:49:35 np0005603621 dazzling_goodall[260527]: --> All data devices are unavailable
Jan 31 02:49:35 np0005603621 systemd[1]: libpod-b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5.scope: Deactivated successfully.
Jan 31 02:49:35 np0005603621 podman[260510]: 2026-01-31 07:49:35.290868692 +0000 UTC m=+0.912842600 container died b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goodall, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:49:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8a3f08cbbc01760ad76e6cb021a57168b691c9771a3e86c14e51719ce2e38bca-merged.mount: Deactivated successfully.
Jan 31 02:49:35 np0005603621 podman[260510]: 2026-01-31 07:49:35.345278101 +0000 UTC m=+0.967251969 container remove b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_goodall, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:49:35 np0005603621 systemd[1]: libpod-conmon-b8bedc965071fc967b12cfdaacb6cb2371ba035ca2cd7d2b5681067e29f531a5.scope: Deactivated successfully.
Jan 31 02:49:35 np0005603621 nova_compute[247399]: 2026-01-31 07:49:35.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:35.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:35.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.794549029 +0000 UTC m=+0.032325711 container create 0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_varahamihira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:49:35 np0005603621 systemd[1]: Started libpod-conmon-0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8.scope.
Jan 31 02:49:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.859732908 +0000 UTC m=+0.097509620 container init 0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_varahamihira, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.864467307 +0000 UTC m=+0.102244009 container start 0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.867948367 +0000 UTC m=+0.105725079 container attach 0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:49:35 np0005603621 beautiful_varahamihira[260712]: 167 167
Jan 31 02:49:35 np0005603621 systemd[1]: libpod-0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8.scope: Deactivated successfully.
Jan 31 02:49:35 np0005603621 conmon[260712]: conmon 0700ce6fa3745345e2d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8.scope/container/memory.events
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.86932548 +0000 UTC m=+0.107102162 container died 0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.780477175 +0000 UTC m=+0.018253867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:49:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-efcfbbd84574dc27cc2eae6993eae0ca0c588f23107016324aa864be18cefd95-merged.mount: Deactivated successfully.
Jan 31 02:49:35 np0005603621 podman[260696]: 2026-01-31 07:49:35.909252552 +0000 UTC m=+0.147029234 container remove 0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:49:35 np0005603621 systemd[1]: libpod-conmon-0700ce6fa3745345e2d10341215cd203e9d1ffee1660752c44bd558c875593e8.scope: Deactivated successfully.
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.040015381 +0000 UTC m=+0.041026206 container create a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:49:36 np0005603621 systemd[1]: Started libpod-conmon-a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca.scope.
Jan 31 02:49:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:49:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb2e1a3f28559e050edfc89b05ff3450ddd3bc5a9904ccfdf4d65b46e8b3faa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb2e1a3f28559e050edfc89b05ff3450ddd3bc5a9904ccfdf4d65b46e8b3faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb2e1a3f28559e050edfc89b05ff3450ddd3bc5a9904ccfdf4d65b46e8b3faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb2e1a3f28559e050edfc89b05ff3450ddd3bc5a9904ccfdf4d65b46e8b3faa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.110406344 +0000 UTC m=+0.111417209 container init a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.116243469 +0000 UTC m=+0.117254274 container start a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.020222636 +0000 UTC m=+0.021233451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.11913702 +0000 UTC m=+0.120147855 container attach a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:49:36 np0005603621 nova_compute[247399]: 2026-01-31 07:49:36.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:36 np0005603621 nova_compute[247399]: 2026-01-31 07:49:36.708 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 620b3405-251d-4545-b523-faa35768224b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:36 np0005603621 nova_compute[247399]: 2026-01-31 07:49:36.776 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] resizing rbd image 620b3405-251d-4545-b523-faa35768224b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]: {
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:    "0": [
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:        {
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "devices": [
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "/dev/loop3"
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            ],
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "lv_name": "ceph_lv0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "lv_size": "7511998464",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "name": "ceph_lv0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "tags": {
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.cluster_name": "ceph",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.crush_device_class": "",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.encrypted": "0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.osd_id": "0",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.type": "block",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:                "ceph.vdo": "0"
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            },
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "type": "block",
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:            "vg_name": "ceph_vg0"
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:        }
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]:    ]
Jan 31 02:49:36 np0005603621 flamboyant_wozniak[260751]: }
Jan 31 02:49:36 np0005603621 systemd[1]: libpod-a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca.scope: Deactivated successfully.
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.826263263 +0000 UTC m=+0.827274068 container died a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wozniak, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:49:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7eb2e1a3f28559e050edfc89b05ff3450ddd3bc5a9904ccfdf4d65b46e8b3faa-merged.mount: Deactivated successfully.
Jan 31 02:49:36 np0005603621 podman[260735]: 2026-01-31 07:49:36.871211622 +0000 UTC m=+0.872222427 container remove a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 02:49:36 np0005603621 systemd[1]: libpod-conmon-a0939eaaf4e82bd258022660002bac55302352a47686e27f72a27ad7b3e630ca.scope: Deactivated successfully.
Jan 31 02:49:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 375 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 724 KiB/s wr, 110 op/s
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.306825489 +0000 UTC m=+0.030248786 container create 10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_galois, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:49:37 np0005603621 systemd[1]: Started libpod-conmon-10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e.scope.
Jan 31 02:49:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.37018487 +0000 UTC m=+0.093608187 container init 10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.375162567 +0000 UTC m=+0.098585864 container start 10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_galois, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 02:49:37 np0005603621 stoic_galois[260980]: 167 167
Jan 31 02:49:37 np0005603621 systemd[1]: libpod-10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e.scope: Deactivated successfully.
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.380875797 +0000 UTC m=+0.104299114 container attach 10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.381235899 +0000 UTC m=+0.104659196 container died 10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.295014776 +0000 UTC m=+0.018438093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:49:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-48fd8d35abebac86e09a353536d5340ed642b08cfc7e1e11c0a0837c19c513c3-merged.mount: Deactivated successfully.
Jan 31 02:49:37 np0005603621 podman[260963]: 2026-01-31 07:49:37.411108293 +0000 UTC m=+0.134531580 container remove 10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_galois, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:49:37 np0005603621 systemd[1]: libpod-conmon-10d5fd1000549300f0ddb9b3ed6a3cfe9c19ea8aa67af6a5a26dbfed19bce01e.scope: Deactivated successfully.
Jan 31 02:49:37 np0005603621 podman[261004]: 2026-01-31 07:49:37.517973447 +0000 UTC m=+0.033386115 container create c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_snyder, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 02:49:37 np0005603621 systemd[1]: Started libpod-conmon-c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f.scope.
Jan 31 02:49:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:49:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb7fe13d9a5293b32f24ce65872d87cea1c485969a2f48d0122493009b56d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb7fe13d9a5293b32f24ce65872d87cea1c485969a2f48d0122493009b56d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb7fe13d9a5293b32f24ce65872d87cea1c485969a2f48d0122493009b56d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22cb7fe13d9a5293b32f24ce65872d87cea1c485969a2f48d0122493009b56d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:49:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:37.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:37 np0005603621 podman[261004]: 2026-01-31 07:49:37.582129334 +0000 UTC m=+0.097542002 container init c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_snyder, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:49:37 np0005603621 podman[261004]: 2026-01-31 07:49:37.587369439 +0000 UTC m=+0.102782107 container start c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_snyder, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:49:37 np0005603621 podman[261004]: 2026-01-31 07:49:37.590862999 +0000 UTC m=+0.106275687 container attach c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:49:37 np0005603621 podman[261004]: 2026-01-31 07:49:37.502742747 +0000 UTC m=+0.018155435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:49:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:37.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.010 247403 DEBUG nova.objects.instance [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'migration_context' on Instance uuid 620b3405-251d-4545-b523-faa35768224b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.051 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.051 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Ensure instance console log exists: /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.052 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.052 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.052 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.054 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.058 247403 WARNING nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.065 247403 DEBUG nova.virt.libvirt.host [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.065 247403 DEBUG nova.virt.libvirt.host [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.068 247403 DEBUG nova.virt.libvirt.host [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.068 247403 DEBUG nova.virt.libvirt.host [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.070 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.070 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:49:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='800ab596-5263-4c34-bc7e-82584b53237c',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-532153092',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.070 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.071 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.071 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.071 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.071 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.072 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.072 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.072 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.072 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.072 247403 DEBUG nova.virt.hardware [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.075 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]: {
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:        "osd_id": 0,
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:        "type": "bluestore"
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]:    }
Jan 31 02:49:38 np0005603621 wizardly_snyder[261021]: }
Jan 31 02:49:38 np0005603621 systemd[1]: libpod-c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f.scope: Deactivated successfully.
Jan 31 02:49:38 np0005603621 podman[261004]: 2026-01-31 07:49:38.403030489 +0000 UTC m=+0.918443157 container died c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 02:49:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f22cb7fe13d9a5293b32f24ce65872d87cea1c485969a2f48d0122493009b56d-merged.mount: Deactivated successfully.
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:49:38 np0005603621 podman[261004]: 2026-01-31 07:49:38.453225404 +0000 UTC m=+0.968638072 container remove c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:49:38
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'vms', '.mgr', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:49:38 np0005603621 systemd[1]: libpod-conmon-c2277c9d559f0e292dbf6a08091e9cbe47499d28b8e90478758d1fd23f6e485f.scope: Deactivated successfully.
Jan 31 02:49:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:49:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:49:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2278088923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.546 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.571 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.574 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:49:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:49:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:49:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952004307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.993 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:38 np0005603621 nova_compute[247399]: 2026-01-31 07:49:38.995 247403 DEBUG nova.objects.instance [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'pci_devices' on Instance uuid 620b3405-251d-4545-b523-faa35768224b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.067 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <uuid>620b3405-251d-4545-b523-faa35768224b</uuid>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <name>instance-00000012</name>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:name>tempest-MigrationsAdminTest-server-376445714</nova:name>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:49:38</nova:creationTime>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:flavor name="tempest-test_resize_flavor_-532153092">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:user uuid="8a59efd78e244f44a1c70650f82a2c50">tempest-MigrationsAdminTest-1820348317-project-member</nova:user>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <nova:project uuid="1627a71b855b4032b51e234e44a9d570">tempest-MigrationsAdminTest-1820348317</nova:project>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <entry name="serial">620b3405-251d-4545-b523-faa35768224b</entry>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <entry name="uuid">620b3405-251d-4545-b523-faa35768224b</entry>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/620b3405-251d-4545-b523-faa35768224b_disk">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/620b3405-251d-4545-b523-faa35768224b_disk.config">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/console.log" append="off"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:49:39 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:49:39 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:49:39 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:49:39 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:49:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 393 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 84 op/s
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.263 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.263 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.264 247403 INFO nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Using config drive#033[00m
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.292 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:49:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:39.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:39.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b9dc8633-000f-404a-b89b-8f1025424e9c does not exist
Jan 31 02:49:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 53388a97-8a2b-4cff-bce2-ee14e71c2062 does not exist
Jan 31 02:49:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9a01fdf4-722e-4a74-b931-1e3538e506a2 does not exist
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.945 247403 INFO nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Creating config drive at /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/disk.config#033[00m
Jan 31 02:49:39 np0005603621 nova_compute[247399]: 2026-01-31 07:49:39.949 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn4xuuqhj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:40 np0005603621 nova_compute[247399]: 2026-01-31 07:49:40.064 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn4xuuqhj" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:40 np0005603621 nova_compute[247399]: 2026-01-31 07:49:40.093 247403 DEBUG nova.storage.rbd_utils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rbd image 620b3405-251d-4545-b523-faa35768224b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:49:40 np0005603621 nova_compute[247399]: 2026-01-31 07:49:40.096 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/disk.config 620b3405-251d-4545-b523-faa35768224b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:40 np0005603621 nova_compute[247399]: 2026-01-31 07:49:40.440 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 408 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 31 02:49:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:49:41 np0005603621 nova_compute[247399]: 2026-01-31 07:49:41.358 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:41.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.303 247403 DEBUG oslo_concurrency.processutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/disk.config 620b3405-251d-4545-b523-faa35768224b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.304 247403 INFO nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Deleting local config drive /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/disk.config because it was imported into RBD.#033[00m
Jan 31 02:49:42 np0005603621 systemd-machined[212769]: New machine qemu-7-instance-00000012.
Jan 31 02:49:42 np0005603621 systemd[1]: Started Virtual Machine qemu-7-instance-00000012.
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.867 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845782.8669543, 620b3405-251d-4545-b523-faa35768224b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.867 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.870 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.870 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.873 247403 INFO nova.virt.libvirt.driver [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance spawned successfully.#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.873 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.938 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:42 np0005603621 nova_compute[247399]: 2026-01-31 07:49:42.941 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.067 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.068 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.068 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.068 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.069 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.069 247403 DEBUG nova.virt.libvirt.driver [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:49:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 409 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 02:49:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.138 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.138 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845782.8694556, 620b3405-251d-4545-b523-faa35768224b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.139 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] VM Started (Lifecycle Event)#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.184 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.186 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.202 247403 INFO nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Took 10.07 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.202 247403 DEBUG nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.217 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.286 247403 INFO nova.compute.manager [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Took 11.59 seconds to build instance.#033[00m
Jan 31 02:49:43 np0005603621 nova_compute[247399]: 2026-01-31 07:49:43.338 247403 DEBUG oslo_concurrency.lockutils [None req-729caec8-4b8e-45ef-96f4-ee441e5f10d7 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:49:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:43.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:43.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 457 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 116 op/s
Jan 31 02:49:45 np0005603621 nova_compute[247399]: 2026-01-31 07:49:45.445 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:45.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:46 np0005603621 nova_compute[247399]: 2026-01-31 07:49:46.359 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 472 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.2 MiB/s wr, 129 op/s
Jan 31 02:49:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:49:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5263 writes, 23K keys, 5260 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 5263 writes, 5260 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1481 writes, 6677 keys, 1479 commit groups, 1.0 writes per commit group, ingest: 10.20 MB, 0.02 MB/s#012Interval WAL: 1481 writes, 1479 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.6      0.53              0.07        13    0.041       0      0       0.0       0.0#012  L6      1/0    7.15 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.7     54.9     45.9      2.18              0.24        12    0.182     56K   6323       0.0       0.0#012 Sum      1/0    7.15 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.7     44.2     46.8      2.71              0.30        25    0.108     56K   6323       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.0     31.2     30.9      1.89              0.12        12    0.158     29K   3043       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     54.9     45.9      2.18              0.24        12    0.182     56K   6323       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.9      0.53              0.07        12    0.044       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.026, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 2.7 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 10.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(584,9.61 MB,3.16255%) FilterBlock(26,161.92 KB,0.0520154%) IndexBlock(26,303.95 KB,0.0976412%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009668750290805881 of space, bias 1.0, pg target 2.9006250872417643 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0013427961567094734 of space, bias 1.0, pg target 0.40015325469942303 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:49:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 02:49:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 501 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.8 MiB/s wr, 166 op/s
Jan 31 02:49:49 np0005603621 nova_compute[247399]: 2026-01-31 07:49:49.531 247403 DEBUG oslo_concurrency.lockutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:49:49 np0005603621 nova_compute[247399]: 2026-01-31 07:49:49.532 247403 DEBUG oslo_concurrency.lockutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:49:49 np0005603621 nova_compute[247399]: 2026-01-31 07:49:49.532 247403 DEBUG nova.network.neutron [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:49:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:49.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:49.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:50 np0005603621 nova_compute[247399]: 2026-01-31 07:49:50.283 247403 DEBUG nova.network.neutron [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:49:50 np0005603621 nova_compute[247399]: 2026-01-31 07:49:50.448 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:50 np0005603621 nova_compute[247399]: 2026-01-31 07:49:50.814 247403 DEBUG nova.network.neutron [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:49:50 np0005603621 nova_compute[247399]: 2026-01-31 07:49:50.849 247403 DEBUG oslo_concurrency.lockutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.011 247403 DEBUG nova.virt.libvirt.driver [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.012 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Creating file /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/786c7e50352342b19cd51dc431b3746b.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.012 247403 DEBUG oslo_concurrency.processutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/786c7e50352342b19cd51dc431b3746b.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 501 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 152 op/s
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.388 247403 DEBUG oslo_concurrency.processutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/786c7e50352342b19cd51dc431b3746b.tmp" returned: 1 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.389 247403 DEBUG oslo_concurrency.processutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/786c7e50352342b19cd51dc431b3746b.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.389 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Creating directory /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.390 247403 DEBUG oslo_concurrency.processutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.585 247403 DEBUG oslo_concurrency.processutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:49:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:51.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:51 np0005603621 nova_compute[247399]: 2026-01-31 07:49:51.593 247403 DEBUG nova.virt.libvirt.driver [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:49:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:51.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:52.859 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:49:52 np0005603621 nova_compute[247399]: 2026-01-31 07:49:52.859 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:52.861 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:49:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 501 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Jan 31 02:49:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:49:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2331395427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:49:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:53.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:53.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:49:54.864 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:49:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 502 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 119 op/s
Jan 31 02:49:55 np0005603621 nova_compute[247399]: 2026-01-31 07:49:55.450 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:55.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:49:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:55.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:49:56 np0005603621 nova_compute[247399]: 2026-01-31 07:49:56.363 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:49:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 503 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 82 op/s
Jan 31 02:49:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:57.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:49:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 519 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.0 MiB/s wr, 85 op/s
Jan 31 02:49:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:49:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:49:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:49:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:49:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:49:59.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 02:50:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 02:50:00 np0005603621 nova_compute[247399]: 2026-01-31 07:50:00.453 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 527 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 31 02:50:01 np0005603621 nova_compute[247399]: 2026-01-31 07:50:01.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:01 np0005603621 podman[261363]: 2026-01-31 07:50:01.49983307 +0000 UTC m=+0.052422716 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 02:50:01 np0005603621 podman[261364]: 2026-01-31 07:50:01.52070255 +0000 UTC m=+0.072662906 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:50:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:01.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:01 np0005603621 nova_compute[247399]: 2026-01-31 07:50:01.633 247403 DEBUG nova.virt.libvirt.driver [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:50:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:01.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 533 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 2.2 MiB/s wr, 72 op/s
Jan 31 02:50:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:03.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:03.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 534 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 31 02:50:05 np0005603621 nova_compute[247399]: 2026-01-31 07:50:05.492 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:05.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:05 np0005603621 nova_compute[247399]: 2026-01-31 07:50:05.649 247403 INFO nova.virt.libvirt.driver [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance shutdown successfully after 14 seconds.#033[00m
Jan 31 02:50:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:05.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:06 np0005603621 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 31 02:50:06 np0005603621 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000012.scope: Consumed 12.260s CPU time.
Jan 31 02:50:06 np0005603621 systemd-machined[212769]: Machine qemu-7-instance-00000012 terminated.
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.265 247403 INFO nova.virt.libvirt.driver [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance destroyed successfully.#033[00m
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.268 247403 DEBUG nova.virt.libvirt.driver [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.269 247403 DEBUG nova.virt.libvirt.driver [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.407 247403 DEBUG oslo_concurrency.lockutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "620b3405-251d-4545-b523-faa35768224b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.408 247403 DEBUG oslo_concurrency.lockutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:06 np0005603621 nova_compute[247399]: 2026-01-31 07:50:06.408 247403 DEBUG oslo_concurrency.lockutils [None req-344d9d3c-aed6-43d8-84d1-0b3900219d62 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 534 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 102 op/s
Jan 31 02:50:07 np0005603621 nova_compute[247399]: 2026-01-31 07:50:07.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:07 np0005603621 nova_compute[247399]: 2026-01-31 07:50:07.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:50:07 np0005603621 nova_compute[247399]: 2026-01-31 07:50:07.503 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:50:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:07.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:07.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:50:08 np0005603621 nova_compute[247399]: 2026-01-31 07:50:08.504 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 534 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 143 op/s
Jan 31 02:50:09 np0005603621 nova_compute[247399]: 2026-01-31 07:50:09.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:09 np0005603621 nova_compute[247399]: 2026-01-31 07:50:09.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:50:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:09.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:09.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:10 np0005603621 nova_compute[247399]: 2026-01-31 07:50:10.493 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 534 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 391 KiB/s wr, 182 op/s
Jan 31 02:50:11 np0005603621 nova_compute[247399]: 2026-01-31 07:50:11.368 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:11.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:11.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:11 np0005603621 nova_compute[247399]: 2026-01-31 07:50:11.904 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:13 np0005603621 nova_compute[247399]: 2026-01-31 07:50:13.078 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:13 np0005603621 nova_compute[247399]: 2026-01-31 07:50:13.080 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:50:13 np0005603621 nova_compute[247399]: 2026-01-31 07:50:13.081 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:50:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 534 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 133 KiB/s wr, 173 op/s
Jan 31 02:50:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:13.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 534 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 147 op/s
Jan 31 02:50:15 np0005603621 nova_compute[247399]: 2026-01-31 07:50:15.497 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:15 np0005603621 nova_compute[247399]: 2026-01-31 07:50:15.554 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:50:15 np0005603621 nova_compute[247399]: 2026-01-31 07:50:15.555 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:50:15 np0005603621 nova_compute[247399]: 2026-01-31 07:50:15.555 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:50:15 np0005603621 nova_compute[247399]: 2026-01-31 07:50:15.556 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:15.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:15.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:16 np0005603621 nova_compute[247399]: 2026-01-31 07:50:16.371 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 535 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 43 KiB/s wr, 145 op/s
Jan 31 02:50:17 np0005603621 nova_compute[247399]: 2026-01-31 07:50:17.533 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:50:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:17.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:17.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 559 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.5 MiB/s wr, 140 op/s
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.122 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:50:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.501 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.501 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.501 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.501 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.502 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.502 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.502 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.502 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.502 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:19.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:19.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.804 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.804 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.805 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:50:19 np0005603621 nova_compute[247399]: 2026-01-31 07:50:19.806 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 31 02:50:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 31 02:50:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:50:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564601322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:50:20 np0005603621 nova_compute[247399]: 2026-01-31 07:50:20.219 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:20 np0005603621 nova_compute[247399]: 2026-01-31 07:50:20.500 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 595 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 4.7 MiB/s wr, 98 op/s
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.167 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.168 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.171 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.172 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.263 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845806.262702, 620b3405-251d-4545-b523-faa35768224b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.264 247403 INFO nova.compute.manager [-] [instance: 620b3405-251d-4545-b523-faa35768224b] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.299 247403 DEBUG nova.compute.manager [None req-d57d6715-4948-4cc7-956d-104a2694b01a - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.303 247403 DEBUG nova.compute.manager [None req-d57d6715-4948-4cc7-956d-104a2694b01a - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.308 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.309 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4538MB free_disk=20.75467300415039GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.309 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.310 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.342 247403 INFO nova.compute.manager [None req-d57d6715-4948-4cc7-956d-104a2694b01a - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.368 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration for instance 620b3405-251d-4545-b523-faa35768224b refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.393 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating resource usage from migration e0f17e23-e3f6-4dbf-9bcf-e53ad1ea7163#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.393 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Starting to track outgoing migration e0f17e23-e3f6-4dbf-9bcf-e53ad1ea7163 with flavor 800ab596-5263-4c34-bc7e-82584b53237c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.554 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 37069dd7-a48f-42ca-8238-bf5baa1fa605 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.554 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration e0f17e23-e3f6-4dbf-9bcf-e53ad1ea7163 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.554 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.554 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:50:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:21.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.618 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:50:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:21.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.752 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.752 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.783 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.814 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:50:21 np0005603621 nova_compute[247399]: 2026-01-31 07:50:21.868 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:50:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243511614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:50:22 np0005603621 nova_compute[247399]: 2026-01-31 07:50:22.265 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:22 np0005603621 nova_compute[247399]: 2026-01-31 07:50:22.270 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:50:22 np0005603621 nova_compute[247399]: 2026-01-31 07:50:22.297 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:50:22 np0005603621 nova_compute[247399]: 2026-01-31 07:50:22.340 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:50:22 np0005603621 nova_compute[247399]: 2026-01-31 07:50:22.340 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 632 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 7.1 MiB/s wr, 132 op/s
Jan 31 02:50:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:23 np0005603621 nova_compute[247399]: 2026-01-31 07:50:23.456 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:23 np0005603621 nova_compute[247399]: 2026-01-31 07:50:23.456 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:23.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 632 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 826 KiB/s rd, 7.2 MiB/s wr, 191 op/s
Jan 31 02:50:25 np0005603621 nova_compute[247399]: 2026-01-31 07:50:25.503 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:26 np0005603621 nova_compute[247399]: 2026-01-31 07:50:26.374 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 637 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 7.2 MiB/s wr, 252 op/s
Jan 31 02:50:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:27.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 640 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.5 MiB/s wr, 296 op/s
Jan 31 02:50:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:50:30.470 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:50:30.471 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:50:30.471 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:30 np0005603621 nova_compute[247399]: 2026-01-31 07:50:30.505 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 646 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.3 MiB/s wr, 251 op/s
Jan 31 02:50:31 np0005603621 nova_compute[247399]: 2026-01-31 07:50:31.376 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:31.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:31.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:31 np0005603621 nova_compute[247399]: 2026-01-31 07:50:31.737 247403 INFO nova.compute.manager [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Swapping old allocation on dict_keys(['d7116329-87c2-469a-b33a-1e01daf74ceb']) held by migration e0f17e23-e3f6-4dbf-9bcf-e53ad1ea7163 for instance#033[00m
Jan 31 02:50:31 np0005603621 nova_compute[247399]: 2026-01-31 07:50:31.791 247403 DEBUG nova.scheduler.client.report [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Overwriting current allocation {'allocations': {'492dc482-9d1e-49ca-87f3-0104a8508b72': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 17}}, 'project_id': '1627a71b855b4032b51e234e44a9d570', 'user_id': '8a59efd78e244f44a1c70650f82a2c50', 'consumer_generation': 1} on consumer 620b3405-251d-4545-b523-faa35768224b move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Jan 31 02:50:32 np0005603621 nova_compute[247399]: 2026-01-31 07:50:32.055 247403 DEBUG oslo_concurrency.lockutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:50:32 np0005603621 nova_compute[247399]: 2026-01-31 07:50:32.056 247403 DEBUG oslo_concurrency.lockutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:50:32 np0005603621 nova_compute[247399]: 2026-01-31 07:50:32.056 247403 DEBUG nova.network.neutron [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:50:32 np0005603621 podman[261545]: 2026-01-31 07:50:32.120815706 +0000 UTC m=+0.049483183 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:50:32 np0005603621 podman[261546]: 2026-01-31 07:50:32.140537189 +0000 UTC m=+0.068992220 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 02:50:32 np0005603621 nova_compute[247399]: 2026-01-31 07:50:32.343 247403 DEBUG nova.network.neutron [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:50:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 631 MiB data, 698 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 241 op/s
Jan 31 02:50:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:33 np0005603621 nova_compute[247399]: 2026-01-31 07:50:33.575 247403 DEBUG nova.network.neutron [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:50:33 np0005603621 nova_compute[247399]: 2026-01-31 07:50:33.614 247403 DEBUG oslo_concurrency.lockutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:50:33 np0005603621 nova_compute[247399]: 2026-01-31 07:50:33.615 247403 DEBUG nova.virt.libvirt.driver [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Jan 31 02:50:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:33 np0005603621 nova_compute[247399]: 2026-01-31 07:50:33.705 247403 DEBUG nova.storage.rbd_utils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] rolling back rbd image(620b3405-251d-4545-b523-faa35768224b_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Jan 31 02:50:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:33.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:33 np0005603621 nova_compute[247399]: 2026-01-31 07:50:33.949 247403 DEBUG nova.storage.rbd_utils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] removing snapshot(nova-resize) on rbd image(620b3405-251d-4545-b523-faa35768224b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:50:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 31 02:50:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 31 02:50:34 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.527 247403 DEBUG nova.virt.libvirt.driver [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.533 247403 WARNING nova.virt.libvirt.driver [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.540 247403 DEBUG nova.virt.libvirt.host [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.542 247403 DEBUG nova.virt.libvirt.host [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.546 247403 DEBUG nova.virt.libvirt.host [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.547 247403 DEBUG nova.virt.libvirt.host [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.549 247403 DEBUG nova.virt.libvirt.driver [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.549 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:49:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='800ab596-5263-4c34-bc7e-82584b53237c',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-532153092',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.550 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.550 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.550 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.551 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.551 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.551 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.551 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.551 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.552 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.552 247403 DEBUG nova.virt.hardware [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.553 247403 DEBUG nova.objects.instance [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 620b3405-251d-4545-b523-faa35768224b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:50:34 np0005603621 nova_compute[247399]: 2026-01-31 07:50:34.579 247403 DEBUG oslo_concurrency.processutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:50:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:50:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928218072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:50:35 np0005603621 nova_compute[247399]: 2026-01-31 07:50:35.036 247403 DEBUG oslo_concurrency.processutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:50:35 np0005603621 nova_compute[247399]: 2026-01-31 07:50:35.075 247403 DEBUG oslo_concurrency.processutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:50:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 616 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 130 KiB/s wr, 222 op/s
Jan 31 02:50:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:50:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/988944341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:50:35 np0005603621 nova_compute[247399]: 2026-01-31 07:50:35.508 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:35 np0005603621 nova_compute[247399]: 2026-01-31 07:50:35.528 247403 DEBUG oslo_concurrency.processutils [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:50:35 np0005603621 nova_compute[247399]: 2026-01-31 07:50:35.531 247403 DEBUG nova.virt.libvirt.driver [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <uuid>620b3405-251d-4545-b523-faa35768224b</uuid>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <name>instance-00000012</name>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:name>tempest-MigrationsAdminTest-server-376445714</nova:name>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:50:34</nova:creationTime>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:flavor name="tempest-test_resize_flavor_-532153092">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:user uuid="8a59efd78e244f44a1c70650f82a2c50">tempest-MigrationsAdminTest-1820348317-project-member</nova:user>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <nova:project uuid="1627a71b855b4032b51e234e44a9d570">tempest-MigrationsAdminTest-1820348317</nova:project>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <entry name="serial">620b3405-251d-4545-b523-faa35768224b</entry>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <entry name="uuid">620b3405-251d-4545-b523-faa35768224b</entry>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/620b3405-251d-4545-b523-faa35768224b_disk">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/620b3405-251d-4545-b523-faa35768224b_disk.config">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b/console.log" append="off"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:50:35 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:50:35 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:50:35 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:50:35 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:50:35 np0005603621 systemd-machined[212769]: New machine qemu-8-instance-00000012.
Jan 31 02:50:35 np0005603621 systemd[1]: Started Virtual Machine qemu-8-instance-00000012.
Jan 31 02:50:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:35.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:35.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.115 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845836.1144435, 620b3405-251d-4545-b523-faa35768224b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.116 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] VM Resumed (Lifecycle Event)
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.119 247403 DEBUG nova.compute.manager [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.123 247403 INFO nova.virt.libvirt.driver [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance running successfully.
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.123 247403 DEBUG nova.virt.libvirt.driver [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.162 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.167 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.197 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.197 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845836.1183486, 620b3405-251d-4545-b523-faa35768224b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.198 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] VM Started (Lifecycle Event)
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.239 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.243 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.248 247403 INFO nova.compute.manager [None req-347f6903-1a62-4c06-805a-62d11afd512d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating instance to original state: 'active'
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.305 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.377 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:36 np0005603621 nova_compute[247399]: 2026-01-31 07:50:36.802 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:50:36.814 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:50:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:50:36.815 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:50:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 600 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 77 KiB/s wr, 189 op/s
Jan 31 02:50:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:37.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:50:38
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms']
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:50:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:50:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 600 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 56 KiB/s wr, 159 op/s
Jan 31 02:50:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3835187053
Jan 31 02:50:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:39.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:40 np0005603621 nova_compute[247399]: 2026-01-31 07:50:40.511 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:50:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d648940b-ccac-44be-8cec-5b776fe72c07 does not exist
Jan 31 02:50:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b12847d9-6669-4c3e-9905-708fd89e8d7c does not exist
Jan 31 02:50:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fb2072ee-bda4-4de0-9cf0-1a692fc53157 does not exist
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:50:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:50:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:50:40.818 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:50:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 600 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 30 KiB/s wr, 140 op/s
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.239368194 +0000 UTC m=+0.042435620 container create 62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:50:41 np0005603621 systemd[1]: Started libpod-conmon-62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b.scope.
Jan 31 02:50:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.216188183 +0000 UTC m=+0.019255629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.340426416 +0000 UTC m=+0.143493842 container init 62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.349229434 +0000 UTC m=+0.152296860 container start 62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:50:41 np0005603621 suspicious_feistel[262083]: 167 167
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.355211673 +0000 UTC m=+0.158279099 container attach 62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:50:41 np0005603621 systemd[1]: libpod-62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b.scope: Deactivated successfully.
Jan 31 02:50:41 np0005603621 conmon[262083]: conmon 62059154903ddd5a217f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b.scope/container/memory.events
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.357845626 +0000 UTC m=+0.160913052 container died 62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:50:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9257bed844fd26804deea97e447d15e6e3df265813ab0300447b62b5585ca016-merged.mount: Deactivated successfully.
Jan 31 02:50:41 np0005603621 nova_compute[247399]: 2026-01-31 07:50:41.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:41 np0005603621 podman[262066]: 2026-01-31 07:50:41.42826943 +0000 UTC m=+0.231336856 container remove 62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feistel, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:50:41 np0005603621 systemd[1]: libpod-conmon-62059154903ddd5a217f3f0d55ed472df84c51cd3663531ae8fed1b24b20815b.scope: Deactivated successfully.
Jan 31 02:50:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:41.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:41 np0005603621 podman[262106]: 2026-01-31 07:50:41.64644316 +0000 UTC m=+0.129371807 container create 9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:50:41 np0005603621 podman[262106]: 2026-01-31 07:50:41.575762528 +0000 UTC m=+0.058691215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:50:41 np0005603621 systemd[1]: Started libpod-conmon-9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443.scope.
Jan 31 02:50:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:50:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d1ed522515753fe240d66ea8b0cf3f884ce220e520c400a31f7b7ca55f6bd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d1ed522515753fe240d66ea8b0cf3f884ce220e520c400a31f7b7ca55f6bd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d1ed522515753fe240d66ea8b0cf3f884ce220e520c400a31f7b7ca55f6bd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d1ed522515753fe240d66ea8b0cf3f884ce220e520c400a31f7b7ca55f6bd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74d1ed522515753fe240d66ea8b0cf3f884ce220e520c400a31f7b7ca55f6bd5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:41.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:41 np0005603621 podman[262106]: 2026-01-31 07:50:41.754479292 +0000 UTC m=+0.237407969 container init 9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:50:41 np0005603621 podman[262106]: 2026-01-31 07:50:41.760718259 +0000 UTC m=+0.243646916 container start 9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:50:41 np0005603621 podman[262106]: 2026-01-31 07:50:41.774585027 +0000 UTC m=+0.257513714 container attach 9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:50:42 np0005603621 kind_dijkstra[262123]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:50:42 np0005603621 kind_dijkstra[262123]: --> relative data size: 1.0
Jan 31 02:50:42 np0005603621 kind_dijkstra[262123]: --> All data devices are unavailable
Jan 31 02:50:42 np0005603621 systemd[1]: libpod-9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443.scope: Deactivated successfully.
Jan 31 02:50:42 np0005603621 podman[262106]: 2026-01-31 07:50:42.614708169 +0000 UTC m=+1.097636846 container died 9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 02:50:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-74d1ed522515753fe240d66ea8b0cf3f884ce220e520c400a31f7b7ca55f6bd5-merged.mount: Deactivated successfully.
Jan 31 02:50:42 np0005603621 podman[262106]: 2026-01-31 07:50:42.663509211 +0000 UTC m=+1.146437878 container remove 9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_dijkstra, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:50:42 np0005603621 systemd[1]: libpod-conmon-9eec76ada45d6869e680d4077c649042bd1bf0483ef77db3e079a8dce99a2443.scope: Deactivated successfully.
Jan 31 02:50:42 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 31 02:50:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 624 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 135 op/s
Jan 31 02:50:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 31 02:50:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.186379654 +0000 UTC m=+0.040958964 container create 80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:50:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 31 02:50:43 np0005603621 systemd[1]: Started libpod-conmon-80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02.scope.
Jan 31 02:50:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.164250295 +0000 UTC m=+0.018829625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.263163768 +0000 UTC m=+0.117743098 container init 80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.267920719 +0000 UTC m=+0.122500029 container start 80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:50:43 np0005603621 systemd[1]: libpod-80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02.scope: Deactivated successfully.
Jan 31 02:50:43 np0005603621 thirsty_lumiere[262304]: 167 167
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.27143847 +0000 UTC m=+0.126017800 container attach 80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:50:43 np0005603621 conmon[262304]: conmon 80568f4f6a108a980bf4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02.scope/container/memory.events
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.272294757 +0000 UTC m=+0.126874067 container died 80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:50:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-25f4c89c67cfa126f7b8c9a39cba16dae2fd072c721c793d11ef65b2c89b6316-merged.mount: Deactivated successfully.
Jan 31 02:50:43 np0005603621 podman[262288]: 2026-01-31 07:50:43.314321224 +0000 UTC m=+0.168900534 container remove 80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lumiere, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:50:43 np0005603621 systemd[1]: libpod-conmon-80568f4f6a108a980bf4f02e5073dc8ed6cf7e5e1e16b92bb0434a301142df02.scope: Deactivated successfully.
Jan 31 02:50:43 np0005603621 podman[262327]: 2026-01-31 07:50:43.441400878 +0000 UTC m=+0.041592195 container create a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:50:43 np0005603621 systemd[1]: Started libpod-conmon-a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de.scope.
Jan 31 02:50:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:50:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f630ed222c097fb7e4c35bfd867a4e37c7aa5fc9bcece848671ec2e0bd6b454b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f630ed222c097fb7e4c35bfd867a4e37c7aa5fc9bcece848671ec2e0bd6b454b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f630ed222c097fb7e4c35bfd867a4e37c7aa5fc9bcece848671ec2e0bd6b454b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f630ed222c097fb7e4c35bfd867a4e37c7aa5fc9bcece848671ec2e0bd6b454b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:43 np0005603621 podman[262327]: 2026-01-31 07:50:43.503467208 +0000 UTC m=+0.103658555 container init a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_volhard, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:50:43 np0005603621 podman[262327]: 2026-01-31 07:50:43.511323816 +0000 UTC m=+0.111515133 container start a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:50:43 np0005603621 podman[262327]: 2026-01-31 07:50:43.421816709 +0000 UTC m=+0.022008046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:50:43 np0005603621 podman[262327]: 2026-01-31 07:50:43.517954355 +0000 UTC m=+0.118145702 container attach a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_volhard, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:50:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:43.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:43.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:44 np0005603621 tender_volhard[262343]: {
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:    "0": [
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:        {
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "devices": [
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "/dev/loop3"
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            ],
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "lv_name": "ceph_lv0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "lv_size": "7511998464",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "name": "ceph_lv0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "tags": {
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.cluster_name": "ceph",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.crush_device_class": "",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.encrypted": "0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.osd_id": "0",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.type": "block",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:                "ceph.vdo": "0"
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            },
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "type": "block",
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:            "vg_name": "ceph_vg0"
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:        }
Jan 31 02:50:44 np0005603621 tender_volhard[262343]:    ]
Jan 31 02:50:44 np0005603621 tender_volhard[262343]: }
Jan 31 02:50:44 np0005603621 systemd[1]: libpod-a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de.scope: Deactivated successfully.
Jan 31 02:50:44 np0005603621 podman[262327]: 2026-01-31 07:50:44.320505761 +0000 UTC m=+0.920697078 container died a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_volhard, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 02:50:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f630ed222c097fb7e4c35bfd867a4e37c7aa5fc9bcece848671ec2e0bd6b454b-merged.mount: Deactivated successfully.
Jan 31 02:50:44 np0005603621 podman[262327]: 2026-01-31 07:50:44.633494516 +0000 UTC m=+1.233685833 container remove a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:50:44 np0005603621 systemd[1]: libpod-conmon-a49af68b9815a7670dad17e68d0c9a81b986a19c8e5f70733165ff9979bb16de.scope: Deactivated successfully.
Jan 31 02:50:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 637 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 124 op/s
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.205284594 +0000 UTC m=+0.081227006 container create c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.142911784 +0000 UTC m=+0.018854216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:50:45 np0005603621 systemd[1]: Started libpod-conmon-c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf.scope.
Jan 31 02:50:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.449002421 +0000 UTC m=+0.324944853 container init c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.455505796 +0000 UTC m=+0.331448208 container start c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:50:45 np0005603621 relaxed_goldberg[262519]: 167 167
Jan 31 02:50:45 np0005603621 systemd[1]: libpod-c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf.scope: Deactivated successfully.
Jan 31 02:50:45 np0005603621 nova_compute[247399]: 2026-01-31 07:50:45.514 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.621086135 +0000 UTC m=+0.497028547 container attach c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.621969773 +0000 UTC m=+0.497912185 container died c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:50:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:45.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d70ca6bf1e9aca8320e02c773f54e5489c5da5d91420c7ed6b036ff948fb0d6b-merged.mount: Deactivated successfully.
Jan 31 02:50:45 np0005603621 podman[262502]: 2026-01-31 07:50:45.952035777 +0000 UTC m=+0.827978189 container remove c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:50:46 np0005603621 systemd[1]: libpod-conmon-c7bd18b6bf91b333595a291641ffb3072589de57195949a315f6025479ca03bf.scope: Deactivated successfully.
Jan 31 02:50:46 np0005603621 podman[262544]: 2026-01-31 07:50:46.100804226 +0000 UTC m=+0.057966672 container create e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 02:50:46 np0005603621 podman[262544]: 2026-01-31 07:50:46.068216827 +0000 UTC m=+0.025379303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:50:46 np0005603621 systemd[1]: Started libpod-conmon-e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d.scope.
Jan 31 02:50:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:50:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59bb219f3167dce969a179120b0a9ee073e19a5eb915d344df47613bebd933/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59bb219f3167dce969a179120b0a9ee073e19a5eb915d344df47613bebd933/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59bb219f3167dce969a179120b0a9ee073e19a5eb915d344df47613bebd933/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c59bb219f3167dce969a179120b0a9ee073e19a5eb915d344df47613bebd933/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:50:46 np0005603621 podman[262544]: 2026-01-31 07:50:46.285762247 +0000 UTC m=+0.242924723 container init e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:50:46 np0005603621 podman[262544]: 2026-01-31 07:50:46.293237643 +0000 UTC m=+0.250400089 container start e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:50:46 np0005603621 podman[262544]: 2026-01-31 07:50:46.346891747 +0000 UTC m=+0.304054193 container attach e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:50:46 np0005603621 nova_compute[247399]: 2026-01-31 07:50:46.383 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 646 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 31 02:50:47 np0005603621 trusting_germain[262561]: {
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:        "osd_id": 0,
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:        "type": "bluestore"
Jan 31 02:50:47 np0005603621 trusting_germain[262561]:    }
Jan 31 02:50:47 np0005603621 trusting_germain[262561]: }
Jan 31 02:50:47 np0005603621 systemd[1]: libpod-e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d.scope: Deactivated successfully.
Jan 31 02:50:47 np0005603621 podman[262544]: 2026-01-31 07:50:47.182978362 +0000 UTC m=+1.140140808 container died e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:50:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6c59bb219f3167dce969a179120b0a9ee073e19a5eb915d344df47613bebd933-merged.mount: Deactivated successfully.
Jan 31 02:50:47 np0005603621 podman[262544]: 2026-01-31 07:50:47.364726252 +0000 UTC m=+1.321888698 container remove e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:50:47 np0005603621 systemd[1]: libpod-conmon-e2a8c9cdb08c393468add5edf4e4ca83f019265fce63a666326eb87fea5f7a6d.scope: Deactivated successfully.
Jan 31 02:50:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:50:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:50:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:50:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:50:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c5cf738d-fadd-4a13-8d11-3f4bbc2833b1 does not exist
Jan 31 02:50:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 14031edc-0f8f-409e-b513-f70e665aebd3 does not exist
Jan 31 02:50:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6dccfe02-49f2-4f8c-9fbe-606462dbc2f0 does not exist
Jan 31 02:50:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:47.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:50:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011853065966406104 of space, bias 1.0, pg target 3.5559197899218313 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004320466627117067 of space, bias 1.0, pg target 1.283178588253769 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:50:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 02:50:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 646 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 109 op/s
Jan 31 02:50:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:49.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:49.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.344 247403 DEBUG nova.compute.manager [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.479 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.480 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.512 247403 DEBUG nova.objects.instance [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'pci_requests' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.516 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.532 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.532 247403 INFO nova.compute.claims [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.533 247403 DEBUG nova.objects.instance [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'resources' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.557 247403 DEBUG nova.objects.instance [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'numa_topology' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.573 247403 DEBUG nova.objects.instance [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'pci_devices' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.623 247403 INFO nova.compute.resource_tracker [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Updating resource usage from migration d5c54c4c-3045-4285-98a5-2a691842bc5d#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.624 247403 DEBUG nova.compute.resource_tracker [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Starting to track incoming migration d5c54c4c-3045-4285-98a5-2a691842bc5d with flavor a01eb4f0-fd80-416b-a750-75de320394d8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 02:50:50 np0005603621 nova_compute[247399]: 2026-01-31 07:50:50.733 247403 DEBUG oslo_concurrency.processutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:50:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 646 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 31 02:50:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:50:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129374203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:50:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:50:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949856930' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:50:51 np0005603621 nova_compute[247399]: 2026-01-31 07:50:51.143 247403 DEBUG oslo_concurrency.processutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:50:51 np0005603621 nova_compute[247399]: 2026-01-31 07:50:51.147 247403 DEBUG nova.compute.provider_tree [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:50:51 np0005603621 nova_compute[247399]: 2026-01-31 07:50:51.167 247403 DEBUG nova.scheduler.client.report [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:50:51 np0005603621 nova_compute[247399]: 2026-01-31 07:50:51.197 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:51 np0005603621 nova_compute[247399]: 2026-01-31 07:50:51.198 247403 INFO nova.compute.manager [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Migrating#033[00m
Jan 31 02:50:51 np0005603621 nova_compute[247399]: 2026-01-31 07:50:51.385 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:51.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:51.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:53 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 02:50:53 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 02:50:53 np0005603621 systemd-logind[818]: New session 55 of user nova.
Jan 31 02:50:53 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 02:50:53 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 02:50:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 646 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 169 op/s
Jan 31 02:50:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:53 np0005603621 systemd[262726]: Queued start job for default target Main User Target.
Jan 31 02:50:53 np0005603621 systemd[262726]: Created slice User Application Slice.
Jan 31 02:50:53 np0005603621 systemd[262726]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:50:53 np0005603621 systemd[262726]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 02:50:53 np0005603621 systemd[262726]: Reached target Paths.
Jan 31 02:50:53 np0005603621 systemd[262726]: Reached target Timers.
Jan 31 02:50:53 np0005603621 systemd[262726]: Starting D-Bus User Message Bus Socket...
Jan 31 02:50:53 np0005603621 systemd[262726]: Starting Create User's Volatile Files and Directories...
Jan 31 02:50:53 np0005603621 systemd[262726]: Finished Create User's Volatile Files and Directories.
Jan 31 02:50:53 np0005603621 systemd[262726]: Listening on D-Bus User Message Bus Socket.
Jan 31 02:50:53 np0005603621 systemd[262726]: Reached target Sockets.
Jan 31 02:50:53 np0005603621 systemd[262726]: Reached target Basic System.
Jan 31 02:50:53 np0005603621 systemd[262726]: Reached target Main User Target.
Jan 31 02:50:53 np0005603621 systemd[262726]: Startup finished in 129ms.
Jan 31 02:50:53 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 02:50:53 np0005603621 systemd[1]: Started Session 55 of User nova.
Jan 31 02:50:53 np0005603621 systemd[1]: session-55.scope: Deactivated successfully.
Jan 31 02:50:53 np0005603621 systemd-logind[818]: Session 55 logged out. Waiting for processes to exit.
Jan 31 02:50:53 np0005603621 systemd-logind[818]: Removed session 55.
Jan 31 02:50:53 np0005603621 systemd-logind[818]: New session 57 of user nova.
Jan 31 02:50:53 np0005603621 systemd[1]: Started Session 57 of User nova.
Jan 31 02:50:53 np0005603621 systemd[1]: session-57.scope: Deactivated successfully.
Jan 31 02:50:53 np0005603621 systemd-logind[818]: Session 57 logged out. Waiting for processes to exit.
Jan 31 02:50:53 np0005603621 systemd-logind[818]: Removed session 57.
Jan 31 02:50:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:53.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:53.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:50:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717884996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.953 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.982 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.982 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 620b3405-251d-4545-b523-faa35768224b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.982 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "37069dd7-a48f-42ca-8238-bf5baa1fa605" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.983 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "37069dd7-a48f-42ca-8238-bf5baa1fa605" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.983 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "620b3405-251d-4545-b523-faa35768224b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:50:54 np0005603621 nova_compute[247399]: 2026-01-31 07:50:54.983 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "620b3405-251d-4545-b523-faa35768224b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:50:55 np0005603621 nova_compute[247399]: 2026-01-31 07:50:55.024 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "37069dd7-a48f-42ca-8238-bf5baa1fa605" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:55 np0005603621 nova_compute[247399]: 2026-01-31 07:50:55.025 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "620b3405-251d-4545-b523-faa35768224b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:50:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 646 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 928 KiB/s wr, 142 op/s
Jan 31 02:50:55 np0005603621 nova_compute[247399]: 2026-01-31 07:50:55.519 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:55.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:56 np0005603621 nova_compute[247399]: 2026-01-31 07:50:56.386 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:50:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 646 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 392 KiB/s wr, 131 op/s
Jan 31 02:50:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:57.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:50:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:50:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:50:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 648 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 35 KiB/s wr, 135 op/s
Jan 31 02:50:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:50:59.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:50:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:50:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:50:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:50:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:00 np0005603621 nova_compute[247399]: 2026-01-31 07:51:00.522 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 657 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 870 KiB/s wr, 98 op/s
Jan 31 02:51:01 np0005603621 nova_compute[247399]: 2026-01-31 07:51:01.389 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:01.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:02 np0005603621 podman[262755]: 2026-01-31 07:51:02.516050653 +0000 UTC m=+0.065283632 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 02:51:02 np0005603621 podman[262756]: 2026-01-31 07:51:02.542613403 +0000 UTC m=+0.092038488 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:51:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 670 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1005 KiB/s rd, 2.0 MiB/s wr, 87 op/s
Jan 31 02:51:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:03 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 02:51:03 np0005603621 systemd[262726]: Activating special unit Exit the Session...
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped target Main User Target.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped target Basic System.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped target Paths.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped target Sockets.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped target Timers.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 02:51:03 np0005603621 systemd[262726]: Closed D-Bus User Message Bus Socket.
Jan 31 02:51:03 np0005603621 systemd[262726]: Stopped Create User's Volatile Files and Directories.
Jan 31 02:51:03 np0005603621 systemd[262726]: Removed slice User Application Slice.
Jan 31 02:51:03 np0005603621 systemd[262726]: Reached target Shutdown.
Jan 31 02:51:03 np0005603621 systemd[262726]: Finished Exit the Session.
Jan 31 02:51:03 np0005603621 systemd[262726]: Reached target Exit the Session.
Jan 31 02:51:03 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 02:51:03 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 02:51:03 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 02:51:03 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 02:51:03 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 02:51:03 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 02:51:03 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 02:51:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 670 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Jan 31 02:51:05 np0005603621 nova_compute[247399]: 2026-01-31 07:51:05.564 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:05.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:05.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:06 np0005603621 nova_compute[247399]: 2026-01-31 07:51:06.391 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 674 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 02:51:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:07.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:51:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 674 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 146 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Jan 31 02:51:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:10 np0005603621 nova_compute[247399]: 2026-01-31 07:51:10.228 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:10 np0005603621 nova_compute[247399]: 2026-01-31 07:51:10.567 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 674 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 222 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.529 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.530 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.530 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.531 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:11.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:11 np0005603621 nova_compute[247399]: 2026-01-31 07:51:11.829 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.578 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.594 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.594 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.595 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.595 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.595 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.624 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.625 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.625 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.625 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:51:12 np0005603621 nova_compute[247399]: 2026-01-31 07:51:12.625 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:51:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2267530081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:51:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:51:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985162068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.033 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 679 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.131 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.132 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.135 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.135 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:51:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.260 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.261 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4323MB free_disk=20.71546173095703GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.262 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.262 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.307 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration for instance 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.340 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Updating resource usage from migration d5c54c4c-3045-4285-98a5-2a691842bc5d#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.341 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Starting to track incoming migration d5c54c4c-3045-4285-98a5-2a691842bc5d with flavor a01eb4f0-fd80-416b-a750-75de320394d8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.607 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 37069dd7-a48f-42ca-8238-bf5baa1fa605 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.608 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 620b3405-251d-4545-b523-faa35768224b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.643 247403 WARNING nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.643 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.643 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:51:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:13.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:13 np0005603621 nova_compute[247399]: 2026-01-31 07:51:13.774 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:13.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:51:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3975054913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:51:14 np0005603621 nova_compute[247399]: 2026-01-31 07:51:14.264 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:14 np0005603621 nova_compute[247399]: 2026-01-31 07:51:14.269 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:51:14 np0005603621 nova_compute[247399]: 2026-01-31 07:51:14.284 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:51:14 np0005603621 nova_compute[247399]: 2026-01-31 07:51:14.303 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:51:14 np0005603621 nova_compute[247399]: 2026-01-31 07:51:14.303 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:51:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2554987907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:51:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:51:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2554987907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:51:14 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 02:51:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 681 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 176 KiB/s wr, 38 op/s
Jan 31 02:51:15 np0005603621 nova_compute[247399]: 2026-01-31 07:51:15.600 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:15.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:15.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:15 np0005603621 nova_compute[247399]: 2026-01-31 07:51:15.906 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:15 np0005603621 nova_compute[247399]: 2026-01-31 07:51:15.907 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:15 np0005603621 nova_compute[247399]: 2026-01-31 07:51:15.907 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:15 np0005603621 nova_compute[247399]: 2026-01-31 07:51:15.907 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:16 np0005603621 nova_compute[247399]: 2026-01-31 07:51:16.394 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 681 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 249 KiB/s rd, 156 KiB/s wr, 43 op/s
Jan 31 02:51:17 np0005603621 nova_compute[247399]: 2026-01-31 07:51:17.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:51:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:17.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 681 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 205 KiB/s rd, 118 KiB/s wr, 46 op/s
Jan 31 02:51:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:20 np0005603621 nova_compute[247399]: 2026-01-31 07:51:20.603 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 681 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 835 KiB/s rd, 120 KiB/s wr, 44 op/s
Jan 31 02:51:21 np0005603621 nova_compute[247399]: 2026-01-31 07:51:21.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:21.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:21.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:22 np0005603621 nova_compute[247399]: 2026-01-31 07:51:22.948 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "refresh_cache-866b0b10-d2ae-4e08-9efa-36b9c9c9f50d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:22 np0005603621 nova_compute[247399]: 2026-01-31 07:51:22.948 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquired lock "refresh_cache-866b0b10-d2ae-4e08-9efa-36b9c9c9f50d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:22 np0005603621 nova_compute[247399]: 2026-01-31 07:51:22.949 247403 DEBUG nova.network.neutron [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:51:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 689 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 792 KiB/s wr, 49 op/s
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.142 247403 DEBUG nova.network.neutron [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:51:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.433 247403 DEBUG nova.network.neutron [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.447 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Releasing lock "refresh_cache-866b0b10-d2ae-4e08-9efa-36b9c9c9f50d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.604 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.606 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.606 247403 INFO nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Creating image(s)#033[00m
Jan 31 02:51:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:23.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:23 np0005603621 nova_compute[247399]: 2026-01-31 07:51:23.871 247403 DEBUG nova.storage.rbd_utils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] creating snapshot(nova-resize) on rbd image(866b0b10-d2ae-4e08-9efa-36b9c9c9f50d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:51:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 689 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 722 KiB/s wr, 45 op/s
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.307297) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845885307333, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2203, "num_deletes": 254, "total_data_size": 3936756, "memory_usage": 3995104, "flush_reason": "Manual Compaction"}
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 31 02:51:25 np0005603621 nova_compute[247399]: 2026-01-31 07:51:25.606 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845885760820, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3868424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22110, "largest_seqno": 24312, "table_properties": {"data_size": 3858382, "index_size": 6344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21257, "raw_average_key_size": 20, "raw_value_size": 3838263, "raw_average_value_size": 3744, "num_data_blocks": 278, "num_entries": 1025, "num_filter_entries": 1025, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845664, "oldest_key_time": 1769845664, "file_creation_time": 1769845885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 453575 microseconds, and 6740 cpu microseconds.
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:51:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.760870) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3868424 bytes OK
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.760891) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.880421) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.880524) EVENT_LOG_v1 {"time_micros": 1769845885880507, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.880567) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3927693, prev total WAL file size 3927693, number of live WAL files 2.
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.882109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3777KB)], [53(7320KB)]
Jan 31 02:51:25 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845885882179, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11364178, "oldest_snapshot_seqno": -1}
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4997 keys, 9281181 bytes, temperature: kUnknown
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845886282876, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9281181, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9246700, "index_size": 20867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126265, "raw_average_key_size": 25, "raw_value_size": 9155599, "raw_average_value_size": 1832, "num_data_blocks": 850, "num_entries": 4997, "num_filter_entries": 4997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769845885, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.283295) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9281181 bytes
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.307483) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.3 rd, 23.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 5522, records dropped: 525 output_compression: NoCompression
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.307508) EVENT_LOG_v1 {"time_micros": 1769845886307495, "job": 28, "event": "compaction_finished", "compaction_time_micros": 400962, "compaction_time_cpu_micros": 17188, "output_level": 6, "num_output_files": 1, "total_output_size": 9281181, "num_input_records": 5522, "num_output_records": 4997, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845886308911, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769845886310041, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:25.882000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.310166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.310171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.310172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.310173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:51:26.310175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:51:26 np0005603621 nova_compute[247399]: 2026-01-31 07:51:26.398 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 31 02:51:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 701 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.7 MiB/s wr, 39 op/s
Jan 31 02:51:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 31 02:51:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:27.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 701 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.422 247403 DEBUG nova.objects.instance [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.547 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.548 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Ensure instance console log exists: /var/lib/nova/instances/866b0b10-d2ae-4e08-9efa-36b9c9c9f50d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.548 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.548 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.549 247403 DEBUG oslo_concurrency.lockutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.550 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.556 247403 WARNING nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.564 247403 DEBUG nova.virt.libvirt.host [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.565 247403 DEBUG nova.virt.libvirt.host [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.568 247403 DEBUG nova.virt.libvirt.host [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.569 247403 DEBUG nova.virt.libvirt.host [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.570 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.570 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.571 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.571 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.571 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.572 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.572 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.572 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.572 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.572 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.573 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.573 247403 DEBUG nova.virt.hardware [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.573 247403 DEBUG nova.objects.instance [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:29 np0005603621 nova_compute[247399]: 2026-01-31 07:51:29.590 247403 DEBUG oslo_concurrency.processutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:51:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2507072266' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:51:30 np0005603621 nova_compute[247399]: 2026-01-31 07:51:30.017 247403 DEBUG oslo_concurrency.processutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:30 np0005603621 nova_compute[247399]: 2026-01-31 07:51:30.061 247403 DEBUG oslo_concurrency.processutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:51:30.471 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:51:30.472 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:51:30.472 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:51:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293053855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:51:30 np0005603621 nova_compute[247399]: 2026-01-31 07:51:30.508 247403 DEBUG oslo_concurrency.processutils [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:30 np0005603621 nova_compute[247399]: 2026-01-31 07:51:30.511 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <uuid>866b0b10-d2ae-4e08-9efa-36b9c9c9f50d</uuid>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <name>instance-00000016</name>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:name>tempest-MigrationsAdminTest-server-1993007316</nova:name>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:51:29</nova:creationTime>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:user uuid="8a59efd78e244f44a1c70650f82a2c50">tempest-MigrationsAdminTest-1820348317-project-member</nova:user>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <nova:project uuid="1627a71b855b4032b51e234e44a9d570">tempest-MigrationsAdminTest-1820348317</nova:project>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <entry name="serial">866b0b10-d2ae-4e08-9efa-36b9c9c9f50d</entry>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <entry name="uuid">866b0b10-d2ae-4e08-9efa-36b9c9c9f50d</entry>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/866b0b10-d2ae-4e08-9efa-36b9c9c9f50d_disk">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/866b0b10-d2ae-4e08-9efa-36b9c9c9f50d_disk.config">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/866b0b10-d2ae-4e08-9efa-36b9c9c9f50d/console.log" append="off"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:51:30 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:51:30 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:51:30 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:51:30 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:51:30 np0005603621 nova_compute[247399]: 2026-01-31 07:51:30.609 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:31 np0005603621 nova_compute[247399]: 2026-01-31 07:51:31.000 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:51:31 np0005603621 nova_compute[247399]: 2026-01-31 07:51:31.000 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:51:31 np0005603621 nova_compute[247399]: 2026-01-31 07:51:31.001 247403 INFO nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Using config drive#033[00m
Jan 31 02:51:31 np0005603621 systemd-machined[212769]: New machine qemu-9-instance-00000016.
Jan 31 02:51:31 np0005603621 systemd[1]: Started Virtual Machine qemu-9-instance-00000016.
Jan 31 02:51:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 701 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 31 02:51:31 np0005603621 nova_compute[247399]: 2026-01-31 07:51:31.399 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:31.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:31.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.231 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845892.2306485, 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.232 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.234 247403 DEBUG nova.compute.manager [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.236 247403 INFO nova.virt.libvirt.driver [-] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Instance running successfully.#033[00m
Jan 31 02:51:32 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.239 247403 DEBUG nova.virt.libvirt.guest [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.239 247403 DEBUG nova.virt.libvirt.driver [None req-bcb652b8-8b76-44de-b8c7-72bcadae2a34 fad82ed81c81456297863bf537d98c83 b3fcea0c21ef4dc8bbc27e3216fc550f - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.260 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.266 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.305 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.305 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845892.231554, 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.305 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] VM Started (Lifecycle Event)#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.330 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:51:32 np0005603621 nova_compute[247399]: 2026-01-31 07:51:32.335 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:51:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:51:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283715528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:51:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:51:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283715528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:51:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 621 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 1.3 MiB/s wr, 89 op/s
Jan 31 02:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:33 np0005603621 podman[263172]: 2026-01-31 07:51:33.490646537 +0000 UTC m=+0.050253488 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:51:33 np0005603621 podman[263173]: 2026-01-31 07:51:33.520589263 +0000 UTC m=+0.078938904 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:51:33 np0005603621 nova_compute[247399]: 2026-01-31 07:51:33.597 247403 DEBUG oslo_concurrency.lockutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-866b0b10-d2ae-4e08-9efa-36b9c9c9f50d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:51:33 np0005603621 nova_compute[247399]: 2026-01-31 07:51:33.598 247403 DEBUG oslo_concurrency.lockutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-866b0b10-d2ae-4e08-9efa-36b9c9c9f50d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:51:33 np0005603621 nova_compute[247399]: 2026-01-31 07:51:33.598 247403 DEBUG nova.network.neutron [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:51:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:33.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:33 np0005603621 nova_compute[247399]: 2026-01-31 07:51:33.723 247403 DEBUG nova.network.neutron [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:51:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:33.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.008 247403 DEBUG nova.network.neutron [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.023 247403 DEBUG oslo_concurrency.lockutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-866b0b10-d2ae-4e08-9efa-36b9c9c9f50d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:51:34 np0005603621 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 31 02:51:34 np0005603621 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000016.scope: Consumed 2.644s CPU time.
Jan 31 02:51:34 np0005603621 systemd-machined[212769]: Machine qemu-9-instance-00000016 terminated.
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.864 247403 INFO nova.virt.libvirt.driver [-] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Instance destroyed successfully.#033[00m
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.864 247403 DEBUG nova.objects.instance [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'resources' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.889 247403 DEBUG oslo_concurrency.lockutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.889 247403 DEBUG oslo_concurrency.lockutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:51:34 np0005603621 nova_compute[247399]: 2026-01-31 07:51:34.931 247403 DEBUG nova.objects.instance [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'migration_context' on Instance uuid 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:51:35 np0005603621 nova_compute[247399]: 2026-01-31 07:51:35.030 247403 DEBUG oslo_concurrency.processutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:51:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 568 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 31 02:51:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:51:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3278263320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:51:35 np0005603621 nova_compute[247399]: 2026-01-31 07:51:35.483 247403 DEBUG oslo_concurrency.processutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:51:35 np0005603621 nova_compute[247399]: 2026-01-31 07:51:35.489 247403 DEBUG nova.compute.provider_tree [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:51:35 np0005603621 nova_compute[247399]: 2026-01-31 07:51:35.507 247403 DEBUG nova.scheduler.client.report [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:51:35 np0005603621 nova_compute[247399]: 2026-01-31 07:51:35.570 247403 DEBUG oslo_concurrency.lockutils [None req-7465cf5f-179b-4b90-958f-2202059c96ec 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:51:35 np0005603621 nova_compute[247399]: 2026-01-31 07:51:35.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:35.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:36 np0005603621 nova_compute[247399]: 2026-01-31 07:51:36.401 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 543 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 139 op/s
Jan 31 02:51:37 np0005603621 nova_compute[247399]: 2026-01-31 07:51:37.173 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:51:37.173 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:51:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:51:37.175 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:51:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:51:37.176 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:51:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:37.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 31 02:51:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:37.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 31 02:51:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 31 02:51:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:51:38
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms']
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:51:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:51:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 543 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 15 KiB/s wr, 180 op/s
Jan 31 02:51:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:39.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:39.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:40 np0005603621 nova_compute[247399]: 2026-01-31 07:51:40.614 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 543 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.7 KiB/s wr, 181 op/s
Jan 31 02:51:41 np0005603621 nova_compute[247399]: 2026-01-31 07:51:41.403 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:41.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 543 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 KiB/s wr, 201 op/s
Jan 31 02:51:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 543 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.9 KiB/s wr, 194 op/s
Jan 31 02:51:45 np0005603621 nova_compute[247399]: 2026-01-31 07:51:45.617 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:45.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:46 np0005603621 nova_compute[247399]: 2026-01-31 07:51:46.404 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 526 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.6 KiB/s wr, 174 op/s
Jan 31 02:51:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010848695154010794 of space, bias 1.0, pg target 3.2546085462032384 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001725932905267076 of space, bias 1.0, pg target 0.5126020728643216 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8478230998743718 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:51:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 02:51:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 31 02:51:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 31 02:51:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:51:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 515 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 435 KiB/s wr, 127 op/s
Jan 31 02:51:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:51:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:51:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:49.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:51:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:49.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:49 np0005603621 nova_compute[247399]: 2026-01-31 07:51:49.863 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845894.861713, 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:51:49 np0005603621 nova_compute[247399]: 2026-01-31 07:51:49.864 247403 INFO nova.compute.manager [-] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:51:49 np0005603621 nova_compute[247399]: 2026-01-31 07:51:49.898 247403 DEBUG nova.compute.manager [None req-dd056a7f-3306-4869-902a-f68500e435d5 - - - - - -] [instance: 866b0b10-d2ae-4e08-9efa-36b9c9c9f50d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:50 np0005603621 nova_compute[247399]: 2026-01-31 07:51:50.620 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:51:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:51:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 486 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 796 KiB/s wr, 139 op/s
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0ac5b5b3-e800-4f64-8bfb-45b51ede6501 does not exist
Jan 31 02:51:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4415341f-2b41-4de3-ab6b-b0fe7c692807 does not exist
Jan 31 02:51:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d193b2b1-718c-4212-acc3-a05668b2ce64 does not exist
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:51:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:51:51 np0005603621 nova_compute[247399]: 2026-01-31 07:51:51.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:51:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:51:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:51:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:51 np0005603621 podman[263637]: 2026-01-31 07:51:51.806834048 +0000 UTC m=+0.021704262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:51:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:51:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:51:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:51:52 np0005603621 podman[263637]: 2026-01-31 07:51:52.383903294 +0000 UTC m=+0.598773488 container create e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:51:52 np0005603621 systemd[1]: Started libpod-conmon-e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002.scope.
Jan 31 02:51:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:51:53 np0005603621 podman[263637]: 2026-01-31 07:51:53.052189434 +0000 UTC m=+1.267059648 container init e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_torvalds, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:51:53 np0005603621 podman[263637]: 2026-01-31 07:51:53.062332372 +0000 UTC m=+1.277202566 container start e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:51:53 np0005603621 loving_torvalds[263704]: 167 167
Jan 31 02:51:53 np0005603621 systemd[1]: libpod-e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002.scope: Deactivated successfully.
Jan 31 02:51:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 445 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 624 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 31 02:51:53 np0005603621 podman[263637]: 2026-01-31 07:51:53.256034903 +0000 UTC m=+1.470905137 container attach e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_torvalds, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:51:53 np0005603621 podman[263637]: 2026-01-31 07:51:53.256527779 +0000 UTC m=+1.471397983 container died e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 02:51:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-73c2343cd7516062e6f7a1020deaaf070836ba5bc235996f88ac44ab14038551-merged.mount: Deactivated successfully.
Jan 31 02:51:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:53.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:53.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:53 np0005603621 podman[263637]: 2026-01-31 07:51:53.964121043 +0000 UTC m=+2.178991237 container remove e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 02:51:53 np0005603621 systemd[1]: libpod-conmon-e098f5aeb382d5f3d7e147197cdfc6213651672eb2cf0ee2a26dced721709002.scope: Deactivated successfully.
Jan 31 02:51:54 np0005603621 podman[263731]: 2026-01-31 07:51:54.061416008 +0000 UTC m=+0.018872755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:51:54 np0005603621 podman[263731]: 2026-01-31 07:51:54.233034465 +0000 UTC m=+0.190491232 container create 310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 02:51:54 np0005603621 systemd[1]: Started libpod-conmon-310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb.scope.
Jan 31 02:51:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:51:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42585f8500a06bec865041c093cd42111c801e5f2c34344bee0df700650b0016/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42585f8500a06bec865041c093cd42111c801e5f2c34344bee0df700650b0016/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42585f8500a06bec865041c093cd42111c801e5f2c34344bee0df700650b0016/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42585f8500a06bec865041c093cd42111c801e5f2c34344bee0df700650b0016/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42585f8500a06bec865041c093cd42111c801e5f2c34344bee0df700650b0016/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:54 np0005603621 podman[263731]: 2026-01-31 07:51:54.512974483 +0000 UTC m=+0.470431250 container init 310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:51:54 np0005603621 podman[263731]: 2026-01-31 07:51:54.518867278 +0000 UTC m=+0.476324045 container start 310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:51:54 np0005603621 podman[263731]: 2026-01-31 07:51:54.581229746 +0000 UTC m=+0.538686523 container attach 310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 02:51:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 425 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 31 02:51:55 np0005603621 amazing_wright[263747]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:51:55 np0005603621 amazing_wright[263747]: --> relative data size: 1.0
Jan 31 02:51:55 np0005603621 amazing_wright[263747]: --> All data devices are unavailable
Jan 31 02:51:55 np0005603621 systemd[1]: libpod-310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb.scope: Deactivated successfully.
Jan 31 02:51:55 np0005603621 podman[263731]: 2026-01-31 07:51:55.296262993 +0000 UTC m=+1.253719720 container died 310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:51:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-42585f8500a06bec865041c093cd42111c801e5f2c34344bee0df700650b0016-merged.mount: Deactivated successfully.
Jan 31 02:51:55 np0005603621 nova_compute[247399]: 2026-01-31 07:51:55.622 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:51:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:55.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:55 np0005603621 podman[263731]: 2026-01-31 07:51:55.848606464 +0000 UTC m=+1.806063231 container remove 310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:51:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:55.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:55 np0005603621 systemd[1]: libpod-conmon-310707f4de32234e6e68b46e01f8259815dfea89b41f046ccb96d642280abfbb.scope: Deactivated successfully.
Jan 31 02:51:56 np0005603621 podman[263916]: 2026-01-31 07:51:56.305853279 +0000 UTC m=+0.017808431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:51:56 np0005603621 nova_compute[247399]: 2026-01-31 07:51:56.409 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:51:56 np0005603621 podman[263916]: 2026-01-31 07:51:56.473881864 +0000 UTC m=+0.185836996 container create dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:51:56 np0005603621 systemd[1]: Started libpod-conmon-dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90.scope.
Jan 31 02:51:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:51:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 409 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 02:51:57 np0005603621 podman[263916]: 2026-01-31 07:51:57.193164735 +0000 UTC m=+0.905119867 container init dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:51:57 np0005603621 podman[263916]: 2026-01-31 07:51:57.198794222 +0000 UTC m=+0.910749354 container start dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 02:51:57 np0005603621 elegant_vaughan[263932]: 167 167
Jan 31 02:51:57 np0005603621 systemd[1]: libpod-dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90.scope: Deactivated successfully.
Jan 31 02:51:57 np0005603621 podman[263916]: 2026-01-31 07:51:57.502217837 +0000 UTC m=+1.214172999 container attach dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 02:51:57 np0005603621 podman[263916]: 2026-01-31 07:51:57.503172987 +0000 UTC m=+1.215128129 container died dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:51:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:51:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:57.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:51:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1c033276465bdb33f1407a5e1ed132ef36c27b9a385c66f9fc48be0b04ccd5d0-merged.mount: Deactivated successfully.
Jan 31 02:51:58 np0005603621 podman[263916]: 2026-01-31 07:51:58.222031225 +0000 UTC m=+1.933986357 container remove dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:51:58 np0005603621 systemd[1]: libpod-conmon-dbd7e31d2d0445c47157dadf2c349142674c47622e3ce77a7a682ea1d00b0d90.scope: Deactivated successfully.
Jan 31 02:51:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:51:58 np0005603621 podman[263957]: 2026-01-31 07:51:58.369300449 +0000 UTC m=+0.027668830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:51:58 np0005603621 podman[263957]: 2026-01-31 07:51:58.50919589 +0000 UTC m=+0.167564251 container create 856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:51:58 np0005603621 systemd[1]: Started libpod-conmon-856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b.scope.
Jan 31 02:51:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:51:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191250fcdfb73d4648c4ed39dea76f1dcd2ded553941105fffe1231187b70ad1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191250fcdfb73d4648c4ed39dea76f1dcd2ded553941105fffe1231187b70ad1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191250fcdfb73d4648c4ed39dea76f1dcd2ded553941105fffe1231187b70ad1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191250fcdfb73d4648c4ed39dea76f1dcd2ded553941105fffe1231187b70ad1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:51:59 np0005603621 podman[263957]: 2026-01-31 07:51:59.003646023 +0000 UTC m=+0.662014384 container init 856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:51:59 np0005603621 podman[263957]: 2026-01-31 07:51:59.011031965 +0000 UTC m=+0.669400326 container start 856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:51:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 371 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Jan 31 02:51:59 np0005603621 podman[263957]: 2026-01-31 07:51:59.216796134 +0000 UTC m=+0.875164515 container attach 856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:51:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:51:59.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:51:59 np0005603621 practical_shamir[263973]: {
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:    "0": [
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:        {
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "devices": [
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "/dev/loop3"
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            ],
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "lv_name": "ceph_lv0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "lv_size": "7511998464",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "name": "ceph_lv0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "tags": {
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.cluster_name": "ceph",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.crush_device_class": "",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.encrypted": "0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.osd_id": "0",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.type": "block",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:                "ceph.vdo": "0"
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            },
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "type": "block",
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:            "vg_name": "ceph_vg0"
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:        }
Jan 31 02:51:59 np0005603621 practical_shamir[263973]:    ]
Jan 31 02:51:59 np0005603621 practical_shamir[263973]: }
Jan 31 02:51:59 np0005603621 systemd[1]: libpod-856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b.scope: Deactivated successfully.
Jan 31 02:51:59 np0005603621 podman[263957]: 2026-01-31 07:51:59.759618305 +0000 UTC m=+1.417986666 container died 856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:51:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:51:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:51:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:51:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:00 np0005603621 nova_compute[247399]: 2026-01-31 07:52:00.627 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 59 op/s
Jan 31 02:52:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:52:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 12K writes, 51K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 12K writes, 3364 syncs, 3.78 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4564 writes, 19K keys, 4564 commit groups, 1.0 writes per commit group, ingest: 21.94 MB, 0.04 MB/s
Interval WAL: 4563 writes, 1611 syncs, 2.83 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 02:52:01 np0005603621 nova_compute[247399]: 2026-01-31 07:52:01.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:01.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:01.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-191250fcdfb73d4648c4ed39dea76f1dcd2ded553941105fffe1231187b70ad1-merged.mount: Deactivated successfully.
Jan 31 02:52:02 np0005603621 podman[263957]: 2026-01-31 07:52:02.778575122 +0000 UTC m=+4.436943493 container remove 856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:52:02 np0005603621 systemd[1]: libpod-conmon-856e7f5cbabcab2725f0825599eca6081b292b9554664b5793e2636f31dba39b.scope: Deactivated successfully.
Jan 31 02:52:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.1 MiB/s wr, 52 op/s
Jan 31 02:52:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:03 np0005603621 podman[264138]: 2026-01-31 07:52:03.289949186 +0000 UTC m=+0.021376252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:52:03 np0005603621 podman[264138]: 2026-01-31 07:52:03.50315388 +0000 UTC m=+0.234580896 container create 27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:52:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:03.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:03 np0005603621 systemd[1]: Started libpod-conmon-27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4.scope.
Jan 31 02:52:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:52:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:03.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:04 np0005603621 podman[264138]: 2026-01-31 07:52:04.061148837 +0000 UTC m=+0.792575923 container init 27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 02:52:04 np0005603621 podman[264138]: 2026-01-31 07:52:04.072567176 +0000 UTC m=+0.803994222 container start 27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:52:04 np0005603621 systemd[1]: libpod-27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4.scope: Deactivated successfully.
Jan 31 02:52:04 np0005603621 awesome_golick[264173]: 167 167
Jan 31 02:52:04 np0005603621 conmon[264173]: conmon 27f112a15effa24128c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4.scope/container/memory.events
Jan 31 02:52:04 np0005603621 podman[264138]: 2026-01-31 07:52:04.244597026 +0000 UTC m=+0.976024062 container attach 27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:52:04 np0005603621 podman[264138]: 2026-01-31 07:52:04.245288978 +0000 UTC m=+0.976715994 container died 27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:52:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-24d8a95fc8f3756b07b20ea09ac6808d0e5e05654da958de625deef73809140d-merged.mount: Deactivated successfully.
Jan 31 02:52:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 36 op/s
Jan 31 02:52:05 np0005603621 nova_compute[247399]: 2026-01-31 07:52:05.629 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:05.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:06 np0005603621 podman[264138]: 2026-01-31 07:52:06.351838129 +0000 UTC m=+3.083265215 container remove 27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:52:06 np0005603621 systemd[1]: libpod-conmon-27f112a15effa24128c73a402e09277715bf5d8aab9120c8f6cfbc769aaa5dc4.scope: Deactivated successfully.
Jan 31 02:52:06 np0005603621 nova_compute[247399]: 2026-01-31 07:52:06.412 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:06 np0005603621 podman[264153]: 2026-01-31 07:52:06.441159103 +0000 UTC m=+2.890707980 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 02:52:06 np0005603621 podman[264154]: 2026-01-31 07:52:06.456943519 +0000 UTC m=+2.903325956 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 02:52:06 np0005603621 podman[264226]: 2026-01-31 07:52:06.493531368 +0000 UTC m=+0.030218490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:52:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 32 op/s
Jan 31 02:52:07 np0005603621 podman[264226]: 2026-01-31 07:52:07.234031665 +0000 UTC m=+0.770718747 container create 4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:52:07 np0005603621 systemd[1]: Started libpod-conmon-4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62.scope.
Jan 31 02:52:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:52:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bcfcd3a1cc2a5566eecd94257e2359e7e4adeea1d5b74031a0caa2065eedc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:52:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bcfcd3a1cc2a5566eecd94257e2359e7e4adeea1d5b74031a0caa2065eedc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:52:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bcfcd3a1cc2a5566eecd94257e2359e7e4adeea1d5b74031a0caa2065eedc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:52:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bcfcd3a1cc2a5566eecd94257e2359e7e4adeea1d5b74031a0caa2065eedc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:52:07 np0005603621 podman[264226]: 2026-01-31 07:52:07.641116735 +0000 UTC m=+1.177803827 container init 4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:52:07 np0005603621 podman[264226]: 2026-01-31 07:52:07.6511605 +0000 UTC m=+1.187847572 container start 4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:52:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:07.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:07 np0005603621 podman[264226]: 2026-01-31 07:52:07.784321151 +0000 UTC m=+1.321008243 container attach 4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 02:52:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]: {
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:        "osd_id": 0,
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:        "type": "bluestore"
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]:    }
Jan 31 02:52:08 np0005603621 stupefied_hofstadter[264244]: }
Jan 31 02:52:08 np0005603621 systemd[1]: libpod-4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62.scope: Deactivated successfully.
Jan 31 02:52:08 np0005603621 podman[264226]: 2026-01-31 07:52:08.399950378 +0000 UTC m=+1.936637440 container died 4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:52:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c6bcfcd3a1cc2a5566eecd94257e2359e7e4adeea1d5b74031a0caa2065eedc4-merged.mount: Deactivated successfully.
Jan 31 02:52:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 16 KiB/s wr, 33 op/s
Jan 31 02:52:09 np0005603621 podman[264226]: 2026-01-31 07:52:09.297216137 +0000 UTC m=+2.833903209 container remove 4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:52:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:52:09 np0005603621 systemd[1]: libpod-conmon-4dadc883a5947c7d046b755ed3c28a06be7483059e20df41da8615979efd8b62.scope: Deactivated successfully.
Jan 31 02:52:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:09.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:10 np0005603621 nova_compute[247399]: 2026-01-31 07:52:10.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 16 KiB/s wr, 31 op/s
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.678 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.679 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.679 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:52:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:11.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:11.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:11 np0005603621 nova_compute[247399]: 2026-01-31 07:52:11.985 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:52:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 31 02:52:13 np0005603621 nova_compute[247399]: 2026-01-31 07:52:13.195 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:13 np0005603621 nova_compute[247399]: 2026-01-31 07:52:13.250 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:52:13 np0005603621 nova_compute[247399]: 2026-01-31 07:52:13.251 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:52:13 np0005603621 nova_compute[247399]: 2026-01-31 07:52:13.252 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:13 np0005603621 nova_compute[247399]: 2026-01-31 07:52:13.252 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:13 np0005603621 nova_compute[247399]: 2026-01-31 07:52:13.253 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:52:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:13.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:13.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:52:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.271 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.272 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.272 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.272 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.272 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:52:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9bb1824e-4f27-4fd8-aee8-afd5bb1bd6ad does not exist
Jan 31 02:52:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1e866ce8-7682-42dd-ae34-78fab10a3112 does not exist
Jan 31 02:52:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b99c3998-a291-4913-99a2-7cb71ee080bb does not exist
Jan 31 02:52:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836017633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.717 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.870 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.870 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.873 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.874 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.978 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.980 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4331MB free_disk=20.83074951171875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:14 np0005603621 nova_compute[247399]: 2026-01-31 07:52:14.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s rd, 15 KiB/s wr, 7 op/s
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.186 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 37069dd7-a48f-42ca-8238-bf5baa1fa605 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.187 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 620b3405-251d-4545-b523-faa35768224b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.187 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.188 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.314 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:52:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.639 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1643779355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.722 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:15.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.730 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.777 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.782 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:52:15 np0005603621 nova_compute[247399]: 2026-01-31 07:52:15.782 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:15.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:16 np0005603621 nova_compute[247399]: 2026-01-31 07:52:16.461 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:16 np0005603621 nova_compute[247399]: 2026-01-31 07:52:16.784 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:16 np0005603621 nova_compute[247399]: 2026-01-31 07:52:16.784 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:16 np0005603621 nova_compute[247399]: 2026-01-31 07:52:16.784 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:16 np0005603621 nova_compute[247399]: 2026-01-31 07:52:16.785 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 726 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 31 02:52:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:17.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:17.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:18 np0005603621 nova_compute[247399]: 2026-01-31 07:52:18.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 69 op/s
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.243 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.435 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "620b3405-251d-4545-b523-faa35768224b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.436 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.436 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "620b3405-251d-4545-b523-faa35768224b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.436 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.436 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.438 247403 INFO nova.compute.manager [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Terminating instance#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.439 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.439 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:52:19 np0005603621 nova_compute[247399]: 2026-01-31 07:52:19.439 247403 DEBUG nova.network.neutron [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:52:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:19.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:19.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:20 np0005603621 nova_compute[247399]: 2026-01-31 07:52:20.116 247403 DEBUG nova.network.neutron [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:52:20 np0005603621 nova_compute[247399]: 2026-01-31 07:52:20.643 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:21 np0005603621 nova_compute[247399]: 2026-01-31 07:52:21.173 247403 DEBUG nova.network.neutron [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 68 op/s
Jan 31 02:52:21 np0005603621 nova_compute[247399]: 2026-01-31 07:52:21.204 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-620b3405-251d-4545-b523-faa35768224b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:52:21 np0005603621 nova_compute[247399]: 2026-01-31 07:52:21.205 247403 DEBUG nova.compute.manager [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:52:21 np0005603621 nova_compute[247399]: 2026-01-31 07:52:21.464 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:21.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:21 np0005603621 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 31 02:52:21 np0005603621 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000012.scope: Consumed 16.053s CPU time.
Jan 31 02:52:21 np0005603621 systemd-machined[212769]: Machine qemu-8-instance-00000012 terminated.
Jan 31 02:52:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:21.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:22 np0005603621 nova_compute[247399]: 2026-01-31 07:52:22.027 247403 INFO nova.virt.libvirt.driver [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance destroyed successfully.#033[00m
Jan 31 02:52:22 np0005603621 nova_compute[247399]: 2026-01-31 07:52:22.029 247403 DEBUG nova.objects.instance [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'resources' on Instance uuid 620b3405-251d-4545-b523-faa35768224b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:52:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Jan 31 02:52:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:23 np0005603621 nova_compute[247399]: 2026-01-31 07:52:23.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:52:23.529 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:52:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:52:23.531 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:52:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:23.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 352 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 75 op/s
Jan 31 02:52:25 np0005603621 nova_compute[247399]: 2026-01-31 07:52:25.645 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:25.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:25.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:26 np0005603621 nova_compute[247399]: 2026-01-31 07:52:26.466 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:52:26.534 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:52:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 335 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 82 op/s
Jan 31 02:52:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:27.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:27.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.192 247403 INFO nova.virt.libvirt.driver [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Deleting instance files /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b_del#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.193 247403 INFO nova.virt.libvirt.driver [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Deletion of /var/lib/nova/instances/620b3405-251d-4545-b523-faa35768224b_del complete#033[00m
Jan 31 02:52:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.360 247403 INFO nova.compute.manager [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 620b3405-251d-4545-b523-faa35768224b] Took 7.16 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.361 247403 DEBUG oslo.service.loopingcall [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.361 247403 DEBUG nova.compute.manager [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.361 247403 DEBUG nova.network.neutron [-] [instance: 620b3405-251d-4545-b523-faa35768224b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.645 247403 DEBUG nova.network.neutron [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.664 247403 DEBUG nova.network.neutron [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.694 247403 INFO nova.compute.manager [-] [instance: 620b3405-251d-4545-b523-faa35768224b] Took 0.33 seconds to deallocate network for instance.#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.781 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.782 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:28 np0005603621 nova_compute[247399]: 2026-01-31 07:52:28.925 247403 DEBUG oslo_concurrency.processutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:52:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 289 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.7 MiB/s wr, 103 op/s
Jan 31 02:52:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1299932480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:29 np0005603621 nova_compute[247399]: 2026-01-31 07:52:29.340 247403 DEBUG oslo_concurrency.processutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:52:29 np0005603621 nova_compute[247399]: 2026-01-31 07:52:29.347 247403 DEBUG nova.compute.provider_tree [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:52:29 np0005603621 nova_compute[247399]: 2026-01-31 07:52:29.569 247403 DEBUG nova.scheduler.client.report [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:52:29 np0005603621 nova_compute[247399]: 2026-01-31 07:52:29.597 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:29 np0005603621 nova_compute[247399]: 2026-01-31 07:52:29.627 247403 INFO nova.scheduler.client.report [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Deleted allocations for instance 620b3405-251d-4545-b523-faa35768224b#033[00m
Jan 31 02:52:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:29.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:29 np0005603621 nova_compute[247399]: 2026-01-31 07:52:29.874 247403 DEBUG oslo_concurrency.lockutils [None req-4b710777-6682-4469-a6c3-1e31058b9d8d 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "620b3405-251d-4545-b523-faa35768224b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:29.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:52:30.472 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:52:30.473 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:52:30.473 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:30 np0005603621 nova_compute[247399]: 2026-01-31 07:52:30.648 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 289 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Jan 31 02:52:31 np0005603621 nova_compute[247399]: 2026-01-31 07:52:31.467 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:31.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:31.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 297 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 31 02:52:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:33.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 303 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 31 02:52:35 np0005603621 nova_compute[247399]: 2026-01-31 07:52:35.649 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:52:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:35.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:52:36 np0005603621 nova_compute[247399]: 2026-01-31 07:52:36.469 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:37 np0005603621 nova_compute[247399]: 2026-01-31 07:52:37.027 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845942.0251782, 620b3405-251d-4545-b523-faa35768224b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:52:37 np0005603621 nova_compute[247399]: 2026-01-31 07:52:37.027 247403 INFO nova.compute.manager [-] [instance: 620b3405-251d-4545-b523-faa35768224b] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:52:37 np0005603621 nova_compute[247399]: 2026-01-31 07:52:37.087 247403 DEBUG nova.compute.manager [None req-4f3b5654-8d96-43e8-b003-68d277b587a1 - - - - - -] [instance: 620b3405-251d-4545-b523-faa35768224b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:52:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 303 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 31 02:52:37 np0005603621 podman[264530]: 2026-01-31 07:52:37.51996096 +0000 UTC m=+0.075535093 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:52:37 np0005603621 podman[264531]: 2026-01-31 07:52:37.520322961 +0000 UTC m=+0.073552930 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 02:52:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:37.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:52:38
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:52:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:52:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 280 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 31 02:52:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:40 np0005603621 nova_compute[247399]: 2026-01-31 07:52:40.653 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 280 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 201 KiB/s rd, 474 KiB/s wr, 38 op/s
Jan 31 02:52:41 np0005603621 nova_compute[247399]: 2026-01-31 07:52:41.471 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:52:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:41.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:41.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.638 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "37069dd7-a48f-42ca-8238-bf5baa1fa605" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.639 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "37069dd7-a48f-42ca-8238-bf5baa1fa605" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.639 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "37069dd7-a48f-42ca-8238-bf5baa1fa605-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.639 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "37069dd7-a48f-42ca-8238-bf5baa1fa605-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.640 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "37069dd7-a48f-42ca-8238-bf5baa1fa605-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.641 247403 INFO nova.compute.manager [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Terminating instance#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.642 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.642 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquired lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:52:42 np0005603621 nova_compute[247399]: 2026-01-31 07:52:42.643 247403 DEBUG nova.network.neutron [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:52:43 np0005603621 nova_compute[247399]: 2026-01-31 07:52:43.005 247403 DEBUG nova.network.neutron [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:52:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 222 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 217 KiB/s rd, 474 KiB/s wr, 60 op/s
Jan 31 02:52:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:43.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:43.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:44 np0005603621 nova_compute[247399]: 2026-01-31 07:52:44.313 247403 DEBUG nova.network.neutron [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:52:44 np0005603621 nova_compute[247399]: 2026-01-31 07:52:44.413 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Releasing lock "refresh_cache-37069dd7-a48f-42ca-8238-bf5baa1fa605" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:52:44 np0005603621 nova_compute[247399]: 2026-01-31 07:52:44.414 247403 DEBUG nova.compute.manager [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:52:44 np0005603621 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 31 02:52:44 np0005603621 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000e.scope: Consumed 22.087s CPU time.
Jan 31 02:52:44 np0005603621 systemd-machined[212769]: Machine qemu-5-instance-0000000e terminated.
Jan 31 02:52:44 np0005603621 nova_compute[247399]: 2026-01-31 07:52:44.631 247403 INFO nova.virt.libvirt.driver [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance destroyed successfully.#033[00m
Jan 31 02:52:44 np0005603621 nova_compute[247399]: 2026-01-31 07:52:44.632 247403 DEBUG nova.objects.instance [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lazy-loading 'resources' on Instance uuid 37069dd7-a48f-42ca-8238-bf5baa1fa605 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:52:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 222 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 19 KiB/s wr, 36 op/s
Jan 31 02:52:45 np0005603621 nova_compute[247399]: 2026-01-31 07:52:45.656 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.274 247403 INFO nova.virt.libvirt.driver [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Deleting instance files /var/lib/nova/instances/37069dd7-a48f-42ca-8238-bf5baa1fa605_del
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.275 247403 INFO nova.virt.libvirt.driver [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Deletion of /var/lib/nova/instances/37069dd7-a48f-42ca-8238-bf5baa1fa605_del complete
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.473 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.695 247403 INFO nova.compute.manager [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Took 2.28 seconds to destroy the instance on the hypervisor.
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.696 247403 DEBUG oslo.service.loopingcall [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.696 247403 DEBUG nova.compute.manager [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 02:52:46 np0005603621 nova_compute[247399]: 2026-01-31 07:52:46.696 247403 DEBUG nova.network.neutron [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 02:52:47 np0005603621 nova_compute[247399]: 2026-01-31 07:52:47.179 247403 DEBUG nova.network.neutron [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:52:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 188 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 32 op/s
Jan 31 02:52:47 np0005603621 nova_compute[247399]: 2026-01-31 07:52:47.204 247403 DEBUG nova.network.neutron [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:52:47 np0005603621 nova_compute[247399]: 2026-01-31 07:52:47.258 247403 INFO nova.compute.manager [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Took 0.56 seconds to deallocate network for instance.
Jan 31 02:52:47 np0005603621 nova_compute[247399]: 2026-01-31 07:52:47.308 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:52:47 np0005603621 nova_compute[247399]: 2026-01-31 07:52:47.308 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:52:47 np0005603621 nova_compute[247399]: 2026-01-31 07:52:47.389 247403 DEBUG oslo_concurrency.processutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:52:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:47.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:52:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048995445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:52:48 np0005603621 nova_compute[247399]: 2026-01-31 07:52:48.031 247403 DEBUG oslo_concurrency.processutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:52:48 np0005603621 nova_compute[247399]: 2026-01-31 07:52:48.039 247403 DEBUG nova.compute.provider_tree [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:52:48 np0005603621 nova_compute[247399]: 2026-01-31 07:52:48.087 247403 DEBUG nova.scheduler.client.report [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:52:48 np0005603621 nova_compute[247399]: 2026-01-31 07:52:48.135 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:48 np0005603621 nova_compute[247399]: 2026-01-31 07:52:48.181 247403 INFO nova.scheduler.client.report [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Deleted allocations for instance 37069dd7-a48f-42ca-8238-bf5baa1fa605
Jan 31 02:52:48 np0005603621 nova_compute[247399]: 2026-01-31 07:52:48.263 247403 DEBUG oslo_concurrency.lockutils [None req-fd4adeb3-670f-4bef-a01d-5457d6a52a3c 8a59efd78e244f44a1c70650f82a2c50 1627a71b855b4032b51e234e44a9d570 - - default default] Lock "37069dd7-a48f-42ca-8238-bf5baa1fa605" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:52:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033887972850362926 of space, bias 1.0, pg target 1.0166391855108878 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:52:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 02:52:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 141 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 17 KiB/s wr, 59 op/s
Jan 31 02:52:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:50 np0005603621 nova_compute[247399]: 2026-01-31 07:52:50.659 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 141 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Jan 31 02:52:51 np0005603621 nova_compute[247399]: 2026-01-31 07:52:51.475 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:51.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:51.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 141 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Jan 31 02:52:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 31 02:52:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:53.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 31 02:52:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:52:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 141 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 3.1 KiB/s wr, 32 op/s
Jan 31 02:52:55 np0005603621 nova_compute[247399]: 2026-01-31 07:52:55.661 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:55.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:56 np0005603621 nova_compute[247399]: 2026-01-31 07:52:56.477 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:52:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 141 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 2.8 KiB/s wr, 36 op/s
Jan 31 02:52:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:57.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:57.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:52:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 111 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 3.2 KiB/s wr, 37 op/s
Jan 31 02:52:59 np0005603621 nova_compute[247399]: 2026-01-31 07:52:59.629 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845964.6282394, 37069dd7-a48f-42ca-8238-bf5baa1fa605 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:52:59 np0005603621 nova_compute[247399]: 2026-01-31 07:52:59.629 247403 INFO nova.compute.manager [-] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] VM Stopped (Lifecycle Event)
Jan 31 02:52:59 np0005603621 nova_compute[247399]: 2026-01-31 07:52:59.650 247403 DEBUG nova.compute.manager [None req-d62ee332-0aa7-4125-8527-c67c53c1b91d - - - - - -] [instance: 37069dd7-a48f-42ca-8238-bf5baa1fa605] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:52:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:52:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:52:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:52:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:52:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:52:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:52:59.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:00 np0005603621 nova_compute[247399]: 2026-01-31 07:53:00.664 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 111 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 426 B/s wr, 9 op/s
Jan 31 02:53:01 np0005603621 nova_compute[247399]: 2026-01-31 07:53:01.480 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:01.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:01.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 62 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 767 B/s wr, 17 op/s
Jan 31 02:53:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:03.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 62 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.462 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "052bb613-7828-4203-85b4-7a5de0658240" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.463 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "052bb613-7828-4203-85b4-7a5de0658240" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.513 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.641 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.641 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.647 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.648 247403 INFO nova.compute.claims [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Claim successful on node compute-0.ctlplane.example.com
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.667 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:05.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:05 np0005603621 nova_compute[247399]: 2026-01-31 07:53:05.791 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:53:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:05.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788983213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.240 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.245 247403 DEBUG nova.compute.provider_tree [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.273 247403 DEBUG nova.scheduler.client.report [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.310 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.311 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.379 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.379 247403 DEBUG nova.network.neutron [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.433 247403 INFO nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.482 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.494 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.628 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.629 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.630 247403 INFO nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Creating image(s)#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.655 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.677 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.705 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.709 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.759 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.761 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.762 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.762 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.788 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:06 np0005603621 nova_compute[247399]: 2026-01-31 07:53:06.792 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 052bb613-7828-4203-85b4-7a5de0658240_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 62 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 02:53:07 np0005603621 nova_compute[247399]: 2026-01-31 07:53:07.203 247403 DEBUG nova.network.neutron [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:53:07 np0005603621 nova_compute[247399]: 2026-01-31 07:53:07.204 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:53:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:07.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.041 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 052bb613-7828-4203-85b4-7a5de0658240_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.112 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] resizing rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:53:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:53:08 np0005603621 podman[264856]: 2026-01-31 07:53:08.503734728 +0000 UTC m=+0.057515497 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.508 247403 DEBUG nova.objects.instance [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lazy-loading 'migration_context' on Instance uuid 052bb613-7828-4203-85b4-7a5de0658240 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.526 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.526 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Ensure instance console log exists: /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.527 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.527 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.527 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.529 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:53:08 np0005603621 podman[264857]: 2026-01-31 07:53:08.531292343 +0000 UTC m=+0.082469119 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.534 247403 WARNING nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.539 247403 DEBUG nova.virt.libvirt.host [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.539 247403 DEBUG nova.virt.libvirt.host [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.542 247403 DEBUG nova.virt.libvirt.host [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.543 247403 DEBUG nova.virt.libvirt.host [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.544 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.544 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.544 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.544 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.545 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.546 247403 DEBUG nova.virt.hardware [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.548 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:53:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691730716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:53:08 np0005603621 nova_compute[247399]: 2026-01-31 07:53:08.993 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.018 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.022 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 95 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.5 MiB/s wr, 23 op/s
Jan 31 02:53:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:53:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777429269' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.484 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.486 247403 DEBUG nova.objects.instance [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lazy-loading 'pci_devices' on Instance uuid 052bb613-7828-4203-85b4-7a5de0658240 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.519 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <uuid>052bb613-7828-4203-85b4-7a5de0658240</uuid>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <name>instance-00000018</name>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerExternalEventsTest-server-819259726</nova:name>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:53:08</nova:creationTime>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:user uuid="52dfb714105e45afac868e6adeaa7773">tempest-ServerExternalEventsTest-145629491-project-member</nova:user>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <nova:project uuid="16e51fafae224def941345353454a8bc">tempest-ServerExternalEventsTest-145629491</nova:project>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <entry name="serial">052bb613-7828-4203-85b4-7a5de0658240</entry>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <entry name="uuid">052bb613-7828-4203-85b4-7a5de0658240</entry>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/052bb613-7828-4203-85b4-7a5de0658240_disk">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/052bb613-7828-4203-85b4-7a5de0658240_disk.config">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/console.log" append="off"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:53:09 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:53:09 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:53:09 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:53:09 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.621 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.622 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.623 247403 INFO nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Using config drive
Jan 31 02:53:09 np0005603621 nova_compute[247399]: 2026-01-31 07:53:09.652 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:53:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:09.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:09.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.126 247403 INFO nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Creating config drive at /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/disk.config
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.131 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptfv0t_90 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:53:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 31 02:53:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 31 02:53:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.256 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptfv0t_90" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.290 247403 DEBUG nova.storage.rbd_utils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] rbd image 052bb613-7828-4203-85b4-7a5de0658240_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.297 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/disk.config 052bb613-7828-4203-85b4-7a5de0658240_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.574 247403 DEBUG oslo_concurrency.processutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/disk.config 052bb613-7828-4203-85b4-7a5de0658240_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.575 247403 INFO nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Deleting local config drive /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240/disk.config because it was imported into RBD.
Jan 31 02:53:10 np0005603621 systemd-machined[212769]: New machine qemu-10-instance-00000018.
Jan 31 02:53:10 np0005603621 systemd[1]: Started Virtual Machine qemu-10-instance-00000018.
Jan 31 02:53:10 np0005603621 nova_compute[247399]: 2026-01-31 07:53:10.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 95 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 1.7 MiB/s wr, 24 op/s
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.208 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845991.2082572, 052bb613-7828-4203-85b4-7a5de0658240 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.209 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] VM Resumed (Lifecycle Event)
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.211 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.212 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.216 247403 INFO nova.virt.libvirt.driver [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance spawned successfully.
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.217 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.235 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.241 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.245 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.245 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.246 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.246 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.247 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.247 247403 DEBUG nova.virt.libvirt.driver [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.287 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.288 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769845991.2112508, 052bb613-7828-4203-85b4-7a5de0658240 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.288 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] VM Started (Lifecycle Event)
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.323 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.327 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.353 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.362 247403 INFO nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Took 4.73 seconds to spawn the instance on the hypervisor.
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.362 247403 DEBUG nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.453 247403 INFO nova.compute.manager [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Took 5.84 seconds to build instance.
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.484 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:11 np0005603621 nova_compute[247399]: 2026-01-31 07:53:11.513 247403 DEBUG oslo_concurrency.lockutils [None req-43fc893f-f348-436a-b9ba-d36cdd7fbf59 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "052bb613-7828-4203-85b4-7a5de0658240" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:53:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.219 247403 DEBUG nova.compute.manager [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.220 247403 DEBUG nova.compute.manager [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.220 247403 DEBUG oslo_concurrency.lockutils [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] Acquiring lock "refresh_cache-052bb613-7828-4203-85b4-7a5de0658240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.220 247403 DEBUG oslo_concurrency.lockutils [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] Acquired lock "refresh_cache-052bb613-7828-4203-85b4-7a5de0658240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.221 247403 DEBUG nova.network.neutron [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.526 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "052bb613-7828-4203-85b4-7a5de0658240" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.526 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "052bb613-7828-4203-85b4-7a5de0658240" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.527 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "052bb613-7828-4203-85b4-7a5de0658240-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.527 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "052bb613-7828-4203-85b4-7a5de0658240-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.527 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "052bb613-7828-4203-85b4-7a5de0658240-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.528 247403 INFO nova.compute.manager [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Terminating instance
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.529 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "refresh_cache-052bb613-7828-4203-85b4-7a5de0658240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:53:12 np0005603621 nova_compute[247399]: 2026-01-31 07:53:12.621 247403 DEBUG nova.network.neutron [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.163 247403 DEBUG nova.network.neutron [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.196 247403 DEBUG oslo_concurrency.lockutils [None req-4aa56ae1-f52a-4c5e-a0ce-61ed83e0571f 9b862c5fdc1f4295a6d3dd1818ec550b 33c94545347344cca3116311306ad73c - - default default] Releasing lock "refresh_cache-052bb613-7828-4203-85b4-7a5de0658240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.198 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquired lock "refresh_cache-052bb613-7828-4203-85b4-7a5de0658240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.199 247403 DEBUG nova.network.neutron [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:53:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 88 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.223 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.224 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.227 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.227 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:53:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:13 np0005603621 nova_compute[247399]: 2026-01-31 07:53:13.697 247403 DEBUG nova.network.neutron [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:53:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:13.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:14 np0005603621 nova_compute[247399]: 2026-01-31 07:53:14.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:14 np0005603621 nova_compute[247399]: 2026-01-31 07:53:14.598 247403 DEBUG nova.network.neutron [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:14 np0005603621 nova_compute[247399]: 2026-01-31 07:53:14.678 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Releasing lock "refresh_cache-052bb613-7828-4203-85b4-7a5de0658240" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:53:14 np0005603621 nova_compute[247399]: 2026-01-31 07:53:14.679 247403 DEBUG nova.compute.manager [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:53:15 np0005603621 nova_compute[247399]: 2026-01-31 07:53:15.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 88 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 31 02:53:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:53:15 np0005603621 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 31 02:53:15 np0005603621 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000018.scope: Consumed 4.073s CPU time.
Jan 31 02:53:15 np0005603621 systemd-machined[212769]: Machine qemu-10-instance-00000018 terminated.
Jan 31 02:53:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:53:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:53:15 np0005603621 nova_compute[247399]: 2026-01-31 07:53:15.674 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603621 nova_compute[247399]: 2026-01-31 07:53:15.694 247403 INFO nova.virt.libvirt.driver [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance destroyed successfully.#033[00m
Jan 31 02:53:15 np0005603621 nova_compute[247399]: 2026-01-31 07:53:15.694 247403 DEBUG nova.objects.instance [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lazy-loading 'resources' on Instance uuid 052bb613-7828-4203-85b4-7a5de0658240 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:15.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:53:15.819 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:53:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:53:15.820 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:53:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:53:15 np0005603621 nova_compute[247399]: 2026-01-31 07:53:15.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.229 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.229 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.230 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.230 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.230 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.487 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3880034915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.652 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.757 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.757 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:53:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.882 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.883 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4761MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.883 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.883 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.990 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 052bb613-7828-4203-85b4-7a5de0658240 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.990 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:53:16 np0005603621 nova_compute[247399]: 2026-01-31 07:53:16.991 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:53:17 np0005603621 nova_compute[247399]: 2026-01-31 07:53:17.037 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583831540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:17 np0005603621 nova_compute[247399]: 2026-01-31 07:53:17.478 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:17 np0005603621 nova_compute[247399]: 2026-01-31 07:53:17.482 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:53:17 np0005603621 nova_compute[247399]: 2026-01-31 07:53:17.513 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:53:17 np0005603621 nova_compute[247399]: 2026-01-31 07:53:17.537 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:53:17 np0005603621 nova_compute[247399]: 2026-01-31 07:53:17.537 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:17.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:17 np0005603621 podman[265608]: 2026-01-31 07:53:17.807166029 +0000 UTC m=+0.018940166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:18 np0005603621 podman[265608]: 2026-01-31 07:53:18.052727078 +0000 UTC m=+0.264501215 container create 37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:53:18 np0005603621 systemd[1]: Started libpod-conmon-37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1.scope.
Jan 31 02:53:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:53:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 31 02:53:18 np0005603621 podman[265608]: 2026-01-31 07:53:18.528629008 +0000 UTC m=+0.740403175 container init 37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:53:18 np0005603621 podman[265608]: 2026-01-31 07:53:18.533722848 +0000 UTC m=+0.745496985 container start 37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:53:18 np0005603621 goofy_williams[265623]: 167 167
Jan 31 02:53:18 np0005603621 systemd[1]: libpod-37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1.scope: Deactivated successfully.
Jan 31 02:53:18 np0005603621 nova_compute[247399]: 2026-01-31 07:53:18.538 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:18 np0005603621 nova_compute[247399]: 2026-01-31 07:53:18.539 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:18 np0005603621 nova_compute[247399]: 2026-01-31 07:53:18.539 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:18 np0005603621 podman[265608]: 2026-01-31 07:53:18.705610685 +0000 UTC m=+0.917384822 container attach 37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 31 02:53:18 np0005603621 podman[265608]: 2026-01-31 07:53:18.706431211 +0000 UTC m=+0.918205348 container died 37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:53:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:53:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 31 02:53:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 88 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 409 KiB/s wr, 162 op/s
Jan 31 02:53:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 31 02:53:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ed45b9ce85dd4ccf0d924ca0c666364b086755eec5ce78f2c20f403ca13947b4-merged.mount: Deactivated successfully.
Jan 31 02:53:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:19.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:19.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:20 np0005603621 nova_compute[247399]: 2026-01-31 07:53:20.676 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:20 np0005603621 podman[265608]: 2026-01-31 07:53:20.770997214 +0000 UTC m=+2.982771351 container remove 37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_williams, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:53:20 np0005603621 systemd[1]: libpod-conmon-37a92114599f1be37d489eedf285a0dd2a55005508ff986af0bfaf4fa71d96d1.scope: Deactivated successfully.
Jan 31 02:53:20 np0005603621 podman[265650]: 2026-01-31 07:53:20.871124328 +0000 UTC m=+0.020586477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:21 np0005603621 nova_compute[247399]: 2026-01-31 07:53:21.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:53:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 88 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 409 KiB/s wr, 162 op/s
Jan 31 02:53:21 np0005603621 podman[265650]: 2026-01-31 07:53:21.26743984 +0000 UTC m=+0.416901969 container create 83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:53:21 np0005603621 systemd[1]: Started libpod-conmon-83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b.scope.
Jan 31 02:53:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e835d8428572acb4dde68e8855e3245c7a705a785050bdb403a0db9c8e7df14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e835d8428572acb4dde68e8855e3245c7a705a785050bdb403a0db9c8e7df14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e835d8428572acb4dde68e8855e3245c7a705a785050bdb403a0db9c8e7df14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e835d8428572acb4dde68e8855e3245c7a705a785050bdb403a0db9c8e7df14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:53:21 np0005603621 nova_compute[247399]: 2026-01-31 07:53:21.530 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:21 np0005603621 podman[265650]: 2026-01-31 07:53:21.62003798 +0000 UTC m=+0.769500119 container init 83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:53:21 np0005603621 podman[265650]: 2026-01-31 07:53:21.626985377 +0000 UTC m=+0.776447506 container start 83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:53:21 np0005603621 podman[265650]: 2026-01-31 07:53:21.760800139 +0000 UTC m=+0.910262288 container attach 83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:53:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:21.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 02:53:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:53:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:22 np0005603621 naughty_turing[265666]: [
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:    {
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "available": false,
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "ceph_device": false,
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "lsm_data": {},
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "lvs": [],
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "path": "/dev/sr0",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "rejected_reasons": [
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "Has a FileSystem",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "Insufficient space (<5GB)"
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        ],
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        "sys_api": {
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "actuators": null,
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "device_nodes": "sr0",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "devname": "sr0",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "human_readable_size": "482.00 KB",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "id_bus": "ata",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "model": "QEMU DVD-ROM",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "nr_requests": "2",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "parent": "/dev/sr0",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "partitions": {},
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "path": "/dev/sr0",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "removable": "1",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "rev": "2.5+",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "ro": "0",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "rotational": "1",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "sas_address": "",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "sas_device_handle": "",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "scheduler_mode": "mq-deadline",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "sectors": 0,
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "sectorsize": "2048",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "size": 493568.0,
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "support_discard": "2048",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "type": "disk",
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:            "vendor": "QEMU"
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:        }
Jan 31 02:53:22 np0005603621 naughty_turing[265666]:    }
Jan 31 02:53:22 np0005603621 naughty_turing[265666]: ]
Jan 31 02:53:22 np0005603621 systemd[1]: libpod-83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b.scope: Deactivated successfully.
Jan 31 02:53:22 np0005603621 systemd[1]: libpod-83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b.scope: Consumed 1.016s CPU time.
Jan 31 02:53:22 np0005603621 podman[265650]: 2026-01-31 07:53:22.688511703 +0000 UTC m=+1.837973832 container died 83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:53:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 66 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 793 KiB/s rd, 307 B/s wr, 54 op/s
Jan 31 02:53:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 02:53:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2e835d8428572acb4dde68e8855e3245c7a705a785050bdb403a0db9c8e7df14-merged.mount: Deactivated successfully.
Jan 31 02:53:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:23.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:53:23.822 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:53:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:23.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:24 np0005603621 podman[265650]: 2026-01-31 07:53:24.522994375 +0000 UTC m=+3.672456504 container remove 83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:53:24 np0005603621 systemd[1]: libpod-conmon-83bd2021af01ac17ebb2e36a2af17c43cb120ae614ecca3bbfbb1c6f48033e1b.scope: Deactivated successfully.
Jan 31 02:53:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:53:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:53:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 54 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 400 KiB/s rd, 614 B/s wr, 41 op/s
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:53:25 np0005603621 nova_compute[247399]: 2026-01-31 07:53:25.679 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:53:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:25.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 02:53:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:53:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3eb9ed21-3f36-471d-a555-11a9e28552a4 does not exist
Jan 31 02:53:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e8f0aa9a-b9d9-4027-a941-57178ac2e4ad does not exist
Jan 31 02:53:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d41a5e51-4f31-4974-89d1-68a36f9238d7 does not exist
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.583058818 +0000 UTC m=+0.071398082 container create 664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:53:26 np0005603621 systemd[1]: Started libpod-conmon-664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9.scope.
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.538279582 +0000 UTC m=+0.026618866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.648 247403 INFO nova.virt.libvirt.driver [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Deleting instance files /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240_del#033[00m
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.649 247403 INFO nova.virt.libvirt.driver [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Deletion of /var/lib/nova/instances/052bb613-7828-4203-85b4-7a5de0658240_del complete#033[00m
Jan 31 02:53:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.678604277 +0000 UTC m=+0.166943541 container init 664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.68539922 +0000 UTC m=+0.173738484 container start 664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:53:26 np0005603621 beautiful_borg[266998]: 167 167
Jan 31 02:53:26 np0005603621 systemd[1]: libpod-664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9.scope: Deactivated successfully.
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.695534129 +0000 UTC m=+0.183873393 container attach 664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.696700605 +0000 UTC m=+0.185039879 container died 664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.724 247403 INFO nova.compute.manager [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Took 12.04 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.725 247403 DEBUG oslo.service.loopingcall [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.725 247403 DEBUG nova.compute.manager [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:53:26 np0005603621 nova_compute[247399]: 2026-01-31 07:53:26.725 247403 DEBUG nova.network.neutron [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:53:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2b000a0f85e00292d0f1090c6d12ee401a576b4c0660cd9225a9b2c1df8ddab4-merged.mount: Deactivated successfully.
Jan 31 02:53:26 np0005603621 podman[266982]: 2026-01-31 07:53:26.837700672 +0000 UTC m=+0.326039926 container remove 664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:26 np0005603621 systemd[1]: libpod-conmon-664437654d08b43a213bb28e6eee6819c63f40848aca2e0f17837c1f30a9fbb9.scope: Deactivated successfully.
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:53:26 np0005603621 podman[267024]: 2026-01-31 07:53:26.986576436 +0000 UTC m=+0.047153761 container create 4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:27 np0005603621 systemd[1]: Started libpod-conmon-4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547.scope.
Jan 31 02:53:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a50ed15141c6f8d5a3a65b1511e131e6e2a1efca56d84cc4bd6e62184c631f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a50ed15141c6f8d5a3a65b1511e131e6e2a1efca56d84cc4bd6e62184c631f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a50ed15141c6f8d5a3a65b1511e131e6e2a1efca56d84cc4bd6e62184c631f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a50ed15141c6f8d5a3a65b1511e131e6e2a1efca56d84cc4bd6e62184c631f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a50ed15141c6f8d5a3a65b1511e131e6e2a1efca56d84cc4bd6e62184c631f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:27 np0005603621 podman[267024]: 2026-01-31 07:53:26.961194939 +0000 UTC m=+0.021772284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:27 np0005603621 podman[267024]: 2026-01-31 07:53:27.077550261 +0000 UTC m=+0.138127616 container init 4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:53:27 np0005603621 podman[267024]: 2026-01-31 07:53:27.083948462 +0000 UTC m=+0.144525777 container start 4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:53:27 np0005603621 podman[267024]: 2026-01-31 07:53:27.10517962 +0000 UTC m=+0.165756945 container attach 4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.229 247403 DEBUG nova.network.neutron [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.245 247403 DEBUG nova.network.neutron [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.278 247403 INFO nova.compute.manager [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Took 0.55 seconds to deallocate network for instance.#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.339 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.339 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.486 247403 DEBUG oslo_concurrency.processutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:27.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:27 np0005603621 elated_sammet[267040]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:53:27 np0005603621 elated_sammet[267040]: --> relative data size: 1.0
Jan 31 02:53:27 np0005603621 elated_sammet[267040]: --> All data devices are unavailable
Jan 31 02:53:27 np0005603621 systemd[1]: libpod-4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547.scope: Deactivated successfully.
Jan 31 02:53:27 np0005603621 podman[267024]: 2026-01-31 07:53:27.889006937 +0000 UTC m=+0.949584272 container died 4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 02:53:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266089272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.922 247403 DEBUG oslo_concurrency.processutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.927 247403 DEBUG nova.compute.provider_tree [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.958 247403 DEBUG nova.scheduler.client.report [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:53:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:27 np0005603621 nova_compute[247399]: 2026-01-31 07:53:27.993 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:28 np0005603621 nova_compute[247399]: 2026-01-31 07:53:28.066 247403 INFO nova.scheduler.client.report [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Deleted allocations for instance 052bb613-7828-4203-85b4-7a5de0658240#033[00m
Jan 31 02:53:28 np0005603621 nova_compute[247399]: 2026-01-31 07:53:28.196 247403 DEBUG oslo_concurrency.lockutils [None req-bef83c54-8305-4e58-8ecb-5deb8d0c56dd 52dfb714105e45afac868e6adeaa7773 16e51fafae224def941345353454a8bc - - default default] Lock "052bb613-7828-4203-85b4-7a5de0658240" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e7a50ed15141c6f8d5a3a65b1511e131e6e2a1efca56d84cc4bd6e62184c631f-merged.mount: Deactivated successfully.
Jan 31 02:53:28 np0005603621 podman[267024]: 2026-01-31 07:53:28.669672335 +0000 UTC m=+1.730249670 container remove 4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sammet, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:53:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:28 np0005603621 systemd[1]: libpod-conmon-4c7ba3d10523451c5da8ffda15939d8e2b63019834333dd1863cb64eef93f547.scope: Deactivated successfully.
Jan 31 02:53:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Jan 31 02:53:29 np0005603621 podman[267233]: 2026-01-31 07:53:29.259947776 +0000 UTC m=+0.068166532 container create 646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:53:29 np0005603621 podman[267233]: 2026-01-31 07:53:29.212873118 +0000 UTC m=+0.021091894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:29 np0005603621 systemd[1]: Started libpod-conmon-646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a.scope.
Jan 31 02:53:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:29 np0005603621 podman[267233]: 2026-01-31 07:53:29.397022949 +0000 UTC m=+0.205241725 container init 646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:53:29 np0005603621 podman[267233]: 2026-01-31 07:53:29.403200263 +0000 UTC m=+0.211419019 container start 646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:53:29 np0005603621 friendly_bohr[267248]: 167 167
Jan 31 02:53:29 np0005603621 systemd[1]: libpod-646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a.scope: Deactivated successfully.
Jan 31 02:53:29 np0005603621 conmon[267248]: conmon 646eeed6e16da7f2024d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a.scope/container/memory.events
Jan 31 02:53:29 np0005603621 podman[267233]: 2026-01-31 07:53:29.42668112 +0000 UTC m=+0.234899876 container attach 646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:53:29 np0005603621 podman[267233]: 2026-01-31 07:53:29.427443904 +0000 UTC m=+0.235662670 container died 646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:53:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:29.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-97dd650cffd3d43f6bc82797406eb52a12ed631ca1368abbb2c3e5d4c974f08b-merged.mount: Deactivated successfully.
Jan 31 02:53:30 np0005603621 podman[267233]: 2026-01-31 07:53:30.470967323 +0000 UTC m=+1.279186089 container remove 646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:53:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:53:30.473 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:53:30.473 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:53:30.473 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:30 np0005603621 systemd[1]: libpod-conmon-646eeed6e16da7f2024ded92b73677209a5e83cd7200375e4e642f10f8f88f0a.scope: Deactivated successfully.
Jan 31 02:53:30 np0005603621 podman[267274]: 2026-01-31 07:53:30.667506273 +0000 UTC m=+0.099460212 container create d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mestorf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:30 np0005603621 nova_compute[247399]: 2026-01-31 07:53:30.682 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:30 np0005603621 nova_compute[247399]: 2026-01-31 07:53:30.692 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769845995.692332, 052bb613-7828-4203-85b4-7a5de0658240 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:30 np0005603621 nova_compute[247399]: 2026-01-31 07:53:30.693 247403 INFO nova.compute.manager [-] [instance: 052bb613-7828-4203-85b4-7a5de0658240] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:53:30 np0005603621 podman[267274]: 2026-01-31 07:53:30.598247699 +0000 UTC m=+0.030201648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:30 np0005603621 nova_compute[247399]: 2026-01-31 07:53:30.725 247403 DEBUG nova.compute.manager [None req-332226c9-b3be-41ca-9e70-f7f8c15df413 - - - - - -] [instance: 052bb613-7828-4203-85b4-7a5de0658240] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:30 np0005603621 systemd[1]: Started libpod-conmon-d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2.scope.
Jan 31 02:53:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be46d5d395f8e4c242a2a55edfebfc8baff323e2a4ce36208ee64441dc677468/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be46d5d395f8e4c242a2a55edfebfc8baff323e2a4ce36208ee64441dc677468/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be46d5d395f8e4c242a2a55edfebfc8baff323e2a4ce36208ee64441dc677468/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be46d5d395f8e4c242a2a55edfebfc8baff323e2a4ce36208ee64441dc677468/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:30 np0005603621 podman[267274]: 2026-01-31 07:53:30.946605495 +0000 UTC m=+0.378559464 container init d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:53:30 np0005603621 podman[267274]: 2026-01-31 07:53:30.952395797 +0000 UTC m=+0.384349736 container start d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mestorf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:53:30 np0005603621 podman[267274]: 2026-01-31 07:53:30.992058322 +0000 UTC m=+0.424012271 container attach d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:53:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 31 02:53:31 np0005603621 nova_compute[247399]: 2026-01-31 07:53:31.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]: {
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:    "0": [
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:        {
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "devices": [
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "/dev/loop3"
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            ],
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "lv_name": "ceph_lv0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "lv_size": "7511998464",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "name": "ceph_lv0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "tags": {
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.cluster_name": "ceph",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.crush_device_class": "",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.encrypted": "0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.osd_id": "0",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.type": "block",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:                "ceph.vdo": "0"
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            },
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "type": "block",
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:            "vg_name": "ceph_vg0"
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:        }
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]:    ]
Jan 31 02:53:31 np0005603621 affectionate_mestorf[267292]: }
Jan 31 02:53:31 np0005603621 systemd[1]: libpod-d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2.scope: Deactivated successfully.
Jan 31 02:53:31 np0005603621 podman[267274]: 2026-01-31 07:53:31.750885445 +0000 UTC m=+1.182839394 container died d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mestorf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:53:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 02:53:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:31.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 02:53:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-be46d5d395f8e4c242a2a55edfebfc8baff323e2a4ce36208ee64441dc677468-merged.mount: Deactivated successfully.
Jan 31 02:53:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:32 np0005603621 podman[267274]: 2026-01-31 07:53:32.171950464 +0000 UTC m=+1.603904403 container remove d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mestorf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 02:53:32 np0005603621 systemd[1]: libpod-conmon-d3734f59220d1cba5151a9761da909f1ec59591d9a0fce0cb3fa526028cfd6e2.scope: Deactivated successfully.
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.663894228 +0000 UTC m=+0.037048785 container create f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:53:32 np0005603621 systemd[1]: Started libpod-conmon-f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753.scope.
Jan 31 02:53:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.645088097 +0000 UTC m=+0.018242674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.753217542 +0000 UTC m=+0.126372129 container init f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_grothendieck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.758284851 +0000 UTC m=+0.131439408 container start f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_grothendieck, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:53:32 np0005603621 amazing_grothendieck[267472]: 167 167
Jan 31 02:53:32 np0005603621 systemd[1]: libpod-f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753.scope: Deactivated successfully.
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.763599607 +0000 UTC m=+0.136754174 container attach f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_grothendieck, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.763969609 +0000 UTC m=+0.137124166 container died f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_grothendieck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:53:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ad7d973e543d350ab12dd0fa3ea1eaded8a87bf6b9104ecd62662369676911f6-merged.mount: Deactivated successfully.
Jan 31 02:53:32 np0005603621 podman[267457]: 2026-01-31 07:53:32.807549128 +0000 UTC m=+0.180703685 container remove f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_grothendieck, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:53:32 np0005603621 systemd[1]: libpod-conmon-f59e003dfe7c742a6b29ac962c34a1d00cb597a7d88c4aa55d18b06450bd5753.scope: Deactivated successfully.
Jan 31 02:53:32 np0005603621 podman[267496]: 2026-01-31 07:53:32.928275858 +0000 UTC m=+0.041008089 container create bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_margulis, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:53:32 np0005603621 systemd[1]: Started libpod-conmon-bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7.scope.
Jan 31 02:53:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:53:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23741b57e2ccac8d0c1bd845ca649ad215ee8700a8f30dae12e166593dadea93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:33 np0005603621 podman[267496]: 2026-01-31 07:53:32.908538188 +0000 UTC m=+0.021270439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:53:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23741b57e2ccac8d0c1bd845ca649ad215ee8700a8f30dae12e166593dadea93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23741b57e2ccac8d0c1bd845ca649ad215ee8700a8f30dae12e166593dadea93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23741b57e2ccac8d0c1bd845ca649ad215ee8700a8f30dae12e166593dadea93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:53:33 np0005603621 podman[267496]: 2026-01-31 07:53:33.0251847 +0000 UTC m=+0.137916961 container init bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 02:53:33 np0005603621 podman[267496]: 2026-01-31 07:53:33.032619593 +0000 UTC m=+0.145351844 container start bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_margulis, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:53:33 np0005603621 podman[267496]: 2026-01-31 07:53:33.041312516 +0000 UTC m=+0.154044767 container attach bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:53:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.2 KiB/s wr, 64 op/s
Jan 31 02:53:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]: {
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:        "osd_id": 0,
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:        "type": "bluestore"
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]:    }
Jan 31 02:53:33 np0005603621 vibrant_margulis[267513]: }
Jan 31 02:53:33 np0005603621 systemd[1]: libpod-bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7.scope: Deactivated successfully.
Jan 31 02:53:33 np0005603621 podman[267496]: 2026-01-31 07:53:33.897152494 +0000 UTC m=+1.009884725 container died bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:53:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-23741b57e2ccac8d0c1bd845ca649ad215ee8700a8f30dae12e166593dadea93-merged.mount: Deactivated successfully.
Jan 31 02:53:34 np0005603621 podman[267496]: 2026-01-31 07:53:34.540898133 +0000 UTC m=+1.653630364 container remove bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 02:53:34 np0005603621 systemd[1]: libpod-conmon-bcb0d3b0121e4df11cb2b196b82ef235c8c9f4c42ec4fa0472a49acd6709bde7.scope: Deactivated successfully.
Jan 31 02:53:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:53:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:53:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5da481a6-3f5a-4426-add4-edb358dfb3cb does not exist
Jan 31 02:53:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1466155f-25e5-48b0-836a-08584c1970e7 does not exist
Jan 31 02:53:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f6a11362-ea32-4d87-8b38-92c491e437e7 does not exist
Jan 31 02:53:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 41 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1.2 KiB/s wr, 78 op/s
Jan 31 02:53:35 np0005603621 nova_compute[247399]: 2026-01-31 07:53:35.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:53:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:36 np0005603621 nova_compute[247399]: 2026-01-31 07:53:36.582 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 41 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 938 B/s wr, 103 op/s
Jan 31 02:53:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:37.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:38.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:53:38
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'images', '.mgr', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.log']
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:53:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:53:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 71 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 867 KiB/s wr, 119 op/s
Jan 31 02:53:39 np0005603621 podman[267650]: 2026-01-31 07:53:39.484832853 +0000 UTC m=+0.043297251 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 02:53:39 np0005603621 podman[267651]: 2026-01-31 07:53:39.517375124 +0000 UTC m=+0.076063979 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 02:53:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:40.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:40 np0005603621 nova_compute[247399]: 2026-01-31 07:53:40.689 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 71 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 867 KiB/s wr, 115 op/s
Jan 31 02:53:41 np0005603621 nova_compute[247399]: 2026-01-31 07:53:41.582 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:41.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:42.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 106 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 2.3 MiB/s wr, 171 op/s
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.499 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "202ce06a-b2ae-40e2-bf06-855757c393ba" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.499 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "202ce06a-b2ae-40e2-bf06-855757c393ba" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.532 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.647 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.647 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.653 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.654 247403 INFO nova.compute.claims [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:53:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:43 np0005603621 nova_compute[247399]: 2026-01-31 07:53:43.778 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:43.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:44.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:53:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070105396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.236 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.242 247403 DEBUG nova.compute.provider_tree [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.270 247403 DEBUG nova.scheduler.client.report [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.297 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.298 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.379 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.379 247403 DEBUG nova.network.neutron [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.413 247403 INFO nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.435 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.607 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.609 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.609 247403 INFO nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Creating image(s)#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.633 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.660 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.696 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.700 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.754 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.755 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.755 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.756 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.786 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:44 np0005603621 nova_compute[247399]: 2026-01-31 07:53:44.791 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 202ce06a-b2ae-40e2-bf06-855757c393ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.044 247403 DEBUG nova.network.neutron [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.045 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.061 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 202ce06a-b2ae-40e2-bf06-855757c393ba_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.132 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] resizing rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:53:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 134 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 211 KiB/s rd, 3.6 MiB/s wr, 168 op/s
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.262 247403 DEBUG nova.objects.instance [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lazy-loading 'migration_context' on Instance uuid 202ce06a-b2ae-40e2-bf06-855757c393ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.294 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.294 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Ensure instance console log exists: /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.295 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.295 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.296 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.297 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.303 247403 WARNING nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.325 247403 DEBUG nova.virt.libvirt.host [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.326 247403 DEBUG nova.virt.libvirt.host [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.330 247403 DEBUG nova.virt.libvirt.host [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.331 247403 DEBUG nova.virt.libvirt.host [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.332 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.333 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.333 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.334 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.334 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.334 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.334 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.335 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.335 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.335 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.335 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.336 247403 DEBUG nova.virt.hardware [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.339 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:53:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/431522958' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:53:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.822 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.848 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:45 np0005603621 nova_compute[247399]: 2026-01-31 07:53:45.852 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:53:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039314439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.356 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.358 247403 DEBUG nova.objects.instance [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 202ce06a-b2ae-40e2-bf06-855757c393ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.377 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <uuid>202ce06a-b2ae-40e2-bf06-855757c393ba</uuid>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <name>instance-0000001b</name>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1333474006</nova:name>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:53:45</nova:creationTime>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:user uuid="0ee689c4c14744fb8b4e1d54f6831626">tempest-ServersAdminNegativeTestJSON-1352127112-project-member</nova:user>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <nova:project uuid="f63afec818164f31a848360151f96a68">tempest-ServersAdminNegativeTestJSON-1352127112</nova:project>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <entry name="serial">202ce06a-b2ae-40e2-bf06-855757c393ba</entry>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <entry name="uuid">202ce06a-b2ae-40e2-bf06-855757c393ba</entry>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/202ce06a-b2ae-40e2-bf06-855757c393ba_disk">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/202ce06a-b2ae-40e2-bf06-855757c393ba_disk.config">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/console.log" append="off"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:53:46 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:53:46 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:53:46 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:53:46 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.473 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.474 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.474 247403 INFO nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Using config drive#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.497 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.585 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.807 247403 INFO nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Creating config drive at /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/disk.config#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.810 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpm5t4jxnx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.930 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpm5t4jxnx" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.960 247403 DEBUG nova.storage.rbd_utils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] rbd image 202ce06a-b2ae-40e2-bf06-855757c393ba_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:53:46 np0005603621 nova_compute[247399]: 2026-01-31 07:53:46.964 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/disk.config 202ce06a-b2ae-40e2-bf06-855757c393ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:53:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 135 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 810 KiB/s rd, 3.6 MiB/s wr, 180 op/s
Jan 31 02:53:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:53:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:53:47 np0005603621 nova_compute[247399]: 2026-01-31 07:53:47.971 247403 DEBUG oslo_concurrency.processutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/disk.config 202ce06a-b2ae-40e2-bf06-855757c393ba_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:53:47 np0005603621 nova_compute[247399]: 2026-01-31 07:53:47.974 247403 INFO nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Deleting local config drive /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba/disk.config because it was imported into RBD.#033[00m
Jan 31 02:53:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:48.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:48 np0005603621 systemd-machined[212769]: New machine qemu-11-instance-0000001b.
Jan 31 02:53:48 np0005603621 systemd[1]: Started Virtual Machine qemu-11-instance-0000001b.
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.570 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846028.5701475, 202ce06a-b2ae-40e2-bf06-855757c393ba => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.572 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.575 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.575 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.579 247403 INFO nova.virt.libvirt.driver [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance spawned successfully.#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.579 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.601 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.606 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.611 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.611 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.612 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.612 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.613 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.613 247403 DEBUG nova.virt.libvirt.driver [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.647 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.647 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846028.5710526, 202ce06a-b2ae-40e2-bf06-855757c393ba => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.648 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] VM Started (Lifecycle Event)#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.686 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.689 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.702 247403 INFO nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Took 4.09 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.703 247403 DEBUG nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:53:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.732 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.776 247403 INFO nova.compute.manager [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Took 5.18 seconds to build instance.
Jan 31 02:53:48 np0005603621 nova_compute[247399]: 2026-01-31 07:53:48.806 247403 DEBUG oslo_concurrency.lockutils [None req-315f9cd3-2c02-42bf-8eaf-1657d383bffb 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "202ce06a-b2ae-40e2-bf06-855757c393ba" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019856589079657546 of space, bias 1.0, pg target 0.5956976723897264 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:53:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 02:53:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 143 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 315 op/s
Jan 31 02:53:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:49.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:50.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:50 np0005603621 nova_compute[247399]: 2026-01-31 07:53:50.694 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 143 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 286 op/s
Jan 31 02:53:51 np0005603621 nova_compute[247399]: 2026-01-31 07:53:51.587 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:51.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:53:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:52.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:53:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 134 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 332 op/s
Jan 31 02:53:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:53.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:54.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 134 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.1 MiB/s wr, 304 op/s
Jan 31 02:53:55 np0005603621 nova_compute[247399]: 2026-01-31 07:53:55.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:55.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:56 np0005603621 nova_compute[247399]: 2026-01-31 07:53:56.588 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:53:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 144 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 2.3 MiB/s wr, 272 op/s
Jan 31 02:53:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:57.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:53:58.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:53:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:53:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 202 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.6 MiB/s wr, 303 op/s
Jan 31 02:53:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:53:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:53:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:53:59.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:00.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:00 np0005603621 nova_compute[247399]: 2026-01-31 07:54:00.701 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 202 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.8 MiB/s wr, 142 op/s
Jan 31 02:54:01 np0005603621 nova_compute[247399]: 2026-01-31 07:54:01.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:01.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:02.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:02.888 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:54:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:02.889 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:54:02 np0005603621 nova_compute[247399]: 2026-01-31 07:54:02.920 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 198 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 173 op/s
Jan 31 02:54:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:03.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:04 np0005603621 ceph-osd[84880]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 02:54:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 163 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.8 MiB/s wr, 146 op/s
Jan 31 02:54:05 np0005603621 nova_compute[247399]: 2026-01-31 07:54:05.752 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:05.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:06 np0005603621 nova_compute[247399]: 2026-01-31 07:54:06.593 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:06.892 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 02:54:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 156 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 401 KiB/s rd, 5.8 MiB/s wr, 130 op/s
Jan 31 02:54:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:07.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:08.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:54:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 156 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 186 op/s
Jan 31 02:54:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:09.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:10.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:10 np0005603621 podman[268122]: 2026-01-31 07:54:10.496356585 +0000 UTC m=+0.052228660 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:54:10 np0005603621 podman[268123]: 2026-01-31 07:54:10.540521932 +0000 UTC m=+0.096630755 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:54:10 np0005603621 nova_compute[247399]: 2026-01-31 07:54:10.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 156 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 31 02:54:11 np0005603621 nova_compute[247399]: 2026-01-31 07:54:11.594 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:54:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:12.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 02:54:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 160 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.490 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-202ce06a-b2ae-40e2-bf06-855757c393ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.491 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-202ce06a-b2ae-40e2-bf06-855757c393ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.491 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.491 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 202ce06a-b2ae-40e2-bf06-855757c393ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:54:13 np0005603621 nova_compute[247399]: 2026-01-31 07:54:13.770 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 02:54:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:13.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:14.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:15 np0005603621 nova_compute[247399]: 2026-01-31 07:54:15.020 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:54:15 np0005603621 nova_compute[247399]: 2026-01-31 07:54:15.044 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-202ce06a-b2ae-40e2-bf06-855757c393ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:54:15 np0005603621 nova_compute[247399]: 2026-01-31 07:54:15.044 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 02:54:15 np0005603621 nova_compute[247399]: 2026-01-31 07:54:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 02:54:15 np0005603621 nova_compute[247399]: 2026-01-31 07:54:15.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 02:54:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 193 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 31 02:54:15 np0005603621 nova_compute[247399]: 2026-01-31 07:54:15.799 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:16.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:16 np0005603621 nova_compute[247399]: 2026-01-31 07:54:16.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:16 np0005603621 nova_compute[247399]: 2026-01-31 07:54:16.595 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 201 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 129 op/s
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.238 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.239 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.239 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.239 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:54:17 np0005603621 nova_compute[247399]: 2026-01-31 07:54:17.239 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:18.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/105896264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.302 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.371 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.372 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.479 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.480 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4609MB free_disk=20.904090881347656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.481 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.481 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.567 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 202ce06a-b2ae-40e2-bf06-855757c393ba actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.567 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.567 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:54:18 np0005603621 nova_compute[247399]: 2026-01-31 07:54:18.612 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 165 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 142 op/s
Jan 31 02:54:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751331556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:19 np0005603621 nova_compute[247399]: 2026-01-31 07:54:19.417 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.805s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:19 np0005603621 nova_compute[247399]: 2026-01-31 07:54:19.423 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:54:19 np0005603621 nova_compute[247399]: 2026-01-31 07:54:19.439 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:54:19 np0005603621 nova_compute[247399]: 2026-01-31 07:54:19.475 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:54:19 np0005603621 nova_compute[247399]: 2026-01-31 07:54:19.476 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:19.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:20.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.315 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.316 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.344 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.436 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.437 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.445 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.446 247403 INFO nova.compute.claims [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.649 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:20 np0005603621 nova_compute[247399]: 2026-01-31 07:54:20.802 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757483231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.053 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.060 247403 DEBUG nova.compute.provider_tree [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.091 247403 DEBUG nova.scheduler.client.report [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.129 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.130 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.191 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.192 247403 DEBUG nova.network.neutron [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.223 247403 INFO nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:54:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 165 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 237 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.250 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.351 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.352 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.352 247403 INFO nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Creating image(s)#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.379 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.406 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.435 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.439 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.451 247403 DEBUG nova.policy [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd6078cfaadaa45ae9256245554f784fe', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fd9f0c923b994b0295e72b111f661de1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.481 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.482 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.482 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.483 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.506 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.509 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:21 np0005603621 nova_compute[247399]: 2026-01-31 07:54:21.597 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:22 np0005603621 nova_compute[247399]: 2026-01-31 07:54:22.477 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:22 np0005603621 nova_compute[247399]: 2026-01-31 07:54:22.478 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.042 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.118 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] resizing rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.176 247403 DEBUG nova.network.neutron [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Successfully created port: badb2cfb-b8e3-4fbc-9969-b7746aff36ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:54:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 167 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 243 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.411 247403 DEBUG nova.objects.instance [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lazy-loading 'migration_context' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.438 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.438 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Ensure instance console log exists: /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.438 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.439 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:23 np0005603621 nova_compute[247399]: 2026-01-31 07:54:23.439 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:23.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:24 np0005603621 nova_compute[247399]: 2026-01-31 07:54:24.558 247403 DEBUG nova.network.neutron [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Successfully updated port: badb2cfb-b8e3-4fbc-9969-b7746aff36ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:54:24 np0005603621 nova_compute[247399]: 2026-01-31 07:54:24.576 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:54:24 np0005603621 nova_compute[247399]: 2026-01-31 07:54:24.577 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquired lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:54:24 np0005603621 nova_compute[247399]: 2026-01-31 07:54:24.577 247403 DEBUG nova.network.neutron [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:54:24 np0005603621 nova_compute[247399]: 2026-01-31 07:54:24.852 247403 DEBUG nova.network.neutron [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:54:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 198 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 259 KiB/s rd, 3.4 MiB/s wr, 120 op/s
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.735 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "202ce06a-b2ae-40e2-bf06-855757c393ba" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.735 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "202ce06a-b2ae-40e2-bf06-855757c393ba" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.735 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "202ce06a-b2ae-40e2-bf06-855757c393ba-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.736 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "202ce06a-b2ae-40e2-bf06-855757c393ba-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.736 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "202ce06a-b2ae-40e2-bf06-855757c393ba-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.737 247403 INFO nova.compute.manager [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Terminating instance#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.738 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "refresh_cache-202ce06a-b2ae-40e2-bf06-855757c393ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.738 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquired lock "refresh_cache-202ce06a-b2ae-40e2-bf06-855757c393ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.738 247403 DEBUG nova.network.neutron [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:54:25 np0005603621 nova_compute[247399]: 2026-01-31 07:54:25.848 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.081 247403 DEBUG nova.network.neutron [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:54:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:26.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.139 247403 DEBUG nova.network.neutron [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.162 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Releasing lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.163 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Instance network_info: |[{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.167 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Start _get_guest_xml network_info=[{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.172 247403 WARNING nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.181 247403 DEBUG nova.virt.libvirt.host [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.182 247403 DEBUG nova.virt.libvirt.host [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.187 247403 DEBUG nova.virt.libvirt.host [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.188 247403 DEBUG nova.virt.libvirt.host [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.189 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.189 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.190 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.190 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.191 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.191 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.191 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.191 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.192 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.192 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.192 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.193 247403 DEBUG nova.virt.hardware [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.196 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.417 247403 DEBUG nova.compute.manager [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-changed-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.417 247403 DEBUG nova.compute.manager [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Refreshing instance network info cache due to event network-changed-badb2cfb-b8e3-4fbc-9969-b7746aff36ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.418 247403 DEBUG oslo_concurrency.lockutils [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.418 247403 DEBUG oslo_concurrency.lockutils [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.418 247403 DEBUG nova.network.neutron [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Refreshing network info cache for port badb2cfb-b8e3-4fbc-9969-b7746aff36ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:54:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:54:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2369754840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.600 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.603 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.634 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.640 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.675 247403 DEBUG nova.network.neutron [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.699 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Releasing lock "refresh_cache-202ce06a-b2ae-40e2-bf06-855757c393ba" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:54:26 np0005603621 nova_compute[247399]: 2026-01-31 07:54:26.699 247403 DEBUG nova.compute.manager [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:54:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:54:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691512049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.097 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.099 247403 DEBUG nova.virt.libvirt.vif [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-553589441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-553589441',id=30,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYVdi1LnaHZ5r6xMfeklqfzjDViAexljM9P3M0Fy5FZ3Xolf4vxCOKTYu0NFlJGf4EcZe3GteIpoGaJZuwWfVMuKuQVsr/qX8LdXn5NJVOqUqTS1m1sSlyZl2teCw6PaQ==',key_name='tempest-keypair-1101838222',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd9f0c923b994b0295e72b111f661de1',ramdisk_id='',reservation_id='r-ewjt80bb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-860437657',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-860437657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d6078cfaadaa45ae9256245554f784fe',uuid=9ba5b3e0-db8e-489f-bb9e-3697ab066ae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.100 247403 DEBUG nova.network.os_vif_util [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Converting VIF {"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.102 247403 DEBUG nova.network.os_vif_util [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.104 247403 DEBUG nova.objects.instance [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.126 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <uuid>9ba5b3e0-db8e-489f-bb9e-3697ab066ae4</uuid>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <name>instance-0000001e</name>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-553589441</nova:name>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:54:26</nova:creationTime>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:user uuid="d6078cfaadaa45ae9256245554f784fe">tempest-UpdateMultiattachVolumeNegativeTest-860437657-project-member</nova:user>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:project uuid="fd9f0c923b994b0295e72b111f661de1">tempest-UpdateMultiattachVolumeNegativeTest-860437657</nova:project>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <nova:port uuid="badb2cfb-b8e3-4fbc-9969-b7746aff36ae">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <entry name="serial">9ba5b3e0-db8e-489f-bb9e-3697ab066ae4</entry>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <entry name="uuid">9ba5b3e0-db8e-489f-bb9e-3697ab066ae4</entry>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk.config">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:43:6d:d0"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <target dev="tapbadb2cfb-b8"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/console.log" append="off"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:54:27 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:54:27 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:54:27 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:54:27 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.127 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Preparing to wait for external event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.127 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.128 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.128 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.129 247403 DEBUG nova.virt.libvirt.vif [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:54:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-553589441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-553589441',id=30,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYVdi1LnaHZ5r6xMfeklqfzjDViAexljM9P3M0Fy5FZ3Xolf4vxCOKTYu0NFlJGf4EcZe3GteIpoGaJZuwWfVMuKuQVsr/qX8LdXn5NJVOqUqTS1m1sSlyZl2teCw6PaQ==',key_name='tempest-keypair-1101838222',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fd9f0c923b994b0295e72b111f661de1',ramdisk_id='',reservation_id='r-ewjt80bb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-860437657',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-860437657-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:54:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d6078cfaadaa45ae9256245554f784fe',uuid=9ba5b3e0-db8e-489f-bb9e-3697ab066ae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.129 247403 DEBUG nova.network.os_vif_util [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Converting VIF {"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.130 247403 DEBUG nova.network.os_vif_util [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.130 247403 DEBUG os_vif [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.130 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.131 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.131 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.136 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbadb2cfb-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.136 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbadb2cfb-b8, col_values=(('external_ids', {'iface-id': 'badb2cfb-b8e3-4fbc-9969-b7746aff36ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:6d:d0', 'vm-uuid': '9ba5b3e0-db8e-489f-bb9e-3697ab066ae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.138 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:27 np0005603621 NetworkManager[49013]: <info>  [1769846067.1398] manager: (tapbadb2cfb-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.146 247403 INFO os_vif [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8')#033[00m
Jan 31 02:54:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 213 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.773 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.774 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.774 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No VIF found with MAC fa:16:3e:43:6d:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.774 247403 INFO nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Using config drive#033[00m
Jan 31 02:54:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:27.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:27 np0005603621 nova_compute[247399]: 2026-01-31 07:54:27.944 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:28 np0005603621 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 31 02:54:28 np0005603621 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000001b.scope: Consumed 13.963s CPU time.
Jan 31 02:54:28 np0005603621 systemd-machined[212769]: Machine qemu-11-instance-0000001b terminated.
Jan 31 02:54:28 np0005603621 nova_compute[247399]: 2026-01-31 07:54:28.921 247403 INFO nova.virt.libvirt.driver [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance destroyed successfully.#033[00m
Jan 31 02:54:28 np0005603621 nova_compute[247399]: 2026-01-31 07:54:28.922 247403 DEBUG nova.objects.instance [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lazy-loading 'resources' on Instance uuid 202ce06a-b2ae-40e2-bf06-855757c393ba obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:54:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 213 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 31 02:54:29 np0005603621 nova_compute[247399]: 2026-01-31 07:54:29.261 247403 INFO nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Creating config drive at /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/disk.config#033[00m
Jan 31 02:54:29 np0005603621 nova_compute[247399]: 2026-01-31 07:54:29.267 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1x82re_s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:29 np0005603621 nova_compute[247399]: 2026-01-31 07:54:29.385 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1x82re_s" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:29 np0005603621 nova_compute[247399]: 2026-01-31 07:54:29.731 247403 DEBUG nova.storage.rbd_utils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] rbd image 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:54:29 np0005603621 nova_compute[247399]: 2026-01-31 07:54:29.735 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/disk.config 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:29.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:30.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:30.473 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:30.474 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:30.474 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:30 np0005603621 nova_compute[247399]: 2026-01-31 07:54:30.656 247403 DEBUG nova.network.neutron [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updated VIF entry in instance network info cache for port badb2cfb-b8e3-4fbc-9969-b7746aff36ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:54:30 np0005603621 nova_compute[247399]: 2026-01-31 07:54:30.657 247403 DEBUG nova.network.neutron [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:54:30 np0005603621 nova_compute[247399]: 2026-01-31 07:54:30.681 247403 DEBUG oslo_concurrency.lockutils [req-187d858b-390f-41e1-b9e7-26819d0ea721 req-c7b92ba1-15df-46ee-b828-c8ee5a21b0dc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:54:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 213 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.601 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.610 247403 DEBUG oslo_concurrency.processutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/disk.config 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.611 247403 INFO nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Deleting local config drive /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4/disk.config because it was imported into RBD.#033[00m
Jan 31 02:54:31 np0005603621 kernel: tapbadb2cfb-b8: entered promiscuous mode
Jan 31 02:54:31 np0005603621 NetworkManager[49013]: <info>  [1769846071.6518] manager: (tapbadb2cfb-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.652 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 systemd-udevd[268539]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:54:31 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:31Z|00048|binding|INFO|Claiming lport badb2cfb-b8e3-4fbc-9969-b7746aff36ae for this chassis.
Jan 31 02:54:31 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:31Z|00049|binding|INFO|badb2cfb-b8e3-4fbc-9969-b7746aff36ae: Claiming fa:16:3e:43:6d:d0 10.100.0.11
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.655 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.659 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.661 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 NetworkManager[49013]: <info>  [1769846071.6649] device (tapbadb2cfb-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:54:31 np0005603621 NetworkManager[49013]: <info>  [1769846071.6657] device (tapbadb2cfb-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.674 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:6d:d0 10.100.0.11'], port_security=['fa:16:3e:43:6d:d0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9ba5b3e0-db8e-489f-bb9e-3697ab066ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80d90d51-335c-4f74-8a61-143d47d84f22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd9f0c923b994b0295e72b111f661de1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '026daea3-1ff6-4616-9656-065604061a00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd84a3ff-b232-4c39-928c-e2cb3c0840e0, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=badb2cfb-b8e3-4fbc-9969-b7746aff36ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:54:31 np0005603621 systemd-machined[212769]: New machine qemu-12-instance-0000001e.
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.675 159734 INFO neutron.agent.ovn.metadata.agent [-] Port badb2cfb-b8e3-4fbc-9969-b7746aff36ae in datapath 80d90d51-335c-4f74-8a61-143d47d84f22 bound to our chassis#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.677 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80d90d51-335c-4f74-8a61-143d47d84f22#033[00m
Jan 31 02:54:31 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:31Z|00050|binding|INFO|Setting lport badb2cfb-b8e3-4fbc-9969-b7746aff36ae ovn-installed in OVS
Jan 31 02:54:31 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:31Z|00051|binding|INFO|Setting lport badb2cfb-b8e3-4fbc-9969-b7746aff36ae up in Southbound
Jan 31 02:54:31 np0005603621 systemd[1]: Started Virtual Machine qemu-12-instance-0000001e.
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.686 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[478f1798-5d4a-4d76-9ddf-c6d2547de3e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.687 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80d90d51-31 in ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.688 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80d90d51-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.688 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c379a3-308e-4c29-ae0f-45de17232836]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.690 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b39d4018-d330-40e1-b5f1-cf299f33c01f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.697 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4083e5-8870-4840-ac50-31b28cdf69f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.705 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e8dc76db-021f-456b-b3c0-03c6c38a4d1d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.724 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e08ab4fc-5ff4-4b87-b432-2c328d1b0efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 NetworkManager[49013]: <info>  [1769846071.7313] manager: (tap80d90d51-30): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.730 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8b097e-757c-4d39-bc7f-5c0b2035429e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 systemd-udevd[268615]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.753 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7b33a0b9-c5d8-4c91-92d8-39d0a9d4a76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.755 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[26c7641c-f102-4fd5-9d4a-2be1f5765424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 NetworkManager[49013]: <info>  [1769846071.7690] device (tap80d90d51-30): carrier: link connected
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.774 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6a0e5c-0346-459b-a26a-7dcd978212be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.785 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[490903ff-5d66-4935-ba54-1baef777a1bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80d90d51-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:7c:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537728, 'reachable_time': 36976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268648, 'error': None, 'target': 'ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.797 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3f82cb6c-185c-4eac-950c-972305471729]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:7c63'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537728, 'tstamp': 537728}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268649, 'error': None, 'target': 'ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.811 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[681ce1e6-4cdf-4ae2-87f2-10686360ab0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80d90d51-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:7c:63'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537728, 'reachable_time': 36976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268650, 'error': None, 'target': 'ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.832 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[27da974c-414f-40d3-927e-60b9f94bab2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:31.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.869 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0f6e95b6-e1d1-4ef7-b463-edb9e0a9fb13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.870 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80d90d51-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.870 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.871 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80d90d51-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:31 np0005603621 kernel: tap80d90d51-30: entered promiscuous mode
Jan 31 02:54:31 np0005603621 NetworkManager[49013]: <info>  [1769846071.8733] manager: (tap80d90d51-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.877 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80d90d51-30, col_values=(('external_ids', {'iface-id': '8d9a016f-907b-4797-b88c-cdfc5c832335'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:31 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:31Z|00052|binding|INFO|Releasing lport 8d9a016f-907b-4797-b88c-cdfc5c832335 from this chassis (sb_readonly=0)
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.881 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80d90d51-335c-4f74-8a61-143d47d84f22.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80d90d51-335c-4f74-8a61-143d47d84f22.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.882 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c853e669-4e04-4a0e-ba7e-d499519280e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.883 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-80d90d51-335c-4f74-8a61-143d47d84f22
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/80d90d51-335c-4f74-8a61-143d47d84f22.pid.haproxy
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 80d90d51-335c-4f74-8a61-143d47d84f22
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:54:31 np0005603621 nova_compute[247399]: 2026-01-31 07:54:31.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:31.884 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22', 'env', 'PROCESS_TAG=haproxy-80d90d51-335c-4f74-8a61-143d47d84f22', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80d90d51-335c-4f74-8a61-143d47d84f22.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:54:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:32.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.094 247403 DEBUG nova.compute.manager [req-03b9cc7c-fe53-4f8a-8dc1-610f501cb72e req-d6beade4-5c8c-432d-afb7-0490da7bba72 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.096 247403 DEBUG oslo_concurrency.lockutils [req-03b9cc7c-fe53-4f8a-8dc1-610f501cb72e req-d6beade4-5c8c-432d-afb7-0490da7bba72 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.097 247403 DEBUG oslo_concurrency.lockutils [req-03b9cc7c-fe53-4f8a-8dc1-610f501cb72e req-d6beade4-5c8c-432d-afb7-0490da7bba72 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.098 247403 DEBUG oslo_concurrency.lockutils [req-03b9cc7c-fe53-4f8a-8dc1-610f501cb72e req-d6beade4-5c8c-432d-afb7-0490da7bba72 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.098 247403 DEBUG nova.compute.manager [req-03b9cc7c-fe53-4f8a-8dc1-610f501cb72e req-d6beade4-5c8c-432d-afb7-0490da7bba72 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Processing event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.138 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:32 np0005603621 podman[268682]: 2026-01-31 07:54:32.176654766 +0000 UTC m=+0.024063046 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.825 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.829 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846072.825419, 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.829 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] VM Started (Lifecycle Event)#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.832 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.835 247403 INFO nova.virt.libvirt.driver [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Instance spawned successfully.#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.835 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.851 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.853 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.861 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.861 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.861 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.862 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.862 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.862 247403 DEBUG nova.virt.libvirt.driver [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.893 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.893 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846072.8282585, 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.893 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.920 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.926 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846072.8311026, 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.926 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.957 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.962 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.970 247403 INFO nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Took 11.62 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:54:32 np0005603621 nova_compute[247399]: 2026-01-31 07:54:32.971 247403 DEBUG nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:54:33 np0005603621 nova_compute[247399]: 2026-01-31 07:54:33.002 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:54:33 np0005603621 nova_compute[247399]: 2026-01-31 07:54:33.089 247403 INFO nova.compute.manager [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Took 12.68 seconds to build instance.#033[00m
Jan 31 02:54:33 np0005603621 nova_compute[247399]: 2026-01-31 07:54:33.118 247403 DEBUG oslo_concurrency.lockutils [None req-ef7ae290-c60b-4c5b-aba9-c9975b7d9e1d d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 214 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Jan 31 02:54:33 np0005603621 podman[268682]: 2026-01-31 07:54:33.71943366 +0000 UTC m=+1.566841890 container create 730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 02:54:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:33.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:34 np0005603621 systemd[1]: Started libpod-conmon-730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b.scope.
Jan 31 02:54:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b2b146d401b7bcf40b312d31cb38221eafd3a685c0f4028a31403bca716a22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:34.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:34 np0005603621 nova_compute[247399]: 2026-01-31 07:54:34.232 247403 DEBUG nova.compute.manager [req-f24a384e-dbcb-4440-9bcc-c0e303935508 req-503dd74f-1e29-43ea-9a9a-5360f3ef3a9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:54:34 np0005603621 nova_compute[247399]: 2026-01-31 07:54:34.233 247403 DEBUG oslo_concurrency.lockutils [req-f24a384e-dbcb-4440-9bcc-c0e303935508 req-503dd74f-1e29-43ea-9a9a-5360f3ef3a9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:34 np0005603621 nova_compute[247399]: 2026-01-31 07:54:34.233 247403 DEBUG oslo_concurrency.lockutils [req-f24a384e-dbcb-4440-9bcc-c0e303935508 req-503dd74f-1e29-43ea-9a9a-5360f3ef3a9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:34 np0005603621 nova_compute[247399]: 2026-01-31 07:54:34.234 247403 DEBUG oslo_concurrency.lockutils [req-f24a384e-dbcb-4440-9bcc-c0e303935508 req-503dd74f-1e29-43ea-9a9a-5360f3ef3a9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:34 np0005603621 nova_compute[247399]: 2026-01-31 07:54:34.234 247403 DEBUG nova.compute.manager [req-f24a384e-dbcb-4440-9bcc-c0e303935508 req-503dd74f-1e29-43ea-9a9a-5360f3ef3a9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] No waiting events found dispatching network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:54:34 np0005603621 nova_compute[247399]: 2026-01-31 07:54:34.234 247403 WARNING nova.compute.manager [req-f24a384e-dbcb-4440-9bcc-c0e303935508 req-503dd74f-1e29-43ea-9a9a-5360f3ef3a9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received unexpected event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae for instance with vm_state active and task_state None.#033[00m
Jan 31 02:54:34 np0005603621 podman[268682]: 2026-01-31 07:54:34.363585673 +0000 UTC m=+2.210993923 container init 730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:54:34 np0005603621 podman[268682]: 2026-01-31 07:54:34.367375522 +0000 UTC m=+2.214783752 container start 730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 31 02:54:34 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [NOTICE]   (268746) : New worker (268748) forked
Jan 31 02:54:34 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [NOTICE]   (268746) : Loading success.
Jan 31 02:54:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:35 np0005603621 NetworkManager[49013]: <info>  [1769846075.2142] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 31 02:54:35 np0005603621 NetworkManager[49013]: <info>  [1769846075.2151] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 142 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:35 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:35Z|00053|binding|INFO|Releasing lport 8d9a016f-907b-4797-b88c-cdfc5c832335 from this chassis (sb_readonly=0)
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.627 247403 DEBUG nova.compute.manager [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-changed-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.627 247403 DEBUG nova.compute.manager [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Refreshing instance network info cache due to event network-changed-badb2cfb-b8e3-4fbc-9969-b7746aff36ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.627 247403 DEBUG oslo_concurrency.lockutils [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.627 247403 DEBUG oslo_concurrency.lockutils [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:54:35 np0005603621 nova_compute[247399]: 2026-01-31 07:54:35.628 247403 DEBUG nova.network.neutron [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Refreshing network info cache for port badb2cfb-b8e3-4fbc-9969-b7746aff36ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:54:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:35.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:54:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:54:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:54:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:54:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:54:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:36.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:54:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a352df36-8746-40cd-ac05-e3be680c4b3a does not exist
Jan 31 02:54:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f96e584e-6ffa-41da-a880-1111fc0ae434 does not exist
Jan 31 02:54:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 45b7c2b0-db2b-40ef-a3bc-a4249865609c does not exist
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:54:36 np0005603621 nova_compute[247399]: 2026-01-31 07:54:36.603 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:54:37 np0005603621 nova_compute[247399]: 2026-01-31 07:54:37.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:37 np0005603621 podman[269079]: 2026-01-31 07:54:37.051345902 +0000 UTC m=+0.018163122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:54:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 141 MiB data, 435 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 850 KiB/s wr, 146 op/s
Jan 31 02:54:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:37 np0005603621 podman[269079]: 2026-01-31 07:54:37.93044027 +0000 UTC m=+0.897257480 container create 06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 02:54:37 np0005603621 nova_compute[247399]: 2026-01-31 07:54:37.985 247403 DEBUG nova.network.neutron [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updated VIF entry in instance network info cache for port badb2cfb-b8e3-4fbc-9969-b7746aff36ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:54:37 np0005603621 nova_compute[247399]: 2026-01-31 07:54:37.986 247403 DEBUG nova.network.neutron [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.007 247403 DEBUG oslo_concurrency.lockutils [req-ca3f8d95-a92c-493d-992e-da8790d74607 req-ef9ba6d5-2d55-463a-ac55-1183edf3e39c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:54:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:38.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.146 247403 INFO nova.virt.libvirt.driver [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Deleting instance files /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba_del#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.146 247403 INFO nova.virt.libvirt.driver [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Deletion of /var/lib/nova/instances/202ce06a-b2ae-40e2-bf06-855757c393ba_del complete#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.205 247403 INFO nova.compute.manager [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Took 11.50 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.206 247403 DEBUG oslo.service.loopingcall [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.206 247403 DEBUG nova.compute.manager [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.207 247403 DEBUG nova.network.neutron [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:54:38 np0005603621 systemd[1]: Started libpod-conmon-06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262.scope.
Jan 31 02:54:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:54:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.454 247403 DEBUG nova.network.neutron [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:54:38
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images']
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.486 247403 DEBUG nova.network.neutron [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.501 247403 INFO nova.compute.manager [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Took 0.29 seconds to deallocate network for instance.#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.560 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.561 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:54:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:54:38 np0005603621 nova_compute[247399]: 2026-01-31 07:54:38.674 247403 DEBUG oslo_concurrency.processutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:54:38 np0005603621 podman[269079]: 2026-01-31 07:54:38.78857875 +0000 UTC m=+1.755395990 container init 06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:54:38 np0005603621 podman[269079]: 2026-01-31 07:54:38.796405136 +0000 UTC m=+1.763222346 container start 06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:54:38 np0005603621 infallible_liskov[269097]: 167 167
Jan 31 02:54:38 np0005603621 systemd[1]: libpod-06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262.scope: Deactivated successfully.
Jan 31 02:54:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 142 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 178 op/s
Jan 31 02:54:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:54:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4097491596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:54:39 np0005603621 nova_compute[247399]: 2026-01-31 07:54:39.415 247403 DEBUG oslo_concurrency.processutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:54:39 np0005603621 nova_compute[247399]: 2026-01-31 07:54:39.422 247403 DEBUG nova.compute.provider_tree [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:54:39 np0005603621 nova_compute[247399]: 2026-01-31 07:54:39.441 247403 DEBUG nova.scheduler.client.report [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:54:39 np0005603621 podman[269079]: 2026-01-31 07:54:39.594658666 +0000 UTC m=+2.561475896 container attach 06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 02:54:39 np0005603621 podman[269079]: 2026-01-31 07:54:39.595140771 +0000 UTC m=+2.561958001 container died 06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 02:54:39 np0005603621 nova_compute[247399]: 2026-01-31 07:54:39.670 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:39 np0005603621 nova_compute[247399]: 2026-01-31 07:54:39.726 247403 INFO nova.scheduler.client.report [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Deleted allocations for instance 202ce06a-b2ae-40e2-bf06-855757c393ba#033[00m
Jan 31 02:54:39 np0005603621 nova_compute[247399]: 2026-01-31 07:54:39.855 247403 DEBUG oslo_concurrency.lockutils [None req-5c98ac6a-54e3-4ffd-83ab-9fdf9770966c 0ee689c4c14744fb8b4e1d54f6831626 f63afec818164f31a848360151f96a68 - - default default] Lock "202ce06a-b2ae-40e2-bf06-855757c393ba" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:54:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:39.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:40.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:40 np0005603621 nova_compute[247399]: 2026-01-31 07:54:40.591 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4a1e963b4cd25ce2fa7e326e4a2096b1ff53ef94218da9d66ef19b0df16a7bf8-merged.mount: Deactivated successfully.
Jan 31 02:54:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 142 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 119 op/s
Jan 31 02:54:41 np0005603621 nova_compute[247399]: 2026-01-31 07:54:41.604 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:41 np0005603621 podman[269079]: 2026-01-31 07:54:41.856995539 +0000 UTC m=+4.823812739 container remove 06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:54:41 np0005603621 systemd[1]: libpod-conmon-06142a9de3b318a9c74ba25d9a7ee64da42f49701a9b5354bdac62d2a621e262.scope: Deactivated successfully.
Jan 31 02:54:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:41 np0005603621 podman[269138]: 2026-01-31 07:54:41.896280322 +0000 UTC m=+1.083442594 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:54:41 np0005603621 podman[269139]: 2026-01-31 07:54:41.921659379 +0000 UTC m=+1.109412589 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:54:42 np0005603621 podman[269189]: 2026-01-31 07:54:41.967520479 +0000 UTC m=+0.021215187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:54:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:42.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:42 np0005603621 nova_compute[247399]: 2026-01-31 07:54:42.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:42.152 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:54:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:42.153 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:54:42 np0005603621 nova_compute[247399]: 2026-01-31 07:54:42.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:42 np0005603621 podman[269189]: 2026-01-31 07:54:42.171224644 +0000 UTC m=+0.224919332 container create 216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:54:42 np0005603621 systemd[1]: Started libpod-conmon-216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb.scope.
Jan 31 02:54:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2dd57c86d8153d2cf8aa1181065575c36a2453fff2b32352db3f1f4c4c3168/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2dd57c86d8153d2cf8aa1181065575c36a2453fff2b32352db3f1f4c4c3168/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2dd57c86d8153d2cf8aa1181065575c36a2453fff2b32352db3f1f4c4c3168/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2dd57c86d8153d2cf8aa1181065575c36a2453fff2b32352db3f1f4c4c3168/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf2dd57c86d8153d2cf8aa1181065575c36a2453fff2b32352db3f1f4c4c3168/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:42 np0005603621 podman[269189]: 2026-01-31 07:54:42.824836344 +0000 UTC m=+0.878531052 container init 216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:54:42 np0005603621 podman[269189]: 2026-01-31 07:54:42.830717079 +0000 UTC m=+0.884411767 container start 216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kirch, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:54:43 np0005603621 podman[269189]: 2026-01-31 07:54:43.034240548 +0000 UTC m=+1.087935236 container attach 216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:54:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 148 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 120 op/s
Jan 31 02:54:43 np0005603621 wonderful_kirch[269205]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:54:43 np0005603621 wonderful_kirch[269205]: --> relative data size: 1.0
Jan 31 02:54:43 np0005603621 wonderful_kirch[269205]: --> All data devices are unavailable
Jan 31 02:54:43 np0005603621 systemd[1]: libpod-216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb.scope: Deactivated successfully.
Jan 31 02:54:43 np0005603621 podman[269189]: 2026-01-31 07:54:43.670558405 +0000 UTC m=+1.724253093 container died 216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 31 02:54:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:43 np0005603621 nova_compute[247399]: 2026-01-31 07:54:43.919 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846068.918501, 202ce06a-b2ae-40e2-bf06-855757c393ba => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:54:43 np0005603621 nova_compute[247399]: 2026-01-31 07:54:43.920 247403 INFO nova.compute.manager [-] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:54:43 np0005603621 nova_compute[247399]: 2026-01-31 07:54:43.942 247403 DEBUG nova.compute.manager [None req-fea4e841-56c2-4906-bcf8-d9c465d1e573 - - - - - -] [instance: 202ce06a-b2ae-40e2-bf06-855757c393ba] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:54:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:44.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bf2dd57c86d8153d2cf8aa1181065575c36a2453fff2b32352db3f1f4c4c3168-merged.mount: Deactivated successfully.
Jan 31 02:54:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 152 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 123 op/s
Jan 31 02:54:45 np0005603621 podman[269189]: 2026-01-31 07:54:45.654218709 +0000 UTC m=+3.707913417 container remove 216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kirch, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 02:54:45 np0005603621 systemd[1]: libpod-conmon-216e832dbcc7a6d965ee9b18dee9772e8921b0832840642db2cb06ca94a3ecfb.scope: Deactivated successfully.
Jan 31 02:54:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:45.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:46 np0005603621 podman[269373]: 2026-01-31 07:54:46.149072615 +0000 UTC m=+0.018310786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:46.257761) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846086257852, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2035, "num_deletes": 256, "total_data_size": 3786554, "memory_usage": 3837344, "flush_reason": "Manual Compaction"}
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 31 02:54:46 np0005603621 podman[269373]: 2026-01-31 07:54:46.416821031 +0000 UTC m=+0.286059182 container create edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846086596326, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3661247, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24313, "largest_seqno": 26347, "table_properties": {"data_size": 3651829, "index_size": 5912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19635, "raw_average_key_size": 20, "raw_value_size": 3632825, "raw_average_value_size": 3741, "num_data_blocks": 261, "num_entries": 971, "num_filter_entries": 971, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769845886, "oldest_key_time": 1769845886, "file_creation_time": 1769846086, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 338687 microseconds, and 6651 cpu microseconds.
Jan 31 02:54:46 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:54:46 np0005603621 nova_compute[247399]: 2026-01-31 07:54:46.608 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:46 np0005603621 systemd[1]: Started libpod-conmon-edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9.scope.
Jan 31 02:54:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:47 np0005603621 nova_compute[247399]: 2026-01-31 07:54:47.146 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:46.596456) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3661247 bytes OK
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:46.596490) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:46.839261) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:46.839318) EVENT_LOG_v1 {"time_micros": 1769846086839308, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:46.839342) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3777945, prev total WAL file size 3793483, number of live WAL files 2.
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:47.175323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373533' seq:0, type:0; will stop at (end)
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3575KB)], [56(9063KB)]
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846087175379, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12942428, "oldest_snapshot_seqno": -1}
Jan 31 02:54:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 156 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 948 KiB/s rd, 2.4 MiB/s wr, 84 op/s
Jan 31 02:54:47 np0005603621 podman[269373]: 2026-01-31 07:54:47.651380758 +0000 UTC m=+1.520618929 container init edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:54:47 np0005603621 podman[269373]: 2026-01-31 07:54:47.656577911 +0000 UTC m=+1.525816082 container start edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:54:47 np0005603621 awesome_blackwell[269389]: 167 167
Jan 31 02:54:47 np0005603621 systemd[1]: libpod-edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9.scope: Deactivated successfully.
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5435 keys, 12817738 bytes, temperature: kUnknown
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846087793410, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 12817738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12776733, "index_size": 26303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 136847, "raw_average_key_size": 25, "raw_value_size": 12674557, "raw_average_value_size": 2332, "num_data_blocks": 1085, "num_entries": 5435, "num_filter_entries": 5435, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:54:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:48 np0005603621 podman[269373]: 2026-01-31 07:54:48.072638383 +0000 UTC m=+1.941876554 container attach edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:54:48 np0005603621 podman[269373]: 2026-01-31 07:54:48.074085289 +0000 UTC m=+1.943323440 container died edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:54:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:48.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:47.794422) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 12817738 bytes
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.165573) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.9 rd, 20.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.0) write-amplify(3.5) OK, records in: 5968, records dropped: 533 output_compression: NoCompression
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.165613) EVENT_LOG_v1 {"time_micros": 1769846088165597, "job": 30, "event": "compaction_finished", "compaction_time_micros": 618841, "compaction_time_cpu_micros": 21679, "output_level": 6, "num_output_files": 1, "total_output_size": 12817738, "num_input_records": 5968, "num_output_records": 5435, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846088166461, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846088167540, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:47.175195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.167652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.167657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.167659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.167660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:54:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:54:48.167661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:54:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c2334cfa840f898c0f8b0ecc5dc6559cea57a015f40055e6b4a218a3d9b227be-merged.mount: Deactivated successfully.
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033395420388981946 of space, bias 1.0, pg target 1.0018626116694584 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:54:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 02:54:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 174 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 896 KiB/s rd, 3.0 MiB/s wr, 99 op/s
Jan 31 02:54:49 np0005603621 podman[269373]: 2026-01-31 07:54:49.372997756 +0000 UTC m=+3.242235947 container remove edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:54:49 np0005603621 systemd[1]: libpod-conmon-edf550b91bdf74daf0d841e135e4713d2c335fd601c9cf73913471e001ba90d9.scope: Deactivated successfully.
Jan 31 02:54:49 np0005603621 podman[269414]: 2026-01-31 07:54:49.480224212 +0000 UTC m=+0.021625330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:54:49 np0005603621 nova_compute[247399]: 2026-01-31 07:54:49.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:49 np0005603621 podman[269414]: 2026-01-31 07:54:49.766338294 +0000 UTC m=+0.307739392 container create ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 02:54:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:49.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:49 np0005603621 systemd[1]: Started libpod-conmon-ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962.scope.
Jan 31 02:54:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2506dd43ef846248489761573e8a256f47cd93edc0de846dc37223f5d3adf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2506dd43ef846248489761573e8a256f47cd93edc0de846dc37223f5d3adf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2506dd43ef846248489761573e8a256f47cd93edc0de846dc37223f5d3adf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e2506dd43ef846248489761573e8a256f47cd93edc0de846dc37223f5d3adf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:50 np0005603621 podman[269414]: 2026-01-31 07:54:50.032238982 +0000 UTC m=+0.573640110 container init ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_rubin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:54:50 np0005603621 podman[269414]: 2026-01-31 07:54:50.040793001 +0000 UTC m=+0.582194109 container start ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_rubin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 02:54:50 np0005603621 podman[269414]: 2026-01-31 07:54:50.060659854 +0000 UTC m=+0.602060952 container attach ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_rubin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:54:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:50.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]: {
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:    "0": [
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:        {
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "devices": [
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "/dev/loop3"
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            ],
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "lv_name": "ceph_lv0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "lv_size": "7511998464",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "name": "ceph_lv0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "tags": {
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.cluster_name": "ceph",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.crush_device_class": "",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.encrypted": "0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.osd_id": "0",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.type": "block",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:                "ceph.vdo": "0"
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            },
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "type": "block",
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:            "vg_name": "ceph_vg0"
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:        }
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]:    ]
Jan 31 02:54:50 np0005603621 recursing_rubin[269431]: }
Jan 31 02:54:50 np0005603621 systemd[1]: libpod-ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962.scope: Deactivated successfully.
Jan 31 02:54:50 np0005603621 podman[269414]: 2026-01-31 07:54:50.860914507 +0000 UTC m=+1.402315705 container died ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:54:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 174 MiB data, 473 MiB used, 21 GiB / 21 GiB avail; 290 KiB/s rd, 2.5 MiB/s wr, 56 op/s
Jan 31 02:54:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b8e2506dd43ef846248489761573e8a256f47cd93edc0de846dc37223f5d3adf-merged.mount: Deactivated successfully.
Jan 31 02:54:51 np0005603621 podman[269414]: 2026-01-31 07:54:51.56177432 +0000 UTC m=+2.103175428 container remove ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:54:51 np0005603621 systemd[1]: libpod-conmon-ff2b2db20c12a69ac58b34808ab6fd19a7c03ed8719389dc4296d039245f6962.scope: Deactivated successfully.
Jan 31 02:54:51 np0005603621 nova_compute[247399]: 2026-01-31 07:54:51.642 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:51.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:52.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:52 np0005603621 podman[269592]: 2026-01-31 07:54:52.028181072 +0000 UTC m=+0.018483681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:54:52 np0005603621 nova_compute[247399]: 2026-01-31 07:54:52.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:54:52.155 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:54:52 np0005603621 podman[269592]: 2026-01-31 07:54:52.203242769 +0000 UTC m=+0.193545348 container create c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:54:52 np0005603621 systemd[1]: Started libpod-conmon-c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de.scope.
Jan 31 02:54:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:52 np0005603621 podman[269592]: 2026-01-31 07:54:52.568174314 +0000 UTC m=+0.558476893 container init c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:54:52 np0005603621 podman[269592]: 2026-01-31 07:54:52.572976736 +0000 UTC m=+0.563279315 container start c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 31 02:54:52 np0005603621 pedantic_galileo[269608]: 167 167
Jan 31 02:54:52 np0005603621 systemd[1]: libpod-c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de.scope: Deactivated successfully.
Jan 31 02:54:52 np0005603621 podman[269592]: 2026-01-31 07:54:52.619418333 +0000 UTC m=+0.609720912 container attach c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:54:52 np0005603621 podman[269592]: 2026-01-31 07:54:52.619895208 +0000 UTC m=+0.610197777 container died c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 02:54:52 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:52Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:6d:d0 10.100.0.11
Jan 31 02:54:52 np0005603621 ovn_controller[149152]: 2026-01-31T07:54:52Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:6d:d0 10.100.0.11
Jan 31 02:54:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 183 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 422 KiB/s rd, 2.9 MiB/s wr, 75 op/s
Jan 31 02:54:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-13bdd071fc61edb913a14804825e01d500888ecd7718db2a0867ae882a20e299-merged.mount: Deactivated successfully.
Jan 31 02:54:53 np0005603621 podman[269592]: 2026-01-31 07:54:53.477638426 +0000 UTC m=+1.467941005 container remove c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:54:53 np0005603621 systemd[1]: libpod-conmon-c1c32cea9d0cea108c7c1edeab5e559cb3abb2dd3df866cbfa9be33ff7c065de.scope: Deactivated successfully.
Jan 31 02:54:53 np0005603621 podman[269635]: 2026-01-31 07:54:53.644835876 +0000 UTC m=+0.073936383 container create 190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 02:54:53 np0005603621 podman[269635]: 2026-01-31 07:54:53.593526804 +0000 UTC m=+0.022627351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:54:53 np0005603621 systemd[1]: Started libpod-conmon-190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5.scope.
Jan 31 02:54:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:54:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf394d82eaf61f3fabe42fb7a71e3591e7ffcdc43b2ed7a01b687481411d99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf394d82eaf61f3fabe42fb7a71e3591e7ffcdc43b2ed7a01b687481411d99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf394d82eaf61f3fabe42fb7a71e3591e7ffcdc43b2ed7a01b687481411d99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afaf394d82eaf61f3fabe42fb7a71e3591e7ffcdc43b2ed7a01b687481411d99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:54:53 np0005603621 podman[269635]: 2026-01-31 07:54:53.819658183 +0000 UTC m=+0.248758730 container init 190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:54:53 np0005603621 podman[269635]: 2026-01-31 07:54:53.826102826 +0000 UTC m=+0.255203343 container start 190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:54:53 np0005603621 podman[269635]: 2026-01-31 07:54:53.85873382 +0000 UTC m=+0.287834347 container attach 190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:54:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:54.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]: {
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:        "osd_id": 0,
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:        "type": "bluestore"
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]:    }
Jan 31 02:54:54 np0005603621 hardcore_goldberg[269652]: }
Jan 31 02:54:54 np0005603621 systemd[1]: libpod-190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5.scope: Deactivated successfully.
Jan 31 02:54:54 np0005603621 podman[269635]: 2026-01-31 07:54:54.733190993 +0000 UTC m=+1.162291520 container died 190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:54:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-afaf394d82eaf61f3fabe42fb7a71e3591e7ffcdc43b2ed7a01b687481411d99-merged.mount: Deactivated successfully.
Jan 31 02:54:54 np0005603621 podman[269635]: 2026-01-31 07:54:54.8417082 +0000 UTC m=+1.270808717 container remove 190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldberg, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:54:54 np0005603621 systemd[1]: libpod-conmon-190c2271bc130232b923ad07465e94aafb23f952cea696525f6b3e56e40e98b5.scope: Deactivated successfully.
Jan 31 02:54:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:54:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:54:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:54:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:54:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a8e9ab0b-3a25-4fa3-94ac-6de37ce69832 does not exist
Jan 31 02:54:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4990e813-d53b-4b46-ae55-789f3b4e7166 does not exist
Jan 31 02:54:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ec6d6370-4e00-4b56-9d41-90d03251f09d does not exist
Jan 31 02:54:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:54:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 194 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 538 KiB/s rd, 2.8 MiB/s wr, 100 op/s
Jan 31 02:54:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:55.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:54:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:54:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:54:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:54:56 np0005603621 nova_compute[247399]: 2026-01-31 07:54:56.645 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:57 np0005603621 nova_compute[247399]: 2026-01-31 07:54:57.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:54:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 196 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 557 KiB/s rd, 2.5 MiB/s wr, 100 op/s
Jan 31 02:54:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:57.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:54:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:54:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:54:58.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:54:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 200 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 490 KiB/s rd, 1.9 MiB/s wr, 85 op/s
Jan 31 02:54:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:54:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:54:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:54:59.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:00.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:00 np0005603621 nova_compute[247399]: 2026-01-31 07:55:00.984 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 200 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 292 KiB/s rd, 665 KiB/s wr, 54 op/s
Jan 31 02:55:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:02 np0005603621 nova_compute[247399]: 2026-01-31 07:55:02.505 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 211 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 293 KiB/s rd, 1.2 MiB/s wr, 56 op/s
Jan 31 02:55:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:03.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:04.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 20 GiB / 21 GiB avail; 178 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Jan 31 02:55:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:55:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:05.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:55:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:06.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 02:55:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 217 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.507 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.509 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.509 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.509 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.541 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:07 np0005603621 nova_compute[247399]: 2026-01-31 07:55:07.542 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:55:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:08 np0005603621 nova_compute[247399]: 2026-01-31 07:55:08.279 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 02:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:55:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.239 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.240 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 211 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 817 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.493 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.537564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846109537627, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 447, "num_deletes": 251, "total_data_size": 480299, "memory_usage": 489480, "flush_reason": "Manual Compaction"}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846109543864, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 476388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26348, "largest_seqno": 26794, "table_properties": {"data_size": 473718, "index_size": 770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6271, "raw_average_key_size": 18, "raw_value_size": 468442, "raw_average_value_size": 1415, "num_data_blocks": 34, "num_entries": 331, "num_filter_entries": 331, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846087, "oldest_key_time": 1769846087, "file_creation_time": 1769846109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 6880 microseconds, and 2530 cpu microseconds.
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.544439) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 476388 bytes OK
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.544479) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.550963) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.551009) EVENT_LOG_v1 {"time_micros": 1769846109550999, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.551036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 477615, prev total WAL file size 477615, number of live WAL files 2.
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.551656) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(465KB)], [59(12MB)]
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846109551695, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 13294126, "oldest_snapshot_seqno": -1}
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.704 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.705 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.714 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.714 247403 INFO nova.compute.claims [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5253 keys, 11325132 bytes, temperature: kUnknown
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846109721929, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 11325132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11286620, "index_size": 24258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 133731, "raw_average_key_size": 25, "raw_value_size": 11188802, "raw_average_value_size": 2129, "num_data_blocks": 995, "num_entries": 5253, "num_filter_entries": 5253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.722181) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 11325132 bytes
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.724346) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.1 rd, 66.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(51.7) write-amplify(23.8) OK, records in: 5766, records dropped: 513 output_compression: NoCompression
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.724377) EVENT_LOG_v1 {"time_micros": 1769846109724365, "job": 32, "event": "compaction_finished", "compaction_time_micros": 170306, "compaction_time_cpu_micros": 19779, "output_level": 6, "num_output_files": 1, "total_output_size": 11325132, "num_input_records": 5766, "num_output_records": 5253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846109724537, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846109726174, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.551502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.726331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.726336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.726338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.726340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:55:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:55:09.726342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:55:09 np0005603621 nova_compute[247399]: 2026-01-31 07:55:09.894 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:09.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:55:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371399401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:55:10 np0005603621 nova_compute[247399]: 2026-01-31 07:55:10.384 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:10 np0005603621 nova_compute[247399]: 2026-01-31 07:55:10.393 247403 DEBUG nova.compute.provider_tree [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:55:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:10.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:10 np0005603621 nova_compute[247399]: 2026-01-31 07:55:10.499 247403 DEBUG nova.scheduler.client.report [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:55:10 np0005603621 nova_compute[247399]: 2026-01-31 07:55:10.900 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:10 np0005603621 nova_compute[247399]: 2026-01-31 07:55:10.901 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:55:11 np0005603621 nova_compute[247399]: 2026-01-31 07:55:11.107 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:55:11 np0005603621 nova_compute[247399]: 2026-01-31 07:55:11.108 247403 DEBUG nova.network.neutron [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:55:11 np0005603621 nova_compute[247399]: 2026-01-31 07:55:11.220 247403 INFO nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:55:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 167 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Jan 31 02:55:11 np0005603621 nova_compute[247399]: 2026-01-31 07:55:11.399 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:55:11 np0005603621 nova_compute[247399]: 2026-01-31 07:55:11.413 247403 DEBUG nova.policy [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93973daeb08c453e90372a79b54b9ede', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:55:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:11.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.174 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.176 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.176 247403 INFO nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating image(s)#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.206 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.246 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.286 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.293 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.352 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.353 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.354 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.354 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.388 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.394 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:12 np0005603621 podman[269895]: 2026-01-31 07:55:12.499986204 +0000 UTC m=+0.055991288 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 02:55:12 np0005603621 podman[269896]: 2026-01-31 07:55:12.52917398 +0000 UTC m=+0.084724470 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.541 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:12 np0005603621 nova_compute[247399]: 2026-01-31 07:55:12.543 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:13 np0005603621 nova_compute[247399]: 2026-01-31 07:55:13.163 247403 DEBUG nova.network.neutron [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Successfully created port: bd448b0d-7dc7-43bc-b4d2-eba76110aa01 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:55:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 167 MiB data, 455 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 31 02:55:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:13.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:14 np0005603621 nova_compute[247399]: 2026-01-31 07:55:14.279 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:14 np0005603621 nova_compute[247399]: 2026-01-31 07:55:14.280 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:55:14 np0005603621 nova_compute[247399]: 2026-01-31 07:55:14.280 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:55:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:55:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2850578448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:55:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:55:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2850578448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:55:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:14.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 173 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 140 op/s
Jan 31 02:55:15 np0005603621 nova_compute[247399]: 2026-01-31 07:55:15.859 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 02:55:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:15.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:16 np0005603621 nova_compute[247399]: 2026-01-31 07:55:16.044 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:16 np0005603621 nova_compute[247399]: 2026-01-31 07:55:16.121 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:55:16 np0005603621 nova_compute[247399]: 2026-01-31 07:55:16.121 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:55:16 np0005603621 nova_compute[247399]: 2026-01-31 07:55:16.122 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:55:16 np0005603621 nova_compute[247399]: 2026-01-31 07:55:16.123 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:55:16 np0005603621 nova_compute[247399]: 2026-01-31 07:55:16.128 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] resizing rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:55:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:16.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.186 247403 DEBUG nova.network.neutron [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Successfully updated port: bd448b0d-7dc7-43bc-b4d2-eba76110aa01 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:55:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 189 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1019 KiB/s wr, 114 op/s
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.380 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.381 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquired lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.381 247403 DEBUG nova.network.neutron [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.545 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.611 247403 DEBUG nova.compute.manager [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-changed-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.611 247403 DEBUG nova.compute.manager [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Refreshing instance network info cache due to event network-changed-bd448b0d-7dc7-43bc-b4d2-eba76110aa01. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:55:17 np0005603621 nova_compute[247399]: 2026-01-31 07:55:17.611 247403 DEBUG oslo_concurrency.lockutils [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:55:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:55:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:55:18 np0005603621 nova_compute[247399]: 2026-01-31 07:55:18.061 247403 DEBUG nova.network.neutron [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:55:18 np0005603621 nova_compute[247399]: 2026-01-31 07:55:18.402 247403 DEBUG nova.objects.instance [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'migration_context' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:55:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 189 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1018 KiB/s wr, 103 op/s
Jan 31 02:55:19 np0005603621 nova_compute[247399]: 2026-01-31 07:55:19.507 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:55:19 np0005603621 nova_compute[247399]: 2026-01-31 07:55:19.507 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Ensure instance console log exists: /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:55:19 np0005603621 nova_compute[247399]: 2026-01-31 07:55:19.507 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:19 np0005603621 nova_compute[247399]: 2026-01-31 07:55:19.508 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:19 np0005603621 nova_compute[247399]: 2026-01-31 07:55:19.508 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:20 np0005603621 nova_compute[247399]: 2026-01-31 07:55:20.331 247403 DEBUG nova.network.neutron [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updating instance_info_cache with network_info: [{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:55:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 217 MiB data, 501 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.3 MiB/s wr, 97 op/s
Jan 31 02:55:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:22 np0005603621 nova_compute[247399]: 2026-01-31 07:55:22.547 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:55:22 np0005603621 nova_compute[247399]: 2026-01-31 07:55:22.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:22 np0005603621 nova_compute[247399]: 2026-01-31 07:55:22.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 31 02:55:22 np0005603621 nova_compute[247399]: 2026-01-31 07:55:22.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:55:22 np0005603621 nova_compute[247399]: 2026-01-31 07:55:22.549 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 31 02:55:22 np0005603621 nova_compute[247399]: 2026-01-31 07:55:22.551 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 218 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 2.6 MiB/s wr, 62 op/s
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.529 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Releasing lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.529 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance network_info: |[{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.530 247403 DEBUG oslo_concurrency.lockutils [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.530 247403 DEBUG nova.network.neutron [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Refreshing network info cache for port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.532 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start _get_guest_xml network_info=[{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.536 247403 WARNING nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.550 247403 DEBUG nova.virt.libvirt.host [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.551 247403 DEBUG nova.virt.libvirt.host [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.553 247403 DEBUG nova.virt.libvirt.host [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.554 247403 DEBUG nova.virt.libvirt.host [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.555 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.555 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.556 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.556 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.556 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.556 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.556 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.557 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.557 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.557 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.557 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.558 247403 DEBUG nova.virt.hardware [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.560 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:55:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1222846577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:55:23 np0005603621 nova_compute[247399]: 2026-01-31 07:55:23.996 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.022 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.025 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:55:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264849217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:55:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.499 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.501 247403 DEBUG nova.virt.libvirt.vif [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784
933461-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:55:11Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.502 247403 DEBUG nova.network.os_vif_util [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.503 247403 DEBUG nova.network.os_vif_util [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:55:24 np0005603621 nova_compute[247399]: 2026-01-31 07:55:24.504 247403 DEBUG nova.objects.instance [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.209 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <uuid>e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</uuid>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <name>instance-00000020</name>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersAdminTestJSON-server-1840895963</nova:name>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:55:23</nova:creationTime>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:user uuid="93973daeb08c453e90372a79b54b9ede">tempest-ServersAdminTestJSON-784933461-project-member</nova:user>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:project uuid="8033316fc42c4926bfd1f8a34b02fa97">tempest-ServersAdminTestJSON-784933461</nova:project>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <nova:port uuid="bd448b0d-7dc7-43bc-b4d2-eba76110aa01">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <entry name="serial">e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</entry>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <entry name="uuid">e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</entry>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:64:f8:02"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <target dev="tapbd448b0d-7d"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/console.log" append="off"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:55:25 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:55:25 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:55:25 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:55:25 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.211 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Preparing to wait for external event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.211 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.211 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.212 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.213 247403 DEBUG nova.virt.libvirt.vif [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTe
stJSON-784933461-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:55:11Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.213 247403 DEBUG nova.network.os_vif_util [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.214 247403 DEBUG nova.network.os_vif_util [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.214 247403 DEBUG os_vif [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.215 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.215 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.221 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd448b0d-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.222 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd448b0d-7d, col_values=(('external_ids', {'iface-id': 'bd448b0d-7dc7-43bc-b4d2-eba76110aa01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:f8:02', 'vm-uuid': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:25 np0005603621 NetworkManager[49013]: <info>  [1769846125.2247] manager: (tapbd448b0d-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.229 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.230 247403 INFO os_vif [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d')#033[00m
Jan 31 02:55:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 226 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 3.3 MiB/s wr, 48 op/s
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.726 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.726 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.726 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No VIF found with MAC fa:16:3e:64:f8:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.727 247403 INFO nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Using config drive#033[00m
Jan 31 02:55:25 np0005603621 nova_compute[247399]: 2026-01-31 07:55:25.763 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:25.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.090 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.314 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.315 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.317 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.317 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.534 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.534 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.535 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.535 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.535 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:55:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639379908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:55:26 np0005603621 nova_compute[247399]: 2026-01-31 07:55:26.993 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 230 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 3.4 MiB/s wr, 34 op/s
Jan 31 02:55:27 np0005603621 nova_compute[247399]: 2026-01-31 07:55:27.355 247403 INFO nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating config drive at /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config#033[00m
Jan 31 02:55:27 np0005603621 nova_compute[247399]: 2026-01-31 07:55:27.360 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmjt81wbb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:27 np0005603621 nova_compute[247399]: 2026-01-31 07:55:27.480 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmjt81wbb" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:27 np0005603621 nova_compute[247399]: 2026-01-31 07:55:27.541 247403 DEBUG nova.storage.rbd_utils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:55:27 np0005603621 nova_compute[247399]: 2026-01-31 07:55:27.545 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:27 np0005603621 nova_compute[247399]: 2026-01-31 07:55:27.569 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.031 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.031 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.035 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.035 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.166 247403 DEBUG oslo_concurrency.processutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.166 247403 INFO nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deleting local config drive /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config because it was imported into RBD.#033[00m
Jan 31 02:55:28 np0005603621 kernel: tapbd448b0d-7d: entered promiscuous mode
Jan 31 02:55:28 np0005603621 NetworkManager[49013]: <info>  [1769846128.2049] manager: (tapbd448b0d-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 31 02:55:28 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:28Z|00054|binding|INFO|Claiming lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for this chassis.
Jan 31 02:55:28 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:28Z|00055|binding|INFO|bd448b0d-7dc7-43bc-b4d2-eba76110aa01: Claiming fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.206 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:28Z|00056|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 ovn-installed in OVS
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:28Z|00057|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 up in Southbound
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.221 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.222 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 bound to our chassis#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.224 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:55:28 np0005603621 systemd-udevd[270244]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:55:28 np0005603621 systemd-machined[212769]: New machine qemu-13-instance-00000020.
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.233 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[576ca337-db3a-40cc-8485-2812cc509220]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.233 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc58eaedf-21 in ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.235 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc58eaedf-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.235 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fec99a57-3155-48f6-992f-353d2f8a19ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.235 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8952e27b-3f7a-4c1e-bdf7-792ab6e10152]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 NetworkManager[49013]: <info>  [1769846128.2420] device (tapbd448b0d-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:55:28 np0005603621 NetworkManager[49013]: <info>  [1769846128.2427] device (tapbd448b0d-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:55:28 np0005603621 systemd[1]: Started Virtual Machine qemu-13-instance-00000020.
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.247 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6605cd40-48b4-49ec-b3cc-66b79c1f6b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.255 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.256 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4510MB free_disk=20.883102416992188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.257 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.257 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.259 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7a617f-8d35-4a6c-ba18-15bbde0670d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.291 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3c3d3e-e390-4379-b58c-dc4d22c9caa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 NetworkManager[49013]: <info>  [1769846128.2991] manager: (tapc58eaedf-20): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.298 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e8815b-b890-4074-b10e-aee3f2591712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 systemd-udevd[270248]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.327 247403 DEBUG nova.network.neutron [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updated VIF entry in instance network info cache for port bd448b0d-7dc7-43bc-b4d2-eba76110aa01. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.327 247403 DEBUG nova.network.neutron [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updating instance_info_cache with network_info: [{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.335 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2e9eda44-e63b-4faf-99b3-c4578a38f82f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.338 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d05eea3f-55e2-4389-a51b-bdf1fed0f45a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 NetworkManager[49013]: <info>  [1769846128.3585] device (tapc58eaedf-20): carrier: link connected
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.362 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[47137bbf-3923-49b6-bfbe-2b1ac800c848]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.375 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7223fe-899a-4611-a87e-7b297f25d4b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 36909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270277, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.379 247403 DEBUG oslo_concurrency.lockutils [req-8fca7fb8-3f04-495e-9b43-0c06d4bd46a6 req-9e2a8618-046d-43b6-92d6-80fdf09fe9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.386 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cda202b2-48d0-4a7c-9fc8-f7ede5c38187]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:11bf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543387, 'tstamp': 543387}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270278, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.398 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[57f74d9e-8b79-41ea-b930-06a393306b84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 36909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270279, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.426 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[14059302-e750-4cc8-9c97-b5221a85eb89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.476 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d118fe1-814c-468a-93b6-cb31c85eca5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.478 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.478 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.478 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:28 np0005603621 NetworkManager[49013]: <info>  [1769846128.4807] manager: (tapc58eaedf-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.480 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603621 kernel: tapc58eaedf-20: entered promiscuous mode
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.484 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:55:28 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:28Z|00058|binding|INFO|Releasing lport 8c531a0f-deeb-4de0-880b-b07ec1cf9103 from this chassis (sb_readonly=0)
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.493 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.494 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c58eaedf-202a-428a-acfb-f0b1291517f1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c58eaedf-202a-428a-acfb-f0b1291517f1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.496 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1fdb71-4e95-4bc2-b693-e1d415012de6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.498 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-c58eaedf-202a-428a-acfb-f0b1291517f1
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/c58eaedf-202a-428a-acfb-f0b1291517f1.pid.haproxy
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID c58eaedf-202a-428a-acfb-f0b1291517f1
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:55:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:28.499 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'env', 'PROCESS_TAG=haproxy-c58eaedf-202a-428a-acfb-f0b1291517f1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c58eaedf-202a-428a-acfb-f0b1291517f1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.501 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.501 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.502 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.502 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:55:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:55:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:28.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.609 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.674 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.675 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.689 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.703 247403 DEBUG nova.compute.manager [req-2da56381-ae40-499c-b92f-d9bde2bd5c60 req-459bf5ba-e017-4775-8a6b-447f80983f89 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.704 247403 DEBUG oslo_concurrency.lockutils [req-2da56381-ae40-499c-b92f-d9bde2bd5c60 req-459bf5ba-e017-4775-8a6b-447f80983f89 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.704 247403 DEBUG oslo_concurrency.lockutils [req-2da56381-ae40-499c-b92f-d9bde2bd5c60 req-459bf5ba-e017-4775-8a6b-447f80983f89 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.705 247403 DEBUG oslo_concurrency.lockutils [req-2da56381-ae40-499c-b92f-d9bde2bd5c60 req-459bf5ba-e017-4775-8a6b-447f80983f89 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.705 247403 DEBUG nova.compute.manager [req-2da56381-ae40-499c-b92f-d9bde2bd5c60 req-459bf5ba-e017-4775-8a6b-447f80983f89 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Processing event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.721 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 02:55:28 np0005603621 nova_compute[247399]: 2026-01-31 07:55:28.858 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:55:28 np0005603621 podman[270316]: 2026-01-31 07:55:28.831910766 +0000 UTC m=+0.024268752 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:55:28 np0005603621 podman[270316]: 2026-01-31 07:55:28.929260772 +0000 UTC m=+0.121618738 container create eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:55:29 np0005603621 systemd[1]: Started libpod-conmon-eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e.scope.
Jan 31 02:55:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:55:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7137bc06e584b4333644569b7cfd6fc72323215fc41ad8ec7f9f508789144938/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:55:29 np0005603621 podman[270316]: 2026-01-31 07:55:29.035137556 +0000 UTC m=+0.227495552 container init eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:55:29 np0005603621 podman[270316]: 2026-01-31 07:55:29.041681321 +0000 UTC m=+0.234039287 container start eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 02:55:29 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [NOTICE]   (270389) : New worker (270391) forked
Jan 31 02:55:29 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [NOTICE]   (270389) : Loading success.
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.132 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.134 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846129.1313741, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.135 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Started (Lifecycle Event)#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.141 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.145 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance spawned successfully.#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.146 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.180 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.186 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.220 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.220 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.221 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.221 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.222 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.222 247403 DEBUG nova.virt.libvirt.driver [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.248 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.248 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846129.1339, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.249 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:55:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 230 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 2.5 MiB/s wr, 33 op/s
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.296 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.301 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846129.1459825, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.302 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:55:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:55:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3860253367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.369 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.375 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.430 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.435 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.464 247403 INFO nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Took 17.29 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.465 247403 DEBUG nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.490 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.557 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.591 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.592 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.593 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.593 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.680 247403 INFO nova.compute.manager [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Took 20.02 seconds to build instance.#033[00m
Jan 31 02:55:29 np0005603621 nova_compute[247399]: 2026-01-31 07:55:29.779 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:30 np0005603621 nova_compute[247399]: 2026-01-31 07:55:30.118 247403 DEBUG oslo_concurrency.lockutils [None req-46b13c6a-ea02-4cc9-b5b6-bd7d6af837f5 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:30 np0005603621 nova_compute[247399]: 2026-01-31 07:55:30.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:30.474 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:30.474 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:55:30.475 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:30.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 319 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 6.2 MiB/s wr, 114 op/s
Jan 31 02:55:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:55:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:31.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.273 247403 DEBUG nova.compute.manager [req-e6f14408-0dba-40be-ae89-0a959d98921b req-037799b0-58d3-42e9-b361-ebfa1a7a5817 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.274 247403 DEBUG oslo_concurrency.lockutils [req-e6f14408-0dba-40be-ae89-0a959d98921b req-037799b0-58d3-42e9-b361-ebfa1a7a5817 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.274 247403 DEBUG oslo_concurrency.lockutils [req-e6f14408-0dba-40be-ae89-0a959d98921b req-037799b0-58d3-42e9-b361-ebfa1a7a5817 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.274 247403 DEBUG oslo_concurrency.lockutils [req-e6f14408-0dba-40be-ae89-0a959d98921b req-037799b0-58d3-42e9-b361-ebfa1a7a5817 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.274 247403 DEBUG nova.compute.manager [req-e6f14408-0dba-40be-ae89-0a959d98921b req-037799b0-58d3-42e9-b361-ebfa1a7a5817 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.274 247403 WARNING nova.compute.manager [req-e6f14408-0dba-40be-ae89-0a959d98921b req-037799b0-58d3-42e9-b361-ebfa1a7a5817 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:55:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:32.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:32 np0005603621 nova_compute[247399]: 2026-01-31 07:55:32.607 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 339 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.2 MiB/s wr, 155 op/s
Jan 31 02:55:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:33.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:34 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:34Z|00059|binding|INFO|Releasing lport 8c531a0f-deeb-4de0-880b-b07ec1cf9103 from this chassis (sb_readonly=0)
Jan 31 02:55:34 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:34Z|00060|binding|INFO|Releasing lport 8d9a016f-907b-4797-b88c-cdfc5c832335 from this chassis (sb_readonly=0)
Jan 31 02:55:34 np0005603621 nova_compute[247399]: 2026-01-31 07:55:34.375 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:34.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:35 np0005603621 nova_compute[247399]: 2026-01-31 07:55:35.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 381 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.6 MiB/s wr, 201 op/s
Jan 31 02:55:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:35.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:36.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 398 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.3 MiB/s wr, 210 op/s
Jan 31 02:55:37 np0005603621 nova_compute[247399]: 2026-01-31 07:55:37.610 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:37.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:55:38
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr']
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:55:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:55:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:55:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 398 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.1 MiB/s wr, 208 op/s
Jan 31 02:55:39 np0005603621 nova_compute[247399]: 2026-01-31 07:55:39.467 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:55:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:39.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:40 np0005603621 nova_compute[247399]: 2026-01-31 07:55:40.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 432 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Jan 31 02:55:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:41.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:42 np0005603621 nova_compute[247399]: 2026-01-31 07:55:42.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 432 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 144 op/s
Jan 31 02:55:43 np0005603621 podman[270461]: 2026-01-31 07:55:43.504243966 +0000 UTC m=+0.057140704 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 02:55:43 np0005603621 podman[270462]: 2026-01-31 07:55:43.531932226 +0000 UTC m=+0.079421014 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Jan 31 02:55:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:43.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:55:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:44.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:55:45 np0005603621 nova_compute[247399]: 2026-01-31 07:55:45.229 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 435 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 1007 KiB/s rd, 4.1 MiB/s wr, 98 op/s
Jan 31 02:55:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:45.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:55:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:46.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:55:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 440 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.8 MiB/s wr, 51 op/s
Jan 31 02:55:47 np0005603621 nova_compute[247399]: 2026-01-31 07:55:47.614 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:47.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009900486227433464 of space, bias 1.0, pg target 2.9701458682300395 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:55:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 02:55:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 440 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.4 MiB/s wr, 39 op/s
Jan 31 02:55:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 31 02:55:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 31 02:55:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 31 02:55:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:50 np0005603621 nova_compute[247399]: 2026-01-31 07:55:50.231 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:50 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:50Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:55:50 np0005603621 ovn_controller[149152]: 2026-01-31T07:55:50Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:55:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:50.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 453 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 158 op/s
Jan 31 02:55:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:52.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:52 np0005603621 nova_compute[247399]: 2026-01-31 07:55:52.616 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 458 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 175 op/s
Jan 31 02:55:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:54.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:55 np0005603621 nova_compute[247399]: 2026-01-31 07:55:55.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 462 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 178 op/s
Jan 31 02:55:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:55:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:55.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 465 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 180 op/s
Jan 31 02:55:57 np0005603621 nova_compute[247399]: 2026-01-31 07:55:57.618 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:55:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:55:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:55:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 02:55:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:55:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 02:55:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:55:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:55:58.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:55:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1db806b5-2939-4135-9c4e-1486ce993fa6 does not exist
Jan 31 02:55:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d1b47418-9ec2-4a5f-819d-9ebc51a011e6 does not exist
Jan 31 02:55:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 29143824-3598-4086-8a36-314eab865ed8 does not exist
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:55:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:55:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 465 MiB data, 598 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 180 op/s
Jan 31 02:55:59 np0005603621 podman[270834]: 2026-01-31 07:55:59.650825929 +0000 UTC m=+0.024009343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:55:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:55:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:55:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:55:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:00 np0005603621 podman[270834]: 2026-01-31 07:56:00.024059306 +0000 UTC m=+0.397242720 container create 38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:56:00 np0005603621 systemd[1]: Started libpod-conmon-38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360.scope.
Jan 31 02:56:00 np0005603621 nova_compute[247399]: 2026-01-31 07:56:00.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:56:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:56:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:56:00 np0005603621 podman[270834]: 2026-01-31 07:56:00.426751086 +0000 UTC m=+0.799934500 container init 38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 02:56:00 np0005603621 podman[270834]: 2026-01-31 07:56:00.434116157 +0000 UTC m=+0.807299551 container start 38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:56:00 np0005603621 focused_mendeleev[270850]: 167 167
Jan 31 02:56:00 np0005603621 systemd[1]: libpod-38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360.scope: Deactivated successfully.
Jan 31 02:56:00 np0005603621 podman[270834]: 2026-01-31 07:56:00.46330447 +0000 UTC m=+0.836487864 container attach 38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:56:00 np0005603621 podman[270834]: 2026-01-31 07:56:00.464410055 +0000 UTC m=+0.837593469 container died 38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:56:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:00.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-59c6f5f2588a289665f345ee3d839b00ffb94bc7d3962de08205b3c0d0a681a6-merged.mount: Deactivated successfully.
Jan 31 02:56:01 np0005603621 podman[270834]: 2026-01-31 07:56:01.247616041 +0000 UTC m=+1.620799435 container remove 38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 02:56:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 478 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 124 op/s
Jan 31 02:56:01 np0005603621 systemd[1]: libpod-conmon-38f9dddb5f9d5f21b69a2a46a29bbacb7309aecbce449ee1df1e4b7d3ddd6360.scope: Deactivated successfully.
Jan 31 02:56:01 np0005603621 podman[270873]: 2026-01-31 07:56:01.369686311 +0000 UTC m=+0.022552166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:56:01 np0005603621 podman[270873]: 2026-01-31 07:56:01.515565255 +0000 UTC m=+0.168431080 container create 0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 02:56:01 np0005603621 systemd[1]: Started libpod-conmon-0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f.scope.
Jan 31 02:56:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:56:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7140d5ec860ed9ffdc487582eee7ec2094736ce7cf6039efb3b7bf2416b9289d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7140d5ec860ed9ffdc487582eee7ec2094736ce7cf6039efb3b7bf2416b9289d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7140d5ec860ed9ffdc487582eee7ec2094736ce7cf6039efb3b7bf2416b9289d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7140d5ec860ed9ffdc487582eee7ec2094736ce7cf6039efb3b7bf2416b9289d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7140d5ec860ed9ffdc487582eee7ec2094736ce7cf6039efb3b7bf2416b9289d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:01 np0005603621 podman[270873]: 2026-01-31 07:56:01.709799183 +0000 UTC m=+0.362665038 container init 0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:56:01 np0005603621 podman[270873]: 2026-01-31 07:56:01.718483105 +0000 UTC m=+0.371348930 container start 0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:56:01 np0005603621 podman[270873]: 2026-01-31 07:56:01.775815999 +0000 UTC m=+0.428681834 container attach 0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:56:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:01.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:02.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:02 np0005603621 epic_leakey[270890]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:56:02 np0005603621 epic_leakey[270890]: --> relative data size: 1.0
Jan 31 02:56:02 np0005603621 epic_leakey[270890]: --> All data devices are unavailable
Jan 31 02:56:02 np0005603621 nova_compute[247399]: 2026-01-31 07:56:02.621 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:02 np0005603621 systemd[1]: libpod-0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f.scope: Deactivated successfully.
Jan 31 02:56:02 np0005603621 podman[270873]: 2026-01-31 07:56:02.633490805 +0000 UTC m=+1.286356630 container died 0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 02:56:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7140d5ec860ed9ffdc487582eee7ec2094736ce7cf6039efb3b7bf2416b9289d-merged.mount: Deactivated successfully.
Jan 31 02:56:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 486 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 02:56:03 np0005603621 podman[270873]: 2026-01-31 07:56:03.9382067 +0000 UTC m=+2.591072525 container remove 0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:56:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:04 np0005603621 systemd[1]: libpod-conmon-0977477d72932378dddf19e0727f6d0710341fc37a266a790df8ca7756552f0f.scope: Deactivated successfully.
Jan 31 02:56:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:04.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:04 np0005603621 podman[271058]: 2026-01-31 07:56:04.492454182 +0000 UTC m=+0.024861749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:56:04 np0005603621 podman[271058]: 2026-01-31 07:56:04.588797457 +0000 UTC m=+0.121205004 container create c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:56:04 np0005603621 systemd[1]: Started libpod-conmon-c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99.scope.
Jan 31 02:56:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:56:04 np0005603621 podman[271058]: 2026-01-31 07:56:04.837647224 +0000 UTC m=+0.370054801 container init c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 02:56:04 np0005603621 podman[271058]: 2026-01-31 07:56:04.847363837 +0000 UTC m=+0.379771384 container start c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:56:04 np0005603621 youthful_kapitsa[271075]: 167 167
Jan 31 02:56:04 np0005603621 systemd[1]: libpod-c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99.scope: Deactivated successfully.
Jan 31 02:56:04 np0005603621 podman[271058]: 2026-01-31 07:56:04.893202962 +0000 UTC m=+0.425610539 container attach c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:56:04 np0005603621 podman[271058]: 2026-01-31 07:56:04.893644586 +0000 UTC m=+0.426052153 container died c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:56:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-77933e1abed5705acdd4e031f5120b1199a3913fa2c182e4f501a5256e2190a1-merged.mount: Deactivated successfully.
Jan 31 02:56:05 np0005603621 podman[271058]: 2026-01-31 07:56:05.033570274 +0000 UTC m=+0.565977821 container remove c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kapitsa, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:56:05 np0005603621 systemd[1]: libpod-conmon-c67449b00746d2c39acddfd6cf302db09c9a49065c2cce59b443b396dd4b6e99.scope: Deactivated successfully.
Jan 31 02:56:05 np0005603621 podman[271101]: 2026-01-31 07:56:05.18747488 +0000 UTC m=+0.053618599 container create 659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 02:56:05 np0005603621 nova_compute[247399]: 2026-01-31 07:56:05.237 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:05 np0005603621 podman[271101]: 2026-01-31 07:56:05.157889523 +0000 UTC m=+0.024033262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:56:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 494 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.2 MiB/s wr, 256 op/s
Jan 31 02:56:05 np0005603621 systemd[1]: Started libpod-conmon-659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de.scope.
Jan 31 02:56:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:56:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa45b9659ae4899cee730eec61e1ba8b34d7c81e86fe6784f8a1acc3194b2c55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa45b9659ae4899cee730eec61e1ba8b34d7c81e86fe6784f8a1acc3194b2c55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa45b9659ae4899cee730eec61e1ba8b34d7c81e86fe6784f8a1acc3194b2c55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa45b9659ae4899cee730eec61e1ba8b34d7c81e86fe6784f8a1acc3194b2c55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:05 np0005603621 podman[271101]: 2026-01-31 07:56:05.344123951 +0000 UTC m=+0.210267670 container init 659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 02:56:05 np0005603621 podman[271101]: 2026-01-31 07:56:05.35207598 +0000 UTC m=+0.218219699 container start 659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 31 02:56:05 np0005603621 podman[271101]: 2026-01-31 07:56:05.455679672 +0000 UTC m=+0.321823411 container attach 659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 02:56:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:05.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:06 np0005603621 friendly_elion[271117]: {
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:    "0": [
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:        {
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "devices": [
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "/dev/loop3"
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            ],
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "lv_name": "ceph_lv0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "lv_size": "7511998464",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "name": "ceph_lv0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "tags": {
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.cluster_name": "ceph",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.crush_device_class": "",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.encrypted": "0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.osd_id": "0",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.type": "block",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:                "ceph.vdo": "0"
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            },
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "type": "block",
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:            "vg_name": "ceph_vg0"
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:        }
Jan 31 02:56:06 np0005603621 friendly_elion[271117]:    ]
Jan 31 02:56:06 np0005603621 friendly_elion[271117]: }
Jan 31 02:56:06 np0005603621 systemd[1]: libpod-659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de.scope: Deactivated successfully.
Jan 31 02:56:06 np0005603621 podman[271128]: 2026-01-31 07:56:06.24503371 +0000 UTC m=+0.028933636 container died 659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:56:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:06.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aa45b9659ae4899cee730eec61e1ba8b34d7c81e86fe6784f8a1acc3194b2c55-merged.mount: Deactivated successfully.
Jan 31 02:56:06 np0005603621 podman[271128]: 2026-01-31 07:56:06.941018778 +0000 UTC m=+0.724918684 container remove 659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:56:06 np0005603621 systemd[1]: libpod-conmon-659189ec9bacfc210c573553dfbfe471588ebf6195b32d171cd045ef0497c1de.scope: Deactivated successfully.
Jan 31 02:56:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 497 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 264 op/s
Jan 31 02:56:07 np0005603621 podman[271284]: 2026-01-31 07:56:07.509391713 +0000 UTC m=+0.023053323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:56:07 np0005603621 nova_compute[247399]: 2026-01-31 07:56:07.623 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:07 np0005603621 podman[271284]: 2026-01-31 07:56:07.709336469 +0000 UTC m=+0.222998069 container create e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:56:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:08 np0005603621 systemd[1]: Started libpod-conmon-e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8.scope.
Jan 31 02:56:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.103 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.103 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:08 np0005603621 podman[271284]: 2026-01-31 07:56:08.227566265 +0000 UTC m=+0.741227885 container init e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:56:08 np0005603621 podman[271284]: 2026-01-31 07:56:08.233222942 +0000 UTC m=+0.746884542 container start e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:56:08 np0005603621 funny_lumiere[271303]: 167 167
Jan 31 02:56:08 np0005603621 systemd[1]: libpod-e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8.scope: Deactivated successfully.
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.247 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:56:08 np0005603621 podman[271284]: 2026-01-31 07:56:08.259454023 +0000 UTC m=+0.773115623 container attach e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 02:56:08 np0005603621 podman[271284]: 2026-01-31 07:56:08.260377611 +0000 UTC m=+0.774039201 container died e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:56:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a5d3dbc1c3b56b2f17c2be08beccdfef3a886268e72a84f0a64a125bf183f934-merged.mount: Deactivated successfully.
Jan 31 02:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:56:08 np0005603621 podman[271284]: 2026-01-31 07:56:08.485352301 +0000 UTC m=+0.999013891 container remove e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lumiere, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 31 02:56:08 np0005603621 systemd[1]: libpod-conmon-e8ae1003a70856feb6d751c33367529bfc762a8fc7dc8945855d402e6c28f7e8.scope: Deactivated successfully.
Jan 31 02:56:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:08.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.556 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.556 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.565 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.565 247403 INFO nova.compute.claims [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:56:08 np0005603621 podman[271327]: 2026-01-31 07:56:08.652632935 +0000 UTC m=+0.065602533 container create 61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 31 02:56:08 np0005603621 systemd[1]: Started libpod-conmon-61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58.scope.
Jan 31 02:56:08 np0005603621 podman[271327]: 2026-01-31 07:56:08.609831215 +0000 UTC m=+0.022800833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:56:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22416ffb49ffff75983cb1ab0a3c538d72f7f32bcc1cf9140c5003c99a23cd57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22416ffb49ffff75983cb1ab0a3c538d72f7f32bcc1cf9140c5003c99a23cd57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22416ffb49ffff75983cb1ab0a3c538d72f7f32bcc1cf9140c5003c99a23cd57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22416ffb49ffff75983cb1ab0a3c538d72f7f32bcc1cf9140c5003c99a23cd57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:56:08 np0005603621 podman[271327]: 2026-01-31 07:56:08.802943338 +0000 UTC m=+0.215912956 container init 61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_spence, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:56:08 np0005603621 podman[271327]: 2026-01-31 07:56:08.811909478 +0000 UTC m=+0.224879096 container start 61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_spence, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:56:08 np0005603621 podman[271327]: 2026-01-31 07:56:08.873592988 +0000 UTC m=+0.286562606 container attach 61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:56:08 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:08Z|00061|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 31 02:56:08 np0005603621 nova_compute[247399]: 2026-01-31 07:56:08.976 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 497 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.1 MiB/s wr, 260 op/s
Jan 31 02:56:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2197666960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:09 np0005603621 nova_compute[247399]: 2026-01-31 07:56:09.533 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:09 np0005603621 nova_compute[247399]: 2026-01-31 07:56:09.540 247403 DEBUG nova.compute.provider_tree [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]: {
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:        "osd_id": 0,
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:        "type": "bluestore"
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]:    }
Jan 31 02:56:09 np0005603621 beautiful_spence[271343]: }
Jan 31 02:56:09 np0005603621 systemd[1]: libpod-61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58.scope: Deactivated successfully.
Jan 31 02:56:09 np0005603621 podman[271327]: 2026-01-31 07:56:09.619724755 +0000 UTC m=+1.032694353 container died 61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:56:09 np0005603621 nova_compute[247399]: 2026-01-31 07:56:09.715 247403 DEBUG nova.scheduler.client.report [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:56:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-22416ffb49ffff75983cb1ab0a3c538d72f7f32bcc1cf9140c5003c99a23cd57-merged.mount: Deactivated successfully.
Jan 31 02:56:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.165 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.166 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.239 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.515 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.516 247403 DEBUG nova.network.neutron [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:56:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:10.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.565 247403 INFO nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:56:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:10 np0005603621 podman[271327]: 2026-01-31 07:56:10.614952876 +0000 UTC m=+2.027922474 container remove 61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:56:10 np0005603621 systemd[1]: libpod-conmon-61df1e1562e24d7b79d87085adfeb1e5610b0114783013c33b0d8120245a8e58.scope: Deactivated successfully.
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.624 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:56:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.707 247403 DEBUG nova.policy [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93973daeb08c453e90372a79b54b9ede', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.912 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.914 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.914 247403 INFO nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Creating image(s)#033[00m
Jan 31 02:56:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:56:10 np0005603621 nova_compute[247399]: 2026-01-31 07:56:10.940 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.001 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.026 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.032 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.092 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.093 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.094 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.095 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.123 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.127 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:56:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bc3a6be5-6b4a-4262-a960-c4d89221f36e does not exist
Jan 31 02:56:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b689dbf6-af87-432c-813f-acb47980d5d1 does not exist
Jan 31 02:56:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a88c5c2b-1501-4f9b-b785-30fca57704ea does not exist
Jan 31 02:56:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 518 MiB data, 641 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.8 MiB/s wr, 283 op/s
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.313 247403 DEBUG nova.compute.manager [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-changed-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.313 247403 DEBUG nova.compute.manager [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Refreshing instance network info cache due to event network-changed-badb2cfb-b8e3-4fbc-9969-b7746aff36ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.314 247403 DEBUG oslo_concurrency.lockutils [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.314 247403 DEBUG oslo_concurrency.lockutils [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.314 247403 DEBUG nova.network.neutron [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Refreshing network info cache for port badb2cfb-b8e3-4fbc-9969-b7746aff36ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:11.459 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.459 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:11.460 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.695 247403 DEBUG nova.network.neutron [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Successfully created port: 1067f15a-f293-4ea3-894a-c2be3e98368f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.879 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "8a3fb170-2275-431a-9931-fc0b814b13e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.879 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "8a3fb170-2275-431a-9931-fc0b814b13e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:11 np0005603621 nova_compute[247399]: 2026-01-31 07:56:11.933 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:56:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:11.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.102 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.102 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.109 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.109 247403 INFO nova.compute.claims [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:56:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:56:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:56:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:12.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.587 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.624 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.696 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:12 np0005603621 nova_compute[247399]: 2026-01-31 07:56:12.757 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] resizing rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:56:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2048867158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.015 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.021 247403 DEBUG nova.compute.provider_tree [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.116 247403 DEBUG nova.scheduler.client.report [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.201 247403 DEBUG nova.network.neutron [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updated VIF entry in instance network info cache for port badb2cfb-b8e3-4fbc-9969-b7746aff36ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.202 247403 DEBUG nova.network.neutron [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.210 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 523 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.9 MiB/s wr, 234 op/s
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.462 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.462 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.464 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "84b46624-4c75-43ae-90e4-00a0a374729b" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.464 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "84b46624-4c75-43ae-90e4-00a0a374729b" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.585 247403 DEBUG oslo_concurrency.lockutils [req-a4d75073-24a2-413f-a957-2d0af709b661 req-c2c88314-d7c8-41d2-a821-6883193270e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.594 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] No node specified, defaulting to compute-0.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505#033[00m
Jan 31 02:56:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 31 02:56:13 np0005603621 nova_compute[247399]: 2026-01-31 07:56:13.777 247403 DEBUG nova.objects.instance [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'migration_context' on Instance uuid 0e2720e1-c54e-4332-90a4-f5c2c884b77d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:13.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.011 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.012 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.012 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.012 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.438 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.439 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Ensure instance console log exists: /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.439 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.440 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.440 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:14 np0005603621 podman[271642]: 2026-01-31 07:56:14.49460651 +0000 UTC m=+0.049148538 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 02:56:14 np0005603621 podman[271643]: 2026-01-31 07:56:14.515446833 +0000 UTC m=+0.069875538 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 02:56:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:14.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.752 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "84b46624-4c75-43ae-90e4-00a0a374729b" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:14 np0005603621 nova_compute[247399]: 2026-01-31 07:56:14.753 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:56:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 31 02:56:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 31 02:56:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:56:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/423078677' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:56:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:56:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/423078677' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.049 247403 DEBUG nova.network.neutron [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Successfully updated port: 1067f15a-f293-4ea3-894a-c2be3e98368f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.238 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.239 247403 DEBUG nova.network.neutron [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.274 247403 DEBUG nova.compute.manager [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-changed-1067f15a-f293-4ea3-894a-c2be3e98368f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.275 247403 DEBUG nova.compute.manager [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Refreshing instance network info cache due to event network-changed-1067f15a-f293-4ea3-894a-c2be3e98368f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.275 247403 DEBUG oslo_concurrency.lockutils [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0e2720e1-c54e-4332-90a4-f5c2c884b77d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.275 247403 DEBUG oslo_concurrency.lockutils [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0e2720e1-c54e-4332-90a4-f5c2c884b77d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.275 247403 DEBUG nova.network.neutron [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Refreshing network info cache for port 1067f15a-f293-4ea3-894a-c2be3e98368f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:56:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 566 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.3 MiB/s wr, 152 op/s
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.339 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "refresh_cache-0e2720e1-c54e-4332-90a4-f5c2c884b77d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.350 247403 INFO nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.513 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:56:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:15.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.986 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.988 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:56:15 np0005603621 nova_compute[247399]: 2026-01-31 07:56:15.988 247403 INFO nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Creating image(s)#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.016 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.046 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.082 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.085 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.133 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.133 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.134 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.134 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.240 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:16 np0005603621 nova_compute[247399]: 2026-01-31 07:56:16.244 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 8a3fb170-2275-431a-9931-fc0b814b13e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:16.463 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:56:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:56:17 np0005603621 nova_compute[247399]: 2026-01-31 07:56:17.024 247403 DEBUG nova.network.neutron [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:56:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 592 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 7.4 MiB/s wr, 157 op/s
Jan 31 02:56:17 np0005603621 nova_compute[247399]: 2026-01-31 07:56:17.626 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:17.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:18 np0005603621 nova_compute[247399]: 2026-01-31 07:56:18.022 247403 DEBUG nova.network.neutron [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 02:56:18 np0005603621 nova_compute[247399]: 2026-01-31 07:56:18.022 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:56:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:56:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:18.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:56:18 np0005603621 nova_compute[247399]: 2026-01-31 07:56:18.728 247403 DEBUG nova.network.neutron [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:18 np0005603621 nova_compute[247399]: 2026-01-31 07:56:18.840 247403 DEBUG oslo_concurrency.lockutils [req-63d1cf68-9118-4fd1-b01a-d97f4cc03c3f req-7681d891-b72d-4af3-a9e3-58b2ac177482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0e2720e1-c54e-4332-90a4-f5c2c884b77d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:18 np0005603621 nova_compute[247399]: 2026-01-31 07:56:18.840 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquired lock "refresh_cache-0e2720e1-c54e-4332-90a4-f5c2c884b77d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:18 np0005603621 nova_compute[247399]: 2026-01-31 07:56:18.840 247403 DEBUG nova.network.neutron [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:56:19 np0005603621 nova_compute[247399]: 2026-01-31 07:56:19.093 247403 DEBUG nova.network.neutron [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:56:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 592 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 479 KiB/s rd, 7.4 MiB/s wr, 157 op/s
Jan 31 02:56:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:19.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.065 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [{"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.150 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.151 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.151 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.151 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.151 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.152 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.152 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.152 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.152 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.207 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.208 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.208 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.208 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.208 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.243 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.658 247403 DEBUG nova.network.neutron [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Updating instance_info_cache with network_info: [{"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.728 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Releasing lock "refresh_cache-0e2720e1-c54e-4332-90a4-f5c2c884b77d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.728 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Instance network_info: |[{"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.730 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Start _get_guest_xml network_info=[{"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.735 247403 WARNING nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.739 247403 DEBUG nova.virt.libvirt.host [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.740 247403 DEBUG nova.virt.libvirt.host [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.743 247403 DEBUG nova.virt.libvirt.host [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.744 247403 DEBUG nova.virt.libvirt.host [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.745 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.745 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.746 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.746 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.746 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.746 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.747 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.747 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.747 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.747 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.748 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.748 247403 DEBUG nova.virt.hardware [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:56:20 np0005603621 nova_compute[247399]: 2026-01-31 07:56:20.751 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328656145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266842788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.274 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 684 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 10 MiB/s wr, 195 op/s
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.296 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.300 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.312 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.404 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.405 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.407 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.407 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.532 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.533 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4249MB free_disk=20.671531677246094GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.533 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.534 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.619 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.619 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.619 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0e2720e1-c54e-4332-90a4-f5c2c884b77d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.620 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 8a3fb170-2275-431a-9931-fc0b814b13e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.620 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.620 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:56:21 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:56:21 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 02:56:21 np0005603621 nova_compute[247399]: 2026-01-31 07:56:21.729 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:21.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308690233' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.229 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.230 247403 DEBUG nova.virt.libvirt.vif [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2052398935',display_name='tempest-ServersAdminTestJSON-server-2052398935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2052398935',id=37,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-m7tceh63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784
933461-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:10Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=0e2720e1-c54e-4332-90a4-f5c2c884b77d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.231 247403 DEBUG nova.network.os_vif_util [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.231 247403 DEBUG nova.network.os_vif_util [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.232 247403 DEBUG nova.objects.instance [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e2720e1-c54e-4332-90a4-f5c2c884b77d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.266 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <uuid>0e2720e1-c54e-4332-90a4-f5c2c884b77d</uuid>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <name>instance-00000025</name>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersAdminTestJSON-server-2052398935</nova:name>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:56:20</nova:creationTime>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:user uuid="93973daeb08c453e90372a79b54b9ede">tempest-ServersAdminTestJSON-784933461-project-member</nova:user>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:project uuid="8033316fc42c4926bfd1f8a34b02fa97">tempest-ServersAdminTestJSON-784933461</nova:project>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <nova:port uuid="1067f15a-f293-4ea3-894a-c2be3e98368f">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <entry name="serial">0e2720e1-c54e-4332-90a4-f5c2c884b77d</entry>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <entry name="uuid">0e2720e1-c54e-4332-90a4-f5c2c884b77d</entry>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk.config">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:4a:d7:71"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <target dev="tap1067f15a-f2"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/console.log" append="off"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:56:22 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:56:22 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:56:22 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:56:22 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.267 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Preparing to wait for external event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.267 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.267 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.267 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.268 247403 DEBUG nova.virt.libvirt.vif [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2052398935',display_name='tempest-ServersAdminTestJSON-server-2052398935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2052398935',id=37,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-m7tceh63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:56:10Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=0e2720e1-c54e-4332-90a4-f5c2c884b77d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.268 247403 DEBUG nova.network.os_vif_util [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.269 247403 DEBUG nova.network.os_vif_util [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.269 247403 DEBUG os_vif [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.269 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.270 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.270 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.272 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1067f15a-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.273 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1067f15a-f2, col_values=(('external_ids', {'iface-id': '1067f15a-f293-4ea3-894a-c2be3e98368f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:d7:71', 'vm-uuid': '0e2720e1-c54e-4332-90a4-f5c2c884b77d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.274 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:22 np0005603621 NetworkManager[49013]: <info>  [1769846182.2752] manager: (tap1067f15a-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.280 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.280 247403 INFO os_vif [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2')#033[00m
Jan 31 02:56:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1387483763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.538 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.809s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.544 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:56:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.627 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.720 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.748 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.749 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.749 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No VIF found with MAC fa:16:3e:4a:d7:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.749 247403 INFO nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Using config drive#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.801 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.808 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.808 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:22 np0005603621 nova_compute[247399]: 2026-01-31 07:56:22.984 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 8a3fb170-2275-431a-9931-fc0b814b13e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.739s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:23 np0005603621 nova_compute[247399]: 2026-01-31 07:56:23.060 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] resizing rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:56:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 708 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 768 KiB/s rd, 11 MiB/s wr, 200 op/s
Jan 31 02:56:23 np0005603621 nova_compute[247399]: 2026-01-31 07:56:23.475 247403 INFO nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Creating config drive at /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/disk.config#033[00m
Jan 31 02:56:23 np0005603621 nova_compute[247399]: 2026-01-31 07:56:23.478 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpps5fcnsp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:23 np0005603621 nova_compute[247399]: 2026-01-31 07:56:23.597 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpps5fcnsp" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:23 np0005603621 nova_compute[247399]: 2026-01-31 07:56:23.623 247403 DEBUG nova.storage.rbd_utils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:23 np0005603621 nova_compute[247399]: 2026-01-31 07:56:23.626 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/disk.config 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:23.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:24.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.619 247403 DEBUG nova.objects.instance [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8a3fb170-2275-431a-9931-fc0b814b13e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.648 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.648 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Ensure instance console log exists: /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.648 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.649 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.649 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.650 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.655 247403 WARNING nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.660 247403 DEBUG nova.virt.libvirt.host [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.661 247403 DEBUG nova.virt.libvirt.host [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.663 247403 DEBUG nova.virt.libvirt.host [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.664 247403 DEBUG nova.virt.libvirt.host [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.665 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.665 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.665 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.666 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.666 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.666 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.666 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.666 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.667 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.667 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.667 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.667 247403 DEBUG nova.virt.hardware [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:56:24 np0005603621 nova_compute[247399]: 2026-01-31 07:56:24.670 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3769599182' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.090 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.111 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.114 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 708 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 489 KiB/s rd, 7.9 MiB/s wr, 160 op/s
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.309 247403 DEBUG oslo_concurrency.processutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/disk.config 0e2720e1-c54e-4332-90a4-f5c2c884b77d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.310 247403 INFO nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Deleting local config drive /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d/disk.config because it was imported into RBD.#033[00m
Jan 31 02:56:25 np0005603621 kernel: tap1067f15a-f2: entered promiscuous mode
Jan 31 02:56:25 np0005603621 NetworkManager[49013]: <info>  [1769846185.3528] manager: (tap1067f15a-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 31 02:56:25 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:25Z|00062|binding|INFO|Claiming lport 1067f15a-f293-4ea3-894a-c2be3e98368f for this chassis.
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.355 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:25Z|00063|binding|INFO|1067f15a-f293-4ea3-894a-c2be3e98368f: Claiming fa:16:3e:4a:d7:71 10.100.0.9
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.368 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:25Z|00064|binding|INFO|Setting lport 1067f15a-f293-4ea3-894a-c2be3e98368f ovn-installed in OVS
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603621 systemd-udevd[272145]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.385 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d7:71 10.100.0.9'], port_security=['fa:16:3e:4a:d7:71 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0e2720e1-c54e-4332-90a4-f5c2c884b77d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=1067f15a-f293-4ea3-894a-c2be3e98368f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:56:25 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:25Z|00065|binding|INFO|Setting lport 1067f15a-f293-4ea3-894a-c2be3e98368f up in Southbound
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.387 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 1067f15a-f293-4ea3-894a-c2be3e98368f in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 bound to our chassis#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.389 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:56:25 np0005603621 NetworkManager[49013]: <info>  [1769846185.3978] device (tap1067f15a-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:56:25 np0005603621 NetworkManager[49013]: <info>  [1769846185.3983] device (tap1067f15a-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:56:25 np0005603621 systemd-machined[212769]: New machine qemu-14-instance-00000025.
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.402 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[209f88c5-fde5-4de2-8739-d72ff26fd283]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:25 np0005603621 systemd[1]: Started Virtual Machine qemu-14-instance-00000025.
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.422 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9822e5-d9ab-4741-961d-aee1cd45b89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.425 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4e4298-e79f-468f-9866-5162b6b9c5cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.446 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[12a91b55-8a76-43e0-9884-7765f99382e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.459 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[35d4bce3-7c5d-48d8-bb76-32969471e073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272159, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.472 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a80ef81d-2529-4c4c-8bbe-fee50e9fc6ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272161, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272161, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.474 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.476 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.477 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.479 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.479 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.480 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:56:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:25.480 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:56:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882682153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.549 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.550 247403 DEBUG nova.objects.instance [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a3fb170-2275-431a-9931-fc0b814b13e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.582 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <uuid>8a3fb170-2275-431a-9931-fc0b814b13e0</uuid>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <name>instance-00000027</name>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersOnMultiNodesTest-server-2015351071-2</nova:name>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:56:24</nova:creationTime>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:user uuid="d4307bc8a2224140b78ba248cecefe55">tempest-ServersOnMultiNodesTest-1827677275-project-member</nova:user>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <nova:project uuid="b6dca32431594e2682c5d2acb448bbf4">tempest-ServersOnMultiNodesTest-1827677275</nova:project>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <entry name="serial">8a3fb170-2275-431a-9931-fc0b814b13e0</entry>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <entry name="uuid">8a3fb170-2275-431a-9931-fc0b814b13e0</entry>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/8a3fb170-2275-431a-9931-fc0b814b13e0_disk">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/8a3fb170-2275-431a-9931-fc0b814b13e0_disk.config">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/console.log" append="off"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:56:25 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:56:25 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:56:25 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:56:25 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.814 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.814 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.815 247403 INFO nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Using config drive#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.837 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.854 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.855 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:25 np0005603621 nova_compute[247399]: 2026-01-31 07:56:25.899 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:56:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 31 02:56:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:25.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 31 02:56:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.162 247403 INFO nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Creating config drive at /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/disk.config#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.166 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3nox97dk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.286 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp3nox97dk" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.312 247403 DEBUG nova.storage.rbd_utils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] rbd image 8a3fb170-2275-431a-9931-fc0b814b13e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.318 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/disk.config 8a3fb170-2275-431a-9931-fc0b814b13e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.334 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846186.287957, 0e2720e1-c54e-4332-90a4-f5c2c884b77d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.335 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] VM Started (Lifecycle Event)#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.337 247403 DEBUG nova.compute.manager [req-80bebd42-4372-4c13-ab7d-b01b78344e80 req-9aa8a2a9-1d35-491b-9939-48a08fb4864d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.338 247403 DEBUG oslo_concurrency.lockutils [req-80bebd42-4372-4c13-ab7d-b01b78344e80 req-9aa8a2a9-1d35-491b-9939-48a08fb4864d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.338 247403 DEBUG oslo_concurrency.lockutils [req-80bebd42-4372-4c13-ab7d-b01b78344e80 req-9aa8a2a9-1d35-491b-9939-48a08fb4864d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.338 247403 DEBUG oslo_concurrency.lockutils [req-80bebd42-4372-4c13-ab7d-b01b78344e80 req-9aa8a2a9-1d35-491b-9939-48a08fb4864d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.338 247403 DEBUG nova.compute.manager [req-80bebd42-4372-4c13-ab7d-b01b78344e80 req-9aa8a2a9-1d35-491b-9939-48a08fb4864d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Processing event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.339 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.343 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.346 247403 INFO nova.virt.libvirt.driver [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Instance spawned successfully.#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.346 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:56:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:26.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.811 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.815 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.820 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.821 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.821 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.822 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.822 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.822 247403 DEBUG nova.virt.libvirt.driver [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.908 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.909 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846186.288539, 0e2720e1-c54e-4332-90a4-f5c2c884b77d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:26 np0005603621 nova_compute[247399]: 2026-01-31 07:56:26.909 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.133 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.137 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846186.3428695, 0e2720e1-c54e-4332-90a4-f5c2c884b77d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.137 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.160 247403 INFO nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Took 16.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.161 247403 DEBUG nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.169 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.172 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.214 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.276 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 730 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 6.6 MiB/s wr, 209 op/s
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.326 247403 INFO nova.compute.manager [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Took 18.79 seconds to build instance.#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.415 247403 DEBUG oslo_concurrency.lockutils [None req-e4db676c-86b8-467f-afd2-be45d7396fa1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:27 np0005603621 nova_compute[247399]: 2026-01-31 07:56:27.629 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:27.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:28 np0005603621 nova_compute[247399]: 2026-01-31 07:56:28.563 247403 DEBUG nova.compute.manager [req-b3e0e094-8a0b-481e-a7b7-ea170a2bfc9b req-e3f20362-6950-45ee-ab61-9e490e786e77 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:56:28 np0005603621 nova_compute[247399]: 2026-01-31 07:56:28.563 247403 DEBUG oslo_concurrency.lockutils [req-b3e0e094-8a0b-481e-a7b7-ea170a2bfc9b req-e3f20362-6950-45ee-ab61-9e490e786e77 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:28 np0005603621 nova_compute[247399]: 2026-01-31 07:56:28.564 247403 DEBUG oslo_concurrency.lockutils [req-b3e0e094-8a0b-481e-a7b7-ea170a2bfc9b req-e3f20362-6950-45ee-ab61-9e490e786e77 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:28 np0005603621 nova_compute[247399]: 2026-01-31 07:56:28.564 247403 DEBUG oslo_concurrency.lockutils [req-b3e0e094-8a0b-481e-a7b7-ea170a2bfc9b req-e3f20362-6950-45ee-ab61-9e490e786e77 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:28 np0005603621 nova_compute[247399]: 2026-01-31 07:56:28.564 247403 DEBUG nova.compute.manager [req-b3e0e094-8a0b-481e-a7b7-ea170a2bfc9b req-e3f20362-6950-45ee-ab61-9e490e786e77 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] No waiting events found dispatching network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:56:28 np0005603621 nova_compute[247399]: 2026-01-31 07:56:28.565 247403 WARNING nova.compute.manager [req-b3e0e094-8a0b-481e-a7b7-ea170a2bfc9b req-e3f20362-6950-45ee-ab61-9e490e786e77 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received unexpected event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f for instance with vm_state active and task_state None.#033[00m
Jan 31 02:56:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 730 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 6.6 MiB/s wr, 209 op/s
Jan 31 02:56:29 np0005603621 nova_compute[247399]: 2026-01-31 07:56:29.496 247403 DEBUG oslo_concurrency.processutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/disk.config 8a3fb170-2275-431a-9931-fc0b814b13e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:29 np0005603621 nova_compute[247399]: 2026-01-31 07:56:29.497 247403 INFO nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Deleting local config drive /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0/disk.config because it was imported into RBD.#033[00m
Jan 31 02:56:29 np0005603621 systemd-machined[212769]: New machine qemu-15-instance-00000027.
Jan 31 02:56:29 np0005603621 systemd[1]: Started Virtual Machine qemu-15-instance-00000027.
Jan 31 02:56:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:30.475 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:30.476 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:56:30.477 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.740 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846190.7398555, 8a3fb170-2275-431a-9931-fc0b814b13e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.740 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.742 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.743 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.745 247403 INFO nova.virt.libvirt.driver [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Instance spawned successfully.#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.745 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.815 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.818 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.839 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.839 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.840 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.840 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.841 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.841 247403 DEBUG nova.virt.libvirt.driver [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.863 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.863 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846190.740678, 8a3fb170-2275-431a-9931-fc0b814b13e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.864 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] VM Started (Lifecycle Event)#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.893 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.897 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.946 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.967 247403 INFO nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Took 14.98 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:56:30 np0005603621 nova_compute[247399]: 2026-01-31 07:56:30.967 247403 DEBUG nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:31 np0005603621 nova_compute[247399]: 2026-01-31 07:56:31.118 247403 INFO nova.compute.manager [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Took 19.04 seconds to build instance.#033[00m
Jan 31 02:56:31 np0005603621 nova_compute[247399]: 2026-01-31 07:56:31.173 247403 DEBUG oslo_concurrency.lockutils [None req-9b87d736-98a6-4e09-9cde-6b1f1a69c0b2 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "8a3fb170-2275-431a-9931-fc0b814b13e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 736 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 217 op/s
Jan 31 02:56:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:31.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:32 np0005603621 nova_compute[247399]: 2026-01-31 07:56:32.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:56:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:56:32 np0005603621 nova_compute[247399]: 2026-01-31 07:56:32.631 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 736 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 706 KiB/s wr, 222 op/s
Jan 31 02:56:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 736 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 693 KiB/s wr, 244 op/s
Jan 31 02:56:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:35.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:37 np0005603621 nova_compute[247399]: 2026-01-31 07:56:37.282 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 736 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 631 KiB/s wr, 306 op/s
Jan 31 02:56:37 np0005603621 nova_compute[247399]: 2026-01-31 07:56:37.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:37.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:56:38
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', 'images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.data']
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:56:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:38.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.626 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "8a3fb170-2275-431a-9931-fc0b814b13e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.627 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "8a3fb170-2275-431a-9931-fc0b814b13e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.627 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "8a3fb170-2275-431a-9931-fc0b814b13e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.628 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "8a3fb170-2275-431a-9931-fc0b814b13e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.628 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "8a3fb170-2275-431a-9931-fc0b814b13e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.630 247403 INFO nova.compute.manager [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Terminating instance#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.631 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "refresh_cache-8a3fb170-2275-431a-9931-fc0b814b13e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.631 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquired lock "refresh_cache-8a3fb170-2275-431a-9931-fc0b814b13e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.631 247403 DEBUG nova.network.neutron [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:56:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:56:38 np0005603621 nova_compute[247399]: 2026-01-31 07:56:38.994 247403 DEBUG nova.network.neutron [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:56:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 736 MiB data, 763 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 79 KiB/s wr, 216 op/s
Jan 31 02:56:39 np0005603621 nova_compute[247399]: 2026-01-31 07:56:39.414 247403 DEBUG nova.network.neutron [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:39 np0005603621 nova_compute[247399]: 2026-01-31 07:56:39.510 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Releasing lock "refresh_cache-8a3fb170-2275-431a-9931-fc0b814b13e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:56:39 np0005603621 nova_compute[247399]: 2026-01-31 07:56:39.511 247403 DEBUG nova.compute.manager [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:56:39 np0005603621 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 31 02:56:39 np0005603621 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d00000027.scope: Consumed 9.322s CPU time.
Jan 31 02:56:39 np0005603621 systemd-machined[212769]: Machine qemu-15-instance-00000027 terminated.
Jan 31 02:56:39 np0005603621 nova_compute[247399]: 2026-01-31 07:56:39.731 247403 INFO nova.virt.libvirt.driver [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Instance destroyed successfully.#033[00m
Jan 31 02:56:39 np0005603621 nova_compute[247399]: 2026-01-31 07:56:39.732 247403 DEBUG nova.objects.instance [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lazy-loading 'resources' on Instance uuid 8a3fb170-2275-431a-9931-fc0b814b13e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:39.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:40Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:d7:71 10.100.0.9
Jan 31 02:56:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:56:40Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:d7:71 10.100.0.9
Jan 31 02:56:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:40 np0005603621 nova_compute[247399]: 2026-01-31 07:56:40.691 247403 INFO nova.virt.libvirt.driver [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Deleting instance files /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0_del#033[00m
Jan 31 02:56:40 np0005603621 nova_compute[247399]: 2026-01-31 07:56:40.692 247403 INFO nova.virt.libvirt.driver [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Deletion of /var/lib/nova/instances/8a3fb170-2275-431a-9931-fc0b814b13e0_del complete#033[00m
Jan 31 02:56:40 np0005603621 nova_compute[247399]: 2026-01-31 07:56:40.807 247403 INFO nova.compute.manager [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:56:40 np0005603621 nova_compute[247399]: 2026-01-31 07:56:40.808 247403 DEBUG oslo.service.loopingcall [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:56:40 np0005603621 nova_compute[247399]: 2026-01-31 07:56:40.808 247403 DEBUG nova.compute.manager [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:56:40 np0005603621 nova_compute[247399]: 2026-01-31 07:56:40.808 247403 DEBUG nova.network.neutron [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:56:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 748 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 318 op/s
Jan 31 02:56:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:56:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:41.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.131 247403 DEBUG nova.network.neutron [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.248 247403 DEBUG nova.network.neutron [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.286 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.296 247403 INFO nova.compute.manager [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Took 1.49 seconds to deallocate network for instance.#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.407 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.407 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.544 247403 DEBUG oslo_concurrency.processutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:56:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:42.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:56:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268182619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.968 247403 DEBUG oslo_concurrency.processutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:42 np0005603621 nova_compute[247399]: 2026-01-31 07:56:42.973 247403 DEBUG nova.compute.provider_tree [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:56:43 np0005603621 nova_compute[247399]: 2026-01-31 07:56:43.041 247403 DEBUG nova.scheduler.client.report [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:56:43 np0005603621 nova_compute[247399]: 2026-01-31 07:56:43.137 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:43 np0005603621 nova_compute[247399]: 2026-01-31 07:56:43.274 247403 INFO nova.scheduler.client.report [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Deleted allocations for instance 8a3fb170-2275-431a-9931-fc0b814b13e0#033[00m
Jan 31 02:56:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 747 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 284 op/s
Jan 31 02:56:43 np0005603621 nova_compute[247399]: 2026-01-31 07:56:43.904 247403 DEBUG oslo_concurrency.lockutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:43 np0005603621 nova_compute[247399]: 2026-01-31 07:56:43.904 247403 DEBUG oslo_concurrency.lockutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:43 np0005603621 nova_compute[247399]: 2026-01-31 07:56:43.958 247403 DEBUG oslo_concurrency.lockutils [None req-f02e8ffe-528f-40c4-b02a-4520befc0b29 d4307bc8a2224140b78ba248cecefe55 b6dca32431594e2682c5d2acb448bbf4 - - default default] Lock "8a3fb170-2275-431a-9931-fc0b814b13e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:44 np0005603621 nova_compute[247399]: 2026-01-31 07:56:44.089 247403 DEBUG nova.objects.instance [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lazy-loading 'flavor' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:44.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:44 np0005603621 nova_compute[247399]: 2026-01-31 07:56:44.895 247403 DEBUG oslo_concurrency.lockutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 722 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 266 op/s
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.447 247403 DEBUG oslo_concurrency.lockutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.448 247403 DEBUG oslo_concurrency.lockutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.448 247403 INFO nova.compute.manager [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Attaching volume bd6a018f-eaef-43b9-b78d-313bb85c40fa to /dev/vdb#033[00m
Jan 31 02:56:45 np0005603621 podman[272427]: 2026-01-31 07:56:45.514662727 +0000 UTC m=+0.059931586 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 02:56:45 np0005603621 podman[272428]: 2026-01-31 07:56:45.535343494 +0000 UTC m=+0.080611423 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.626 247403 DEBUG os_brick.utils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.628 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.664 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.665 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad94ea2-15dd-4acc-ba87-442cce8020e6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.666 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.671 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.672 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[a096a94a-7419-473c-99cd-0888e705d439]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.673 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.681 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.681 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[a4039d90-2983-470c-8659-136b610f122a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.683 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[8396c0f8-8805-49ed-b669-9d1a490aa8ba]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.683 247403 DEBUG oslo_concurrency.processutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.703 247403 DEBUG oslo_concurrency.processutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.705 247403 DEBUG os_brick.initiator.connectors.lightos [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.706 247403 DEBUG os_brick.initiator.connectors.lightos [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.706 247403 DEBUG os_brick.initiator.connectors.lightos [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.706 247403 DEBUG os_brick.utils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 02:56:45 np0005603621 nova_compute[247399]: 2026-01-31 07:56:45.707 247403 DEBUG nova.virt.block_device [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating existing volume attachment record: 3df5a4ea-9863-4a70-94e7-37a47bc480d7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 02:56:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:47 np0005603621 nova_compute[247399]: 2026-01-31 07:56:47.288 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 722 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 230 op/s
Jan 31 02:56:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:56:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/241420563' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:56:47 np0005603621 nova_compute[247399]: 2026-01-31 07:56:47.637 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:47 np0005603621 nova_compute[247399]: 2026-01-31 07:56:47.863 247403 DEBUG nova.objects.instance [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lazy-loading 'flavor' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:56:47 np0005603621 nova_compute[247399]: 2026-01-31 07:56:47.897 247403 DEBUG nova.virt.libvirt.driver [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Attempting to attach volume bd6a018f-eaef-43b9-b78d-313bb85c40fa with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 02:56:47 np0005603621 nova_compute[247399]: 2026-01-31 07:56:47.899 247403 DEBUG nova.virt.libvirt.guest [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-bd6a018f-eaef-43b9-b78d-313bb85c40fa">
Jan 31 02:56:47 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  </source>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 02:56:47 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  </auth>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  <serial>bd6a018f-eaef-43b9-b78d-313bb85c40fa</serial>
Jan 31 02:56:47 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 02:56:47 np0005603621 nova_compute[247399]: </disk>
Jan 31 02:56:47 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 02:56:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:48.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:48 np0005603621 nova_compute[247399]: 2026-01-31 07:56:48.662 247403 DEBUG nova.virt.libvirt.driver [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:48 np0005603621 nova_compute[247399]: 2026-01-31 07:56:48.662 247403 DEBUG nova.virt.libvirt.driver [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:48 np0005603621 nova_compute[247399]: 2026-01-31 07:56:48.662 247403 DEBUG nova.virt.libvirt.driver [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:56:48 np0005603621 nova_compute[247399]: 2026-01-31 07:56:48.662 247403 DEBUG nova.virt.libvirt.driver [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] No VIF found with MAC fa:16:3e:43:6d:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01833222100083752 of space, bias 1.0, pg target 5.499666300251256 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001608520030709101 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:56:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 02:56:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 722 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 381 KiB/s rd, 3.9 MiB/s wr, 145 op/s
Jan 31 02:56:49 np0005603621 nova_compute[247399]: 2026-01-31 07:56:49.336 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:49 np0005603621 nova_compute[247399]: 2026-01-31 07:56:49.373 247403 DEBUG oslo_concurrency.lockutils [None req-0ceabdc1-ff16-4aa8-83a3-cd974caa686b d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:56:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:50.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 722 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 396 KiB/s rd, 3.9 MiB/s wr, 165 op/s
Jan 31 02:56:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:52.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:52 np0005603621 nova_compute[247399]: 2026-01-31 07:56:52.290 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:52 np0005603621 nova_compute[247399]: 2026-01-31 07:56:52.640 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 722 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 144 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 31 02:56:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:54.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:54 np0005603621 nova_compute[247399]: 2026-01-31 07:56:54.730 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846199.7287247, 8a3fb170-2275-431a-9931-fc0b814b13e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:56:54 np0005603621 nova_compute[247399]: 2026-01-31 07:56:54.730 247403 INFO nova.compute.manager [-] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:56:55 np0005603621 nova_compute[247399]: 2026-01-31 07:56:55.020 247403 DEBUG nova.compute.manager [None req-40d48252-d7d8-443c-8cba-ee6d8b89267f - - - - - -] [instance: 8a3fb170-2275-431a-9931-fc0b814b13e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:56:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 722 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 499 KiB/s rd, 42 KiB/s wr, 60 op/s
Jan 31 02:56:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:56:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:56.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:56:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:56:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:56.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:57 np0005603621 nova_compute[247399]: 2026-01-31 07:56:57.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 606 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 49 KiB/s wr, 110 op/s
Jan 31 02:56:57 np0005603621 nova_compute[247399]: 2026-01-31 07:56:57.642 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:56:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:56:58.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:56:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:56:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:56:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:56:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 606 MiB data, 727 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 108 op/s
Jan 31 02:57:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:57:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:00.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:57:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 564 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 42 KiB/s wr, 120 op/s
Jan 31 02:57:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:02.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:02 np0005603621 nova_compute[247399]: 2026-01-31 07:57:02.058 247403 INFO nova.compute.manager [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Rebuilding instance#033[00m
Jan 31 02:57:02 np0005603621 nova_compute[247399]: 2026-01-31 07:57:02.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:02 np0005603621 nova_compute[247399]: 2026-01-31 07:57:02.389 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:02 np0005603621 nova_compute[247399]: 2026-01-31 07:57:02.644 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:02 np0005603621 nova_compute[247399]: 2026-01-31 07:57:02.665 247403 DEBUG nova.compute.manager [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:03 np0005603621 nova_compute[247399]: 2026-01-31 07:57:03.296 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'pci_requests' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 564 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 39 KiB/s wr, 102 op/s
Jan 31 02:57:03 np0005603621 nova_compute[247399]: 2026-01-31 07:57:03.335 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:03 np0005603621 nova_compute[247399]: 2026-01-31 07:57:03.392 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'resources' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:03 np0005603621 nova_compute[247399]: 2026-01-31 07:57:03.432 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'migration_context' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:03 np0005603621 nova_compute[247399]: 2026-01-31 07:57:03.450 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:57:03 np0005603621 nova_compute[247399]: 2026-01-31 07:57:03.453 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:57:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:04.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 577 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 668 KiB/s wr, 116 op/s
Jan 31 02:57:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:06.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.326 247403 DEBUG oslo_concurrency.lockutils [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.326 247403 DEBUG oslo_concurrency.lockutils [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.378 247403 INFO nova.compute.manager [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Detaching volume bd6a018f-eaef-43b9-b78d-313bb85c40fa#033[00m
Jan 31 02:57:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.572 247403 INFO nova.virt.block_device [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Attempting to driver detach volume bd6a018f-eaef-43b9-b78d-313bb85c40fa from mountpoint /dev/vdb#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.579 247403 DEBUG nova.virt.libvirt.driver [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Attempting to detach device vdb from instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.579 247403 DEBUG nova.virt.libvirt.guest [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-bd6a018f-eaef-43b9-b78d-313bb85c40fa">
Jan 31 02:57:06 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  </source>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <serial>bd6a018f-eaef-43b9-b78d-313bb85c40fa</serial>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]: </disk>
Jan 31 02:57:06 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 02:57:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:06.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.642 247403 INFO nova.virt.libvirt.driver [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Successfully detached device vdb from instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 from the persistent domain config.#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.643 247403 DEBUG nova.virt.libvirt.driver [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.643 247403 DEBUG nova.virt.libvirt.guest [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-bd6a018f-eaef-43b9-b78d-313bb85c40fa">
Jan 31 02:57:06 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  </source>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <serial>bd6a018f-eaef-43b9-b78d-313bb85c40fa</serial>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 02:57:06 np0005603621 nova_compute[247399]: </disk>
Jan 31 02:57:06 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.747 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769846226.7471461, 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.749 247403 DEBUG nova.virt.libvirt.driver [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.750 247403 INFO nova.virt.libvirt.driver [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Successfully detached device vdb from instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 from the live domain config.#033[00m
Jan 31 02:57:06 np0005603621 kernel: tapbd448b0d-7d (unregistering): left promiscuous mode
Jan 31 02:57:06 np0005603621 NetworkManager[49013]: <info>  [1769846226.9437] device (tapbd448b0d-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.949 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:06 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:06Z|00066|binding|INFO|Releasing lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 from this chassis (sb_readonly=0)
Jan 31 02:57:06 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:06Z|00067|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 down in Southbound
Jan 31 02:57:06 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:06Z|00068|binding|INFO|Removing iface tapbd448b0d-7d ovn-installed in OVS
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.953 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:06 np0005603621 nova_compute[247399]: 2026-01-31 07:57:06.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:06 np0005603621 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000020.scope: Deactivated successfully.
Jan 31 02:57:07 np0005603621 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000020.scope: Consumed 16.337s CPU time.
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.001 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:07 np0005603621 systemd-machined[212769]: Machine qemu-13-instance-00000020 terminated.
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.002 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 unbound from our chassis#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.005 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.021 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[20b89d80-76ef-40f8-82f8-c126ea2758e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.046 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[55145b63-4287-430a-9e28-873c31ae6896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.049 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6c43eb8a-c2bd-4d12-851b-0dced0ef4d6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.070 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[22212297-b638-495c-836f-9a1fb3353c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.083 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9e095d-150b-4266-a856-93209422a282]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272576, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.099 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[55d9d1ff-6c7e-4cc6-b2cb-cc40daf5629e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272577, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272577, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.101 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.103 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.109 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.109 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:07 np0005603621 kernel: tapbd448b0d-7d: entered promiscuous mode
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.170 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:07Z|00069|binding|INFO|Claiming lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for this chassis.
Jan 31 02:57:07 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:07Z|00070|binding|INFO|bd448b0d-7dc7-43bc-b4d2-eba76110aa01: Claiming fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:57:07 np0005603621 kernel: tapbd448b0d-7d (unregistering): left promiscuous mode
Jan 31 02:57:07 np0005603621 NetworkManager[49013]: <info>  [1769846227.1739] manager: (tapbd448b0d-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Jan 31 02:57:07 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:07Z|00071|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 ovn-installed in OVS
Jan 31 02:57:07 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:07Z|00072|if_status|INFO|Not setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 down as sb is readonly
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.182 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:07Z|00073|binding|INFO|Releasing lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 from this chassis (sb_readonly=0)
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.249 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.251 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 bound to our chassis#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.253 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.256 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.260 247403 DEBUG nova.objects.instance [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lazy-loading 'flavor' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.265 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[23ce20a7-73cf-4636-a76e-07456ade99b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.287 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[98200912-d8dc-4f54-b1c7-76030af63780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.290 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.291 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b15e1774-9793-4b08-9442-89f33fe70a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 628 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 155 op/s
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.310 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[22ab56db-4bcb-479e-a6e7-3478a65471fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.325 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4386ff7-c01c-4f9f-ad28-cf2f5cee7dcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272594, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.339 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b52a441-b9ed-4725-9c11-1905cdd9aa55]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272595, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272595, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.341 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.343 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.347 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.347 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.348 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.348 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.349 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 unbound from our chassis#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.351 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.364 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[af6a70b7-2665-4b1f-a823-123ff6373209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.378 247403 DEBUG oslo_concurrency.lockutils [None req-699e8161-1eb5-4269-a13f-6c9228a05429 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.388 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc99b81-b1d5-4a34-bfc2-668d57e6c59a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.391 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a2922642-351c-45f8-9509-329c715e3afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.416 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e015dceb-d476-4d8e-9d4c-2b78f90c2fb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.429 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dced98e0-9962-4e06-b70c-3f54f7636d7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272602, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.441 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0d485e0b-f901-4783-9799-3677bd527ff3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272603, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272603, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.442 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.472 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance shutdown successfully after 4 seconds.#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.476 247403 DEBUG nova.compute.manager [req-1e284a30-4afa-4b04-8102-b6a0add03fb1 req-c29952a4-477b-4250-bdb9-ef61c8f6ef61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.476 247403 DEBUG oslo_concurrency.lockutils [req-1e284a30-4afa-4b04-8102-b6a0add03fb1 req-c29952a4-477b-4250-bdb9-ef61c8f6ef61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.477 247403 DEBUG oslo_concurrency.lockutils [req-1e284a30-4afa-4b04-8102-b6a0add03fb1 req-c29952a4-477b-4250-bdb9-ef61c8f6ef61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.477 247403 DEBUG oslo_concurrency.lockutils [req-1e284a30-4afa-4b04-8102-b6a0add03fb1 req-c29952a4-477b-4250-bdb9-ef61c8f6ef61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.477 247403 DEBUG nova.compute.manager [req-1e284a30-4afa-4b04-8102-b6a0add03fb1 req-c29952a4-477b-4250-bdb9-ef61c8f6ef61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.477 247403 WARNING nova.compute.manager [req-1e284a30-4afa-4b04-8102-b6a0add03fb1 req-c29952a4-477b-4250-bdb9-ef61c8f6ef61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state error and task_state rebuilding.#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.481 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.486 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.487 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.487 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:07.487 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.489 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance destroyed successfully.#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.495 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance destroyed successfully.#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.496 247403 DEBUG nova.virt.libvirt.vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:01Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.497 247403 DEBUG nova.network.os_vif_util [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.497 247403 DEBUG nova.network.os_vif_util [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.498 247403 DEBUG os_vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.500 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.500 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd448b0d-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.502 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.503 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.507 247403 INFO os_vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d')#033[00m
Jan 31 02:57:07 np0005603621 nova_compute[247399]: 2026-01-31 07:57:07.646 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:57:08 np0005603621 nova_compute[247399]: 2026-01-31 07:57:08.596 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deleting instance files /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_del#033[00m
Jan 31 02:57:08 np0005603621 nova_compute[247399]: 2026-01-31 07:57:08.597 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deletion of /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_del complete#033[00m
Jan 31 02:57:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:08.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 628 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 215 KiB/s rd, 3.2 MiB/s wr, 90 op/s
Jan 31 02:57:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:09.437 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:09 np0005603621 nova_compute[247399]: 2026-01-31 07:57:09.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:09.438 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:57:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:57:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.282 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.282 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating image(s)#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.306 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.338 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.363 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.366 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "365f9823d2619ef09948bdeed685488da63755b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.367 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.698 247403 DEBUG nova.virt.libvirt.imagebackend [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/0864ca59-9877-4e6d-adfc-f0a3204ed8f8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/0864ca59-9877-4e6d-adfc-f0a3204ed8f8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.766 247403 DEBUG nova.compute.manager [req-26d8f02c-35b8-4a8c-b894-661a9ee78b04 req-8266eef0-5ace-45f7-ac16-e49713951350 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.766 247403 DEBUG oslo_concurrency.lockutils [req-26d8f02c-35b8-4a8c-b894-661a9ee78b04 req-8266eef0-5ace-45f7-ac16-e49713951350 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.767 247403 DEBUG oslo_concurrency.lockutils [req-26d8f02c-35b8-4a8c-b894-661a9ee78b04 req-8266eef0-5ace-45f7-ac16-e49713951350 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.767 247403 DEBUG oslo_concurrency.lockutils [req-26d8f02c-35b8-4a8c-b894-661a9ee78b04 req-8266eef0-5ace-45f7-ac16-e49713951350 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.767 247403 DEBUG nova.compute.manager [req-26d8f02c-35b8-4a8c-b894-661a9ee78b04 req-8266eef0-5ace-45f7-ac16-e49713951350 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:10 np0005603621 nova_compute[247399]: 2026-01-31 07:57:10.767 247403 WARNING nova.compute.manager [req-26d8f02c-35b8-4a8c-b894-661a9ee78b04 req-8266eef0-5ace-45f7-ac16-e49713951350 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state error and task_state rebuild_spawning.#033[00m
Jan 31 02:57:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 551 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 157 op/s
Jan 31 02:57:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:12.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:12 np0005603621 podman[272854]: 2026-01-31 07:57:12.230230651 +0000 UTC m=+0.090437661 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:57:12 np0005603621 podman[272854]: 2026-01-31 07:57:12.322168798 +0000 UTC m=+0.182375808 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 02:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:12.440 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.501 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.518 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.569 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.part --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.570 247403 DEBUG nova.virt.images [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] 0864ca59-9877-4e6d-adfc-f0a3204ed8f8 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.571 247403 DEBUG nova.privsep.utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.571 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.part /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:12.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:12 np0005603621 nova_compute[247399]: 2026-01-31 07:57:12.648 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 497 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 31 02:57:13 np0005603621 podman[273017]: 2026-01-31 07:57:13.913383056 +0000 UTC m=+1.047111975 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:57:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:14.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3086653534' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:57:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:14.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3086653534' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:57:15 np0005603621 podman[273017]: 2026-01-31 07:57:15.106592643 +0000 UTC m=+2.240321572 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 02:57:15 np0005603621 nova_compute[247399]: 2026-01-31 07:57:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:15 np0005603621 nova_compute[247399]: 2026-01-31 07:57:15.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:57:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 452 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Jan 31 02:57:15 np0005603621 nova_compute[247399]: 2026-01-31 07:57:15.345 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:57:15 np0005603621 nova_compute[247399]: 2026-01-31 07:57:15.346 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:57:15 np0005603621 nova_compute[247399]: 2026-01-31 07:57:15.346 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:57:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:16.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:16 np0005603621 podman[273055]: 2026-01-31 07:57:16.182048124 +0000 UTC m=+0.074141991 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 02:57:16 np0005603621 podman[273054]: 2026-01-31 07:57:16.183095086 +0000 UTC m=+0.075212294 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 02:57:16 np0005603621 nova_compute[247399]: 2026-01-31 07:57:16.281 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.part /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.converted" returned: 0 in 3.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:16 np0005603621 nova_compute[247399]: 2026-01-31 07:57:16.285 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:16 np0005603621 nova_compute[247399]: 2026-01-31 07:57:16.346 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5.converted --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:16 np0005603621 nova_compute[247399]: 2026-01-31 07:57:16.348 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:16 np0005603621 nova_compute[247399]: 2026-01-31 07:57:16.375 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:16 np0005603621 nova_compute[247399]: 2026-01-31 07:57:16.379 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:16.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:16 np0005603621 podman[273125]: 2026-01-31 07:57:16.707662021 +0000 UTC m=+0.496290661 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, name=keepalived, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 02:57:16 np0005603621 podman[273233]: 2026-01-31 07:57:16.875871374 +0000 UTC m=+0.152782261 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, version=2.2.4)
Jan 31 02:57:16 np0005603621 podman[273125]: 2026-01-31 07:57:16.915874086 +0000 UTC m=+0.704502696 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, build-date=2023-02-22T09:23:20, distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., vcs-type=git, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 31 02:57:17 np0005603621 nova_compute[247399]: 2026-01-31 07:57:17.071 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updating instance_info_cache with network_info: [{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:57:17 np0005603621 nova_compute[247399]: 2026-01-31 07:57:17.207 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:57:17 np0005603621 nova_compute[247399]: 2026-01-31 07:57:17.208 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:57:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:57:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 405 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.3 MiB/s wr, 242 op/s
Jan 31 02:57:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:17 np0005603621 nova_compute[247399]: 2026-01-31 07:57:17.503 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:17 np0005603621 nova_compute[247399]: 2026-01-31 07:57:17.651 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:17 np0005603621 nova_compute[247399]: 2026-01-31 07:57:17.952 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.024 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] resizing rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:57:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:18.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9da0bfad-c4b4-4a66-b9ce-f78152132a27 does not exist
Jan 31 02:57:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a22429b3-e5f4-449d-a00b-2a27af98c268 does not exist
Jan 31 02:57:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 704e2020-b586-4c7e-8187-2a668f57ebba does not exist
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.219 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.308 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.308 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.308 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.309 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.309 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.607 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.607 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.608 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.608 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.608 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.610 247403 INFO nova.compute.manager [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Terminating instance#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.611 247403 DEBUG nova.compute.manager [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:57:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:18 np0005603621 kernel: tapbadb2cfb-b8 (unregistering): left promiscuous mode
Jan 31 02:57:18 np0005603621 NetworkManager[49013]: <info>  [1769846238.7201] device (tapbadb2cfb-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.727 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:18 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:18Z|00074|binding|INFO|Releasing lport badb2cfb-b8e3-4fbc-9969-b7746aff36ae from this chassis (sb_readonly=0)
Jan 31 02:57:18 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:18Z|00075|binding|INFO|Setting lport badb2cfb-b8e3-4fbc-9969-b7746aff36ae down in Southbound
Jan 31 02:57:18 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:18Z|00076|binding|INFO|Removing iface tapbadb2cfb-b8 ovn-installed in OVS
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.735 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895703216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:18 np0005603621 podman[273593]: 2026-01-31 07:57:18.661440574 +0000 UTC m=+0.020684627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:57:18 np0005603621 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Jan 31 02:57:18 np0005603621 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001e.scope: Consumed 18.890s CPU time.
Jan 31 02:57:18 np0005603621 systemd-machined[212769]: Machine qemu-12-instance-0000001e terminated.
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.777 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:18 np0005603621 podman[273593]: 2026-01-31 07:57:18.806834124 +0000 UTC m=+0.166078147 container create be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.834 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.841 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.854 247403 INFO nova.virt.libvirt.driver [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Instance destroyed successfully.#033[00m
Jan 31 02:57:18 np0005603621 nova_compute[247399]: 2026-01-31 07:57:18.855 247403 DEBUG nova.objects.instance [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lazy-loading 'resources' on Instance uuid 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:18.950 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:6d:d0 10.100.0.11'], port_security=['fa:16:3e:43:6d:d0 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9ba5b3e0-db8e-489f-bb9e-3697ab066ae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80d90d51-335c-4f74-8a61-143d47d84f22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fd9f0c923b994b0295e72b111f661de1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '026daea3-1ff6-4616-9656-065604061a00', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd84a3ff-b232-4c39-928c-e2cb3c0840e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=badb2cfb-b8e3-4fbc-9969-b7746aff36ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:18.951 159734 INFO neutron.agent.ovn.metadata.agent [-] Port badb2cfb-b8e3-4fbc-9969-b7746aff36ae in datapath 80d90d51-335c-4f74-8a61-143d47d84f22 unbound from our chassis#033[00m
Jan 31 02:57:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:18.953 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80d90d51-335c-4f74-8a61-143d47d84f22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:57:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:18.953 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc13072f-0f48-4c35-8deb-c9301a53b719]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:18.954 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22 namespace which is not needed anymore#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.127 247403 DEBUG nova.virt.libvirt.vif [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:54:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-553589441',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-553589441',id=30,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYVdi1LnaHZ5r6xMfeklqfzjDViAexljM9P3M0Fy5FZ3Xolf4vxCOKTYu0NFlJGf4EcZe3GteIpoGaJZuwWfVMuKuQVsr/qX8LdXn5NJVOqUqTS1m1sSlyZl2teCw6PaQ==',key_name='tempest-keypair-1101838222',keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:54:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fd9f0c923b994b0295e72b111f661de1',ramdisk_id='',reservation_id='r-ewjt80bb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-860437657',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-860437657-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:54:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='d6078cfaadaa45ae9256245554f784fe',uuid=9ba5b3e0-db8e-489f-bb9e-3697ab066ae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.129 247403 DEBUG nova.network.os_vif_util [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Converting VIF {"id": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "address": "fa:16:3e:43:6d:d0", "network": {"id": "80d90d51-335c-4f74-8a61-143d47d84f22", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-991561978-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fd9f0c923b994b0295e72b111f661de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbadb2cfb-b8", "ovs_interfaceid": "badb2cfb-b8e3-4fbc-9969-b7746aff36ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.129 247403 DEBUG nova.network.os_vif_util [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.129 247403 DEBUG os_vif [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.131 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbadb2cfb-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.147 247403 INFO os_vif [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:6d:d0,bridge_name='br-int',has_traffic_filtering=True,id=badb2cfb-b8e3-4fbc-9969-b7746aff36ae,network=Network(80d90d51-335c-4f74-8a61-143d47d84f22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbadb2cfb-b8')#033[00m
Jan 31 02:57:19 np0005603621 systemd[1]: Started libpod-conmon-be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd.scope.
Jan 31 02:57:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:57:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:57:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:57:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 405 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 767 KiB/s wr, 183 op/s
Jan 31 02:57:19 np0005603621 podman[273593]: 2026-01-31 07:57:19.31423136 +0000 UTC m=+0.673475383 container init be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:57:19 np0005603621 podman[273593]: 2026-01-31 07:57:19.321323431 +0000 UTC m=+0.680567454 container start be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jones, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:57:19 np0005603621 quizzical_jones[273656]: 167 167
Jan 31 02:57:19 np0005603621 systemd[1]: libpod-be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd.scope: Deactivated successfully.
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.342 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.343 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.347 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.347 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.399 247403 DEBUG nova.compute.manager [req-8b9721fc-761d-41b6-a289-7325ca059cfd req-111192f6-eef0-418d-9d57-a3675b3e72c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-vif-unplugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.399 247403 DEBUG oslo_concurrency.lockutils [req-8b9721fc-761d-41b6-a289-7325ca059cfd req-111192f6-eef0-418d-9d57-a3675b3e72c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.399 247403 DEBUG oslo_concurrency.lockutils [req-8b9721fc-761d-41b6-a289-7325ca059cfd req-111192f6-eef0-418d-9d57-a3675b3e72c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.400 247403 DEBUG oslo_concurrency.lockutils [req-8b9721fc-761d-41b6-a289-7325ca059cfd req-111192f6-eef0-418d-9d57-a3675b3e72c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.400 247403 DEBUG nova.compute.manager [req-8b9721fc-761d-41b6-a289-7325ca059cfd req-111192f6-eef0-418d-9d57-a3675b3e72c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] No waiting events found dispatching network-vif-unplugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.400 247403 DEBUG nova.compute.manager [req-8b9721fc-761d-41b6-a289-7325ca059cfd req-111192f6-eef0-418d-9d57-a3675b3e72c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-vif-unplugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:57:19 np0005603621 podman[273593]: 2026-01-31 07:57:19.429658662 +0000 UTC m=+0.788902685 container attach be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jones, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 02:57:19 np0005603621 podman[273593]: 2026-01-31 07:57:19.433911335 +0000 UTC m=+0.793155358 container died be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.507 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.509 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4385MB free_disk=20.785079956054688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.509 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.509 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.571 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.571 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Ensure instance console log exists: /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.572 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.572 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.572 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.574 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start _get_guest_xml network_info=[{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.577 247403 WARNING nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.581 247403 DEBUG nova.virt.libvirt.host [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.582 247403 DEBUG nova.virt.libvirt.host [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.587 247403 DEBUG nova.virt.libvirt.host [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.588 247403 DEBUG nova.virt.libvirt.host [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.590 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.590 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.591 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.591 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.592 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.592 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.592 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.592 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.593 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.593 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.593 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.594 247403 DEBUG nova.virt.hardware [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.594 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-acbe4f06bbf9845c9657beceb909a3c32c71e60794c7180c5f1df5c19d5a367c-merged.mount: Deactivated successfully.
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.659 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.906 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.907 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.907 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0e2720e1-c54e-4332-90a4-f5c2c884b77d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.908 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:57:19 np0005603621 nova_compute[247399]: 2026-01-31 07:57:19.908 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.005 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:20 np0005603621 podman[273593]: 2026-01-31 07:57:20.012911701 +0000 UTC m=+1.372155724 container remove be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jones, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 02:57:20 np0005603621 systemd[1]: libpod-conmon-be9ccfe462a11d1947575dfd11c65e9a7677c9288d52a7bc351cd669a160e6dd.scope: Deactivated successfully.
Jan 31 02:57:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:20.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:57:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3632473410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.119 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:20 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [NOTICE]   (268746) : haproxy version is 2.8.14-c23fe91
Jan 31 02:57:20 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [NOTICE]   (268746) : path to executable is /usr/sbin/haproxy
Jan 31 02:57:20 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [WARNING]  (268746) : Exiting Master process...
Jan 31 02:57:20 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [ALERT]    (268746) : Current worker (268748) exited with code 143 (Terminated)
Jan 31 02:57:20 np0005603621 neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22[268742]: [WARNING]  (268746) : All workers exited. Exiting... (0)
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.146 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:20 np0005603621 systemd[1]: libpod-730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b.scope: Deactivated successfully.
Jan 31 02:57:20 np0005603621 podman[273665]: 2026-01-31 07:57:20.155964868 +0000 UTC m=+0.968079562 container died 730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.159 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:20 np0005603621 podman[273742]: 2026-01-31 07:57:20.153899653 +0000 UTC m=+0.043798661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:57:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033082033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.468 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.475 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:57:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b-userdata-shm.mount: Deactivated successfully.
Jan 31 02:57:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:20.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-31b2b146d401b7bcf40b312d31cb38221eafd3a685c0f4028a31403bca716a22-merged.mount: Deactivated successfully.
Jan 31 02:57:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:57:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039844071' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.752 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.754 247403 DEBUG nova.virt.libvirt.vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:09Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.754 247403 DEBUG nova.network.os_vif_util [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.755 247403 DEBUG nova.network.os_vif_util [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.758 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <uuid>e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</uuid>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <name>instance-00000020</name>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersAdminTestJSON-server-1840895963</nova:name>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:57:19</nova:creationTime>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:user uuid="93973daeb08c453e90372a79b54b9ede">tempest-ServersAdminTestJSON-784933461-project-member</nova:user>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:project uuid="8033316fc42c4926bfd1f8a34b02fa97">tempest-ServersAdminTestJSON-784933461</nova:project>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="0864ca59-9877-4e6d-adfc-f0a3204ed8f8"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <nova:port uuid="bd448b0d-7dc7-43bc-b4d2-eba76110aa01">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <entry name="serial">e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</entry>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <entry name="uuid">e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</entry>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:64:f8:02"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <target dev="tapbd448b0d-7d"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/console.log" append="off"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:57:20 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:57:20 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:57:20 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:57:20 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.759 247403 DEBUG nova.compute.manager [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Preparing to wait for external event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.759 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.760 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.760 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.761 247403 DEBUG nova.virt.libvirt.vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:55:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:09Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.761 247403 DEBUG nova.network.os_vif_util [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.762 247403 DEBUG nova.network.os_vif_util [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.762 247403 DEBUG os_vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.764 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.764 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.766 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.767 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd448b0d-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.767 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd448b0d-7d, col_values=(('external_ids', {'iface-id': 'bd448b0d-7dc7-43bc-b4d2-eba76110aa01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:f8:02', 'vm-uuid': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.769 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:20 np0005603621 NetworkManager[49013]: <info>  [1769846240.7703] manager: (tapbd448b0d-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.771 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:57:20 np0005603621 podman[273665]: 2026-01-31 07:57:20.772283492 +0000 UTC m=+1.584398186 container cleanup 730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:57:20 np0005603621 systemd[1]: libpod-conmon-730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b.scope: Deactivated successfully.
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.776 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.778 247403 INFO os_vif [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d')#033[00m
Jan 31 02:57:20 np0005603621 nova_compute[247399]: 2026-01-31 07:57:20.781 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:57:20 np0005603621 podman[273742]: 2026-01-31 07:57:20.810340814 +0000 UTC m=+0.700239802 container create fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 02:57:20 np0005603621 systemd[1]: Started libpod-conmon-fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64.scope.
Jan 31 02:57:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:57:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf5cfc83fb6aec7a1390a83b44a2775176c9319c667442a3d80a1acabd017ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf5cfc83fb6aec7a1390a83b44a2775176c9319c667442a3d80a1acabd017ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf5cfc83fb6aec7a1390a83b44a2775176c9319c667442a3d80a1acabd017ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf5cfc83fb6aec7a1390a83b44a2775176c9319c667442a3d80a1acabd017ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbf5cfc83fb6aec7a1390a83b44a2775176c9319c667442a3d80a1acabd017ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:20 np0005603621 podman[273742]: 2026-01-31 07:57:20.994588869 +0000 UTC m=+0.884487877 container init fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 02:57:21 np0005603621 podman[273742]: 2026-01-31 07:57:21.001940268 +0000 UTC m=+0.891839256 container start fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:57:21 np0005603621 podman[273742]: 2026-01-31 07:57:21.016883286 +0000 UTC m=+0.906782684 container attach fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:57:21 np0005603621 podman[273835]: 2026-01-31 07:57:21.039634998 +0000 UTC m=+0.248479386 container remove 730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.043 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[823125a9-86aa-4abf-b4c1-7471fceabe67]: (4, ('Sat Jan 31 07:57:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22 (730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b)\n730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b\nSat Jan 31 07:57:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22 (730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b)\n730a4bb8d6e73d0495b11c439f32289771ec71a7275db72e03e7a70f3534c96b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.046 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[38b00c70-dba0-4ea1-a873-47bab1940401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.048 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80d90d51-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.050 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:21 np0005603621 kernel: tap80d90d51-30: left promiscuous mode
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.065 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b4da62-4744-44f3-ada8-04767970831c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.079 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fb76f2a7-6464-4e57-9ba9-b0f8d0e7a79d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.082 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[19f14278-11ad-4b3e-bbe6-d820291eaba1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.085 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.086 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.094 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ce595a62-cddc-4eb9-959a-8319c857e87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537723, 'reachable_time': 35856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273861, 'error': None, 'target': 'ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 systemd[1]: run-netns-ovnmeta\x2d80d90d51\x2d335c\x2d4f74\x2d8a61\x2d143d47d84f22.mount: Deactivated successfully.
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.098 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80d90d51-335c-4f74-8a61-143d47d84f22 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:57:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:21.098 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[26a036fa-28c5-4dff-90b4-dc40c5a8e1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.238 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.238 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.238 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No VIF found with MAC fa:16:3e:64:f8:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.239 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Using config drive#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.265 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 373 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 244 op/s
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.372 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.493 247403 INFO nova.virt.libvirt.driver [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Deleting instance files /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_del#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.494 247403 INFO nova.virt.libvirt.driver [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Deletion of /var/lib/nova/instances/9ba5b3e0-db8e-489f-bb9e-3697ab066ae4_del complete#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.503 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'keypairs' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.516 247403 DEBUG nova.compute.manager [req-0d03c353-6fb9-4d90-9a00-1c0d55e3dba0 req-fcef36d8-bd1b-4321-8ba0-a98c145b5fda fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.516 247403 DEBUG oslo_concurrency.lockutils [req-0d03c353-6fb9-4d90-9a00-1c0d55e3dba0 req-fcef36d8-bd1b-4321-8ba0-a98c145b5fda fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.517 247403 DEBUG oslo_concurrency.lockutils [req-0d03c353-6fb9-4d90-9a00-1c0d55e3dba0 req-fcef36d8-bd1b-4321-8ba0-a98c145b5fda fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.517 247403 DEBUG oslo_concurrency.lockutils [req-0d03c353-6fb9-4d90-9a00-1c0d55e3dba0 req-fcef36d8-bd1b-4321-8ba0-a98c145b5fda fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.517 247403 DEBUG nova.compute.manager [req-0d03c353-6fb9-4d90-9a00-1c0d55e3dba0 req-fcef36d8-bd1b-4321-8ba0-a98c145b5fda fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] No waiting events found dispatching network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.517 247403 WARNING nova.compute.manager [req-0d03c353-6fb9-4d90-9a00-1c0d55e3dba0 req-fcef36d8-bd1b-4321-8ba0-a98c145b5fda fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received unexpected event network-vif-plugged-badb2cfb-b8e3-4fbc-9969-b7746aff36ae for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.754 247403 INFO nova.compute.manager [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Took 3.14 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.755 247403 DEBUG oslo.service.loopingcall [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.756 247403 DEBUG nova.compute.manager [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:57:21 np0005603621 nova_compute[247399]: 2026-01-31 07:57:21.756 247403 DEBUG nova.network.neutron [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:57:21 np0005603621 confident_goldwasser[273850]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:57:21 np0005603621 confident_goldwasser[273850]: --> relative data size: 1.0
Jan 31 02:57:21 np0005603621 confident_goldwasser[273850]: --> All data devices are unavailable
Jan 31 02:57:21 np0005603621 systemd[1]: libpod-fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64.scope: Deactivated successfully.
Jan 31 02:57:21 np0005603621 podman[273891]: 2026-01-31 07:57:21.841074255 +0000 UTC m=+0.022158025 container died fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:57:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cbf5cfc83fb6aec7a1390a83b44a2775176c9319c667442a3d80a1acabd017ac-merged.mount: Deactivated successfully.
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.020 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating config drive at /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.023 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5h48jdg7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:22.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.065 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.066 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.066 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.066 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.067 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.067 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:57:22 np0005603621 podman[273891]: 2026-01-31 07:57:22.077145801 +0000 UTC m=+0.258229541 container remove fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:57:22 np0005603621 systemd[1]: libpod-conmon-fd79261d747fc25c9685e32675c69f2cb5eaf0e83dca87b254b38f95d2cafe64.scope: Deactivated successfully.
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.145 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5h48jdg7" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.181 247403 DEBUG nova.storage.rbd_utils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.185 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.200 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846227.1856644, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.201 247403 INFO nova.compute.manager [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.379 247403 DEBUG nova.compute.manager [None req-88687917-5df7-4d4a-9fec-c568d463de92 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.383 247403 DEBUG nova.compute.manager [None req-88687917-5df7-4d4a-9fec-c568d463de92 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.543 247403 DEBUG oslo_concurrency.processutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.544 247403 INFO nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deleting local config drive /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config because it was imported into RBD.#033[00m
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.571851291 +0000 UTC m=+0.059528613 container create 1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:57:22 np0005603621 kernel: tapbd448b0d-7d: entered promiscuous mode
Jan 31 02:57:22 np0005603621 NetworkManager[49013]: <info>  [1769846242.5828] manager: (tapbd448b0d-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.583 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:22Z|00077|binding|INFO|Claiming lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for this chassis.
Jan 31 02:57:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:22Z|00078|binding|INFO|bd448b0d-7dc7-43bc-b4d2-eba76110aa01: Claiming fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:57:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:22Z|00079|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 ovn-installed in OVS
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:22 np0005603621 systemd-udevd[274112]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:57:22 np0005603621 systemd-machined[212769]: New machine qemu-16-instance-00000020.
Jan 31 02:57:22 np0005603621 NetworkManager[49013]: <info>  [1769846242.6156] device (tapbd448b0d-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:57:22 np0005603621 NetworkManager[49013]: <info>  [1769846242.6162] device (tapbd448b0d-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:57:22 np0005603621 systemd[1]: Started Virtual Machine qemu-16-instance-00000020.
Jan 31 02:57:22 np0005603621 systemd[1]: Started libpod-conmon-1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf.scope.
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.532616613 +0000 UTC m=+0.020293955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:57:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:57:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:22.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.651 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.681 247403 INFO nova.compute.manager [None req-88687917-5df7-4d4a-9fec-c568d463de92 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.688447759 +0000 UTC m=+0.176125101 container init 1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.694907581 +0000 UTC m=+0.182584913 container start 1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 02:57:22 np0005603621 amazing_galileo[274118]: 167 167
Jan 31 02:57:22 np0005603621 systemd[1]: libpod-1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf.scope: Deactivated successfully.
Jan 31 02:57:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:22Z|00080|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 up in Southbound
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.701 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.702 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 bound to our chassis#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.704 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.716 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0362bf-6ea1-427e-bb01-ee741b1fb54f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.736 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[bd1ef0a0-3311-4221-869f-d65f86a36503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.738 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[485ab23d-d55c-4ec4-8b7a-038aa2b5c4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.758 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5e360e-aea6-4301-a255-1e1ddacbd861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.767940226 +0000 UTC m=+0.255617568 container attach 1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.769032231 +0000 UTC m=+0.256709553 container died 1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.787 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa4af5b-5676-4572-8e6d-ad47d521bca9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274144, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.800 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39f756fd-0cb3-429b-86b6-fe7f5809a7e9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274145, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274145, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.802 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.804 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.804 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.805 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:22.805 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:22 np0005603621 nova_compute[247399]: 2026-01-31 07:57:22.804 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cdd6e81ab9acf85e44610e0acf5918fb18032baccd2e999c76c4fca4a584821f-merged.mount: Deactivated successfully.
Jan 31 02:57:22 np0005603621 podman[274086]: 2026-01-31 07:57:22.905641765 +0000 UTC m=+0.393319087 container remove 1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_galileo, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:57:22 np0005603621 systemd[1]: libpod-conmon-1d695ee49529ec9c6450fbc8bf29650a829a82097e4403a18c2f0f5347870faf.scope: Deactivated successfully.
Jan 31 02:57:23 np0005603621 podman[274155]: 2026-01-31 07:57:23.054650608 +0000 UTC m=+0.056120947 container create 145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:57:23 np0005603621 podman[274155]: 2026-01-31 07:57:23.018472376 +0000 UTC m=+0.019942735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:57:23 np0005603621 systemd[1]: Started libpod-conmon-145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc.scope.
Jan 31 02:57:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:57:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1d1e91db6962951c6cef143f39a7ea547e8e89fa831411d6e8a28d514bf387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1d1e91db6962951c6cef143f39a7ea547e8e89fa831411d6e8a28d514bf387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1d1e91db6962951c6cef143f39a7ea547e8e89fa831411d6e8a28d514bf387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1d1e91db6962951c6cef143f39a7ea547e8e89fa831411d6e8a28d514bf387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:23 np0005603621 podman[274155]: 2026-01-31 07:57:23.212113895 +0000 UTC m=+0.213584254 container init 145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 31 02:57:23 np0005603621 podman[274155]: 2026-01-31 07:57:23.218161504 +0000 UTC m=+0.219631843 container start 145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 02:57:23 np0005603621 podman[274155]: 2026-01-31 07:57:23.294176522 +0000 UTC m=+0.295646881 container attach 145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 02:57:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 339 MiB data, 543 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.374 247403 DEBUG nova.network.neutron [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.503 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846243.5034792, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.504 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Started (Lifecycle Event)#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.535 247403 INFO nova.compute.manager [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Took 1.78 seconds to deallocate network for instance.#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.585 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.588 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846243.503537, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.588 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.666 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.668 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.678 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.678 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:23 np0005603621 nova_compute[247399]: 2026-01-31 07:57:23.780 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]: {
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:    "0": [
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:        {
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "devices": [
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "/dev/loop3"
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            ],
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "lv_name": "ceph_lv0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "lv_size": "7511998464",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "name": "ceph_lv0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "tags": {
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.cluster_name": "ceph",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.crush_device_class": "",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.encrypted": "0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.osd_id": "0",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.type": "block",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:                "ceph.vdo": "0"
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            },
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "type": "block",
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:            "vg_name": "ceph_vg0"
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:        }
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]:    ]
Jan 31 02:57:23 np0005603621 amazing_stonebraker[274171]: }
Jan 31 02:57:23 np0005603621 systemd[1]: libpod-145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc.scope: Deactivated successfully.
Jan 31 02:57:23 np0005603621 podman[274155]: 2026-01-31 07:57:23.970399681 +0000 UTC m=+0.971870010 container died 145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:57:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.165 247403 DEBUG nova.compute.manager [req-1a6b6ce2-7c8e-45a0-b526-c8d79b431a66 req-32beeadc-5879-4b02-b2e7-147703eaf222 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.165 247403 DEBUG oslo_concurrency.lockutils [req-1a6b6ce2-7c8e-45a0-b526-c8d79b431a66 req-32beeadc-5879-4b02-b2e7-147703eaf222 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.165 247403 DEBUG oslo_concurrency.lockutils [req-1a6b6ce2-7c8e-45a0-b526-c8d79b431a66 req-32beeadc-5879-4b02-b2e7-147703eaf222 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.166 247403 DEBUG oslo_concurrency.lockutils [req-1a6b6ce2-7c8e-45a0-b526-c8d79b431a66 req-32beeadc-5879-4b02-b2e7-147703eaf222 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.166 247403 DEBUG nova.compute.manager [req-1a6b6ce2-7c8e-45a0-b526-c8d79b431a66 req-32beeadc-5879-4b02-b2e7-147703eaf222 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Processing event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.166 247403 DEBUG nova.compute.manager [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.169 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846244.1696405, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.170 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.171 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.174 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance spawned successfully.#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.174 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.339 247403 DEBUG oslo_concurrency.processutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.426 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.431 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.432 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.432 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.433 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.433 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.434 247403 DEBUG nova.virt.libvirt.driver [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.441 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:57:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2f1d1e91db6962951c6cef143f39a7ea547e8e89fa831411d6e8a28d514bf387-merged.mount: Deactivated successfully.
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.568 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.591 247403 DEBUG nova.compute.manager [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:24.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:24 np0005603621 podman[274155]: 2026-01-31 07:57:24.678119446 +0000 UTC m=+1.679589775 container remove 145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:57:24 np0005603621 systemd[1]: libpod-conmon-145d01a086928b4074faadcaadc895e98249c22bc21c0dc4cbd38cc1fe867fcc.scope: Deactivated successfully.
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.780 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:57:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530420551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.940 247403 DEBUG oslo_concurrency.processutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.945 247403 DEBUG nova.compute.provider_tree [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:57:24 np0005603621 nova_compute[247399]: 2026-01-31 07:57:24.994 247403 DEBUG nova.scheduler.client.report [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.142 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.144 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.145 247403 DEBUG nova.objects.instance [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:57:25 np0005603621 podman[274400]: 2026-01-31 07:57:25.221943822 +0000 UTC m=+0.090767161 container create 6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:57:25 np0005603621 podman[274400]: 2026-01-31 07:57:25.152374165 +0000 UTC m=+0.021197524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.267 247403 INFO nova.scheduler.client.report [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Deleted allocations for instance 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4#033[00m
Jan 31 02:57:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 326 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 197 op/s
Jan 31 02:57:25 np0005603621 systemd[1]: Started libpod-conmon-6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f.scope.
Jan 31 02:57:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.448 247403 DEBUG oslo_concurrency.lockutils [None req-de8ad4f8-57c4-4684-bf39-ec60a1315264 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:25 np0005603621 podman[274400]: 2026-01-31 07:57:25.466810533 +0000 UTC m=+0.335633872 container init 6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.471 247403 DEBUG nova.compute.manager [req-4e13f91d-792d-4e30-969a-6e23e6da9a3e req-5fead9d6-c731-41d2-8f87-8eed971d01e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Received event network-vif-deleted-badb2cfb-b8e3-4fbc-9969-b7746aff36ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:25 np0005603621 podman[274400]: 2026-01-31 07:57:25.474059601 +0000 UTC m=+0.342882940 container start 6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 02:57:25 np0005603621 jovial_sutherland[274416]: 167 167
Jan 31 02:57:25 np0005603621 systemd[1]: libpod-6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f.scope: Deactivated successfully.
Jan 31 02:57:25 np0005603621 conmon[274416]: conmon 6099d94acf835f6ca0ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f.scope/container/memory.events
Jan 31 02:57:25 np0005603621 podman[274400]: 2026-01-31 07:57:25.522160926 +0000 UTC m=+0.390984265 container attach 6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 02:57:25 np0005603621 podman[274400]: 2026-01-31 07:57:25.523533718 +0000 UTC m=+0.392357057 container died 6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.582 247403 DEBUG oslo_concurrency.lockutils [None req-76884dfb-66c4-4111-adb6-6ee7cdd6afa6 d6078cfaadaa45ae9256245554f784fe fd9f0c923b994b0295e72b111f661de1 - - default default] Lock "9ba5b3e0-db8e-489f-bb9e-3697ab066ae4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:25 np0005603621 nova_compute[247399]: 2026-01-31 07:57:25.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83c6c2014606b0db64fd059d17437787caafbb64cd9c55b60297a1fd9a38e46d-merged.mount: Deactivated successfully.
Jan 31 02:57:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:26.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:26 np0005603621 nova_compute[247399]: 2026-01-31 07:57:26.268 247403 DEBUG nova.compute.manager [req-561745cb-e807-4265-97c4-b84ca3f48b8e req-12d98c4a-a941-4d07-8b0d-42f0105649bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:26 np0005603621 nova_compute[247399]: 2026-01-31 07:57:26.268 247403 DEBUG oslo_concurrency.lockutils [req-561745cb-e807-4265-97c4-b84ca3f48b8e req-12d98c4a-a941-4d07-8b0d-42f0105649bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:26 np0005603621 nova_compute[247399]: 2026-01-31 07:57:26.268 247403 DEBUG oslo_concurrency.lockutils [req-561745cb-e807-4265-97c4-b84ca3f48b8e req-12d98c4a-a941-4d07-8b0d-42f0105649bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:26 np0005603621 nova_compute[247399]: 2026-01-31 07:57:26.269 247403 DEBUG oslo_concurrency.lockutils [req-561745cb-e807-4265-97c4-b84ca3f48b8e req-12d98c4a-a941-4d07-8b0d-42f0105649bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:26 np0005603621 nova_compute[247399]: 2026-01-31 07:57:26.269 247403 DEBUG nova.compute.manager [req-561745cb-e807-4265-97c4-b84ca3f48b8e req-12d98c4a-a941-4d07-8b0d-42f0105649bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:26 np0005603621 nova_compute[247399]: 2026-01-31 07:57:26.269 247403 WARNING nova.compute.manager [req-561745cb-e807-4265-97c4-b84ca3f48b8e req-12d98c4a-a941-4d07-8b0d-42f0105649bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:57:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:26 np0005603621 podman[274400]: 2026-01-31 07:57:26.626219261 +0000 UTC m=+1.495042600 container remove 6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 02:57:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:26 np0005603621 systemd[1]: libpod-conmon-6099d94acf835f6ca0bad86161a50ef3a3cca0af4f7026517dfd44b888eb613f.scope: Deactivated successfully.
Jan 31 02:57:26 np0005603621 podman[274441]: 2026-01-31 07:57:26.758008715 +0000 UTC m=+0.035521132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:57:26 np0005603621 podman[274441]: 2026-01-31 07:57:26.900904606 +0000 UTC m=+0.178417003 container create 69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 02:57:27 np0005603621 systemd[1]: Started libpod-conmon-69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842.scope.
Jan 31 02:57:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:57:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e745bc85b56af6bce3f888667c449e730a1b33192548564b6d7edadffb2d7ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e745bc85b56af6bce3f888667c449e730a1b33192548564b6d7edadffb2d7ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e745bc85b56af6bce3f888667c449e730a1b33192548564b6d7edadffb2d7ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e745bc85b56af6bce3f888667c449e730a1b33192548564b6d7edadffb2d7ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:57:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 326 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Jan 31 02:57:27 np0005603621 podman[274441]: 2026-01-31 07:57:27.32927655 +0000 UTC m=+0.606788957 container init 69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 02:57:27 np0005603621 podman[274441]: 2026-01-31 07:57:27.336424543 +0000 UTC m=+0.613936940 container start 69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:57:27 np0005603621 nova_compute[247399]: 2026-01-31 07:57:27.654 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:27 np0005603621 podman[274441]: 2026-01-31 07:57:27.692217146 +0000 UTC m=+0.969729553 container attach 69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 02:57:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]: {
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:        "osd_id": 0,
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:        "type": "bluestore"
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]:    }
Jan 31 02:57:28 np0005603621 reverent_wilbur[274457]: }
Jan 31 02:57:28 np0005603621 systemd[1]: libpod-69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842.scope: Deactivated successfully.
Jan 31 02:57:28 np0005603621 podman[274441]: 2026-01-31 07:57:28.162853042 +0000 UTC m=+1.440365449 container died 69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:57:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4e745bc85b56af6bce3f888667c449e730a1b33192548564b6d7edadffb2d7ca-merged.mount: Deactivated successfully.
Jan 31 02:57:29 np0005603621 nova_compute[247399]: 2026-01-31 07:57:29.129 247403 INFO nova.compute.manager [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Rebuilding instance#033[00m
Jan 31 02:57:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 326 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 31 02:57:29 np0005603621 nova_compute[247399]: 2026-01-31 07:57:29.430 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:29 np0005603621 nova_compute[247399]: 2026-01-31 07:57:29.498 247403 DEBUG nova.compute.manager [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:29 np0005603621 podman[274441]: 2026-01-31 07:57:29.49898561 +0000 UTC m=+2.776498007 container remove 69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 02:57:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:57:29 np0005603621 systemd[1]: libpod-conmon-69b622cbc31ee68696fba3186acc6b25e4059989d6ed0cc45696041801280842.scope: Deactivated successfully.
Jan 31 02:57:29 np0005603621 nova_compute[247399]: 2026-01-31 07:57:29.846 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'pci_requests' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:29 np0005603621 nova_compute[247399]: 2026-01-31 07:57:29.931 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:30 np0005603621 nova_compute[247399]: 2026-01-31 07:57:30.094 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'resources' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:30 np0005603621 nova_compute[247399]: 2026-01-31 07:57:30.135 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'migration_context' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:57:30 np0005603621 nova_compute[247399]: 2026-01-31 07:57:30.363 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 02:57:30 np0005603621 nova_compute[247399]: 2026-01-31 07:57:30.366 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:57:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 320f2938-090e-4883-ba2b-1a1c248eb29c does not exist
Jan 31 02:57:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 55581d61-42c4-4111-ba89-e4c58aa82832 does not exist
Jan 31 02:57:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f6961a1a-0c23-4b7f-a32f-0f9211dcc6cc does not exist
Jan 31 02:57:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:30.477 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:30.478 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:30.478 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:30.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:30 np0005603621 nova_compute[247399]: 2026-01-31 07:57:30.772 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:57:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 326 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Jan 31 02:57:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:32.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:32 np0005603621 nova_compute[247399]: 2026-01-31 07:57:32.656 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:32.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 326 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 343 KiB/s wr, 119 op/s
Jan 31 02:57:33 np0005603621 nova_compute[247399]: 2026-01-31 07:57:33.852 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846238.8509562, 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:33 np0005603621 nova_compute[247399]: 2026-01-31 07:57:33.852 247403 INFO nova.compute.manager [-] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:57:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:34.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:34.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:34 np0005603621 nova_compute[247399]: 2026-01-31 07:57:34.820 247403 DEBUG nova.compute.manager [None req-0ed4c183-2b03-4375-b3cf-1e9df9cee0b7 - - - - - -] [instance: 9ba5b3e0-db8e-489f-bb9e-3697ab066ae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:57:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3151725203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:57:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:57:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3151725203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:57:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 326 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 96 op/s
Jan 31 02:57:35 np0005603621 nova_compute[247399]: 2026-01-31 07:57:35.774 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:36 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:36Z|00081|binding|INFO|Releasing lport 8c531a0f-deeb-4de0-880b-b07ec1cf9103 from this chassis (sb_readonly=0)
Jan 31 02:57:36 np0005603621 nova_compute[247399]: 2026-01-31 07:57:36.245 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:36 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:36Z|00082|binding|INFO|Releasing lport 8c531a0f-deeb-4de0-880b-b07ec1cf9103 from this chassis (sb_readonly=0)
Jan 31 02:57:36 np0005603621 nova_compute[247399]: 2026-01-31 07:57:36.318 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:36.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 329 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 279 KiB/s wr, 89 op/s
Jan 31 02:57:37 np0005603621 nova_compute[247399]: 2026-01-31 07:57:37.658 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:38.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:57:38
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'backups', '.rgw.root', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta']
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:57:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:38Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:57:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:38Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:57:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:57:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:38.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 329 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 816 KiB/s rd, 276 KiB/s wr, 47 op/s
Jan 31 02:57:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:57:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:57:40 np0005603621 nova_compute[247399]: 2026-01-31 07:57:40.405 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:57:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:40.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:40 np0005603621 nova_compute[247399]: 2026-01-31 07:57:40.776 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 333 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 914 KiB/s rd, 660 KiB/s wr, 65 op/s
Jan 31 02:57:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:42 np0005603621 nova_compute[247399]: 2026-01-31 07:57:42.663 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:42.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 354 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 2.2 MiB/s wr, 79 op/s
Jan 31 02:57:43 np0005603621 kernel: tapbd448b0d-7d (unregistering): left promiscuous mode
Jan 31 02:57:43 np0005603621 NetworkManager[49013]: <info>  [1769846263.6846] device (tapbd448b0d-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:57:43 np0005603621 nova_compute[247399]: 2026-01-31 07:57:43.690 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:43 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:43Z|00083|binding|INFO|Releasing lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 from this chassis (sb_readonly=0)
Jan 31 02:57:43 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:43Z|00084|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 down in Southbound
Jan 31 02:57:43 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:43Z|00085|binding|INFO|Removing iface tapbd448b0d-7d ovn-installed in OVS
Jan 31 02:57:43 np0005603621 nova_compute[247399]: 2026-01-31 07:57:43.698 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:43 np0005603621 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000020.scope: Deactivated successfully.
Jan 31 02:57:43 np0005603621 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000020.scope: Consumed 13.950s CPU time.
Jan 31 02:57:43 np0005603621 systemd-machined[212769]: Machine qemu-16-instance-00000020 terminated.
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.879 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.880 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 unbound from our chassis#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.881 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.892 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e291c96e-8090-46d1-b84d-92dd80204917]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.915 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9253b6d9-dc7e-4cb0-bfd0-fdd8dc4fde63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.918 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e60d396d-e8f3-45b9-84f3-5dbd66f37394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.938 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c058cd02-035e-4796-a286-d3683f705669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.950 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eeafe48a-297c-41d4-aec2-305ac5d43980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274626, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.960 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[09b514bd-1ef3-416a-aeb5-50d712df8601]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274627, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274627, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.961 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:43 np0005603621 nova_compute[247399]: 2026-01-31 07:57:43.963 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:43 np0005603621 nova_compute[247399]: 2026-01-31 07:57:43.965 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.966 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.966 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.966 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:43.966 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:44.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.421 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance shutdown successfully after 14 seconds.#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.427 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance destroyed successfully.#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.431 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance destroyed successfully.#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.432 247403 DEBUG nova.virt.libvirt.vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:57:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-me
mber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:27Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.432 247403 DEBUG nova.network.os_vif_util [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.433 247403 DEBUG nova.network.os_vif_util [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.433 247403 DEBUG os_vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.435 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd448b0d-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.438 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:44 np0005603621 nova_compute[247399]: 2026-01-31 07:57:44.440 247403 INFO os_vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d')#033[00m
Jan 31 02:57:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:44.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:45 np0005603621 nova_compute[247399]: 2026-01-31 07:57:45.258 247403 DEBUG nova.compute.manager [req-250bec2c-4477-440f-90b4-338f80195116 req-ca76c661-daf1-47f5-aa76-3fce0109ad95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:45 np0005603621 nova_compute[247399]: 2026-01-31 07:57:45.259 247403 DEBUG oslo_concurrency.lockutils [req-250bec2c-4477-440f-90b4-338f80195116 req-ca76c661-daf1-47f5-aa76-3fce0109ad95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:45 np0005603621 nova_compute[247399]: 2026-01-31 07:57:45.259 247403 DEBUG oslo_concurrency.lockutils [req-250bec2c-4477-440f-90b4-338f80195116 req-ca76c661-daf1-47f5-aa76-3fce0109ad95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:45 np0005603621 nova_compute[247399]: 2026-01-31 07:57:45.259 247403 DEBUG oslo_concurrency.lockutils [req-250bec2c-4477-440f-90b4-338f80195116 req-ca76c661-daf1-47f5-aa76-3fce0109ad95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:45 np0005603621 nova_compute[247399]: 2026-01-31 07:57:45.259 247403 DEBUG nova.compute.manager [req-250bec2c-4477-440f-90b4-338f80195116 req-ca76c661-daf1-47f5-aa76-3fce0109ad95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:45 np0005603621 nova_compute[247399]: 2026-01-31 07:57:45.260 247403 WARNING nova.compute.manager [req-250bec2c-4477-440f-90b4-338f80195116 req-ca76c661-daf1-47f5-aa76-3fce0109ad95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 02:57:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 358 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 31 02:57:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:46.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:46 np0005603621 nova_compute[247399]: 2026-01-31 07:57:46.470 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deleting instance files /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_del#033[00m
Jan 31 02:57:46 np0005603621 nova_compute[247399]: 2026-01-31 07:57:46.472 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deletion of /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_del complete#033[00m
Jan 31 02:57:46 np0005603621 podman[274650]: 2026-01-31 07:57:46.526454368 +0000 UTC m=+0.077720373 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:57:46 np0005603621 podman[274649]: 2026-01-31 07:57:46.528268574 +0000 UTC m=+0.083599726 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 02:57:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:46.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 339 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.2 MiB/s wr, 79 op/s
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.573 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.574 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating image(s)#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.613 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.640 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.666 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.670 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.725 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.726 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.727 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.727 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.752 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:47 np0005603621 nova_compute[247399]: 2026-01-31 07:57:47.755 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:48.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:57:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:48.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008160194781779267 of space, bias 1.0, pg target 2.44805843453378 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:57:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 02:57:48 np0005603621 nova_compute[247399]: 2026-01-31 07:57:48.989 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.071 247403 DEBUG nova.compute.manager [req-1852884e-58fb-41d6-8473-0d6045235e85 req-748dcfeb-ae0e-49f9-b703-0ee94133beb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.071 247403 DEBUG oslo_concurrency.lockutils [req-1852884e-58fb-41d6-8473-0d6045235e85 req-748dcfeb-ae0e-49f9-b703-0ee94133beb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.072 247403 DEBUG oslo_concurrency.lockutils [req-1852884e-58fb-41d6-8473-0d6045235e85 req-748dcfeb-ae0e-49f9-b703-0ee94133beb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.072 247403 DEBUG oslo_concurrency.lockutils [req-1852884e-58fb-41d6-8473-0d6045235e85 req-748dcfeb-ae0e-49f9-b703-0ee94133beb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.072 247403 DEBUG nova.compute.manager [req-1852884e-58fb-41d6-8473-0d6045235e85 req-748dcfeb-ae0e-49f9-b703-0ee94133beb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.072 247403 WARNING nova.compute.manager [req-1852884e-58fb-41d6-8473-0d6045235e85 req-748dcfeb-ae0e-49f9-b703-0ee94133beb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.077 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] resizing rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.203 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.203 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Ensure instance console log exists: /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.204 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.204 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.204 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.206 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start _get_guest_xml network_info=[{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.211 247403 WARNING nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.224 247403 DEBUG nova.virt.libvirt.host [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.225 247403 DEBUG nova.virt.libvirt.host [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.229 247403 DEBUG nova.virt.libvirt.host [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.230 247403 DEBUG nova.virt.libvirt.host [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.231 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.231 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.231 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.232 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.232 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.232 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.232 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.232 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.233 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.233 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.234 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.234 247403 DEBUG nova.virt.hardware [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.234 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 301 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.353 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.438 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:57:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1433621426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.789 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.821 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:49 np0005603621 nova_compute[247399]: 2026-01-31 07:57:49.828 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:57:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/137281711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.273 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.275 247403 DEBUG nova.virt.libvirt.vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:57:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:47Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.276 247403 DEBUG nova.network.os_vif_util [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.276 247403 DEBUG nova.network.os_vif_util [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.279 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <uuid>e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</uuid>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <name>instance-00000020</name>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersAdminTestJSON-server-1840895963</nova:name>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:57:49</nova:creationTime>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:user uuid="93973daeb08c453e90372a79b54b9ede">tempest-ServersAdminTestJSON-784933461-project-member</nova:user>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:project uuid="8033316fc42c4926bfd1f8a34b02fa97">tempest-ServersAdminTestJSON-784933461</nova:project>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <nova:port uuid="bd448b0d-7dc7-43bc-b4d2-eba76110aa01">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <entry name="serial">e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</entry>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <entry name="uuid">e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d</entry>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:64:f8:02"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <target dev="tapbd448b0d-7d"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/console.log" append="off"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:57:50 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:57:50 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:57:50 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:57:50 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.280 247403 DEBUG nova.compute.manager [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Preparing to wait for external event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.280 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.281 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.281 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.282 247403 DEBUG nova.virt.libvirt.vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:57:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:57:47Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.282 247403 DEBUG nova.network.os_vif_util [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.282 247403 DEBUG nova.network.os_vif_util [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.283 247403 DEBUG os_vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.283 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.284 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.284 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.286 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.286 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd448b0d-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.287 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd448b0d-7d, col_values=(('external_ids', {'iface-id': 'bd448b0d-7dc7-43bc-b4d2-eba76110aa01', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:f8:02', 'vm-uuid': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.288 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:50 np0005603621 NetworkManager[49013]: <info>  [1769846270.2890] manager: (tapbd448b0d-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.290 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.292 247403 INFO os_vif [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d')#033[00m
Jan 31 02:57:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.827 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.828 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.828 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] No VIF found with MAC fa:16:3e:64:f8:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.829 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Using config drive#033[00m
Jan 31 02:57:50 np0005603621 nova_compute[247399]: 2026-01-31 07:57:50.856 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:51 np0005603621 nova_compute[247399]: 2026-01-31 07:57:51.247 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 298 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.5 MiB/s wr, 104 op/s
Jan 31 02:57:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:51 np0005603621 nova_compute[247399]: 2026-01-31 07:57:51.591 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'keypairs' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:57:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:52.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:52 np0005603621 nova_compute[247399]: 2026-01-31 07:57:52.368 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Creating config drive at /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config#033[00m
Jan 31 02:57:52 np0005603621 nova_compute[247399]: 2026-01-31 07:57:52.372 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmr2qhzq7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:52 np0005603621 nova_compute[247399]: 2026-01-31 07:57:52.495 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpmr2qhzq7" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:52 np0005603621 nova_compute[247399]: 2026-01-31 07:57:52.522 247403 DEBUG nova.storage.rbd_utils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] rbd image e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:57:52 np0005603621 nova_compute[247399]: 2026-01-31 07:57:52.525 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:57:52 np0005603621 nova_compute[247399]: 2026-01-31 07:57:52.666 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:52.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 325 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 251 KiB/s rd, 3.3 MiB/s wr, 101 op/s
Jan 31 02:57:53 np0005603621 nova_compute[247399]: 2026-01-31 07:57:53.805 247403 DEBUG oslo_concurrency.processutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:57:53 np0005603621 nova_compute[247399]: 2026-01-31 07:57:53.806 247403 INFO nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deleting local config drive /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d/disk.config because it was imported into RBD.#033[00m
Jan 31 02:57:53 np0005603621 kernel: tapbd448b0d-7d: entered promiscuous mode
Jan 31 02:57:53 np0005603621 NetworkManager[49013]: <info>  [1769846273.8376] manager: (tapbd448b0d-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 31 02:57:53 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:53Z|00086|binding|INFO|Claiming lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for this chassis.
Jan 31 02:57:53 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:53Z|00087|binding|INFO|bd448b0d-7dc7-43bc-b4d2-eba76110aa01: Claiming fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:57:53 np0005603621 nova_compute[247399]: 2026-01-31 07:57:53.838 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:53 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:53Z|00088|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 ovn-installed in OVS
Jan 31 02:57:53 np0005603621 nova_compute[247399]: 2026-01-31 07:57:53.846 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:53 np0005603621 ovn_controller[149152]: 2026-01-31T07:57:53Z|00089|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 up in Southbound
Jan 31 02:57:53 np0005603621 systemd-machined[212769]: New machine qemu-17-instance-00000020.
Jan 31 02:57:53 np0005603621 systemd-udevd[274999]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.864 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.865 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 bound to our chassis#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.867 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1#033[00m
Jan 31 02:57:53 np0005603621 NetworkManager[49013]: <info>  [1769846273.8725] device (tapbd448b0d-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:57:53 np0005603621 NetworkManager[49013]: <info>  [1769846273.8733] device (tapbd448b0d-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:57:53 np0005603621 systemd[1]: Started Virtual Machine qemu-17-instance-00000020.
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.879 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[029b1eb1-5f36-4b31-ba91-302ad9c393d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.905 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[905875f7-7321-4eab-b83c-993864d3783f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.909 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1aca776c-2df2-4f25-8f07-f63e80ddd984]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.931 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[51e773c9-2cd7-4604-9869-0de7faceb7c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.943 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c4b828-05d5-4c20-88e6-b6577b4f10cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275013, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.957 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74390559-6a4d-4f79-bf7a-9cec8bec45a9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275014, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275014, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.959 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:53 np0005603621 nova_compute[247399]: 2026-01-31 07:57:53.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.962 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.963 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.963 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:57:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:57:53.963 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:57:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:54.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.270 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.271 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846274.269621, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.271 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Started (Lifecycle Event)#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.350 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.355 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846274.2709038, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.355 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.395 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.400 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:57:54 np0005603621 nova_compute[247399]: 2026-01-31 07:57:54.455 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 02:57:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:55 np0005603621 nova_compute[247399]: 2026-01-31 07:57:55.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 325 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 31 02:57:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:57:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:56.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:57:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:57:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:56.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 31 02:57:57 np0005603621 nova_compute[247399]: 2026-01-31 07:57:57.668 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:57:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:57:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:57:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:57:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:57:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:57:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:57:58.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:57:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 31 02:58:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:00.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.291 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.353 247403 DEBUG nova.compute.manager [req-ebfc5fd0-d8b9-4d24-a3bd-4fbf3a2e6d35 req-5fda86d2-cdf7-48a3-86fa-8c7111b7ed4a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.353 247403 DEBUG oslo_concurrency.lockutils [req-ebfc5fd0-d8b9-4d24-a3bd-4fbf3a2e6d35 req-5fda86d2-cdf7-48a3-86fa-8c7111b7ed4a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.354 247403 DEBUG oslo_concurrency.lockutils [req-ebfc5fd0-d8b9-4d24-a3bd-4fbf3a2e6d35 req-5fda86d2-cdf7-48a3-86fa-8c7111b7ed4a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.355 247403 DEBUG oslo_concurrency.lockutils [req-ebfc5fd0-d8b9-4d24-a3bd-4fbf3a2e6d35 req-5fda86d2-cdf7-48a3-86fa-8c7111b7ed4a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.355 247403 DEBUG nova.compute.manager [req-ebfc5fd0-d8b9-4d24-a3bd-4fbf3a2e6d35 req-5fda86d2-cdf7-48a3-86fa-8c7111b7ed4a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Processing event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.356 247403 DEBUG nova.compute.manager [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.362 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846280.3618848, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.362 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Resumed (Lifecycle Event)
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.365 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.369 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance spawned successfully.
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.369 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.527 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.532 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.532 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.533 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.533 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.534 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.534 247403 DEBUG nova.virt.libvirt.driver [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.539 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.619 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.669 247403 DEBUG nova.compute.manager [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:58:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:00.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.855 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.856 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:00 np0005603621 nova_compute[247399]: 2026-01-31 07:58:00.856 247403 DEBUG nova.objects.instance [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 02:58:01 np0005603621 nova_compute[247399]: 2026-01-31 07:58:01.043 247403 DEBUG oslo_concurrency.lockutils [None req-b2fde502-536d-4e14-be54-13d1251cacc1 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 31 02:58:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:02.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.672 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:02.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.928 247403 DEBUG nova.compute.manager [req-b7fb719a-a3a3-49fe-9f74-a4c2a10bfe7a req-48b297d4-b126-49fc-82c2-4d75c327dfdf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.929 247403 DEBUG oslo_concurrency.lockutils [req-b7fb719a-a3a3-49fe-9f74-a4c2a10bfe7a req-48b297d4-b126-49fc-82c2-4d75c327dfdf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.930 247403 DEBUG oslo_concurrency.lockutils [req-b7fb719a-a3a3-49fe-9f74-a4c2a10bfe7a req-48b297d4-b126-49fc-82c2-4d75c327dfdf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.930 247403 DEBUG oslo_concurrency.lockutils [req-b7fb719a-a3a3-49fe-9f74-a4c2a10bfe7a req-48b297d4-b126-49fc-82c2-4d75c327dfdf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.930 247403 DEBUG nova.compute.manager [req-b7fb719a-a3a3-49fe-9f74-a4c2a10bfe7a req-48b297d4-b126-49fc-82c2-4d75c327dfdf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 02:58:02 np0005603621 nova_compute[247399]: 2026-01-31 07:58:02.931 247403 WARNING nova.compute.manager [req-b7fb719a-a3a3-49fe-9f74-a4c2a10bfe7a req-48b297d4-b126-49fc-82c2-4d75c327dfdf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state active and task_state None.
Jan 31 02:58:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 66 op/s
Jan 31 02:58:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:04.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:04.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:05 np0005603621 nova_compute[247399]: 2026-01-31 07:58:05.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 02:58:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:06.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 31 02:58:07 np0005603621 nova_compute[247399]: 2026-01-31 07:58:07.675 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:08.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:58:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:08.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 326 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 31 02:58:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:10.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:10 np0005603621 nova_compute[247399]: 2026-01-31 07:58:10.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:10.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 299 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 852 B/s wr, 80 op/s
Jan 31 02:58:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:12.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:12.286 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:12.288 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.676 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:12.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.790 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.791 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.791 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.791 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.792 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.792 247403 INFO nova.compute.manager [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Terminating instance
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.793 247403 DEBUG nova.compute.manager [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 02:58:12 np0005603621 kernel: tap1067f15a-f2 (unregistering): left promiscuous mode
Jan 31 02:58:12 np0005603621 NetworkManager[49013]: <info>  [1769846292.8643] device (tap1067f15a-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.869 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:12 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:12Z|00090|binding|INFO|Releasing lport 1067f15a-f293-4ea3-894a-c2be3e98368f from this chassis (sb_readonly=0)
Jan 31 02:58:12 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:12Z|00091|binding|INFO|Setting lport 1067f15a-f293-4ea3-894a-c2be3e98368f down in Southbound
Jan 31 02:58:12 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:12Z|00092|binding|INFO|Removing iface tap1067f15a-f2 ovn-installed in OVS
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:12 np0005603621 nova_compute[247399]: 2026-01-31 07:58:12.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:12 np0005603621 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 31 02:58:12 np0005603621 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d00000025.scope: Consumed 16.938s CPU time.
Jan 31 02:58:12 np0005603621 systemd-machined[212769]: Machine qemu-14-instance-00000025 terminated.
Jan 31 02:58:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:12.980 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d7:71 10.100.0.9'], port_security=['fa:16:3e:4a:d7:71 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0e2720e1-c54e-4332-90a4-f5c2c884b77d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=1067f15a-f293-4ea3-894a-c2be3e98368f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 02:58:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:12.983 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 1067f15a-f293-4ea3-894a-c2be3e98368f in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 unbound from our chassis
Jan 31 02:58:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:12.985 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c58eaedf-202a-428a-acfb-f0b1291517f1
Jan 31 02:58:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:12.995 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a69c11fe-94a0-4d80-92c8-1ce433796922]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.021 247403 INFO nova.virt.libvirt.driver [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Instance destroyed successfully.
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.022 247403 DEBUG nova.objects.instance [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'resources' on Instance uuid 0e2720e1-c54e-4332-90a4-f5c2c884b77d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.024 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[de51116e-73d0-4eae-bfcf-aca0841bd5be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.027 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4907c8d6-b236-4acf-aada-102eebc3fc97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.047 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[eb164160-b431-4bc7-a9b7-c9561e4f9ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.058 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0039c73-66a5-457f-8579-18b0298285a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc58eaedf-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:11:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543387, 'reachable_time': 41876, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275138, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.069 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[526318f7-ad33-4f3a-a171-249f019f5f45]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543396, 'tstamp': 543396}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275139, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc58eaedf-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543398, 'tstamp': 543398}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275139, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.071 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.076 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.077 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc58eaedf-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.077 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.077 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc58eaedf-20, col_values=(('external_ids', {'iface-id': '8c531a0f-deeb-4de0-880b-b07ec1cf9103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:13.078 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.114 247403 DEBUG nova.virt.libvirt.vif [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:56:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2052398935',display_name='tempest-ServersAdminTestJSON-server-2052398935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2052398935',id=37,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:56:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-m7tceh63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:56:27Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=0e2720e1-c54e-4332-90a4-f5c2c884b77d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.114 247403 DEBUG nova.network.os_vif_util [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "1067f15a-f293-4ea3-894a-c2be3e98368f", "address": "fa:16:3e:4a:d7:71", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1067f15a-f2", "ovs_interfaceid": "1067f15a-f293-4ea3-894a-c2be3e98368f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.115 247403 DEBUG nova.network.os_vif_util [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.116 247403 DEBUG os_vif [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.117 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1067f15a-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.120 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.122 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.125 247403 INFO os_vif [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d7:71,bridge_name='br-int',has_traffic_filtering=True,id=1067f15a-f293-4ea3-894a-c2be3e98368f,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1067f15a-f2')#033[00m
Jan 31 02:58:13 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:13Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:58:13 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:13Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:f8:02 10.100.0.11
Jan 31 02:58:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 254 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 978 KiB/s wr, 108 op/s
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.482 247403 DEBUG nova.compute.manager [req-333ae529-5289-4285-a533-e3b3edac4658 req-d58b0831-9305-4f7d-ae4b-592b922407de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-vif-unplugged-1067f15a-f293-4ea3-894a-c2be3e98368f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.483 247403 DEBUG oslo_concurrency.lockutils [req-333ae529-5289-4285-a533-e3b3edac4658 req-d58b0831-9305-4f7d-ae4b-592b922407de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.483 247403 DEBUG oslo_concurrency.lockutils [req-333ae529-5289-4285-a533-e3b3edac4658 req-d58b0831-9305-4f7d-ae4b-592b922407de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.484 247403 DEBUG oslo_concurrency.lockutils [req-333ae529-5289-4285-a533-e3b3edac4658 req-d58b0831-9305-4f7d-ae4b-592b922407de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.484 247403 DEBUG nova.compute.manager [req-333ae529-5289-4285-a533-e3b3edac4658 req-d58b0831-9305-4f7d-ae4b-592b922407de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] No waiting events found dispatching network-vif-unplugged-1067f15a-f293-4ea3-894a-c2be3e98368f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.484 247403 DEBUG nova.compute.manager [req-333ae529-5289-4285-a533-e3b3edac4658 req-d58b0831-9305-4f7d-ae4b-592b922407de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-vif-unplugged-1067f15a-f293-4ea3-894a-c2be3e98368f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.630 247403 INFO nova.virt.libvirt.driver [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Deleting instance files /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d_del#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.630 247403 INFO nova.virt.libvirt.driver [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Deletion of /var/lib/nova/instances/0e2720e1-c54e-4332-90a4-f5c2c884b77d_del complete#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.845 247403 INFO nova.compute.manager [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.846 247403 DEBUG oslo.service.loopingcall [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.846 247403 DEBUG nova.compute.manager [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:58:13 np0005603621 nova_compute[247399]: 2026-01-31 07:58:13.846 247403 DEBUG nova.network.neutron [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:58:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:14.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:14.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 913 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.094 247403 DEBUG nova.compute.manager [req-bc1ca6c3-fe7b-4485-bc4d-28b8f2d7a4c6 req-e7286f58-d073-4958-8631-310ee821fde1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.094 247403 DEBUG oslo_concurrency.lockutils [req-bc1ca6c3-fe7b-4485-bc4d-28b8f2d7a4c6 req-e7286f58-d073-4958-8631-310ee821fde1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.094 247403 DEBUG oslo_concurrency.lockutils [req-bc1ca6c3-fe7b-4485-bc4d-28b8f2d7a4c6 req-e7286f58-d073-4958-8631-310ee821fde1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.095 247403 DEBUG oslo_concurrency.lockutils [req-bc1ca6c3-fe7b-4485-bc4d-28b8f2d7a4c6 req-e7286f58-d073-4958-8631-310ee821fde1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.095 247403 DEBUG nova.compute.manager [req-bc1ca6c3-fe7b-4485-bc4d-28b8f2d7a4c6 req-e7286f58-d073-4958-8631-310ee821fde1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] No waiting events found dispatching network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.095 247403 WARNING nova.compute.manager [req-bc1ca6c3-fe7b-4485-bc4d-28b8f2d7a4c6 req-e7286f58-d073-4958-8631-310ee821fde1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received unexpected event network-vif-plugged-1067f15a-f293-4ea3-894a-c2be3e98368f for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:58:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:16.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.221 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 02:58:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.531 247403 DEBUG nova.network.neutron [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.552 247403 INFO nova.compute.manager [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Took 2.71 seconds to deallocate network for instance.#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.631 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.631 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:16.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.726 247403 DEBUG nova.compute.manager [req-98eb7bf7-fad0-48bf-a8a3-53c8de9947a2 req-2fbcde6a-d917-479f-baea-380191e37844 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Received event network-vif-deleted-1067f15a-f293-4ea3-894a-c2be3e98368f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.755 247403 DEBUG oslo_concurrency.processutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:16 np0005603621 podman[275186]: 2026-01-31 07:58:16.77370182 +0000 UTC m=+0.082191733 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:58:16 np0005603621 podman[275187]: 2026-01-31 07:58:16.783210557 +0000 UTC m=+0.091449172 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.806 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:58:16 np0005603621 nova_compute[247399]: 2026-01-31 07:58:16.806 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355670159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.184 247403 DEBUG oslo_concurrency.processutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.191 247403 DEBUG nova.compute.provider_tree [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.218 247403 DEBUG nova.scheduler.client.report [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.246 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.274 247403 INFO nova.scheduler.client.report [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Deleted allocations for instance 0e2720e1-c54e-4332-90a4-f5c2c884b77d#033[00m
Jan 31 02:58:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 223 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.367 247403 DEBUG oslo_concurrency.lockutils [None req-74fbc45a-b452-4d97-b0e2-eb50bc35bbf0 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "0e2720e1-c54e-4332-90a4-f5c2c884b77d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:17 np0005603621 nova_compute[247399]: 2026-01-31 07:58:17.679 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:18 np0005603621 nova_compute[247399]: 2026-01-31 07:58:18.119 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:18.290 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:18 np0005603621 nova_compute[247399]: 2026-01-31 07:58:18.639 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updating instance_info_cache with network_info: [{"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:58:18 np0005603621 nova_compute[247399]: 2026-01-31 07:58:18.662 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:58:18 np0005603621 nova_compute[247399]: 2026-01-31 07:58:18.663 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:58:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:18.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 197 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Jan 31 02:58:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:20.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.236 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.236 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980803695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.656 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.762 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.763 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.891 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.892 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4501MB free_disk=20.89763641357422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.893 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:20 np0005603621 nova_compute[247399]: 2026-01-31 07:58:20.893 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.007 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.007 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.007 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.052 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 183 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 118 op/s
Jan 31 02:58:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969085029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.517 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.525 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.544 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.607 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:58:21 np0005603621 nova_compute[247399]: 2026-01-31 07:58:21.607 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.041 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.042 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.042 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.042 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.043 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.044 247403 INFO nova.compute.manager [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Terminating instance#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.045 247403 DEBUG nova.compute.manager [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:58:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:22.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:22 np0005603621 kernel: tapbd448b0d-7d (unregistering): left promiscuous mode
Jan 31 02:58:22 np0005603621 NetworkManager[49013]: <info>  [1769846302.2453] device (tapbd448b0d-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:58:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:22Z|00093|binding|INFO|Releasing lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 from this chassis (sb_readonly=0)
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:22Z|00094|binding|INFO|Setting lport bd448b0d-7dc7-43bc-b4d2-eba76110aa01 down in Southbound
Jan 31 02:58:22 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:22Z|00095|binding|INFO|Removing iface tapbd448b0d-7d ovn-installed in OVS
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.256 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:f8:02 10.100.0.11'], port_security=['fa:16:3e:64:f8:02 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c58eaedf-202a-428a-acfb-f0b1291517f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8033316fc42c4926bfd1f8a34b02fa97', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4b3d9baf-bd3e-457e-a5c2-9addbc71d588', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189b55ef-8e14-4c6c-870a-5dba85715c4a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bd448b0d-7dc7-43bc-b4d2-eba76110aa01) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.257 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bd448b0d-7dc7-43bc-b4d2-eba76110aa01 in datapath c58eaedf-202a-428a-acfb-f0b1291517f1 unbound from our chassis#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.259 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c58eaedf-202a-428a-acfb-f0b1291517f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.260 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2f4bc9-e13e-48d2-97d9-ab618b44c913]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.260 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1 namespace which is not needed anymore#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000020.scope: Deactivated successfully.
Jan 31 02:58:22 np0005603621 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000020.scope: Consumed 14.263s CPU time.
Jan 31 02:58:22 np0005603621 systemd-machined[212769]: Machine qemu-17-instance-00000020 terminated.
Jan 31 02:58:22 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [NOTICE]   (270389) : haproxy version is 2.8.14-c23fe91
Jan 31 02:58:22 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [NOTICE]   (270389) : path to executable is /usr/sbin/haproxy
Jan 31 02:58:22 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [WARNING]  (270389) : Exiting Master process...
Jan 31 02:58:22 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [ALERT]    (270389) : Current worker (270391) exited with code 143 (Terminated)
Jan 31 02:58:22 np0005603621 neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1[270380]: [WARNING]  (270389) : All workers exited. Exiting... (0)
Jan 31 02:58:22 np0005603621 systemd[1]: libpod-eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e.scope: Deactivated successfully.
Jan 31 02:58:22 np0005603621 podman[275353]: 2026-01-31 07:58:22.382790687 +0000 UTC m=+0.048747127 container died eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 02:58:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e-userdata-shm.mount: Deactivated successfully.
Jan 31 02:58:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7137bc06e584b4333644569b7cfd6fc72323215fc41ad8ec7f9f508789144938-merged.mount: Deactivated successfully.
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.478 247403 INFO nova.virt.libvirt.driver [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Instance destroyed successfully.#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.478 247403 DEBUG nova.objects.instance [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lazy-loading 'resources' on Instance uuid e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:22 np0005603621 podman[275353]: 2026-01-31 07:58:22.493204472 +0000 UTC m=+0.159160892 container cleanup eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.493 247403 DEBUG nova.virt.libvirt.vif [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T07:55:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1840895963',display_name='tempest-ServersAdminTestJSON-server-1840895963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1840895963',id=32,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:58:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8033316fc42c4926bfd1f8a34b02fa97',ramdisk_id='',reservation_id='r-ozdcxviv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-784933461',owner_user_name='tempest-ServersAdminTestJSON-784933461-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:58:04Z,user_data=None,user_id='93973daeb08c453e90372a79b54b9ede',uuid=e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.494 247403 DEBUG nova.network.os_vif_util [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converting VIF {"id": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "address": "fa:16:3e:64:f8:02", "network": {"id": "c58eaedf-202a-428a-acfb-f0b1291517f1", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1332449122-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8033316fc42c4926bfd1f8a34b02fa97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd448b0d-7d", "ovs_interfaceid": "bd448b0d-7dc7-43bc-b4d2-eba76110aa01", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.494 247403 DEBUG nova.network.os_vif_util [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.495 247403 DEBUG os_vif [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.496 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.497 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd448b0d-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:22 np0005603621 systemd[1]: libpod-conmon-eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e.scope: Deactivated successfully.
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.500 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.502 247403 INFO os_vif [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:f8:02,bridge_name='br-int',has_traffic_filtering=True,id=bd448b0d-7dc7-43bc-b4d2-eba76110aa01,network=Network(c58eaedf-202a-428a-acfb-f0b1291517f1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd448b0d-7d')#033[00m
Jan 31 02:58:22 np0005603621 podman[275395]: 2026-01-31 07:58:22.584575161 +0000 UTC m=+0.074610876 container remove eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.592 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[168febeb-37e4-4cbe-97bb-7f8dc45170c2]: (4, ('Sat Jan 31 07:58:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1 (eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e)\neff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e\nSat Jan 31 07:58:22 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1 (eff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e)\neff3b884ed67cef22bf520b33562daf8b5b97cf7cac3b11fc7c7888dfd6b501e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.594 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f76025b3-0e2b-4a47-9e10-d126f9719d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.595 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc58eaedf-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.607 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.608 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.608 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 kernel: tapc58eaedf-20: left promiscuous mode
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.640 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd3ed4e-3c5e-46a1-a337-834747e9d2cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.651 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3779a271-a0fe-45d0-9de7-b918bcaa3a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.652 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[847a3006-bc7e-43c4-a1d2-8c966ceb92b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.662 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a81c3437-f412-49e0-9056-3df9fc4ea394]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543380, 'reachable_time': 23361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275426, 'error': None, 'target': 'ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 systemd[1]: run-netns-ovnmeta\x2dc58eaedf\x2d202a\x2d428a\x2dacfb\x2df0b1291517f1.mount: Deactivated successfully.
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.665 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c58eaedf-202a-428a-acfb-f0b1291517f1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:22.665 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[0336c408-5621-45bf-ad5b-4d695ff6c037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.680 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.754 247403 DEBUG nova.compute.manager [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.754 247403 DEBUG oslo_concurrency.lockutils [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.755 247403 DEBUG oslo_concurrency.lockutils [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.755 247403 DEBUG oslo_concurrency.lockutils [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.755 247403 DEBUG nova.compute.manager [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.755 247403 DEBUG nova.compute.manager [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-unplugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.755 247403 DEBUG nova.compute.manager [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.756 247403 DEBUG oslo_concurrency.lockutils [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.756 247403 DEBUG oslo_concurrency.lockutils [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.756 247403 DEBUG oslo_concurrency.lockutils [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.756 247403 DEBUG nova.compute.manager [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] No waiting events found dispatching network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:22 np0005603621 nova_compute[247399]: 2026-01-31 07:58:22.756 247403 WARNING nova.compute.manager [req-ca547ad9-cb63-4810-bf28-f49f15639f71 req-c5015340-8ab0-4c96-aafb-e107cfbb0a16 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received unexpected event network-vif-plugged-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 02:58:23 np0005603621 nova_compute[247399]: 2026-01-31 07:58:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 121 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 370 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 31 02:58:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:24.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:24 np0005603621 nova_compute[247399]: 2026-01-31 07:58:24.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:24 np0005603621 nova_compute[247399]: 2026-01-31 07:58:24.222 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:24.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:25 np0005603621 nova_compute[247399]: 2026-01-31 07:58:25.222 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:58:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 140 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 31 02:58:25 np0005603621 nova_compute[247399]: 2026-01-31 07:58:25.913 247403 INFO nova.virt.libvirt.driver [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deleting instance files /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_del#033[00m
Jan 31 02:58:25 np0005603621 nova_compute[247399]: 2026-01-31 07:58:25.914 247403 INFO nova.virt.libvirt.driver [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deletion of /var/lib/nova/instances/e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d_del complete#033[00m
Jan 31 02:58:26 np0005603621 nova_compute[247399]: 2026-01-31 07:58:26.008 247403 INFO nova.compute.manager [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Took 3.96 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:58:26 np0005603621 nova_compute[247399]: 2026-01-31 07:58:26.009 247403 DEBUG oslo.service.loopingcall [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:58:26 np0005603621 nova_compute[247399]: 2026-01-31 07:58:26.010 247403 DEBUG nova.compute.manager [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:58:26 np0005603621 nova_compute[247399]: 2026-01-31 07:58:26.010 247403 DEBUG nova.network.neutron [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:58:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:26.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:26.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.055 247403 DEBUG nova.network.neutron [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.077 247403 INFO nova.compute.manager [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Took 1.07 seconds to deallocate network for instance.#033[00m
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.148 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.149 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.166 247403 DEBUG nova.compute.manager [req-4ea1dd34-8029-4132-b203-e240c5c2cffe req-63428acb-67d1-44cb-a5b2-4e792eec4487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Received event network-vif-deleted-bd448b0d-7dc7-43bc-b4d2-eba76110aa01 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.202 247403 DEBUG oslo_concurrency.processutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 137 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 146 KiB/s rd, 1.4 MiB/s wr, 104 op/s
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.501 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744194365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.677 247403 DEBUG oslo_concurrency.processutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.682 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.686 247403 DEBUG nova.compute.provider_tree [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.726 247403 DEBUG nova.scheduler.client.report [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.765 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.812 247403 INFO nova.scheduler.client.report [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Deleted allocations for instance e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d
Jan 31 02:58:27 np0005603621 nova_compute[247399]: 2026-01-31 07:58:27.907 247403 DEBUG oslo_concurrency.lockutils [None req-69bac4a7-b093-47ff-a28f-b63ec6591a6f 93973daeb08c453e90372a79b54b9ede 8033316fc42c4926bfd1f8a34b02fa97 - - default default] Lock "e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:28 np0005603621 nova_compute[247399]: 2026-01-31 07:58:28.020 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846293.0200567, 0e2720e1-c54e-4332-90a4-f5c2c884b77d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:58:28 np0005603621 nova_compute[247399]: 2026-01-31 07:58:28.021 247403 INFO nova.compute.manager [-] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] VM Stopped (Lifecycle Event)
Jan 31 02:58:28 np0005603621 nova_compute[247399]: 2026-01-31 07:58:28.050 247403 DEBUG nova.compute.manager [None req-56c5c437-a44a-4a57-94a2-1b7d600beaa4 - - - - - -] [instance: 0e2720e1-c54e-4332-90a4-f5c2c884b77d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:58:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:28.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:28.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 120 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 125 KiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.074 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.074 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:30.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.129 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.221 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.222 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.230 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.230 247403 INFO nova.compute.claims [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Claim successful on node compute-0.ctlplane.example.com
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.384 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:58:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:30.478 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:30.479 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:30.479 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:30.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41188665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.825 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.830 247403 DEBUG nova.compute.provider_tree [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.856 247403 DEBUG nova.scheduler.client.report [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.941 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:30 np0005603621 nova_compute[247399]: 2026-01-31 07:58:30.943 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.037 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.037 247403 DEBUG nova.network.neutron [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.140 247403 INFO nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.174 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.289 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.291 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.291 247403 INFO nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Creating image(s)
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.313 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:58:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 88 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.342 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.368 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.372 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.435 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.436 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.436 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.436 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:58:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:58:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:58:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:58:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.462 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.466 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:58:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:31 np0005603621 nova_compute[247399]: 2026-01-31 07:58:31.591 247403 DEBUG nova.policy [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e5b93162787e405080a5a790c1847434', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dbf6b6306ca449dfb064371ec88681f5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:58:32 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 759467d4-3ba6-4e5b-a341-d1e3d9f7fe6e does not exist
Jan 31 02:58:32 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f0c8bd6b-c5a7-46af-8b88-a5a296f0cd37 does not exist
Jan 31 02:58:32 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d25304a0-a38d-4de5-b6d6-4409771e5750 does not exist
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:58:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:32.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:58:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:58:32 np0005603621 nova_compute[247399]: 2026-01-31 07:58:32.400 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:32 np0005603621 nova_compute[247399]: 2026-01-31 07:58:32.502 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:32 np0005603621 nova_compute[247399]: 2026-01-31 07:58:32.683 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:32 np0005603621 podman[275845]: 2026-01-31 07:58:32.710084348 +0000 UTC m=+0.080470839 container create ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 02:58:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:32 np0005603621 podman[275845]: 2026-01-31 07:58:32.650408551 +0000 UTC m=+0.020795052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:58:32 np0005603621 systemd[1]: Started libpod-conmon-ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845.scope.
Jan 31 02:58:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:32 np0005603621 podman[275845]: 2026-01-31 07:58:32.898221275 +0000 UTC m=+0.268607786 container init ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:58:32 np0005603621 podman[275845]: 2026-01-31 07:58:32.906491074 +0000 UTC m=+0.276877555 container start ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:58:32 np0005603621 crazy_buck[275861]: 167 167
Jan 31 02:58:32 np0005603621 systemd[1]: libpod-ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845.scope: Deactivated successfully.
Jan 31 02:58:32 np0005603621 podman[275845]: 2026-01-31 07:58:32.968478074 +0000 UTC m=+0.338864585 container attach ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 02:58:32 np0005603621 podman[275845]: 2026-01-31 07:58:32.969257928 +0000 UTC m=+0.339644409 container died ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:58:33 np0005603621 nova_compute[247399]: 2026-01-31 07:58:33.064 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:58:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c6daa1d7b674bce829fb32957130c5737c83d705c2414b3852ff2a1d6817e185-merged.mount: Deactivated successfully.
Jan 31 02:58:33 np0005603621 nova_compute[247399]: 2026-01-31 07:58:33.153 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] resizing rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 02:58:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 88 MiB data, 459 MiB used, 21 GiB / 21 GiB avail; 995 KiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 31 02:58:33 np0005603621 nova_compute[247399]: 2026-01-31 07:58:33.412 247403 DEBUG nova.network.neutron [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Successfully created port: d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 02:58:33 np0005603621 podman[275845]: 2026-01-31 07:58:33.430909203 +0000 UTC m=+0.801295684 container remove ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_buck, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:58:33 np0005603621 systemd[1]: libpod-conmon-ba69a66b449c4b01506455cc02dba308ace882237f6ff93f92547e58726d0845.scope: Deactivated successfully.
Jan 31 02:58:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:58:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:58:33 np0005603621 podman[275940]: 2026-01-31 07:58:33.528041062 +0000 UTC m=+0.021525274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:58:33 np0005603621 podman[275940]: 2026-01-31 07:58:33.668128965 +0000 UTC m=+0.161613157 container create 0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:58:33 np0005603621 systemd[1]: Started libpod-conmon-0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165.scope.
Jan 31 02:58:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3eef48c16295553b6710112c1a9c7a7fec11df3e229ad645666aefed3a62e05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3eef48c16295553b6710112c1a9c7a7fec11df3e229ad645666aefed3a62e05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3eef48c16295553b6710112c1a9c7a7fec11df3e229ad645666aefed3a62e05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3eef48c16295553b6710112c1a9c7a7fec11df3e229ad645666aefed3a62e05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3eef48c16295553b6710112c1a9c7a7fec11df3e229ad645666aefed3a62e05/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:33 np0005603621 podman[275940]: 2026-01-31 07:58:33.823476976 +0000 UTC m=+0.316961198 container init 0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 02:58:33 np0005603621 podman[275940]: 2026-01-31 07:58:33.82969139 +0000 UTC m=+0.323175572 container start 0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_leavitt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 02:58:33 np0005603621 podman[275940]: 2026-01-31 07:58:33.937095211 +0000 UTC m=+0.430579403 container attach 0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_leavitt, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.011 247403 DEBUG nova.objects.instance [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.040 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.040 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Ensure instance console log exists: /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.041 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.041 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.042 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:34.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:34 np0005603621 condescending_leavitt[275957]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:58:34 np0005603621 condescending_leavitt[275957]: --> relative data size: 1.0
Jan 31 02:58:34 np0005603621 condescending_leavitt[275957]: --> All data devices are unavailable
Jan 31 02:58:34 np0005603621 systemd[1]: libpod-0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165.scope: Deactivated successfully.
Jan 31 02:58:34 np0005603621 podman[275940]: 2026-01-31 07:58:34.608482099 +0000 UTC m=+1.101966291 container died 0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_leavitt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:58:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:34.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f3eef48c16295553b6710112c1a9c7a7fec11df3e229ad645666aefed3a62e05-merged.mount: Deactivated successfully.
Jan 31 02:58:34 np0005603621 nova_compute[247399]: 2026-01-31 07:58:34.971 247403 DEBUG nova.network.neutron [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Successfully updated port: d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.006 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.007 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.007 247403 DEBUG nova.network.neutron [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.086 247403 DEBUG nova.compute.manager [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.087 247403 DEBUG nova.compute.manager [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing instance network info cache due to event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.087 247403 DEBUG oslo_concurrency.lockutils [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:58:35 np0005603621 podman[275940]: 2026-01-31 07:58:35.186599989 +0000 UTC m=+1.680084181 container remove 0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:58:35 np0005603621 nova_compute[247399]: 2026-01-31 07:58:35.220 247403 DEBUG nova.network.neutron [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:58:35 np0005603621 systemd[1]: libpod-conmon-0673a5457ae072492a005d8451d9b9118668f86d34aaa3ded43dd5b9e4336165.scope: Deactivated successfully.
Jan 31 02:58:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 103 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 132 op/s
Jan 31 02:58:35 np0005603621 podman[276143]: 2026-01-31 07:58:35.703012496 +0000 UTC m=+0.026423107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:58:35 np0005603621 podman[276143]: 2026-01-31 07:58:35.850145341 +0000 UTC m=+0.173555932 container create e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 02:58:35 np0005603621 systemd[1]: Started libpod-conmon-e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759.scope.
Jan 31 02:58:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:36 np0005603621 podman[276143]: 2026-01-31 07:58:36.00926379 +0000 UTC m=+0.332674401 container init e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:58:36 np0005603621 podman[276143]: 2026-01-31 07:58:36.017284011 +0000 UTC m=+0.340694602 container start e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cartwright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:58:36 np0005603621 vigilant_cartwright[276160]: 167 167
Jan 31 02:58:36 np0005603621 podman[276143]: 2026-01-31 07:58:36.021622896 +0000 UTC m=+0.345033517 container attach e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 02:58:36 np0005603621 systemd[1]: libpod-e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759.scope: Deactivated successfully.
Jan 31 02:58:36 np0005603621 conmon[276160]: conmon e8d6325fbf6bc389a04b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759.scope/container/memory.events
Jan 31 02:58:36 np0005603621 podman[276143]: 2026-01-31 07:58:36.023061311 +0000 UTC m=+0.346471902 container died e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.030 247403 DEBUG nova.network.neutron [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:58:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-267df9697744629ed305ada3e707d7120a2820a3d5af8868bda7d943ae4f3127-merged.mount: Deactivated successfully.
Jan 31 02:58:36 np0005603621 podman[276143]: 2026-01-31 07:58:36.072735555 +0000 UTC m=+0.396160927 container remove e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:58:36 np0005603621 systemd[1]: libpod-conmon-e8d6325fbf6bc389a04b15fc1fb0cb611bdcfad703aa8ecfdd3e03355de64759.scope: Deactivated successfully.
Jan 31 02:58:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:36.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.155 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.155 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Instance network_info: |[{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.155 247403 DEBUG oslo_concurrency.lockutils [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.155 247403 DEBUG nova.network.neutron [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.158 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Start _get_guest_xml network_info=[{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.163 247403 WARNING nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.172 247403 DEBUG nova.virt.libvirt.host [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.173 247403 DEBUG nova.virt.libvirt.host [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.178 247403 DEBUG nova.virt.libvirt.host [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.179 247403 DEBUG nova.virt.libvirt.host [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.180 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.180 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.181 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.181 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.181 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.182 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.182 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.182 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.182 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.183 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.183 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.183 247403 DEBUG nova.virt.hardware [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.187 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:36 np0005603621 podman[276183]: 2026-01-31 07:58:36.221022765 +0000 UTC m=+0.041029444 container create 94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:58:36 np0005603621 systemd[1]: Started libpod-conmon-94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408.scope.
Jan 31 02:58:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd92432706f98f0c72133ba0915b8e5328bb0cd291223c2f99fd3809662be7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd92432706f98f0c72133ba0915b8e5328bb0cd291223c2f99fd3809662be7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd92432706f98f0c72133ba0915b8e5328bb0cd291223c2f99fd3809662be7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd92432706f98f0c72133ba0915b8e5328bb0cd291223c2f99fd3809662be7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:36 np0005603621 podman[276183]: 2026-01-31 07:58:36.296732334 +0000 UTC m=+0.116739033 container init 94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 02:58:36 np0005603621 podman[276183]: 2026-01-31 07:58:36.20583787 +0000 UTC m=+0.025844579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:58:36 np0005603621 podman[276183]: 2026-01-31 07:58:36.30330833 +0000 UTC m=+0.123315009 container start 94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 02:58:36 np0005603621 podman[276183]: 2026-01-31 07:58:36.307208472 +0000 UTC m=+0.127215161 container attach 94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 02:58:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:58:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3303747309' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.635 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.662 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:36 np0005603621 nova_compute[247399]: 2026-01-31 07:58:36.666 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:36.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]: {
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:    "0": [
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:        {
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "devices": [
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "/dev/loop3"
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            ],
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "lv_name": "ceph_lv0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "lv_size": "7511998464",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "name": "ceph_lv0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "tags": {
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.cluster_name": "ceph",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.crush_device_class": "",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.encrypted": "0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.osd_id": "0",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.type": "block",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:                "ceph.vdo": "0"
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            },
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "type": "block",
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:            "vg_name": "ceph_vg0"
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:        }
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]:    ]
Jan 31 02:58:37 np0005603621 nostalgic_heyrovsky[276201]: }
Jan 31 02:58:37 np0005603621 systemd[1]: libpod-94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408.scope: Deactivated successfully.
Jan 31 02:58:37 np0005603621 conmon[276201]: conmon 94fddff3f854cc450d3a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408.scope/container/memory.events
Jan 31 02:58:37 np0005603621 podman[276183]: 2026-01-31 07:58:37.05499736 +0000 UTC m=+0.875004069 container died 94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:58:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4fd92432706f98f0c72133ba0915b8e5328bb0cd291223c2f99fd3809662be7a-merged.mount: Deactivated successfully.
Jan 31 02:58:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:58:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580824064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:58:37 np0005603621 podman[276183]: 2026-01-31 07:58:37.113050907 +0000 UTC m=+0.933057586 container remove 94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.123 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.127 247403 DEBUG nova.virt.libvirt.vif [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1030320071',display_name='tempest-FloatingIPsAssociationTestJSON-server-1030320071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1030320071',id=43,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbf6b6306ca449dfb064371ec88681f5',ramdisk_id='',reservation_id='r-q0lh8mgv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-338180924',owner_use
r_name='tempest-FloatingIPsAssociationTestJSON-338180924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:58:31Z,user_data=None,user_id='e5b93162787e405080a5a790c1847434',uuid=38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.128 247403 DEBUG nova.network.os_vif_util [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Converting VIF {"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:58:37 np0005603621 systemd[1]: libpod-conmon-94fddff3f854cc450d3a608f90518718c42b61007c4a6184501d4c907dae5408.scope: Deactivated successfully.
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.129 247403 DEBUG nova.network.os_vif_util [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.130 247403 DEBUG nova.objects.instance [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.155 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <uuid>38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e</uuid>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <name>instance-0000002b</name>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1030320071</nova:name>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:58:36</nova:creationTime>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:user uuid="e5b93162787e405080a5a790c1847434">tempest-FloatingIPsAssociationTestJSON-338180924-project-member</nova:user>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:project uuid="dbf6b6306ca449dfb064371ec88681f5">tempest-FloatingIPsAssociationTestJSON-338180924</nova:project>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <nova:port uuid="d6392acd-fa98-4f6e-b62a-ceffd7ca0c29">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <entry name="serial">38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e</entry>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <entry name="uuid">38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e</entry>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk.config">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:04:1a:07"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <target dev="tapd6392acd-fa"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </interface>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/console.log" append="off"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:58:37 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:58:37 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:58:37 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:58:37 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.156 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Preparing to wait for external event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.157 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.157 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.157 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.158 247403 DEBUG nova.virt.libvirt.vif [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T07:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1030320071',display_name='tempest-FloatingIPsAssociationTestJSON-server-1030320071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1030320071',id=43,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dbf6b6306ca449dfb064371ec88681f5',ramdisk_id='',reservation_id='r-q0lh8mgv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-338180924'
,owner_user_name='tempest-FloatingIPsAssociationTestJSON-338180924-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T07:58:31Z,user_data=None,user_id='e5b93162787e405080a5a790c1847434',uuid=38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.158 247403 DEBUG nova.network.os_vif_util [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Converting VIF {"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.159 247403 DEBUG nova.network.os_vif_util [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.159 247403 DEBUG os_vif [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.160 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.160 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.161 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.165 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.165 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6392acd-fa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.165 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6392acd-fa, col_values=(('external_ids', {'iface-id': 'd6392acd-fa98-4f6e-b62a-ceffd7ca0c29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:1a:07', 'vm-uuid': '38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:37 np0005603621 NetworkManager[49013]: <info>  [1769846317.1686] manager: (tapd6392acd-fa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.169 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.175 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.176 247403 INFO os_vif [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa')#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.264 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.264 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.264 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] No VIF found with MAC fa:16:3e:04:1a:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.265 247403 INFO nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Using config drive#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.296 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 115 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.475 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846302.474523, e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.476 247403 INFO nova.compute.manager [-] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.503 247403 DEBUG nova.compute.manager [None req-1ddc4a3b-fcce-450a-9898-1c3ea3f878bf - - - - - -] [instance: e0a2d9f4-0b2c-4bc6-ba27-d67e7a27d52d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.637 247403 DEBUG nova.network.neutron [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updated VIF entry in instance network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.637 247403 DEBUG nova.network.neutron [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.642347018 +0000 UTC m=+0.038425323 container create 1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.653 247403 DEBUG oslo_concurrency.lockutils [req-2e247998-9ae1-41ba-b6a7-0f928652eec9 req-91a59910-661c-4f78-8fd5-a65bc565541a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.660 247403 INFO nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Creating config drive at /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/disk.config#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.665 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpycb1am3h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:37 np0005603621 systemd[1]: Started libpod-conmon-1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49.scope.
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.62223477 +0000 UTC m=+0.018313095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.725938495 +0000 UTC m=+0.122016820 container init 1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.731641002 +0000 UTC m=+0.127719317 container start 1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.73569875 +0000 UTC m=+0.131777055 container attach 1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:58:37 np0005603621 systemd[1]: libpod-1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49.scope: Deactivated successfully.
Jan 31 02:58:37 np0005603621 sweet_volhard[276515]: 167 167
Jan 31 02:58:37 np0005603621 conmon[276515]: conmon 1b1c65d2c26069149857 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49.scope/container/memory.events
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.739264841 +0000 UTC m=+0.135343146 container died 1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 02:58:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5e9f170a215e0e629fc33360440b7fe101a8c7faad628f589dea9b9ff6923287-merged.mount: Deactivated successfully.
Jan 31 02:58:37 np0005603621 podman[276497]: 2026-01-31 07:58:37.780859973 +0000 UTC m=+0.176938278 container remove 1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.783 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpycb1am3h" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:37 np0005603621 systemd[1]: libpod-conmon-1b1c65d2c26069149857deff09db3186224c3e11ca042929654246b8d4bcab49.scope: Deactivated successfully.
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.809 247403 DEBUG nova.storage.rbd_utils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] rbd image 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:37 np0005603621 nova_compute[247399]: 2026-01-31 07:58:37.812 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/disk.config 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:37 np0005603621 podman[276557]: 2026-01-31 07:58:37.927899453 +0000 UTC m=+0.066266834 container create 91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 02:58:37 np0005603621 systemd[1]: Started libpod-conmon-91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd.scope.
Jan 31 02:58:37 np0005603621 podman[276557]: 2026-01-31 07:58:37.884175985 +0000 UTC m=+0.022543386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:58:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6fcf9edce3e6259ec5af9c10486fa4ad3471a5ce5add8980e5b40ea34e8cae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6fcf9edce3e6259ec5af9c10486fa4ad3471a5ce5add8980e5b40ea34e8cae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6fcf9edce3e6259ec5af9c10486fa4ad3471a5ce5add8980e5b40ea34e8cae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6fcf9edce3e6259ec5af9c10486fa4ad3471a5ce5add8980e5b40ea34e8cae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:38 np0005603621 podman[276557]: 2026-01-31 07:58:38.06202117 +0000 UTC m=+0.200388581 container init 91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:58:38 np0005603621 podman[276557]: 2026-01-31 07:58:38.070758883 +0000 UTC m=+0.209126264 container start 91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 02:58:38 np0005603621 podman[276557]: 2026-01-31 07:58:38.097004365 +0000 UTC m=+0.235371836 container attach 91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.107 247403 DEBUG oslo_concurrency.processutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/disk.config 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.108 247403 INFO nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Deleting local config drive /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e/disk.config because it was imported into RBD.#033[00m
Jan 31 02:58:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:38.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:38 np0005603621 kernel: tapd6392acd-fa: entered promiscuous mode
Jan 31 02:58:38 np0005603621 NetworkManager[49013]: <info>  [1769846318.1494] manager: (tapd6392acd-fa): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.151 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:38Z|00096|binding|INFO|Claiming lport d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 for this chassis.
Jan 31 02:58:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:38Z|00097|binding|INFO|d6392acd-fa98-4f6e-b62a-ceffd7ca0c29: Claiming fa:16:3e:04:1a:07 10.100.0.5
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.153 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.159 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.167 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:1a:07 10.100.0.5'], port_security=['fa:16:3e:04:1a:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c28cb14d-420d-4734-af33-c452602f84f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbf6b6306ca449dfb064371ec88681f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f2274c0-e32f-4e92-b2d3-ffc323775fce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01325513-d5d2-4c4f-b0c6-fdb3a77ce88f, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.169 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 in datapath c28cb14d-420d-4734-af33-c452602f84f5 bound to our chassis#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.171 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c28cb14d-420d-4734-af33-c452602f84f5#033[00m
Jan 31 02:58:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:38Z|00098|binding|INFO|Setting lport d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 ovn-installed in OVS
Jan 31 02:58:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:38Z|00099|binding|INFO|Setting lport d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 up in Southbound
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.184 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 systemd-machined[212769]: New machine qemu-18-instance-0000002b.
Jan 31 02:58:38 np0005603621 systemd-udevd[276609]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.188 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5a007fb1-951c-46d9-bbea-a7b646baabd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.188 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc28cb14d-41 in ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.190 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc28cb14d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.190 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[270ff566-f3e6-484c-8750-6e12e03d3297]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.191 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b152de38-f816-4337-960a-c0764c71b9ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 systemd[1]: Started Virtual Machine qemu-18-instance-0000002b.
Jan 31 02:58:38 np0005603621 NetworkManager[49013]: <info>  [1769846318.2009] device (tapd6392acd-fa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 02:58:38 np0005603621 NetworkManager[49013]: <info>  [1769846318.2016] device (tapd6392acd-fa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.200 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4a3f994f-42ed-4721-956d-42faf894c8f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.214 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8c7b051c-8653-4c95-9ff0-a3826750fb88]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.238 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7fa630-2786-4610-9ad9-98b588802d1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.245 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0c308229-0516-4f71-8cc3-b0606680da06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 NetworkManager[49013]: <info>  [1769846318.2460] manager: (tapc28cb14d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 31 02:58:38 np0005603621 systemd-udevd[276612]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.270 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[906197ea-4b86-4cdc-b9fe-3b70fbee5499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.272 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7822e0a3-402f-4d1f-a620-2e427708d4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 NetworkManager[49013]: <info>  [1769846318.2873] device (tapc28cb14d-40): carrier: link connected
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.291 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec4f0a3-66b8-44ee-9098-6aa77a11686c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.304 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9c95316e-e388-44c9-bf58-82d0674924eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc28cb14d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:59:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562380, 'reachable_time': 38249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276641, 'error': None, 'target': 'ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.317 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2340a919-a7f2-4d88-a7a8-83045832c4d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:594f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562380, 'tstamp': 562380}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276642, 'error': None, 'target': 'ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.330 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c98f90e9-8544-44eb-8441-070eb4d0f2fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc28cb14d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:59:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562380, 'reachable_time': 38249, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276643, 'error': None, 'target': 'ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.363 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3deb5867-396e-4be1-ae4a-415d7aaa17bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.406 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed5f0a6-d543-4b7f-a744-dd738d011c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.407 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc28cb14d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.408 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.408 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc28cb14d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:38 np0005603621 kernel: tapc28cb14d-40: entered promiscuous mode
Jan 31 02:58:38 np0005603621 NetworkManager[49013]: <info>  [1769846318.4117] manager: (tapc28cb14d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.412 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.413 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc28cb14d-40, col_values=(('external_ids', {'iface-id': 'fca56836-0662-40f0-bef3-510bc5f598ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.416 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c28cb14d-420d-4734-af33-c452602f84f5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c28cb14d-420d-4734-af33-c452602f84f5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 02:58:38 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:38Z|00100|binding|INFO|Releasing lport fca56836-0662-40f0-bef3-510bc5f598ce from this chassis (sb_readonly=0)
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.417 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b47fcd8b-ff07-4161-a0bc-49ae0049f36d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.417 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-c28cb14d-420d-4734-af33-c452602f84f5
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/c28cb14d-420d-4734-af33-c452602f84f5.pid.haproxy
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID c28cb14d-420d-4734-af33-c452602f84f5
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 02:58:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:58:38.418 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5', 'env', 'PROCESS_TAG=haproxy-c28cb14d-420d-4734-af33-c452602f84f5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c28cb14d-420d-4734-af33-c452602f84f5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.422 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:58:38
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.638 247403 DEBUG nova.compute.manager [req-4c002286-6bf1-4e59-81eb-df25a2c6131c req-1668d250-dc93-4526-94ce-52f5d41fe494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.639 247403 DEBUG oslo_concurrency.lockutils [req-4c002286-6bf1-4e59-81eb-df25a2c6131c req-1668d250-dc93-4526-94ce-52f5d41fe494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.639 247403 DEBUG oslo_concurrency.lockutils [req-4c002286-6bf1-4e59-81eb-df25a2c6131c req-1668d250-dc93-4526-94ce-52f5d41fe494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.639 247403 DEBUG oslo_concurrency.lockutils [req-4c002286-6bf1-4e59-81eb-df25a2c6131c req-1668d250-dc93-4526-94ce-52f5d41fe494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:38 np0005603621 nova_compute[247399]: 2026-01-31 07:58:38.639 247403 DEBUG nova.compute.manager [req-4c002286-6bf1-4e59-81eb-df25a2c6131c req-1668d250-dc93-4526-94ce-52f5d41fe494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Processing event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:58:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:58:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:38 np0005603621 podman[276675]: 2026-01-31 07:58:38.74657517 +0000 UTC m=+0.020979488 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 02:58:38 np0005603621 podman[276675]: 2026-01-31 07:58:38.912342437 +0000 UTC m=+0.186746725 container create 1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]: {
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:        "osd_id": 0,
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:        "type": "bluestore"
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]:    }
Jan 31 02:58:38 np0005603621 interesting_cannon[276591]: }
Jan 31 02:58:38 np0005603621 systemd[1]: libpod-91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd.scope: Deactivated successfully.
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.061 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.062 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846319.0620084, 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.062 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] VM Started (Lifecycle Event)#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.064 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.068 247403 INFO nova.virt.libvirt.driver [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Instance spawned successfully.#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.069 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.089 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.093 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:58:39 np0005603621 podman[276557]: 2026-01-31 07:58:39.129419719 +0000 UTC m=+1.267787120 container died 91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:58:39 np0005603621 systemd[1]: Started libpod-conmon-1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111.scope.
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.136 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.136 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.137 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.138 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.138 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.138 247403 DEBUG nova.virt.libvirt.driver [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 02:58:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:58:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a6fcf9edce3e6259ec5af9c10486fa4ad3471a5ce5add8980e5b40ea34e8cae1-merged.mount: Deactivated successfully.
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.170 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.171 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846319.0652943, 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.171 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] VM Paused (Lifecycle Event)#033[00m
Jan 31 02:58:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f94284ea8aa915ea848961abcc70a48276a1d8a944c078eefd484013b3860a31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 02:58:39 np0005603621 podman[276675]: 2026-01-31 07:58:39.191832522 +0000 UTC m=+0.466236830 container init 1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 02:58:39 np0005603621 podman[276675]: 2026-01-31 07:58:39.196761256 +0000 UTC m=+0.471165544 container start 1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:58:39 np0005603621 podman[276738]: 2026-01-31 07:58:39.204082516 +0000 UTC m=+0.238449463 container remove 91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:58:39 np0005603621 systemd[1]: libpod-conmon-91081a9878d0b3c8b4978cfe0568c7ce82407920b40ce644e3582b2901631bbd.scope: Deactivated successfully.
Jan 31 02:58:39 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [NOTICE]   (276767) : New worker (276769) forked
Jan 31 02:58:39 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [NOTICE]   (276767) : Loading success.
Jan 31 02:58:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:58:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:58:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 02:58:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:58:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bc4c2374-ad35-4319-a225-ee841680469c does not exist
Jan 31 02:58:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 63a5aa25-5efb-48d7-a478-f0501f701801 does not exist
Jan 31 02:58:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2a589ff9-5228-4fc1-9c16-0a84adbc8f0a does not exist
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.295 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.301 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846319.0658498, 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.302 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:58:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 134 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 130 op/s
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.385 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.391 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.399 247403 INFO nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Took 8.11 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.400 247403 DEBUG nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.449 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.502 247403 INFO nova.compute.manager [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Took 9.30 seconds to build instance.#033[00m
Jan 31 02:58:39 np0005603621 nova_compute[247399]: 2026-01-31 07:58:39.524 247403 DEBUG oslo_concurrency.lockutils [None req-84e7b4d6-9457-41c3-9b73-977cd3ef45dd e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:40.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:58:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:58:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:40 np0005603621 nova_compute[247399]: 2026-01-31 07:58:40.773 247403 DEBUG nova.compute.manager [req-79147ba5-a4f3-447d-af62-3df08762061d req-7ae126ed-3e31-467f-b887-930112dd2124 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:58:40 np0005603621 nova_compute[247399]: 2026-01-31 07:58:40.774 247403 DEBUG oslo_concurrency.lockutils [req-79147ba5-a4f3-447d-af62-3df08762061d req-7ae126ed-3e31-467f-b887-930112dd2124 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:40 np0005603621 nova_compute[247399]: 2026-01-31 07:58:40.774 247403 DEBUG oslo_concurrency.lockutils [req-79147ba5-a4f3-447d-af62-3df08762061d req-7ae126ed-3e31-467f-b887-930112dd2124 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:40 np0005603621 nova_compute[247399]: 2026-01-31 07:58:40.774 247403 DEBUG oslo_concurrency.lockutils [req-79147ba5-a4f3-447d-af62-3df08762061d req-7ae126ed-3e31-467f-b887-930112dd2124 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:40 np0005603621 nova_compute[247399]: 2026-01-31 07:58:40.775 247403 DEBUG nova.compute.manager [req-79147ba5-a4f3-447d-af62-3df08762061d req-7ae126ed-3e31-467f-b887-930112dd2124 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] No waiting events found dispatching network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:58:40 np0005603621 nova_compute[247399]: 2026-01-31 07:58:40.775 247403 WARNING nova.compute.manager [req-79147ba5-a4f3-447d-af62-3df08762061d req-7ae126ed-3e31-467f-b887-930112dd2124 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received unexpected event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 for instance with vm_state active and task_state None.#033[00m
Jan 31 02:58:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 134 MiB data, 481 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 31 02:58:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:42.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:42 np0005603621 nova_compute[247399]: 2026-01-31 07:58:42.170 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:42 np0005603621 nova_compute[247399]: 2026-01-31 07:58:42.688 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:42.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 183 MiB data, 515 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.8 MiB/s wr, 198 op/s
Jan 31 02:58:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:44.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:58:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:44.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:58:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 207 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.6 MiB/s wr, 202 op/s
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.710552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846325710598, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2433, "num_deletes": 502, "total_data_size": 3874725, "memory_usage": 3949504, "flush_reason": "Manual Compaction"}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846325728836, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3656389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26796, "largest_seqno": 29227, "table_properties": {"data_size": 3646130, "index_size": 5985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 25689, "raw_average_key_size": 20, "raw_value_size": 3623274, "raw_average_value_size": 2875, "num_data_blocks": 260, "num_entries": 1260, "num_filter_entries": 1260, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846109, "oldest_key_time": 1769846109, "file_creation_time": 1769846325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 18341 microseconds, and 8045 cpu microseconds.
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.728889) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3656389 bytes OK
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.728913) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.731529) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.731624) EVENT_LOG_v1 {"time_micros": 1769846325731613, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.731654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3863768, prev total WAL file size 3863768, number of live WAL files 2.
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.732435) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3570KB)], [62(10MB)]
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846325732497, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 14981521, "oldest_snapshot_seqno": -1}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5498 keys, 9230069 bytes, temperature: kUnknown
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846325950876, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9230069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9193321, "index_size": 21918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13765, "raw_key_size": 140287, "raw_average_key_size": 25, "raw_value_size": 9094579, "raw_average_value_size": 1654, "num_data_blocks": 886, "num_entries": 5498, "num_filter_entries": 5498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846325, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.951147) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9230069 bytes
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.956029) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.6 rd, 42.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.8 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.6) write-amplify(2.5) OK, records in: 6513, records dropped: 1015 output_compression: NoCompression
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.956068) EVENT_LOG_v1 {"time_micros": 1769846325956053, "job": 34, "event": "compaction_finished", "compaction_time_micros": 218450, "compaction_time_cpu_micros": 19134, "output_level": 6, "num_output_files": 1, "total_output_size": 9230069, "num_input_records": 6513, "num_output_records": 5498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846325956574, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846325957720, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.732296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.957788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.957793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.957796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.957798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:45 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-07:58:45.957799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 02:58:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:46.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.328 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "a24ac16c-df64-4cef-a252-1f1c38920602" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.329 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.361 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.459 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.460 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.468 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.469 247403 INFO nova.compute.claims [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:58:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:46 np0005603621 nova_compute[247399]: 2026-01-31 07:58:46.601 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:46.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:58:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460982885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.031 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.038 247403 DEBUG nova.compute.provider_tree [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.053 247403 DEBUG nova.scheduler.client.report [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.075 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.076 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.132 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.147 247403 INFO nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.168 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.173 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.290 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.291 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.292 247403 INFO nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Creating image(s)#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.324 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 213 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 225 op/s
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.353 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.381 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.384 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.438 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.441 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.442 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.442 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.470 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.477 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 a24ac16c-df64-4cef-a252-1f1c38920602_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:47 np0005603621 podman[276912]: 2026-01-31 07:58:47.544952721 +0000 UTC m=+0.089224853 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 02:58:47 np0005603621 podman[276910]: 2026-01-31 07:58:47.547291264 +0000 UTC m=+0.091562596 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.690 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.794 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 a24ac16c-df64-4cef-a252-1f1c38920602_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:47 np0005603621 nova_compute[247399]: 2026-01-31 07:58:47.871 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] resizing rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.052 247403 DEBUG nova.objects.instance [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lazy-loading 'migration_context' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.097 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.098 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Ensure instance console log exists: /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.099 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.099 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.099 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.101 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.107 247403 WARNING nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.113 247403 DEBUG nova.virt.libvirt.host [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.115 247403 DEBUG nova.virt.libvirt.host [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.119 247403 DEBUG nova.virt.libvirt.host [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.120 247403 DEBUG nova.virt.libvirt.host [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.122 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.123 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.123 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.124 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.124 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.124 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.124 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.125 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.125 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.125 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.125 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.125 247403 DEBUG nova.virt.hardware [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:58:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:48.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.129 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:58:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918692873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.601 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.624 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:48 np0005603621 nova_compute[247399]: 2026-01-31 07:58:48.629 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:58:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:48.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004148891157174763 of space, bias 1.0, pg target 1.2446673471524288 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:58:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 02:58:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:58:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/161151341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.089 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.091 247403 DEBUG nova.objects.instance [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lazy-loading 'pci_devices' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.123 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <uuid>a24ac16c-df64-4cef-a252-1f1c38920602</uuid>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <name>instance-0000002d</name>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1524536714</nova:name>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:58:48</nova:creationTime>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:user uuid="2d20e7b6189c4916947ddae2155da8cf">tempest-UnshelveToHostMultiNodesTest-849130801-project-member</nova:user>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <nova:project uuid="0f3a75a965fc495bbe02cb5bfad2053b">tempest-UnshelveToHostMultiNodesTest-849130801</nova:project>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <entry name="serial">a24ac16c-df64-4cef-a252-1f1c38920602</entry>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <entry name="uuid">a24ac16c-df64-4cef-a252-1f1c38920602</entry>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk.config">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/console.log" append="off"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:58:49 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:58:49 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:58:49 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:58:49 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.186 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.187 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.187 247403 INFO nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Using config drive#033[00m
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.212 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:58:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 213 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.7 MiB/s wr, 213 op/s
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.471 247403 INFO nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Creating config drive at /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.474 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp50b68o8g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.592 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp50b68o8g" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.615 247403 DEBUG nova.storage.rbd_utils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 02:58:49 np0005603621 nova_compute[247399]: 2026-01-31 07:58:49.618 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config a24ac16c-df64-4cef-a252-1f1c38920602_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 02:58:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:50.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.197 247403 DEBUG oslo_concurrency.processutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config a24ac16c-df64-4cef-a252-1f1c38920602_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.197 247403 INFO nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Deleting local config drive /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config because it was imported into RBD.
Jan 31 02:58:50 np0005603621 systemd-machined[212769]: New machine qemu-19-instance-0000002d.
Jan 31 02:58:50 np0005603621 systemd[1]: Started Virtual Machine qemu-19-instance-0000002d.
Jan 31 02:58:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.937 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846330.9373577, a24ac16c-df64-4cef-a252-1f1c38920602 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.938 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] VM Resumed (Lifecycle Event)
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.940 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.941 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.943 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance spawned successfully.
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.944 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.981 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.983 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.984 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.984 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.984 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.985 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.985 247403 DEBUG nova.virt.libvirt.driver [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 02:58:50 np0005603621 nova_compute[247399]: 2026-01-31 07:58:50.989 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.050 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.050 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846330.9400153, a24ac16c-df64-4cef-a252-1f1c38920602 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.050 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] VM Started (Lifecycle Event)
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.086 247403 INFO nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Took 3.80 seconds to spawn the instance on the hypervisor.
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.086 247403 DEBUG nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.093 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.097 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.134 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.155 247403 INFO nova.compute.manager [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Took 4.73 seconds to build instance.
Jan 31 02:58:51 np0005603621 nova_compute[247399]: 2026-01-31 07:58:51.185 247403 DEBUG oslo_concurrency.lockutils [None req-41165875-2c92-4a23-ba3d-72e30c06a284 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 02:58:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 244 MiB data, 535 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 224 op/s
Jan 31 02:58:51 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:51Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:1a:07 10.100.0.5
Jan 31 02:58:51 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:51Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:1a:07 10.100.0.5
Jan 31 02:58:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:52.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:52 np0005603621 nova_compute[247399]: 2026-01-31 07:58:52.177 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:52 np0005603621 nova_compute[247399]: 2026-01-31 07:58:52.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:52.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:58:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2390366414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:58:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 314 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 8.2 MiB/s wr, 359 op/s
Jan 31 02:58:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:54.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:54.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:54 np0005603621 nova_compute[247399]: 2026-01-31 07:58:54.778 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "a24ac16c-df64-4cef-a252-1f1c38920602" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 02:58:54 np0005603621 nova_compute[247399]: 2026-01-31 07:58:54.779 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 02:58:54 np0005603621 nova_compute[247399]: 2026-01-31 07:58:54.779 247403 INFO nova.compute.manager [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Shelving
Jan 31 02:58:54 np0005603621 nova_compute[247399]: 2026-01-31 07:58:54.797 247403 DEBUG nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 02:58:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 333 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.6 MiB/s wr, 320 op/s
Jan 31 02:58:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:56.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:58:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:58:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:56.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:58:57 np0005603621 nova_compute[247399]: 2026-01-31 07:58:57.180 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 339 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 351 op/s
Jan 31 02:58:57 np0005603621 nova_compute[247399]: 2026-01-31 07:58:57.706 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:58:58.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:58 np0005603621 NetworkManager[49013]: <info>  [1769846338.2802] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:58 np0005603621 NetworkManager[49013]: <info>  [1769846338.2813] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:58 np0005603621 ovn_controller[149152]: 2026-01-31T07:58:58Z|00101|binding|INFO|Releasing lport fca56836-0662-40f0-bef3-510bc5f598ce from this chassis (sb_readonly=0)
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.688 247403 DEBUG nova.compute.manager [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.689 247403 DEBUG nova.compute.manager [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing instance network info cache due to event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.689 247403 DEBUG oslo_concurrency.lockutils [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.690 247403 DEBUG oslo_concurrency.lockutils [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 02:58:58 np0005603621 nova_compute[247399]: 2026-01-31 07:58:58.690 247403 DEBUG nova.network.neutron [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 02:58:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:58:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:58:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:58:58.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:58:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 339 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.7 MiB/s wr, 294 op/s
Jan 31 02:59:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:00.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:00.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:00 np0005603621 nova_compute[247399]: 2026-01-31 07:59:00.788 247403 DEBUG nova.network.neutron [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updated VIF entry in instance network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 02:59:00 np0005603621 nova_compute[247399]: 2026-01-31 07:59:00.789 247403 DEBUG nova.network.neutron [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 02:59:01 np0005603621 nova_compute[247399]: 2026-01-31 07:59:01.002 247403 DEBUG oslo_concurrency.lockutils [req-a6ae5683-ce05-4f78-a4e5-f0f76f7cb173 req-b6b85587-eac6-4514-bbfe-f757c5479caa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 02:59:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 345 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.2 MiB/s wr, 326 op/s
Jan 31 02:59:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:02.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:02 np0005603621 nova_compute[247399]: 2026-01-31 07:59:02.182 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:02 np0005603621 nova_compute[247399]: 2026-01-31 07:59:02.710 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 02:59:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:02.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 378 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 8.0 MiB/s wr, 350 op/s
Jan 31 02:59:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:04.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:04 np0005603621 nova_compute[247399]: 2026-01-31 07:59:04.841 247403 DEBUG nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 02:59:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 397 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.5 MiB/s wr, 233 op/s
Jan 31 02:59:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:07 np0005603621 nova_compute[247399]: 2026-01-31 07:59:07.187 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 454 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.2 MiB/s wr, 250 op/s
Jan 31 02:59:07 np0005603621 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 31 02:59:07 np0005603621 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002d.scope: Consumed 12.895s CPU time.
Jan 31 02:59:07 np0005603621 systemd-machined[212769]: Machine qemu-19-instance-0000002d terminated.
Jan 31 02:59:07 np0005603621 nova_compute[247399]: 2026-01-31 07:59:07.711 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:07 np0005603621 nova_compute[247399]: 2026-01-31 07:59:07.854 247403 INFO nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 02:59:07 np0005603621 nova_compute[247399]: 2026-01-31 07:59:07.860 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance destroyed successfully.#033[00m
Jan 31 02:59:07 np0005603621 nova_compute[247399]: 2026-01-31 07:59:07.860 247403 DEBUG nova.objects.instance [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lazy-loading 'numa_topology' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.191 247403 INFO nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Beginning cold snapshot process#033[00m
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.341 247403 DEBUG nova.compute.manager [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.342 247403 DEBUG nova.compute.manager [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing instance network info cache due to event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.342 247403 DEBUG oslo_concurrency.lockutils [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.342 247403 DEBUG oslo_concurrency.lockutils [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.343 247403 DEBUG nova.network.neutron [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.399 247403 DEBUG nova.virt.libvirt.imagebackend [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 02:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:59:08 np0005603621 nova_compute[247399]: 2026-01-31 07:59:08.640 247403 DEBUG nova.storage.rbd_utils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] creating snapshot(92f0fbcd8d6f40afa0b7b4899c8a1947) on rbd image(a24ac16c-df64-4cef-a252-1f1c38920602_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:59:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:08.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 454 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 932 KiB/s rd, 6.2 MiB/s wr, 175 op/s
Jan 31 02:59:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 31 02:59:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 31 02:59:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 31 02:59:09 np0005603621 nova_compute[247399]: 2026-01-31 07:59:09.790 247403 DEBUG nova.storage.rbd_utils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] cloning vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk@92f0fbcd8d6f40afa0b7b4899c8a1947 to images/7b22f4ab-bd25-45c1-a738-6910f1407753 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 02:59:09 np0005603621 nova_compute[247399]: 2026-01-31 07:59:09.902 247403 DEBUG nova.storage.rbd_utils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] flattening images/7b22f4ab-bd25-45c1-a738-6910f1407753 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 02:59:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:10.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:10 np0005603621 nova_compute[247399]: 2026-01-31 07:59:10.328 247403 DEBUG nova.storage.rbd_utils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] removing snapshot(92f0fbcd8d6f40afa0b7b4899c8a1947) on rbd image(a24ac16c-df64-4cef-a252-1f1c38920602_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 02:59:10 np0005603621 nova_compute[247399]: 2026-01-31 07:59:10.472 247403 DEBUG nova.network.neutron [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updated VIF entry in instance network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:59:10 np0005603621 nova_compute[247399]: 2026-01-31 07:59:10.472 247403 DEBUG nova.network.neutron [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:10 np0005603621 nova_compute[247399]: 2026-01-31 07:59:10.600 247403 DEBUG oslo_concurrency.lockutils [req-60e8818c-2a4d-4a54-8de5-09ffe566c49e req-b274f3a5-b4b5-48ac-9108-cc0cd1092139 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 31 02:59:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 31 02:59:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 31 02:59:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:10.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:10 np0005603621 nova_compute[247399]: 2026-01-31 07:59:10.795 247403 DEBUG nova.storage.rbd_utils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] creating snapshot(snap) on rbd image(7b22f4ab-bd25-45c1-a738-6910f1407753) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:59:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 463 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 6.0 MiB/s wr, 167 op/s
Jan 31 02:59:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 31 02:59:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 31 02:59:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 31 02:59:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:12 np0005603621 nova_compute[247399]: 2026-01-31 07:59:12.191 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:12 np0005603621 nova_compute[247399]: 2026-01-31 07:59:12.714 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:12.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 525 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 7.7 MiB/s wr, 219 op/s
Jan 31 02:59:13 np0005603621 nova_compute[247399]: 2026-01-31 07:59:13.365 247403 INFO nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Snapshot image upload complete#033[00m
Jan 31 02:59:13 np0005603621 nova_compute[247399]: 2026-01-31 07:59:13.365 247403 DEBUG nova.compute.manager [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:14.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.174 247403 INFO nova.compute.manager [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Shelve offloading#033[00m
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.181 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance destroyed successfully.#033[00m
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.181 247403 DEBUG nova.compute.manager [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.184 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.184 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquired lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.184 247403 DEBUG nova.network.neutron [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:59:14 np0005603621 nova_compute[247399]: 2026-01-31 07:59:14.410 247403 DEBUG nova.network.neutron [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:59:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:15 np0005603621 nova_compute[247399]: 2026-01-31 07:59:15.156 247403 DEBUG nova.network.neutron [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 563 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 12 MiB/s wr, 286 op/s
Jan 31 02:59:15 np0005603621 nova_compute[247399]: 2026-01-31 07:59:15.646 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Releasing lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:15 np0005603621 nova_compute[247399]: 2026-01-31 07:59:15.652 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance destroyed successfully.#033[00m
Jan 31 02:59:15 np0005603621 nova_compute[247399]: 2026-01-31 07:59:15.652 247403 DEBUG nova.objects.instance [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lazy-loading 'resources' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:16.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:16 np0005603621 nova_compute[247399]: 2026-01-31 07:59:16.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:16 np0005603621 nova_compute[247399]: 2026-01-31 07:59:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 02:59:16 np0005603621 nova_compute[247399]: 2026-01-31 07:59:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 02:59:16 np0005603621 nova_compute[247399]: 2026-01-31 07:59:16.399 247403 INFO nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Deleting instance files /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602_del#033[00m
Jan 31 02:59:16 np0005603621 nova_compute[247399]: 2026-01-31 07:59:16.400 247403 INFO nova.virt.libvirt.driver [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Deletion of /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602_del complete#033[00m
Jan 31 02:59:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 31 02:59:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 31 02:59:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 31 02:59:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.038 247403 INFO nova.scheduler.client.report [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Deleted allocations for instance a24ac16c-df64-4cef-a252-1f1c38920602#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.158 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.159 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.159 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.159 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.196 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.313 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.313 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 507 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 9.4 MiB/s wr, 256 op/s
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.375 247403 DEBUG oslo_concurrency.processutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.716 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114985722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.826 247403 DEBUG oslo_concurrency.processutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.832 247403 DEBUG nova.compute.provider_tree [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:17 np0005603621 nova_compute[247399]: 2026-01-31 07:59:17.887 247403 DEBUG nova.scheduler.client.report [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:18 np0005603621 nova_compute[247399]: 2026-01-31 07:59:18.109 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:18.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:18 np0005603621 nova_compute[247399]: 2026-01-31 07:59:18.254 247403 DEBUG oslo_concurrency.lockutils [None req-de7d4e7f-49fa-4e16-a307-81da4f32ffa5 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 23.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:18 np0005603621 podman[277547]: 2026-01-31 07:59:18.510695256 +0000 UTC m=+0.069015910 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 02:59:18 np0005603621 podman[277546]: 2026-01-31 07:59:18.511377887 +0000 UTC m=+0.069581608 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:18.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 507 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.7 MiB/s wr, 212 op/s
Jan 31 02:59:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:20.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.267 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.323 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.324 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.325 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:20.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.975 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Acquiring lock "a24ac16c-df64-4cef-a252-1f1c38920602" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.975 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:20 np0005603621 nova_compute[247399]: 2026-01-31 07:59:20.976 247403 INFO nova.compute.manager [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Unshelving#033[00m
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.295 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.296 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.301 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'pci_requests' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:21.321 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.321 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:21.322 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 02:59:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 493 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.8 MiB/s wr, 216 op/s
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.376 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'numa_topology' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.521 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.521 247403 INFO nova.compute.claims [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 02:59:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:21 np0005603621 nova_compute[247399]: 2026-01-31 07:59:21.826 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:22.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.199 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742171325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.233 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.237 247403 DEBUG nova.compute.provider_tree [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.267 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.315 247403 DEBUG nova.scheduler.client.report [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.588 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846347.586904, a24ac16c-df64-4cef-a252-1f1c38920602 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.589 247403 INFO nova.compute.manager [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.717 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.767 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.770 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.770 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.770 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.771 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:22 np0005603621 nova_compute[247399]: 2026-01-31 07:59:22.789 247403 DEBUG nova.compute.manager [None req-099f4bec-223c-4beb-b7e0-d75f12e2d1fa - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:22.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.100 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Acquiring lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.101 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Acquired lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.101 247403 DEBUG nova.network.neutron [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 02:59:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458097885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.282 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 530 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.6 MiB/s wr, 170 op/s
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.381 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.381 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.412 247403 DEBUG nova.network.neutron [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.505 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.506 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4435MB free_disk=20.782012939453125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.506 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.506 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.748 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.748 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance a24ac16c-df64-4cef-a252-1f1c38920602 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.748 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 02:59:23 np0005603621 nova_compute[247399]: 2026-01-31 07:59:23.749 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.043 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:24.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.185 247403 DEBUG nova.network.neutron [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.257 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Releasing lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.259 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.260 247403 INFO nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Creating image(s)#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.287 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.292 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:24.325 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885097392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.456 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.462 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.525 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.561 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.567 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Acquiring lock "2c30323a933d4e6a87b6be9d55493d4a828d8e3f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.568 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "2c30323a933d4e6a87b6be9d55493d4a828d8e3f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.592 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:24.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.842 247403 DEBUG nova.virt.libvirt.imagebackend [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/7b22f4ab-bd25-45c1-a738-6910f1407753/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/7b22f4ab-bd25-45c1-a738-6910f1407753/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.901 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.901 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.905 247403 DEBUG nova.virt.libvirt.imagebackend [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Selected location: {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/7b22f4ab-bd25-45c1-a738-6910f1407753/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 02:59:24 np0005603621 nova_compute[247399]: 2026-01-31 07:59:24.906 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] cloning images/7b22f4ab-bd25-45c1-a738-6910f1407753@snap to None/a24ac16c-df64-4cef-a252-1f1c38920602_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.201 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "2c30323a933d4e6a87b6be9d55493d4a828d8e3f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 509 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.383 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'migration_context' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.557 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] flattening vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.666 247403 DEBUG nova.compute.manager [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.666 247403 DEBUG nova.compute.manager [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing instance network info cache due to event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.666 247403 DEBUG oslo_concurrency.lockutils [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.667 247403 DEBUG oslo_concurrency.lockutils [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.667 247403 DEBUG nova.network.neutron [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:59:25 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.991 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Image rbd:vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.991 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.991 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Ensure instance console log exists: /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.992 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.992 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.992 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.993 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:58:54Z,direct_url=<?>,disk_format='raw',id=7b22f4ab-bd25-45c1-a738-6910f1407753,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1524536714-shelved',owner='0f3a75a965fc495bbe02cb5bfad2053b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:59:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 02:59:25 np0005603621 nova_compute[247399]: 2026-01-31 07:59:25.996 247403 WARNING nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.003 247403 DEBUG nova.virt.libvirt.host [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.004 247403 DEBUG nova.virt.libvirt.host [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.008 247403 DEBUG nova.virt.libvirt.host [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.008 247403 DEBUG nova.virt.libvirt.host [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.009 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.009 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T07:58:54Z,direct_url=<?>,disk_format='raw',id=7b22f4ab-bd25-45c1-a738-6910f1407753,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1524536714-shelved',owner='0f3a75a965fc495bbe02cb5bfad2053b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T07:59:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.009 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.010 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.010 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.010 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.010 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.010 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.010 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.011 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.011 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.011 247403 DEBUG nova.virt.hardware [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.011 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.105 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:59:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009175907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:59:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.532 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.556 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.559 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.900 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.901 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.901 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.901 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 02:59:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 02:59:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768144283' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 02:59:26 np0005603621 nova_compute[247399]: 2026-01-31 07:59:26.998 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.000 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'pci_devices' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.064 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] End _get_guest_xml xml=<domain type="kvm">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <uuid>a24ac16c-df64-4cef-a252-1f1c38920602</uuid>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <name>instance-0000002d</name>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1524536714</nova:name>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 07:59:25</nova:creationTime>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:user uuid="2d20e7b6189c4916947ddae2155da8cf">tempest-UnshelveToHostMultiNodesTest-849130801-project-member</nova:user>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <nova:project uuid="0f3a75a965fc495bbe02cb5bfad2053b">tempest-UnshelveToHostMultiNodesTest-849130801</nova:project>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="7b22f4ab-bd25-45c1-a738-6910f1407753"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <system>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <entry name="serial">a24ac16c-df64-4cef-a252-1f1c38920602</entry>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <entry name="uuid">a24ac16c-df64-4cef-a252-1f1c38920602</entry>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </system>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <os>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </os>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <features>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </features>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </clock>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  <devices>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk.config">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      </source>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      </auth>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </disk>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/console.log" append="off"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </serial>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <video>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </video>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </rng>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 02:59:27 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 02:59:27 np0005603621 nova_compute[247399]:  </devices>
Jan 31 02:59:27 np0005603621 nova_compute[247399]: </domain>
Jan 31 02:59:27 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.167 247403 DEBUG nova.network.neutron [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updated VIF entry in instance network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.168 247403 DEBUG nova.network.neutron [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.175 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.176 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.176 247403 INFO nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Using config drive#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.195 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.233 247403 DEBUG oslo_concurrency.lockutils [req-1386cb18-5519-40be-8df8-ade504021155 req-ad1c9f12-0246-4411-aa74-dd25160433db fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.250 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 497 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.7 MiB/s wr, 287 op/s
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.378 247403 DEBUG nova.objects.instance [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lazy-loading 'keypairs' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.534 247403 INFO nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Creating config drive at /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.537 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpknwod3w4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.658 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpknwod3w4" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.681 247403 DEBUG nova.storage.rbd_utils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] rbd image a24ac16c-df64-4cef-a252-1f1c38920602_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.686 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config a24ac16c-df64-4cef-a252-1f1c38920602_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.890 247403 DEBUG oslo_concurrency.processutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config a24ac16c-df64-4cef-a252-1f1c38920602_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:27 np0005603621 nova_compute[247399]: 2026-01-31 07:59:27.891 247403 INFO nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Deleting local config drive /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602/disk.config because it was imported into RBD.#033[00m
Jan 31 02:59:27 np0005603621 systemd-machined[212769]: New machine qemu-20-instance-0000002d.
Jan 31 02:59:27 np0005603621 systemd[1]: Started Virtual Machine qemu-20-instance-0000002d.
Jan 31 02:59:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.500 247403 DEBUG nova.compute.manager [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.501 247403 DEBUG nova.virt.libvirt.driver [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.501 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846368.5008564, a24ac16c-df64-4cef-a252-1f1c38920602 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.501 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] VM Resumed (Lifecycle Event)#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.507 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance spawned successfully.#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.571 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.574 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.666 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.667 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846368.5010228, a24ac16c-df64-4cef-a252-1f1c38920602 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.667 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] VM Started (Lifecycle Event)#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.700 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.702 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 02:59:28 np0005603621 nova_compute[247399]: 2026-01-31 07:59:28.790 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 02:59:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:28.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 497 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.1 MiB/s wr, 243 op/s
Jan 31 02:59:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 31 02:59:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 31 02:59:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 31 02:59:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:30.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:30.480 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:30.480 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:30.481 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:30.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:31 np0005603621 nova_compute[247399]: 2026-01-31 07:59:31.288 247403 DEBUG nova.compute.manager [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 497 MiB data, 759 MiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.5 MiB/s wr, 378 op/s
Jan 31 02:59:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:31 np0005603621 nova_compute[247399]: 2026-01-31 07:59:31.573 247403 DEBUG oslo_concurrency.lockutils [None req-79d06895-1082-469f-9529-39451e2c5468 aca4a3ac27c249718268b71ac43ef4f9 546c0d49cea64695b5f4ee15ea24aa42 - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:32.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:32 np0005603621 nova_compute[247399]: 2026-01-31 07:59:32.203 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:32 np0005603621 nova_compute[247399]: 2026-01-31 07:59:32.722 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:32.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 443 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 4.7 MiB/s wr, 394 op/s
Jan 31 02:59:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:34.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 405 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 370 op/s
Jan 31 02:59:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.454 247403 DEBUG nova.compute.manager [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.454 247403 DEBUG nova.compute.manager [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing instance network info cache due to event network-changed-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.454 247403 DEBUG oslo_concurrency.lockutils [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.454 247403 DEBUG oslo_concurrency.lockutils [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.454 247403 DEBUG nova.network.neutron [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Refreshing network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 02:59:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 31 02:59:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 31 02:59:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.655 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "a24ac16c-df64-4cef-a252-1f1c38920602" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.656 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.656 247403 INFO nova.compute.manager [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Shelving#033[00m
Jan 31 02:59:36 np0005603621 nova_compute[247399]: 2026-01-31 07:59:36.682 247403 DEBUG nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 02:59:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:36.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:37 np0005603621 nova_compute[247399]: 2026-01-31 07:59:37.206 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 405 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 875 KiB/s wr, 290 op/s
Jan 31 02:59:37 np0005603621 nova_compute[247399]: 2026-01-31 07:59:37.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:38 np0005603621 nova_compute[247399]: 2026-01-31 07:59:38.462 247403 DEBUG nova.network.neutron [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updated VIF entry in instance network info cache for port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 02:59:38 np0005603621 nova_compute[247399]: 2026-01-31 07:59:38.462 247403 DEBUG nova.network.neutron [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [{"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_07:59:38
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', '.mgr', '.rgw.root', 'images', 'vms']
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 02:59:38 np0005603621 nova_compute[247399]: 2026-01-31 07:59:38.533 247403 DEBUG oslo_concurrency.lockutils [req-17184f2e-8f85-4b4c-806b-785b216bc9bb req-ac891150-ca2e-4172-88b2-45db43f7c193 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 02:59:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 02:59:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 02:59:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41013877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 02:59:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 02:59:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41013877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 02:59:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 405 MiB data, 699 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 112 op/s
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.131 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.132 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.133 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.133 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.133 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.135 247403 INFO nova.compute.manager [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Terminating instance#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.136 247403 DEBUG nova.compute.manager [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 02:59:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:40 np0005603621 kernel: tapd6392acd-fa (unregistering): left promiscuous mode
Jan 31 02:59:40 np0005603621 NetworkManager[49013]: <info>  [1769846380.1941] device (tapd6392acd-fa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 02:59:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:59:40Z|00102|binding|INFO|Releasing lport d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 from this chassis (sb_readonly=0)
Jan 31 02:59:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:59:40Z|00103|binding|INFO|Setting lport d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 down in Southbound
Jan 31 02:59:40 np0005603621 ovn_controller[149152]: 2026-01-31T07:59:40Z|00104|binding|INFO|Removing iface tapd6392acd-fa ovn-installed in OVS
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.208 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Jan 31 02:59:40 np0005603621 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002b.scope: Consumed 15.376s CPU time.
Jan 31 02:59:40 np0005603621 systemd-machined[212769]: Machine qemu-18-instance-0000002b terminated.
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.359 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.373 247403 INFO nova.virt.libvirt.driver [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Instance destroyed successfully.#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.374 247403 DEBUG nova.objects.instance [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lazy-loading 'resources' on Instance uuid 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.534 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:1a:07 10.100.0.5'], port_security=['fa:16:3e:04:1a:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c28cb14d-420d-4734-af33-c452602f84f5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dbf6b6306ca449dfb064371ec88681f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f2274c0-e32f-4e92-b2d3-ffc323775fce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=01325513-d5d2-4c4f-b0c6-fdb3a77ce88f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.535 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 in datapath c28cb14d-420d-4734-af33-c452602f84f5 unbound from our chassis#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.537 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c28cb14d-420d-4734-af33-c452602f84f5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.539 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a641b52e-d4da-40a2-9960-31c3767fbd39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.539 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5 namespace which is not needed anymore#033[00m
Jan 31 02:59:40 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [NOTICE]   (276767) : haproxy version is 2.8.14-c23fe91
Jan 31 02:59:40 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [NOTICE]   (276767) : path to executable is /usr/sbin/haproxy
Jan 31 02:59:40 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [WARNING]  (276767) : Exiting Master process...
Jan 31 02:59:40 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [WARNING]  (276767) : Exiting Master process...
Jan 31 02:59:40 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [ALERT]    (276767) : Current worker (276769) exited with code 143 (Terminated)
Jan 31 02:59:40 np0005603621 neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5[276761]: [WARNING]  (276767) : All workers exited. Exiting... (0)
Jan 31 02:59:40 np0005603621 systemd[1]: libpod-1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111.scope: Deactivated successfully.
Jan 31 02:59:40 np0005603621 podman[278276]: 2026-01-31 07:59:40.662698252 +0000 UTC m=+0.049871602 container died 1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 02:59:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111-userdata-shm.mount: Deactivated successfully.
Jan 31 02:59:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f94284ea8aa915ea848961abcc70a48276a1d8a944c078eefd484013b3860a31-merged.mount: Deactivated successfully.
Jan 31 02:59:40 np0005603621 podman[278276]: 2026-01-31 07:59:40.706581315 +0000 UTC m=+0.093754655 container cleanup 1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 02:59:40 np0005603621 systemd[1]: libpod-conmon-1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111.scope: Deactivated successfully.
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.715 247403 DEBUG nova.virt.libvirt.vif [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T07:58:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1030320071',display_name='tempest-FloatingIPsAssociationTestJSON-server-1030320071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1030320071',id=43,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T07:58:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dbf6b6306ca449dfb064371ec88681f5',ramdisk_id='',reservation_id='r-q0lh8mgv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-338180924',owner_user_name='tempest-FloatingIPsAssociationTestJSON-338180924-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T07:58:39Z,user_data=None,user_id='e5b93162787e405080a5a790c1847434',uuid=38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.716 247403 DEBUG nova.network.os_vif_util [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Converting VIF {"id": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "address": "fa:16:3e:04:1a:07", "network": {"id": "c28cb14d-420d-4734-af33-c452602f84f5", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-462404100-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dbf6b6306ca449dfb064371ec88681f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6392acd-fa", "ovs_interfaceid": "d6392acd-fa98-4f6e-b62a-ceffd7ca0c29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.717 247403 DEBUG nova.network.os_vif_util [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.717 247403 DEBUG os_vif [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.719 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6392acd-fa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.722 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.725 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.728 247403 INFO os_vif [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:1a:07,bridge_name='br-int',has_traffic_filtering=True,id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29,network=Network(c28cb14d-420d-4734-af33-c452602f84f5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6392acd-fa')#033[00m
Jan 31 02:59:40 np0005603621 podman[278305]: 2026-01-31 07:59:40.769885005 +0000 UTC m=+0.043721159 container remove 1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.774 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ba492073-fb3a-4b98-b030-c8e617ca755c]: (4, ('Sat Jan 31 07:59:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5 (1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111)\n1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111\nSat Jan 31 07:59:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5 (1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111)\n1640ff4bc194505b0101238ac1d039ecca848e9dfdfd618441c939e48c1ac111\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.776 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d384386-cb1a-4a10-8f09-952de22aa260]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.778 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc28cb14d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 kernel: tapc28cb14d-40: left promiscuous mode
Jan 31 02:59:40 np0005603621 nova_compute[247399]: 2026-01-31 07:59:40.788 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.793 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[58257a1d-c752-46e7-b094-1c39aaf75103]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.810 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff71ae2-55ed-415b-8829-3b2e0d7cacfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.811 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[42669689-cf6f-417a-9f61-bea6bfd71873]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.824 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e0eaf1-3fcb-4655-84e2-5999231e4ceb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562374, 'reachable_time': 15550, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278336, 'error': None, 'target': 'ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.827 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c28cb14d-420d-4734-af33-c452602f84f5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 02:59:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 07:59:40.827 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e77b9063-708a-405e-8a0b-aaab964f1dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 02:59:40 np0005603621 systemd[1]: run-netns-ovnmeta\x2dc28cb14d\x2d420d\x2d4734\x2daf33\x2dc452602f84f5.mount: Deactivated successfully.
Jan 31 02:59:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.284 247403 INFO nova.virt.libvirt.driver [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Deleting instance files /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_del#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.286 247403 INFO nova.virt.libvirt.driver [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Deletion of /var/lib/nova/instances/38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e_del complete#033[00m
Jan 31 02:59:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 409 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 369 KiB/s wr, 145 op/s
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.421 247403 INFO nova.compute.manager [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Took 1.29 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.422 247403 DEBUG oslo.service.loopingcall [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.422 247403 DEBUG nova.compute.manager [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.423 247403 DEBUG nova.network.neutron [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 02:59:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 02:59:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 02:59:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.929 247403 DEBUG nova.compute.manager [req-8ed5cf28-a7c8-4bf8-882e-67e567b7e48a req-42cb09f7-bb30-4876-96e5-ff0f4f509ae2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-vif-unplugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.930 247403 DEBUG oslo_concurrency.lockutils [req-8ed5cf28-a7c8-4bf8-882e-67e567b7e48a req-42cb09f7-bb30-4876-96e5-ff0f4f509ae2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.930 247403 DEBUG oslo_concurrency.lockutils [req-8ed5cf28-a7c8-4bf8-882e-67e567b7e48a req-42cb09f7-bb30-4876-96e5-ff0f4f509ae2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.930 247403 DEBUG oslo_concurrency.lockutils [req-8ed5cf28-a7c8-4bf8-882e-67e567b7e48a req-42cb09f7-bb30-4876-96e5-ff0f4f509ae2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.931 247403 DEBUG nova.compute.manager [req-8ed5cf28-a7c8-4bf8-882e-67e567b7e48a req-42cb09f7-bb30-4876-96e5-ff0f4f509ae2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] No waiting events found dispatching network-vif-unplugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:41 np0005603621 nova_compute[247399]: 2026-01-31 07:59:41.931 247403 DEBUG nova.compute.manager [req-8ed5cf28-a7c8-4bf8-882e-67e567b7e48a req-42cb09f7-bb30-4876-96e5-ff0f4f509ae2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-vif-unplugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 02:59:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.720 247403 DEBUG nova.network.neutron [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.725 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.742 247403 DEBUG nova.compute.manager [req-ff0a09f6-c4a7-4fc2-b189-3d2638e7fc6a req-f29a0c9c-4571-4d8f-879e-4c745e79af8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-vif-deleted-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.742 247403 INFO nova.compute.manager [req-ff0a09f6-c4a7-4fc2-b189-3d2638e7fc6a req-f29a0c9c-4571-4d8f-879e-4c745e79af8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Neutron deleted interface d6392acd-fa98-4f6e-b62a-ceffd7ca0c29; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.743 247403 DEBUG nova.network.neutron [req-ff0a09f6-c4a7-4fc2-b189-3d2638e7fc6a req-f29a0c9c-4571-4d8f-879e-4c745e79af8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c602e6d7-b9c7-4dff-9a26-e40b7e8218ee does not exist
Jan 31 02:59:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b9ad68dd-32f5-4b8c-98ab-531a85679e3a does not exist
Jan 31 02:59:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 222fe521-34bc-43f4-b8f9-b74c149310cf does not exist
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 02:59:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.828 247403 INFO nova.compute.manager [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Took 1.41 seconds to deallocate network for instance.#033[00m
Jan 31 02:59:42 np0005603621 nova_compute[247399]: 2026-01-31 07:59:42.840 247403 DEBUG nova.compute.manager [req-ff0a09f6-c4a7-4fc2-b189-3d2638e7fc6a req-f29a0c9c-4571-4d8f-879e-4c745e79af8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Detach interface failed, port_id=d6392acd-fa98-4f6e-b62a-ceffd7ca0c29, reason: Instance 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 02:59:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:42.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.056 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.057 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.142 247403 DEBUG oslo_concurrency.processutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 02:59:43 np0005603621 podman[278479]: 2026-01-31 07:59:43.313671081 +0000 UTC m=+0.098248125 container create f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 02:59:43 np0005603621 podman[278479]: 2026-01-31 07:59:43.234227805 +0000 UTC m=+0.018804889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:59:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 336 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 997 KiB/s rd, 2.5 MiB/s wr, 185 op/s
Jan 31 02:59:43 np0005603621 systemd[1]: Started libpod-conmon-f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721.scope.
Jan 31 02:59:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:59:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 02:59:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845672507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.631 247403 DEBUG oslo_concurrency.processutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.638 247403 DEBUG nova.compute.provider_tree [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 02:59:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 02:59:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 02:59:43 np0005603621 podman[278479]: 2026-01-31 07:59:43.693441014 +0000 UTC m=+0.478018078 container init f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:43 np0005603621 podman[278479]: 2026-01-31 07:59:43.700708621 +0000 UTC m=+0.485285665 container start f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:43 np0005603621 agitated_cerf[278514]: 167 167
Jan 31 02:59:43 np0005603621 systemd[1]: libpod-f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721.scope: Deactivated successfully.
Jan 31 02:59:43 np0005603621 conmon[278514]: conmon f92f8d981fb8cb347180 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721.scope/container/memory.events
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.712 247403 DEBUG nova.scheduler.client.report [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.790 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:43 np0005603621 podman[278479]: 2026-01-31 07:59:43.802578509 +0000 UTC m=+0.587155563 container attach f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 02:59:43 np0005603621 podman[278479]: 2026-01-31 07:59:43.803809858 +0000 UTC m=+0.588386922 container died f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:59:43 np0005603621 nova_compute[247399]: 2026-01-31 07:59:43.915 247403 INFO nova.scheduler.client.report [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Deleted allocations for instance 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.073 247403 DEBUG oslo_concurrency.lockutils [None req-f3073bc3-3e77-487d-8bf7-26235c4398c4 e5b93162787e405080a5a790c1847434 dbf6b6306ca449dfb064371ec88681f5 - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.078 247403 DEBUG nova.compute.manager [req-583e7d9c-3905-45f2-b86a-d45d2b3a7f3f req-3b57428b-b2af-47bd-b5ba-9cdb38905cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.078 247403 DEBUG oslo_concurrency.lockutils [req-583e7d9c-3905-45f2-b86a-d45d2b3a7f3f req-3b57428b-b2af-47bd-b5ba-9cdb38905cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.078 247403 DEBUG oslo_concurrency.lockutils [req-583e7d9c-3905-45f2-b86a-d45d2b3a7f3f req-3b57428b-b2af-47bd-b5ba-9cdb38905cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.079 247403 DEBUG oslo_concurrency.lockutils [req-583e7d9c-3905-45f2-b86a-d45d2b3a7f3f req-3b57428b-b2af-47bd-b5ba-9cdb38905cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.079 247403 DEBUG nova.compute.manager [req-583e7d9c-3905-45f2-b86a-d45d2b3a7f3f req-3b57428b-b2af-47bd-b5ba-9cdb38905cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] No waiting events found dispatching network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 02:59:44 np0005603621 nova_compute[247399]: 2026-01-31 07:59:44.079 247403 WARNING nova.compute.manager [req-583e7d9c-3905-45f2-b86a-d45d2b3a7f3f req-3b57428b-b2af-47bd-b5ba-9cdb38905cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Received unexpected event network-vif-plugged-d6392acd-fa98-4f6e-b62a-ceffd7ca0c29 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 02:59:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d5de97c3431ea24c3e11b6c0c2be879aedb54602cfb08eb3d79419dc655ca797-merged.mount: Deactivated successfully.
Jan 31 02:59:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:44.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:44 np0005603621 podman[278479]: 2026-01-31 07:59:44.267140315 +0000 UTC m=+1.051717399 container remove f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 02:59:44 np0005603621 systemd[1]: libpod-conmon-f92f8d981fb8cb34718006fab85bceec4b8c8c987adb9c1dea17619bbbb9c721.scope: Deactivated successfully.
Jan 31 02:59:44 np0005603621 podman[278542]: 2026-01-31 07:59:44.418076897 +0000 UTC m=+0.026963054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:59:44 np0005603621 podman[278542]: 2026-01-31 07:59:44.53832917 +0000 UTC m=+0.147215297 container create 1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bell, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:59:44 np0005603621 systemd[1]: Started libpod-conmon-1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4.scope.
Jan 31 02:59:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:59:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd40f8751f82170f577e2f7860899f67d30a8af70fa7f7456b430007fc8d71e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd40f8751f82170f577e2f7860899f67d30a8af70fa7f7456b430007fc8d71e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd40f8751f82170f577e2f7860899f67d30a8af70fa7f7456b430007fc8d71e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd40f8751f82170f577e2f7860899f67d30a8af70fa7f7456b430007fc8d71e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd40f8751f82170f577e2f7860899f67d30a8af70fa7f7456b430007fc8d71e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:44 np0005603621 podman[278542]: 2026-01-31 07:59:44.749416455 +0000 UTC m=+0.358302602 container init 1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bell, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:59:44 np0005603621 podman[278542]: 2026-01-31 07:59:44.756228238 +0000 UTC m=+0.365114365 container start 1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 02:59:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:44.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:44 np0005603621 podman[278542]: 2026-01-31 07:59:44.890925362 +0000 UTC m=+0.499811499 container attach 1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 272 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.6 MiB/s wr, 223 op/s
Jan 31 02:59:45 np0005603621 awesome_bell[278558]: --> passed data devices: 0 physical, 1 LVM
Jan 31 02:59:45 np0005603621 awesome_bell[278558]: --> relative data size: 1.0
Jan 31 02:59:45 np0005603621 awesome_bell[278558]: --> All data devices are unavailable
Jan 31 02:59:45 np0005603621 systemd[1]: libpod-1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4.scope: Deactivated successfully.
Jan 31 02:59:45 np0005603621 podman[278574]: 2026-01-31 07:59:45.597610245 +0000 UTC m=+0.026973715 container died 1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:59:45 np0005603621 nova_compute[247399]: 2026-01-31 07:59:45.722 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:46.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3dd40f8751f82170f577e2f7860899f67d30a8af70fa7f7456b430007fc8d71e-merged.mount: Deactivated successfully.
Jan 31 02:59:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:46 np0005603621 nova_compute[247399]: 2026-01-31 07:59:46.733 247403 DEBUG nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 02:59:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:47 np0005603621 podman[278574]: 2026-01-31 07:59:47.028140846 +0000 UTC m=+1.457504306 container remove 1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 02:59:47 np0005603621 systemd[1]: libpod-conmon-1b2518889c0ca17867957c08d6c0f3e69dbdf0b8321cc023aae4946cc0e0e5a4.scope: Deactivated successfully.
Jan 31 02:59:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 274 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.4 MiB/s wr, 267 op/s
Jan 31 02:59:47 np0005603621 podman[278729]: 2026-01-31 07:59:47.540596431 +0000 UTC m=+0.019663157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:59:47 np0005603621 podman[278729]: 2026-01-31 07:59:47.702405844 +0000 UTC m=+0.181472550 container create 1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_liskov, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:47 np0005603621 nova_compute[247399]: 2026-01-31 07:59:47.727 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:47 np0005603621 systemd[1]: Started libpod-conmon-1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c.scope.
Jan 31 02:59:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:59:48 np0005603621 podman[278729]: 2026-01-31 07:59:48.178429929 +0000 UTC m=+0.657496655 container init 1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:59:48 np0005603621 podman[278729]: 2026-01-31 07:59:48.183653602 +0000 UTC m=+0.662720308 container start 1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 02:59:48 np0005603621 interesting_liskov[278747]: 167 167
Jan 31 02:59:48 np0005603621 systemd[1]: libpod-1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c.scope: Deactivated successfully.
Jan 31 02:59:48 np0005603621 conmon[278747]: conmon 1ed9c70b4772b976b953 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c.scope/container/memory.events
Jan 31 02:59:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:48 np0005603621 podman[278729]: 2026-01-31 07:59:48.358047499 +0000 UTC m=+0.837114235 container attach 1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_liskov, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 02:59:48 np0005603621 podman[278729]: 2026-01-31 07:59:48.358620207 +0000 UTC m=+0.837686903 container died 1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 02:59:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 02:59:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6798 writes, 29K keys, 6794 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6797 writes, 6793 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1535 writes, 6720 keys, 1534 commit groups, 1.0 writes per commit group, ingest: 10.56 MB, 0.02 MB/s#012Interval WAL: 1534 writes, 1533 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     28.2      1.35              0.09        17    0.079       0      0       0.0       0.0#012  L6      1/0    8.80 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7     47.3     39.2      3.59              0.32        16    0.224     80K   8909       0.0       0.0#012 Sum      1/0    8.80 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     34.4     36.2      4.94              0.41        33    0.150     80K   8909       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   4.7     22.5     23.3      2.23              0.10         8    0.278     23K   2586       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     47.3     39.2      3.59              0.32        16    0.224     80K   8909       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     28.2      1.34              0.09        16    0.084       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.037, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.17 GB read, 0.07 MB/s read, 4.9 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.08 MB/s read, 2.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 17.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000232 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1033,17.20 MB,5.65735%) FilterBlock(34,229.55 KB,0.0737391%) IndexBlock(34,421.95 KB,0.135547%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 02:59:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 02:59:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:48.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00649187779173646 of space, bias 1.0, pg target 1.947563337520938 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 02:59:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 02:59:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d606a0d2b4338b7675a013941e0737285e38902025febce896a166f8115fd99b-merged.mount: Deactivated successfully.
Jan 31 02:59:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 274 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 234 op/s
Jan 31 02:59:49 np0005603621 podman[278729]: 2026-01-31 07:59:49.51978711 +0000 UTC m=+1.998853816 container remove 1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:49 np0005603621 systemd[1]: libpod-conmon-1ed9c70b4772b976b953ea547bef837638901523d48fee7fc23243a11191937c.scope: Deactivated successfully.
Jan 31 02:59:49 np0005603621 nova_compute[247399]: 2026-01-31 07:59:49.657 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:49 np0005603621 nova_compute[247399]: 2026-01-31 07:59:49.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:49 np0005603621 podman[278793]: 2026-01-31 07:59:49.63292272 +0000 UTC m=+0.026509661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:59:49 np0005603621 podman[278793]: 2026-01-31 07:59:49.933033549 +0000 UTC m=+0.326620470 container create 29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:50 np0005603621 systemd[1]: Started libpod-conmon-29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e.scope.
Jan 31 02:59:50 np0005603621 podman[278765]: 2026-01-31 07:59:50.052333143 +0000 UTC m=+1.184524205 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:59:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d185d8676ff535d5a3465ce5dea9635b7e5b13753146791eea5b622193cf5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d185d8676ff535d5a3465ce5dea9635b7e5b13753146791eea5b622193cf5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d185d8676ff535d5a3465ce5dea9635b7e5b13753146791eea5b622193cf5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17d185d8676ff535d5a3465ce5dea9635b7e5b13753146791eea5b622193cf5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:50 np0005603621 podman[278764]: 2026-01-31 07:59:50.098140965 +0000 UTC m=+1.238121010 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 02:59:50 np0005603621 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 31 02:59:50 np0005603621 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000002d.scope: Consumed 12.778s CPU time.
Jan 31 02:59:50 np0005603621 systemd-machined[212769]: Machine qemu-20-instance-0000002d terminated.
Jan 31 02:59:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:50 np0005603621 podman[278793]: 2026-01-31 07:59:50.268968311 +0000 UTC m=+0.662555262 container init 29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:50 np0005603621 podman[278793]: 2026-01-31 07:59:50.276674823 +0000 UTC m=+0.670261744 container start 29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:59:50 np0005603621 podman[278793]: 2026-01-31 07:59:50.386251521 +0000 UTC m=+0.779838442 container attach 29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:50 np0005603621 nova_compute[247399]: 2026-01-31 07:59:50.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:50 np0005603621 nova_compute[247399]: 2026-01-31 07:59:50.750 247403 INFO nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance shutdown successfully after 14 seconds.#033[00m
Jan 31 02:59:50 np0005603621 nova_compute[247399]: 2026-01-31 07:59:50.756 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance destroyed successfully.#033[00m
Jan 31 02:59:50 np0005603621 nova_compute[247399]: 2026-01-31 07:59:50.756 247403 DEBUG nova.objects.instance [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lazy-loading 'numa_topology' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 02:59:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 02:59:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:50.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 02:59:50 np0005603621 competent_borg[278834]: {
Jan 31 02:59:50 np0005603621 competent_borg[278834]:    "0": [
Jan 31 02:59:50 np0005603621 competent_borg[278834]:        {
Jan 31 02:59:50 np0005603621 competent_borg[278834]:            "devices": [
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "/dev/loop3"
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            ],
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "lv_name": "ceph_lv0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "lv_size": "7511998464",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "name": "ceph_lv0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "tags": {
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.cephx_lockbox_secret": "",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.cluster_name": "ceph",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.crush_device_class": "",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.encrypted": "0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.osd_id": "0",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.type": "block",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:                "ceph.vdo": "0"
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            },
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "type": "block",
Jan 31 02:59:51 np0005603621 competent_borg[278834]:            "vg_name": "ceph_vg0"
Jan 31 02:59:51 np0005603621 competent_borg[278834]:        }
Jan 31 02:59:51 np0005603621 competent_borg[278834]:    ]
Jan 31 02:59:51 np0005603621 competent_borg[278834]: }
Jan 31 02:59:51 np0005603621 systemd[1]: libpod-29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e.scope: Deactivated successfully.
Jan 31 02:59:51 np0005603621 podman[278793]: 2026-01-31 07:59:51.021565679 +0000 UTC m=+1.415152620 container died 29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 02:59:51 np0005603621 nova_compute[247399]: 2026-01-31 07:59:51.335 247403 INFO nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Beginning cold snapshot process#033[00m
Jan 31 02:59:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 274 MiB data, 630 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 238 op/s
Jan 31 02:59:51 np0005603621 nova_compute[247399]: 2026-01-31 07:59:51.547 247403 DEBUG nova.virt.libvirt.imagebackend [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 02:59:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-17d185d8676ff535d5a3465ce5dea9635b7e5b13753146791eea5b622193cf5f-merged.mount: Deactivated successfully.
Jan 31 02:59:52 np0005603621 nova_compute[247399]: 2026-01-31 07:59:52.153 247403 DEBUG nova.storage.rbd_utils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] creating snapshot(56e49e8abcd14ac9b168f7b9a333d1da) on rbd image(a24ac16c-df64-4cef-a252-1f1c38920602_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 02:59:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:52.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:52 np0005603621 podman[278793]: 2026-01-31 07:59:52.705701006 +0000 UTC m=+3.099287927 container remove 29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 02:59:52 np0005603621 nova_compute[247399]: 2026-01-31 07:59:52.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:52 np0005603621 systemd[1]: libpod-conmon-29d4dcc48fb109aa72c8ceb7a2499d8956da2a14c76690d91e9d471544a75a8e.scope: Deactivated successfully.
Jan 31 02:59:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:52.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:53 np0005603621 podman[279050]: 2026-01-31 07:59:53.160972361 +0000 UTC m=+0.020400188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:59:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 31 02:59:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 215 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 214 op/s
Jan 31 02:59:53 np0005603621 podman[279050]: 2026-01-31 07:59:53.520812571 +0000 UTC m=+0.380240378 container create a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:59:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 31 02:59:53 np0005603621 systemd[1]: Started libpod-conmon-a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934.scope.
Jan 31 02:59:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:59:53 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 31 02:59:53 np0005603621 podman[279050]: 2026-01-31 07:59:53.986296737 +0000 UTC m=+0.845724564 container init a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldstine, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 02:59:53 np0005603621 podman[279050]: 2026-01-31 07:59:53.991961303 +0000 UTC m=+0.851389110 container start a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldstine, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 02:59:53 np0005603621 eager_goldstine[279067]: 167 167
Jan 31 02:59:53 np0005603621 systemd[1]: libpod-a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934.scope: Deactivated successfully.
Jan 31 02:59:54 np0005603621 podman[279050]: 2026-01-31 07:59:54.19000776 +0000 UTC m=+1.049435597 container attach a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldstine, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 31 02:59:54 np0005603621 podman[279050]: 2026-01-31 07:59:54.190458495 +0000 UTC m=+1.049886302 container died a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:59:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:54.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-91a2967130300dd8c3047170257b8bfd7e6ef565797ee596b5c5edf7f906f186-merged.mount: Deactivated successfully.
Jan 31 02:59:55 np0005603621 nova_compute[247399]: 2026-01-31 07:59:55.019 247403 DEBUG nova.storage.rbd_utils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] cloning vms/a24ac16c-df64-4cef-a252-1f1c38920602_disk@56e49e8abcd14ac9b168f7b9a333d1da to images/919869d2-31bd-400c-b71c-e086ee5512c0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 02:59:55 np0005603621 nova_compute[247399]: 2026-01-31 07:59:55.370 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846380.3693135, 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 02:59:55 np0005603621 nova_compute[247399]: 2026-01-31 07:59:55.371 247403 INFO nova.compute.manager [-] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 02:59:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 202 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 34 KiB/s wr, 100 op/s
Jan 31 02:59:55 np0005603621 nova_compute[247399]: 2026-01-31 07:59:55.472 247403 DEBUG nova.compute.manager [None req-cbce1b26-d1ef-4f77-8bbe-ed608d53fd1e - - - - - -] [instance: 38c0b95d-b6e9-4fff-99ad-33ac90e7ba3e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 02:59:55 np0005603621 podman[279050]: 2026-01-31 07:59:55.53963295 +0000 UTC m=+2.399060747 container remove a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 02:59:55 np0005603621 systemd[1]: libpod-conmon-a062bd53bd51024165e96c35ff6336e99a1db9a84d67cb2f354d12a722908934.scope: Deactivated successfully.
Jan 31 02:59:55 np0005603621 nova_compute[247399]: 2026-01-31 07:59:55.727 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:55 np0005603621 podman[279129]: 2026-01-31 07:59:55.660042248 +0000 UTC m=+0.019887464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 02:59:55 np0005603621 podman[279129]: 2026-01-31 07:59:55.822815791 +0000 UTC m=+0.182660967 container create 74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 02:59:56 np0005603621 systemd[1]: Started libpod-conmon-74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee.scope.
Jan 31 02:59:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 02:59:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2d313b1f338dccd9207ddf2e69011349401c4d1cc114794da2450cc543d9db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2d313b1f338dccd9207ddf2e69011349401c4d1cc114794da2450cc543d9db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2d313b1f338dccd9207ddf2e69011349401c4d1cc114794da2450cc543d9db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f2d313b1f338dccd9207ddf2e69011349401c4d1cc114794da2450cc543d9db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 02:59:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:56 np0005603621 podman[279129]: 2026-01-31 07:59:56.345101433 +0000 UTC m=+0.704946669 container init 74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 02:59:56 np0005603621 podman[279129]: 2026-01-31 07:59:56.353600489 +0000 UTC m=+0.713445675 container start 74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 02:59:56 np0005603621 podman[279129]: 2026-01-31 07:59:56.566696437 +0000 UTC m=+0.926541653 container attach 74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 02:59:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 02:59:56 np0005603621 nova_compute[247399]: 2026-01-31 07:59:56.670 247403 DEBUG nova.storage.rbd_utils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] flattening images/919869d2-31bd-400c-b71c-e086ee5512c0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 02:59:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:56.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]: {
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:        "osd_id": 0,
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:        "type": "bluestore"
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]:    }
Jan 31 02:59:57 np0005603621 intelligent_goldwasser[279146]: }
Jan 31 02:59:57 np0005603621 systemd[1]: libpod-74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee.scope: Deactivated successfully.
Jan 31 02:59:57 np0005603621 podman[279129]: 2026-01-31 07:59:57.141684868 +0000 UTC m=+1.501530054 container died 74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 02:59:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 202 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 14 KiB/s wr, 49 op/s
Jan 31 02:59:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0f2d313b1f338dccd9207ddf2e69011349401c4d1cc114794da2450cc543d9db-merged.mount: Deactivated successfully.
Jan 31 02:59:57 np0005603621 nova_compute[247399]: 2026-01-31 07:59:57.730 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 02:59:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:07:59:58.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:58 np0005603621 podman[279129]: 2026-01-31 07:59:58.662825284 +0000 UTC m=+3.022670470 container remove 74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 02:59:58 np0005603621 systemd[1]: libpod-conmon-74b212413eaac1a4fbb1c95ea056962b5d25690bdc2cabe19d6a4a080895fbee.scope: Deactivated successfully.
Jan 31 02:59:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 02:59:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 02:59:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 02:59:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:07:59:58.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 02:59:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 202 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 14 KiB/s wr, 49 op/s
Jan 31 02:59:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 02:59:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:00:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 03:00:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:00.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:00:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5b652174-640e-46c0-8119-753a4fdd8363 does not exist
Jan 31 03:00:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 29a1da0d-14b4-433a-89a9-b2d9b2ca7987 does not exist
Jan 31 03:00:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 375cb81b-027c-42be-81e8-abc313735cee does not exist
Jan 31 03:00:00 np0005603621 nova_compute[247399]: 2026-01-31 08:00:00.730 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:00:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 03:00:00 np0005603621 nova_compute[247399]: 2026-01-31 08:00:00.815 247403 DEBUG nova.storage.rbd_utils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] removing snapshot(56e49e8abcd14ac9b168f7b9a333d1da) on rbd image(a24ac16c-df64-4cef-a252-1f1c38920602_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:00:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 226 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 1.7 MiB/s wr, 65 op/s
Jan 31 03:00:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 31 03:00:01 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:00:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 31 03:00:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 31 03:00:01 np0005603621 nova_compute[247399]: 2026-01-31 08:00:01.987 247403 DEBUG nova.storage.rbd_utils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] creating snapshot(snap) on rbd image(919869d2-31bd-400c-b71c-e086ee5512c0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:00:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:02 np0005603621 nova_compute[247399]: 2026-01-31 08:00:02.732 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 31 03:00:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 31 03:00:02 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 31 03:00:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 278 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.8 MiB/s wr, 94 op/s
Jan 31 03:00:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:00:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:00:05 np0005603621 nova_compute[247399]: 2026-01-31 08:00:05.322 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846390.3207119, a24ac16c-df64-4cef-a252-1f1c38920602 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:00:05 np0005603621 nova_compute[247399]: 2026-01-31 08:00:05.322 247403 INFO nova.compute.manager [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:00:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 298 active+clean; 283 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 5.8 MiB/s wr, 90 op/s
Jan 31 03:00:05 np0005603621 nova_compute[247399]: 2026-01-31 08:00:05.387 247403 DEBUG nova.compute.manager [None req-bb5caf67-acc9-4ed8-b5c2-1e0c3a77ce67 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:00:05 np0005603621 nova_compute[247399]: 2026-01-31 08:00:05.391 247403 DEBUG nova.compute.manager [None req-bb5caf67-acc9-4ed8-b5c2-1e0c3a77ce67 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:00:05 np0005603621 nova_compute[247399]: 2026-01-31 08:00:05.469 247403 INFO nova.compute.manager [None req-bb5caf67-acc9-4ed8-b5c2-1e0c3a77ce67 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.#033[00m
Jan 31 03:00:05 np0005603621 nova_compute[247399]: 2026-01-31 08:00:05.731 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:06.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 31 03:00:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 31 03:00:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 31 03:00:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:06 np0005603621 nova_compute[247399]: 2026-01-31 08:00:06.953 247403 INFO nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Snapshot image upload complete#033[00m
Jan 31 03:00:06 np0005603621 nova_compute[247399]: 2026-01-31 08:00:06.954 247403 DEBUG nova.compute.manager [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.055 247403 INFO nova.compute.manager [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Shelve offloading#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.063 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance destroyed successfully.#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.064 247403 DEBUG nova.compute.manager [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.066 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.067 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquired lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.067 247403 DEBUG nova.network.neutron [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.377 247403 DEBUG nova.network.neutron [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:00:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 283 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 5.0 MiB/s wr, 135 op/s
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.734 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.893 247403 DEBUG nova.network.neutron [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.934 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Releasing lock "refresh_cache-a24ac16c-df64-4cef-a252-1f1c38920602" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.940 247403 INFO nova.virt.libvirt.driver [-] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Instance destroyed successfully.#033[00m
Jan 31 03:00:07 np0005603621 nova_compute[247399]: 2026-01-31 08:00:07.940 247403 DEBUG nova.objects.instance [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lazy-loading 'resources' on Instance uuid a24ac16c-df64-4cef-a252-1f1c38920602 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:00:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:08.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.231 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.460 247403 INFO nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Deleting instance files /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602_del#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.461 247403 INFO nova.virt.libvirt.driver [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Deletion of /var/lib/nova/instances/a24ac16c-df64-4cef-a252-1f1c38920602_del complete#033[00m
Jan 31 03:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.623 247403 INFO nova.scheduler.client.report [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Deleted allocations for instance a24ac16c-df64-4cef-a252-1f1c38920602#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.700 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.701 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:08 np0005603621 nova_compute[247399]: 2026-01-31 08:00:08.750 247403 DEBUG oslo_concurrency.processutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:08.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1583617251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:09 np0005603621 nova_compute[247399]: 2026-01-31 08:00:09.161 247403 DEBUG oslo_concurrency.processutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 31 03:00:09 np0005603621 nova_compute[247399]: 2026-01-31 08:00:09.167 247403 DEBUG nova.compute.provider_tree [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:00:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 31 03:00:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 31 03:00:09 np0005603621 nova_compute[247399]: 2026-01-31 08:00:09.263 247403 DEBUG nova.scheduler.client.report [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:00:09 np0005603621 nova_compute[247399]: 2026-01-31 08:00:09.304 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 283 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 9.0 KiB/s wr, 59 op/s
Jan 31 03:00:09 np0005603621 nova_compute[247399]: 2026-01-31 08:00:09.438 247403 DEBUG oslo_concurrency.lockutils [None req-ce54e23c-e7ae-4f75-8056-d9fc20ab18ba 2d20e7b6189c4916947ddae2155da8cf 0f3a75a965fc495bbe02cb5bfad2053b - - default default] Lock "a24ac16c-df64-4cef-a252-1f1c38920602" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 32.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:10.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:10 np0005603621 nova_compute[247399]: 2026-01-31 08:00:10.733 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:10.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 270 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 8.2 KiB/s wr, 68 op/s
Jan 31 03:00:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 31 03:00:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 31 03:00:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 31 03:00:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:12.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:12 np0005603621 nova_compute[247399]: 2026-01-31 08:00:12.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 145 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 7.5 KiB/s wr, 136 op/s
Jan 31 03:00:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:14.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.437 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.438 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.453 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.527 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.527 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.534 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.534 247403 INFO nova.compute.claims [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:00:14 np0005603621 nova_compute[247399]: 2026-01-31 08:00:14.648 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2575298159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.055 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.059 247403 DEBUG nova.compute.provider_tree [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.078 247403 DEBUG nova.scheduler.client.report [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.124 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.125 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.178 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.179 247403 DEBUG nova.network.neutron [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.199 247403 INFO nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.222 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:00:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 122 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 4.9 KiB/s wr, 111 op/s
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.445 247403 DEBUG nova.policy [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46ffd64a348845fab6cdc53249353575', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '521dcd459f144f2bb32de93d50ae0391', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.492 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.493 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.493 247403 INFO nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Creating image(s)#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.518 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.548 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.571 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.574 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.620 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.621 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.621 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.622 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.644 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.646 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:15 np0005603621 nova_compute[247399]: 2026-01-31 08:00:15.735 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:16.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 31 03:00:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 31 03:00:17 np0005603621 nova_compute[247399]: 2026-01-31 08:00:17.002 247403 DEBUG nova.network.neutron [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Successfully created port: 6363fb39-e709-434b-b811-28edcc63c280 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:00:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 31 03:00:17 np0005603621 nova_compute[247399]: 2026-01-31 08:00:17.230 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:17 np0005603621 nova_compute[247399]: 2026-01-31 08:00:17.230 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:00:17 np0005603621 nova_compute[247399]: 2026-01-31 08:00:17.265 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: a24ac16c-df64-4cef-a252-1f1c38920602] Skipping network cache update for instance because it has been migrated to another host. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9902#033[00m
Jan 31 03:00:17 np0005603621 nova_compute[247399]: 2026-01-31 08:00:17.265 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:00:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 122 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 91 KiB/s rd, 6.1 KiB/s wr, 130 op/s
Jan 31 03:00:17 np0005603621 nova_compute[247399]: 2026-01-31 08:00:17.738 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:18.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:18 np0005603621 nova_compute[247399]: 2026-01-31 08:00:18.903 247403 DEBUG nova.network.neutron [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Successfully updated port: 6363fb39-e709-434b-b811-28edcc63c280 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:00:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:18.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:18 np0005603621 nova_compute[247399]: 2026-01-31 08:00:18.919 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "refresh_cache-b4a73ec8-501b-454f-9cf7-4d4a09344db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:00:18 np0005603621 nova_compute[247399]: 2026-01-31 08:00:18.919 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquired lock "refresh_cache-b4a73ec8-501b-454f-9cf7-4d4a09344db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:00:18 np0005603621 nova_compute[247399]: 2026-01-31 08:00:18.919 247403 DEBUG nova.network.neutron [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:00:19 np0005603621 nova_compute[247399]: 2026-01-31 08:00:19.021 247403 DEBUG nova.compute.manager [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Received event network-changed-6363fb39-e709-434b-b811-28edcc63c280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:00:19 np0005603621 nova_compute[247399]: 2026-01-31 08:00:19.021 247403 DEBUG nova.compute.manager [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Refreshing instance network info cache due to event network-changed-6363fb39-e709-434b-b811-28edcc63c280. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:00:19 np0005603621 nova_compute[247399]: 2026-01-31 08:00:19.022 247403 DEBUG oslo_concurrency.lockutils [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b4a73ec8-501b-454f-9cf7-4d4a09344db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:00:19 np0005603621 nova_compute[247399]: 2026-01-31 08:00:19.113 247403 DEBUG nova.network.neutron [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:00:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 122 MiB data, 544 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 5.2 KiB/s wr, 110 op/s
Jan 31 03:00:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:20 np0005603621 podman[279553]: 2026-01-31 08:00:20.505099938 +0000 UTC m=+0.064328883 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 03:00:20 np0005603621 podman[279554]: 2026-01-31 08:00:20.505590534 +0000 UTC m=+0.064996075 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:00:20 np0005603621 nova_compute[247399]: 2026-01-31 08:00:20.737 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:20 np0005603621 nova_compute[247399]: 2026-01-31 08:00:20.844 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 5.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:00:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:20.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.318 247403 DEBUG nova.network.neutron [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Updating instance_info_cache with network_info: [{"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.320 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.361 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] resizing rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:00:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 143 MiB data, 550 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 921 KiB/s wr, 107 op/s
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.692 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Releasing lock "refresh_cache-b4a73ec8-501b-454f-9cf7-4d4a09344db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.693 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Instance network_info: |[{"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.693 247403 DEBUG oslo_concurrency.lockutils [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b4a73ec8-501b-454f-9cf7-4d4a09344db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:00:21 np0005603621 nova_compute[247399]: 2026-01-31 08:00:21.693 247403 DEBUG nova.network.neutron [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Refreshing network info cache for port 6363fb39-e709-434b-b811-28edcc63c280 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:00:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:00:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.740 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.858 247403 DEBUG nova.objects.instance [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'migration_context' on Instance uuid b4a73ec8-501b-454f-9cf7-4d4a09344db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.880 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.881 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Ensure instance console log exists: /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.881 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.881 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.882 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.884 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Start _get_guest_xml network_info=[{"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.889 247403 WARNING nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.894 247403 DEBUG nova.virt.libvirt.host [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.895 247403 DEBUG nova.virt.libvirt.host [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.904 247403 DEBUG nova.virt.libvirt.host [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.905 247403 DEBUG nova.virt.libvirt.host [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.906 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.907 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.907 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.908 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.908 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.908 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.908 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.909 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.909 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.909 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.909 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.910 247403 DEBUG nova.virt.hardware [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:00:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:22 np0005603621 nova_compute[247399]: 2026-01-31 08:00:22.914 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.238 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:00:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1568939564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.345 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.368 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.371 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 169 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.548 247403 DEBUG nova.network.neutron [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Updated VIF entry in instance network info cache for port 6363fb39-e709-434b-b811-28edcc63c280. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.549 247403 DEBUG nova.network.neutron [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Updating instance_info_cache with network_info: [{"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.617 247403 DEBUG oslo_concurrency.lockutils [req-caaf4020-fc29-42b1-a2b7-997433a882b6 req-cef8a470-3c96-4d96-9d25-4c7b70c8c583 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b4a73ec8-501b-454f-9cf7-4d4a09344db3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:00:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:00:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616069583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.768 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.770 247403 DEBUG nova.virt.libvirt.vif [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:00:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1240390044',display_name='tempest-ImagesTestJSON-server-1240390044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1240390044',id=49,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-3ec1ef1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:15Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=b4a73ec8-501b-454f-9cf7-4d4a09344db3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.771 247403 DEBUG nova.network.os_vif_util [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.772 247403 DEBUG nova.network.os_vif_util [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.774 247403 DEBUG nova.objects.instance [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'pci_devices' on Instance uuid b4a73ec8-501b-454f-9cf7-4d4a09344db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.862 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <uuid>b4a73ec8-501b-454f-9cf7-4d4a09344db3</uuid>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <name>instance-00000031</name>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesTestJSON-server-1240390044</nova:name>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:00:22</nova:creationTime>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:user uuid="46ffd64a348845fab6cdc53249353575">tempest-ImagesTestJSON-1780438391-project-member</nova:user>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:project uuid="521dcd459f144f2bb32de93d50ae0391">tempest-ImagesTestJSON-1780438391</nova:project>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <nova:port uuid="6363fb39-e709-434b-b811-28edcc63c280">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <entry name="serial">b4a73ec8-501b-454f-9cf7-4d4a09344db3</entry>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <entry name="uuid">b4a73ec8-501b-454f-9cf7-4d4a09344db3</entry>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk.config">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:f3:f2:14"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <target dev="tap6363fb39-e7"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/console.log" append="off"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:00:23 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:00:23 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:00:23 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:00:23 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.864 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Preparing to wait for external event network-vif-plugged-6363fb39-e709-434b-b811-28edcc63c280 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.864 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.864 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.865 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.865 247403 DEBUG nova.virt.libvirt.vif [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:00:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1240390044',display_name='tempest-ImagesTestJSON-server-1240390044',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1240390044',id=49,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-3ec1ef1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:00:15Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=b4a73ec8-501b-454f-9cf7-4d4a09344db3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.866 247403 DEBUG nova.network.os_vif_util [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.866 247403 DEBUG nova.network.os_vif_util [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.867 247403 DEBUG os_vif [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.868 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.868 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.872 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6363fb39-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.874 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6363fb39-e7, col_values=(('external_ids', {'iface-id': '6363fb39-e709-434b-b811-28edcc63c280', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:f2:14', 'vm-uuid': 'b4a73ec8-501b-454f-9cf7-4d4a09344db3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:23 np0005603621 NetworkManager[49013]: <info>  [1769846423.8780] manager: (tap6363fb39-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.880 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:23 np0005603621 nova_compute[247399]: 2026-01-31 08:00:23.885 247403 INFO os_vif [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7')#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.216 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.217 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.218 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No VIF found with MAC fa:16:3e:f3:f2:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.219 247403 INFO nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Using config drive#033[00m
Jan 31 03:00:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.247 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.341 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.342 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.875 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.875 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.876 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.876 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:00:24 np0005603621 nova_compute[247399]: 2026-01-31 08:00:24.877 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:24.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657842209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.283 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 193 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.7 MiB/s wr, 86 op/s
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.615 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.616 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.741 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.742 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4607MB free_disk=20.967517852783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.742 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.742 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.818 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance b4a73ec8-501b-454f-9cf7-4d4a09344db3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.819 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.819 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.883 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.908 247403 INFO nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Creating config drive at /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/disk.config#033[00m
Jan 31 03:00:25 np0005603621 nova_compute[247399]: 2026-01-31 08:00:25.913 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppfzobtnt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:26.033 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:00:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:26.034 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.034 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.036 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppfzobtnt" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.060 247403 DEBUG nova.storage.rbd_utils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.064 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/disk.config b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:00:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:00:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/142136120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.291 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.299 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.320 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.357 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:00:26 np0005603621 nova_compute[247399]: 2026-01-31 08:00:26.358 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:00:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:26.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.055 247403 DEBUG oslo_concurrency.processutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/disk.config b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.055 247403 INFO nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Deleting local config drive /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3/disk.config because it was imported into RBD.#033[00m
Jan 31 03:00:27 np0005603621 kernel: tap6363fb39-e7: entered promiscuous mode
Jan 31 03:00:27 np0005603621 NetworkManager[49013]: <info>  [1769846427.0933] manager: (tap6363fb39-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Jan 31 03:00:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:27Z|00105|binding|INFO|Claiming lport 6363fb39-e709-434b-b811-28edcc63c280 for this chassis.
Jan 31 03:00:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:27Z|00106|binding|INFO|6363fb39-e709-434b-b811-28edcc63c280: Claiming fa:16:3e:f3:f2:14 10.100.0.10
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.094 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.096 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.109 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:f2:14 10.100.0.10'], port_security=['fa:16:3e:f3:f2:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4a73ec8-501b-454f-9cf7-4d4a09344db3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6363fb39-e709-434b-b811-28edcc63c280) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.111 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6363fb39-e709-434b-b811-28edcc63c280 in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 bound to our chassis#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.112 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24914779-babc-4c55-b38b-adf9bfc5c103#033[00m
Jan 31 03:00:27 np0005603621 systemd-udevd[279848]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.122 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.123 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80083d33-314b-409d-a7d0-aeceb32b76e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.124 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24914779-b1 in ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:00:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:27Z|00107|binding|INFO|Setting lport 6363fb39-e709-434b-b811-28edcc63c280 ovn-installed in OVS
Jan 31 03:00:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:27Z|00108|binding|INFO|Setting lport 6363fb39-e709-434b-b811-28edcc63c280 up in Southbound
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.126 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:27 np0005603621 NetworkManager[49013]: <info>  [1769846427.1278] device (tap6363fb39-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.126 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24914779-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.126 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9e03d6dc-e76a-4b11-8793-6fe8e18f4720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.128 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed4cfac-93da-40bd-adcf-e852da02be28]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 NetworkManager[49013]: <info>  [1769846427.1296] device (tap6363fb39-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:00:27 np0005603621 systemd-machined[212769]: New machine qemu-21-instance-00000031.
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.136 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[0861f222-8563-4c37-b00b-d1ce2c84b87c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.146 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca48c01-93a3-4a25-a7e0-986cfe95558c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 systemd[1]: Started Virtual Machine qemu-21-instance-00000031.
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.167 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0decc7ab-5f30-432b-8589-f0abdb7c355d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.171 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2efd5c-15a9-42a7-a584-c8438e37b627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 NetworkManager[49013]: <info>  [1769846427.1726] manager: (tap24914779-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.196 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9283f1de-bc00-4e73-9ac3-44ea568ebdb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.199 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e378361a-5ec7-400b-93ff-16f4d3c301ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 NetworkManager[49013]: <info>  [1769846427.2124] device (tap24914779-b0): carrier: link connected
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.214 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.215 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.215 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.214 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[dcadb137-0f9b-459e-8aed-7238e6cb5fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.226 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2f38301d-53e4-46c9-8eba-1e019cfdb619]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573272, 'reachable_time': 16434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279884, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.234 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[898ca9d4-b5b7-4f54-97e2-81be2a343fb6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:baf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573272, 'tstamp': 573272}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279885, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.243 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1346d8-5a79-45f8-86bb-6654aa5d30fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573272, 'reachable_time': 16434, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279886, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.263 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b03a0fb0-9e94-42be-aab5-f22cb956c1a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.298 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4ae518-b1f3-4172-be2a-34b6a8a97c4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.300 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.300 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.301 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24914779-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.302 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:27 np0005603621 kernel: tap24914779-b0: entered promiscuous mode
Jan 31 03:00:27 np0005603621 NetworkManager[49013]: <info>  [1769846427.3043] manager: (tap24914779-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.308 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24914779-b0, col_values=(('external_ids', {'iface-id': '23cfbf86-f443-4dea-a9ae-1c6f9be9ee53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:27Z|00109|binding|INFO|Releasing lport 23cfbf86-f443-4dea-a9ae-1c6f9be9ee53 from this chassis (sb_readonly=0)
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.311 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.311 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e2e90f-14d0-42c0-b932-b205cd6b1be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.312 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:00:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:27.312 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'env', 'PROCESS_TAG=haproxy-24914779-babc-4c55-b38b-adf9bfc5c103', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24914779-babc-4c55-b38b-adf9bfc5c103.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 250 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.6 MiB/s wr, 131 op/s
Jan 31 03:00:27 np0005603621 podman[279937]: 2026-01-31 08:00:27.604136093 +0000 UTC m=+0.022657374 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.704 247403 DEBUG nova.compute.manager [req-e43c3afa-2a2a-4863-b958-a3f8e3ff240d req-c07a1226-cc7e-40a4-9025-d5a435391fc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Received event network-vif-plugged-6363fb39-e709-434b-b811-28edcc63c280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.705 247403 DEBUG oslo_concurrency.lockutils [req-e43c3afa-2a2a-4863-b958-a3f8e3ff240d req-c07a1226-cc7e-40a4-9025-d5a435391fc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.705 247403 DEBUG oslo_concurrency.lockutils [req-e43c3afa-2a2a-4863-b958-a3f8e3ff240d req-c07a1226-cc7e-40a4-9025-d5a435391fc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.705 247403 DEBUG oslo_concurrency.lockutils [req-e43c3afa-2a2a-4863-b958-a3f8e3ff240d req-c07a1226-cc7e-40a4-9025-d5a435391fc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.705 247403 DEBUG nova.compute.manager [req-e43c3afa-2a2a-4863-b958-a3f8e3ff240d req-c07a1226-cc7e-40a4-9025-d5a435391fc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Processing event network-vif-plugged-6363fb39-e709-434b-b811-28edcc63c280 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.869 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846427.8686666, b4a73ec8-501b-454f-9cf7-4d4a09344db3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.869 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] VM Started (Lifecycle Event)
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.871 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.874 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.877 247403 INFO nova.virt.libvirt.driver [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Instance spawned successfully.
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.877 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.922 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.928 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.932 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.933 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.933 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.934 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.934 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.935 247403 DEBUG nova.virt.libvirt.driver [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.967 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.967 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846427.8695118, b4a73ec8-501b-454f-9cf7-4d4a09344db3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.968 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] VM Paused (Lifecycle Event)
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.992 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.996 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846427.8735385, b4a73ec8-501b-454f-9cf7-4d4a09344db3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:00:27 np0005603621 nova_compute[247399]: 2026-01-31 08:00:27.996 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] VM Resumed (Lifecycle Event)
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.003 247403 INFO nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Took 12.51 seconds to spawn the instance on the hypervisor.
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.003 247403 DEBUG nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.022 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.024 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:00:28 np0005603621 podman[279937]: 2026-01-31 08:00:28.057275909 +0000 UTC m=+0.475797100 container create f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.065 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.085 247403 INFO nova.compute.manager [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Took 13.58 seconds to build instance.
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.119 247403 DEBUG oslo_concurrency.lockutils [None req-24c43188-71c8-4e17-8fd4-61ce1dd727cf 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:00:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:28.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:28 np0005603621 systemd[1]: Started libpod-conmon-f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef.scope.
Jan 31 03:00:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:00:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b176999994707d7ade921f0daa05b3a250075db04a36cd858432c95baec530/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:00:28 np0005603621 podman[279937]: 2026-01-31 08:00:28.580615282 +0000 UTC m=+0.999136493 container init f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:00:28 np0005603621 podman[279937]: 2026-01-31 08:00:28.586293221 +0000 UTC m=+1.004814412 container start f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:00:28 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [NOTICE]   (279981) : New worker (279983) forked
Jan 31 03:00:28 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [NOTICE]   (279981) : Loading success.
Jan 31 03:00:28 np0005603621 nova_compute[247399]: 2026-01-31 08:00:28.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:28.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.373 247403 INFO nova.compute.manager [None req-f9a4bd98-5208-4b05-9555-93af1fb742ec 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Pausing
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.374 247403 DEBUG nova.objects.instance [None req-f9a4bd98-5208-4b05-9555-93af1fb742ec 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'flavor' on Instance uuid b4a73ec8-501b-454f-9cf7-4d4a09344db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:00:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 250 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 103 op/s
Jan 31 03:00:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 31 03:00:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.530 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846429.5299048, b4a73ec8-501b-454f-9cf7-4d4a09344db3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.530 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] VM Paused (Lifecycle Event)
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.533 247403 DEBUG nova.compute.manager [None req-f9a4bd98-5208-4b05-9555-93af1fb742ec 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.653 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.656 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:00:29 np0005603621 nova_compute[247399]: 2026-01-31 08:00:29.688 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 31 03:00:30 np0005603621 nova_compute[247399]: 2026-01-31 08:00:30.128 247403 DEBUG nova.compute.manager [req-326f7533-08a8-4e77-919b-ec3e0f36e5e6 req-41ff3587-8b0d-44d6-8be2-9d98db5a888a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Received event network-vif-plugged-6363fb39-e709-434b-b811-28edcc63c280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:00:30 np0005603621 nova_compute[247399]: 2026-01-31 08:00:30.128 247403 DEBUG oslo_concurrency.lockutils [req-326f7533-08a8-4e77-919b-ec3e0f36e5e6 req-41ff3587-8b0d-44d6-8be2-9d98db5a888a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:00:30 np0005603621 nova_compute[247399]: 2026-01-31 08:00:30.129 247403 DEBUG oslo_concurrency.lockutils [req-326f7533-08a8-4e77-919b-ec3e0f36e5e6 req-41ff3587-8b0d-44d6-8be2-9d98db5a888a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:00:30 np0005603621 nova_compute[247399]: 2026-01-31 08:00:30.129 247403 DEBUG oslo_concurrency.lockutils [req-326f7533-08a8-4e77-919b-ec3e0f36e5e6 req-41ff3587-8b0d-44d6-8be2-9d98db5a888a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:00:30 np0005603621 nova_compute[247399]: 2026-01-31 08:00:30.129 247403 DEBUG nova.compute.manager [req-326f7533-08a8-4e77-919b-ec3e0f36e5e6 req-41ff3587-8b0d-44d6-8be2-9d98db5a888a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] No waiting events found dispatching network-vif-plugged-6363fb39-e709-434b-b811-28edcc63c280 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:00:30 np0005603621 nova_compute[247399]: 2026-01-31 08:00:30.130 247403 WARNING nova.compute.manager [req-326f7533-08a8-4e77-919b-ec3e0f36e5e6 req-41ff3587-8b0d-44d6-8be2-9d98db5a888a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Received unexpected event network-vif-plugged-6363fb39-e709-434b-b811-28edcc63c280 for instance with vm_state paused and task_state None.
Jan 31 03:00:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:30.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:30.481 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:00:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:30.481 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:00:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:30.482 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:00:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:30.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 250 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 180 op/s
Jan 31 03:00:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:32.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:32 np0005603621 nova_compute[247399]: 2026-01-31 08:00:32.744 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:32.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:32 np0005603621 nova_compute[247399]: 2026-01-31 08:00:32.974 247403 DEBUG nova.compute.manager [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:00:33 np0005603621 nova_compute[247399]: 2026-01-31 08:00:33.247 247403 INFO nova.compute.manager [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] instance snapshotting
Jan 31 03:00:33 np0005603621 nova_compute[247399]: 2026-01-31 08:00:33.247 247403 WARNING nova.compute.manager [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] trying to snapshot a non-running instance: (state: 3 expected: 1)
Jan 31 03:00:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 250 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 4.7 MiB/s wr, 248 op/s
Jan 31 03:00:33 np0005603621 nova_compute[247399]: 2026-01-31 08:00:33.684 247403 INFO nova.virt.libvirt.driver [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Beginning live snapshot process
Jan 31 03:00:33 np0005603621 nova_compute[247399]: 2026-01-31 08:00:33.880 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:34 np0005603621 nova_compute[247399]: 2026-01-31 08:00:34.155 247403 DEBUG nova.virt.libvirt.imagebackend [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 03:00:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:34.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:34 np0005603621 nova_compute[247399]: 2026-01-31 08:00:34.574 247403 DEBUG nova.storage.rbd_utils [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(f59dae7fc3f84bed8894a8487bc5173a) on rbd image(b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 03:00:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:34.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:35.036 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:00:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 31 03:00:35 np0005603621 nova_compute[247399]: 2026-01-31 08:00:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:00:35 np0005603621 nova_compute[247399]: 2026-01-31 08:00:35.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 03:00:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 31 03:00:35 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 31 03:00:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 220 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.4 KiB/s wr, 233 op/s
Jan 31 03:00:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:36.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:36 np0005603621 nova_compute[247399]: 2026-01-31 08:00:36.261 247403 DEBUG nova.storage.rbd_utils [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] cloning vms/b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk@f59dae7fc3f84bed8894a8487bc5173a to images/61a37dcc-3d92-40cb-8dfd-e5d9e9576f30 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 03:00:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 31 03:00:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:36.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 31 03:00:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 31 03:00:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 169 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.4 KiB/s wr, 173 op/s
Jan 31 03:00:37 np0005603621 nova_compute[247399]: 2026-01-31 08:00:37.739 247403 DEBUG nova.storage.rbd_utils [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] flattening images/61a37dcc-3d92-40cb-8dfd-e5d9e9576f30 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 03:00:37 np0005603621 nova_compute[247399]: 2026-01-31 08:00:37.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:38.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:00:38
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:00:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:00:38 np0005603621 nova_compute[247399]: 2026-01-31 08:00:38.824 247403 DEBUG nova.storage.rbd_utils [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] removing snapshot(f59dae7fc3f84bed8894a8487bc5173a) on rbd image(b4a73ec8-501b-454f-9cf7-4d4a09344db3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 03:00:38 np0005603621 nova_compute[247399]: 2026-01-31 08:00:38.882 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:38.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 169 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.4 KiB/s wr, 171 op/s
Jan 31 03:00:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 31 03:00:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 31 03:00:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 31 03:00:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:40.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:41 np0005603621 nova_compute[247399]: 2026-01-31 08:00:41.288 247403 DEBUG nova.storage.rbd_utils [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(snap) on rbd image(61a37dcc-3d92-40cb-8dfd-e5d9e9576f30) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 03:00:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 189 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 58 op/s
Jan 31 03:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 31 03:00:42 np0005603621 nova_compute[247399]: 2026-01-31 08:00:42.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 31 03:00:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:42.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:43 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 31 03:00:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 215 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 120 op/s
Jan 31 03:00:43 np0005603621 nova_compute[247399]: 2026-01-31 08:00:43.885 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:44.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 191 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Jan 31 03:00:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:46 np0005603621 nova_compute[247399]: 2026-01-31 08:00:46.579 247403 INFO nova.virt.libvirt.driver [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Snapshot image upload complete
Jan 31 03:00:46 np0005603621 nova_compute[247399]: 2026-01-31 08:00:46.580 247403 INFO nova.compute.manager [None req-720ec718-d9c8-49c5-be83-a63cf58f4a64 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Took 13.33 seconds to snapshot the instance on the hypervisor.
Jan 31 03:00:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:46.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 31 03:00:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 134 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 91 op/s
Jan 31 03:00:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 31 03:00:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 31 03:00:47 np0005603621 nova_compute[247399]: 2026-01-31 08:00:47.805 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:48.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:48 np0005603621 nova_compute[247399]: 2026-01-31 08:00:48.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009960101433091383 of space, bias 1.0, pg target 0.29880304299274146 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:00:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:00:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:48.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 134 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 31 03:00:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 31 03:00:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 31 03:00:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:51 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 31 03:00:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 134 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 64 op/s
Jan 31 03:00:51 np0005603621 podman[280195]: 2026-01-31 08:00:51.488725069 +0000 UTC m=+0.045437219 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:00:51 np0005603621 podman[280196]: 2026-01-31 08:00:51.508645276 +0000 UTC m=+0.065011215 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 03:00:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:52 np0005603621 nova_compute[247399]: 2026-01-31 08:00:52.806 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 134 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.2 KiB/s wr, 56 op/s
Jan 31 03:00:53 np0005603621 nova_compute[247399]: 2026-01-31 08:00:53.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.033 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.034 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.034 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.034 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.034 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.036 247403 INFO nova.compute.manager [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Terminating instance
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.037 247403 DEBUG nova.compute.manager [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:00:54 np0005603621 kernel: tap6363fb39-e7 (unregistering): left promiscuous mode
Jan 31 03:00:54 np0005603621 NetworkManager[49013]: <info>  [1769846454.1195] device (tap6363fb39-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:00:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:54Z|00110|binding|INFO|Releasing lport 6363fb39-e709-434b-b811-28edcc63c280 from this chassis (sb_readonly=0)
Jan 31 03:00:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:54Z|00111|binding|INFO|Setting lport 6363fb39-e709-434b-b811-28edcc63c280 down in Southbound
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.125 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:00:54Z|00112|binding|INFO|Removing iface tap6363fb39-e7 ovn-installed in OVS
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.128 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.136 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:00:54 np0005603621 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 31 03:00:54 np0005603621 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000031.scope: Consumed 2.122s CPU time.
Jan 31 03:00:54 np0005603621 systemd-machined[212769]: Machine qemu-21-instance-00000031 terminated.
Jan 31 03:00:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:54.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.269 247403 INFO nova.virt.libvirt.driver [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Instance destroyed successfully.#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.269 247403 DEBUG nova.objects.instance [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'resources' on Instance uuid b4a73ec8-501b-454f-9cf7-4d4a09344db3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.291 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:f2:14 10.100.0.10'], port_security=['fa:16:3e:f3:f2:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b4a73ec8-501b-454f-9cf7-4d4a09344db3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6363fb39-e709-434b-b811-28edcc63c280) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.292 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6363fb39-e709-434b-b811-28edcc63c280 in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 unbound from our chassis#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.293 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24914779-babc-4c55-b38b-adf9bfc5c103, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.294 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[38e4b6ab-71bc-4619-a2a9-341850a131d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.295 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace which is not needed anymore#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.414 247403 DEBUG nova.virt.libvirt.vif [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:00:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1240390044',display_name='tempest-ImagesTestJSON-server-1240390044',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1240390044',id=49,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:00:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-3ec1ef1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:00:46Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=b4a73ec8-501b-454f-9cf7-4d4a09344db3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.415 247403 DEBUG nova.network.os_vif_util [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "6363fb39-e709-434b-b811-28edcc63c280", "address": "fa:16:3e:f3:f2:14", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6363fb39-e7", "ovs_interfaceid": "6363fb39-e709-434b-b811-28edcc63c280", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.416 247403 DEBUG nova.network.os_vif_util [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.416 247403 DEBUG os_vif [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.418 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.418 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6363fb39-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.422 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.424 247403 INFO os_vif [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:f2:14,bridge_name='br-int',has_traffic_filtering=True,id=6363fb39-e709-434b-b811-28edcc63c280,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6363fb39-e7')#033[00m
Jan 31 03:00:54 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [NOTICE]   (279981) : haproxy version is 2.8.14-c23fe91
Jan 31 03:00:54 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [NOTICE]   (279981) : path to executable is /usr/sbin/haproxy
Jan 31 03:00:54 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [WARNING]  (279981) : Exiting Master process...
Jan 31 03:00:54 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [WARNING]  (279981) : Exiting Master process...
Jan 31 03:00:54 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [ALERT]    (279981) : Current worker (279983) exited with code 143 (Terminated)
Jan 31 03:00:54 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[279977]: [WARNING]  (279981) : All workers exited. Exiting... (0)
Jan 31 03:00:54 np0005603621 systemd[1]: libpod-f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef.scope: Deactivated successfully.
Jan 31 03:00:54 np0005603621 podman[280274]: 2026-01-31 08:00:54.441464879 +0000 UTC m=+0.083850437 container died f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:00:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef-userdata-shm.mount: Deactivated successfully.
Jan 31 03:00:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f8b176999994707d7ade921f0daa05b3a250075db04a36cd858432c95baec530-merged.mount: Deactivated successfully.
Jan 31 03:00:54 np0005603621 podman[280274]: 2026-01-31 08:00:54.753505129 +0000 UTC m=+0.395890667 container cleanup f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 03:00:54 np0005603621 systemd[1]: libpod-conmon-f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef.scope: Deactivated successfully.
Jan 31 03:00:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:54 np0005603621 podman[280320]: 2026-01-31 08:00:54.962107097 +0000 UTC m=+0.193693811 container remove f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.966 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4f39773e-a3e8-42a3-9cba-91477670e664]: (4, ('Sat Jan 31 08:00:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef)\nf415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef\nSat Jan 31 08:00:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (f415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef)\nf415561d47fe90a94cb14f0086bf656d12013183bfef2e790d617242c60edbef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.968 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1a112bad-9878-4516-8eea-b3f87a307cd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.969 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:54 np0005603621 kernel: tap24914779-b0: left promiscuous mode
Jan 31 03:00:54 np0005603621 nova_compute[247399]: 2026-01-31 08:00:54.977 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.980 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f6085db1-6dc7-4f7e-bf68-38e8f7d14e42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.994 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[289c997a-a356-45c5-b387-39e8314350ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:54.995 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb1df08-67c2-4833-a9b0-713805c30434]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:55.005 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[00a3824d-17bb-40fb-8093-fb7d9e2a4472]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573267, 'reachable_time': 32346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280334, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:55.007 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:00:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:00:55.007 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab3b299-09ec-4b3f-9afc-3e5707f1047a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:00:55 np0005603621 systemd[1]: run-netns-ovnmeta\x2d24914779\x2dbabc\x2d4c55\x2db38b\x2dadf9bfc5c103.mount: Deactivated successfully.
Jan 31 03:00:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 114 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.7 KiB/s wr, 57 op/s
Jan 31 03:00:55 np0005603621 nova_compute[247399]: 2026-01-31 08:00:55.583 247403 INFO nova.virt.libvirt.driver [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Deleting instance files /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3_del#033[00m
Jan 31 03:00:55 np0005603621 nova_compute[247399]: 2026-01-31 08:00:55.584 247403 INFO nova.virt.libvirt.driver [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Deletion of /var/lib/nova/instances/b4a73ec8-501b-454f-9cf7-4d4a09344db3_del complete#033[00m
Jan 31 03:00:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:00:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:00:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:00:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 31 03:00:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 31 03:00:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 31 03:00:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 62 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 3.1 KiB/s wr, 94 op/s
Jan 31 03:00:57 np0005603621 nova_compute[247399]: 2026-01-31 08:00:57.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:00:57 np0005603621 nova_compute[247399]: 2026-01-31 08:00:57.915 247403 INFO nova.compute.manager [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Took 3.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:00:57 np0005603621 nova_compute[247399]: 2026-01-31 08:00:57.916 247403 DEBUG oslo.service.loopingcall [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:00:57 np0005603621 nova_compute[247399]: 2026-01-31 08:00:57.917 247403 DEBUG nova.compute.manager [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:00:57 np0005603621 nova_compute[247399]: 2026-01-31 08:00:57.918 247403 DEBUG nova.network.neutron [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:00:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:00:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:00:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:00:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:00:58.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:00:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 62 MiB data, 504 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.5 KiB/s wr, 50 op/s
Jan 31 03:00:59 np0005603621 nova_compute[247399]: 2026-01-31 08:00:59.421 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:00 np0005603621 nova_compute[247399]: 2026-01-31 08:01:00.136 247403 DEBUG nova.network.neutron [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:01:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:00 np0005603621 nova_compute[247399]: 2026-01-31 08:01:00.286 247403 INFO nova.compute.manager [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Took 2.37 seconds to deallocate network for instance.#033[00m
Jan 31 03:01:00 np0005603621 nova_compute[247399]: 2026-01-31 08:01:00.380 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:00 np0005603621 nova_compute[247399]: 2026-01-31 08:01:00.381 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:00 np0005603621 nova_compute[247399]: 2026-01-31 08:01:00.571 247403 DEBUG nova.compute.manager [req-1489030a-b2c5-433d-ae54-53f83c08f27c req-c6f6863a-1ad0-46af-b1dd-c6b29bae822b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Received event network-vif-deleted-6363fb39-e709-434b-b811-28edcc63c280 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:00 np0005603621 nova_compute[247399]: 2026-01-31 08:01:00.948 247403 DEBUG nova.scheduler.client.report [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:01:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 41 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Jan 31 03:01:01 np0005603621 nova_compute[247399]: 2026-01-31 08:01:01.584 247403 DEBUG nova.scheduler.client.report [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:01:01 np0005603621 nova_compute[247399]: 2026-01-31 08:01:01.584 247403 DEBUG nova.compute.provider_tree [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:01:01 np0005603621 nova_compute[247399]: 2026-01-31 08:01:01.829 247403 DEBUG nova.scheduler.client.report [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:01:01 np0005603621 nova_compute[247399]: 2026-01-31 08:01:01.885 247403 DEBUG nova.scheduler.client.report [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:01:01 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 931466ed-54b7-47e4-9a35-e87520cedcaa does not exist
Jan 31 03:01:01 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4016f1cb-cafa-4e5c-93a5-5d9b479238dc does not exist
Jan 31 03:01:01 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c7e00069-b5fe-439f-8fdf-678eff94d68e does not exist
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:01:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:01:01 np0005603621 nova_compute[247399]: 2026-01-31 08:01:01.971 247403 DEBUG oslo_concurrency.processutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:01:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:02.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:01:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2545940774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.428 247403 DEBUG oslo_concurrency.processutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.433 247403 DEBUG nova.compute.provider_tree [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:01:02 np0005603621 podman[280691]: 2026-01-31 08:01:02.354291737 +0000 UTC m=+0.019750232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.488 247403 DEBUG nova.scheduler.client.report [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:01:02 np0005603621 podman[280691]: 2026-01-31 08:01:02.489448456 +0000 UTC m=+0.154906921 container create 0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.559 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:02 np0005603621 systemd[1]: Started libpod-conmon-0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c.scope.
Jan 31 03:01:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.646 247403 INFO nova.scheduler.client.report [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Deleted allocations for instance b4a73ec8-501b-454f-9cf7-4d4a09344db3#033[00m
Jan 31 03:01:02 np0005603621 podman[280691]: 2026-01-31 08:01:02.668207686 +0000 UTC m=+0.333666151 container init 0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:01:02 np0005603621 podman[280691]: 2026-01-31 08:01:02.676074704 +0000 UTC m=+0.341533179 container start 0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:01:02 np0005603621 compassionate_grothendieck[280709]: 167 167
Jan 31 03:01:02 np0005603621 systemd[1]: libpod-0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c.scope: Deactivated successfully.
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.787 247403 DEBUG oslo_concurrency.lockutils [None req-66d83018-fb5f-4707-86be-e18cb45ec727 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b4a73ec8-501b-454f-9cf7-4d4a09344db3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:02 np0005603621 nova_compute[247399]: 2026-01-31 08:01:02.810 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:03 np0005603621 podman[280691]: 2026-01-31 08:01:03.045214519 +0000 UTC m=+0.710672984 container attach 0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:01:03 np0005603621 podman[280691]: 2026-01-31 08:01:03.045859859 +0000 UTC m=+0.711318324 container died 0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:01:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 41 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Jan 31 03:01:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:03.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8c2a8b89447ab982291a26cc40e40590646b665df4e98d315d27d5bc8a1cc81b-merged.mount: Deactivated successfully.
Jan 31 03:01:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:01:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:01:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:01:04 np0005603621 podman[280691]: 2026-01-31 08:01:04.020509771 +0000 UTC m=+1.685968236 container remove 0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_grothendieck, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:01:04 np0005603621 systemd[1]: libpod-conmon-0098b450ac17d53f2985e066273507ac45f681c8f7244532e44731ee8b34738c.scope: Deactivated successfully.
Jan 31 03:01:04 np0005603621 podman[280733]: 2026-01-31 08:01:04.201003466 +0000 UTC m=+0.102474252 container create f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldberg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:01:04 np0005603621 podman[280733]: 2026-01-31 08:01:04.116362485 +0000 UTC m=+0.017833291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:01:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:04.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:04 np0005603621 systemd[1]: Started libpod-conmon-f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5.scope.
Jan 31 03:01:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3de6f3a5da4b2a39bf98a18dea58487643f3b85d9fcebcb75cbe5bc04c762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3de6f3a5da4b2a39bf98a18dea58487643f3b85d9fcebcb75cbe5bc04c762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3de6f3a5da4b2a39bf98a18dea58487643f3b85d9fcebcb75cbe5bc04c762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3de6f3a5da4b2a39bf98a18dea58487643f3b85d9fcebcb75cbe5bc04c762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3de6f3a5da4b2a39bf98a18dea58487643f3b85d9fcebcb75cbe5bc04c762/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:04 np0005603621 nova_compute[247399]: 2026-01-31 08:01:04.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:04 np0005603621 podman[280733]: 2026-01-31 08:01:04.442989514 +0000 UTC m=+0.344460310 container init f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 03:01:04 np0005603621 podman[280733]: 2026-01-31 08:01:04.44890775 +0000 UTC m=+0.350378526 container start f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:01:04 np0005603621 podman[280733]: 2026-01-31 08:01:04.59108072 +0000 UTC m=+0.492551516 container attach f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:01:05 np0005603621 gracious_goldberg[280750]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:01:05 np0005603621 gracious_goldberg[280750]: --> relative data size: 1.0
Jan 31 03:01:05 np0005603621 gracious_goldberg[280750]: --> All data devices are unavailable
Jan 31 03:01:05 np0005603621 systemd[1]: libpod-f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5.scope: Deactivated successfully.
Jan 31 03:01:05 np0005603621 podman[280765]: 2026-01-31 08:01:05.28930499 +0000 UTC m=+0.021906889 container died f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:01:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 41 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.1 KiB/s wr, 36 op/s
Jan 31 03:01:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8ff3de6f3a5da4b2a39bf98a18dea58487643f3b85d9fcebcb75cbe5bc04c762-merged.mount: Deactivated successfully.
Jan 31 03:01:05 np0005603621 nova_compute[247399]: 2026-01-31 08:01:05.740 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:05 np0005603621 nova_compute[247399]: 2026-01-31 08:01:05.741 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:05 np0005603621 nova_compute[247399]: 2026-01-31 08:01:05.834 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:01:05 np0005603621 nova_compute[247399]: 2026-01-31 08:01:05.998 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:05 np0005603621 nova_compute[247399]: 2026-01-31 08:01:05.999 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:06 np0005603621 nova_compute[247399]: 2026-01-31 08:01:06.012 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:01:06 np0005603621 nova_compute[247399]: 2026-01-31 08:01:06.013 247403 INFO nova.compute.claims [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:01:06 np0005603621 podman[280765]: 2026-01-31 08:01:06.026644122 +0000 UTC m=+0.759246001 container remove f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:01:06 np0005603621 systemd[1]: libpod-conmon-f04ea2e2d202b2f0b035eb46f90516227182523836a2c5b2bc93a7bbe6fd8fd5.scope: Deactivated successfully.
Jan 31 03:01:06 np0005603621 nova_compute[247399]: 2026-01-31 08:01:06.272 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:06.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:06 np0005603621 podman[280941]: 2026-01-31 08:01:06.592793341 +0000 UTC m=+0.084514799 container create 621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:01:06 np0005603621 podman[280941]: 2026-01-31 08:01:06.533168316 +0000 UTC m=+0.024889794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:01:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470845540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:06 np0005603621 nova_compute[247399]: 2026-01-31 08:01:06.690 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:01:06 np0005603621 nova_compute[247399]: 2026-01-31 08:01:06.695 247403 DEBUG nova.compute.provider_tree [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:01:06 np0005603621 systemd[1]: Started libpod-conmon-621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83.scope.
Jan 31 03:01:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:06 np0005603621 podman[280941]: 2026-01-31 08:01:06.922882408 +0000 UTC m=+0.414603886 container init 621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:01:06 np0005603621 podman[280941]: 2026-01-31 08:01:06.928461583 +0000 UTC m=+0.420183041 container start 621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:01:06 np0005603621 fervent_meninsky[280959]: 167 167
Jan 31 03:01:06 np0005603621 systemd[1]: libpod-621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83.scope: Deactivated successfully.
Jan 31 03:01:06 np0005603621 podman[280941]: 2026-01-31 08:01:06.966359504 +0000 UTC m=+0.458080982 container attach 621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 03:01:06 np0005603621 podman[280941]: 2026-01-31 08:01:06.96716572 +0000 UTC m=+0.458887178 container died 621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.046 247403 DEBUG nova.scheduler.client.report [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.235 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:01:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.236 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:01:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b9e9cb27a02801ed4c210861532af52d71dd2ebd2ef78e947e945f951e7c7c44-merged.mount: Deactivated successfully.
Jan 31 03:01:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 41 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Jan 31 03:01:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:07.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.812 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.852 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.853 247403 DEBUG nova.network.neutron [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:01:07 np0005603621 nova_compute[247399]: 2026-01-31 08:01:07.979 247403 INFO nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:01:08 np0005603621 podman[280941]: 2026-01-31 08:01:08.030435757 +0000 UTC m=+1.522157215 container remove 621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.072 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:01:08 np0005603621 systemd[1]: libpod-conmon-621ee3d72f8cf124f308826eec4646f00d27be4c7eb964e791e6cdb1e6b1ea83.scope: Deactivated successfully.
Jan 31 03:01:08 np0005603621 podman[280985]: 2026-01-31 08:01:08.128244172 +0000 UTC m=+0.021431835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:01:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:08.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:08 np0005603621 podman[280985]: 2026-01-31 08:01:08.289964186 +0000 UTC m=+0.183151829 container create c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.445 247403 DEBUG nova.policy [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46ffd64a348845fab6cdc53249353575', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '521dcd459f144f2bb32de93d50ae0391', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:01:08 np0005603621 systemd[1]: Started libpod-conmon-c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa.scope.
Jan 31 03:01:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d6cb45e003c10aa766d7e73310fc243b791f9716b89c176d665187a94e0206/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d6cb45e003c10aa766d7e73310fc243b791f9716b89c176d665187a94e0206/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d6cb45e003c10aa766d7e73310fc243b791f9716b89c176d665187a94e0206/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d6cb45e003c10aa766d7e73310fc243b791f9716b89c176d665187a94e0206/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:08 np0005603621 podman[280985]: 2026-01-31 08:01:08.615724047 +0000 UTC m=+0.508911690 container init c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:01:08 np0005603621 podman[280985]: 2026-01-31 08:01:08.62122093 +0000 UTC m=+0.514408573 container start c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:01:08 np0005603621 podman[280985]: 2026-01-31 08:01:08.674352751 +0000 UTC m=+0.567540454 container attach c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.712 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.713 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.714 247403 INFO nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Creating image(s)
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.740 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.766 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.802 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.808 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.860 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.861 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.862 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.862 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.887 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:01:08 np0005603621 nova_compute[247399]: 2026-01-31 08:01:08.890 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:01:09 np0005603621 nova_compute[247399]: 2026-01-31 08:01:09.267 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846454.267245, b4a73ec8-501b-454f-9cf7-4d4a09344db3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:01:09 np0005603621 nova_compute[247399]: 2026-01-31 08:01:09.268 247403 INFO nova.compute.manager [-] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] VM Stopped (Lifecycle Event)
Jan 31 03:01:09 np0005603621 nova_compute[247399]: 2026-01-31 08:01:09.303 247403 DEBUG nova.compute.manager [None req-c87440fd-447c-40e1-9daf-4b4c9bd4694c - - - - - -] [instance: b4a73ec8-501b-454f-9cf7-4d4a09344db3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]: {
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:    "0": [
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:        {
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "devices": [
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "/dev/loop3"
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            ],
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "lv_name": "ceph_lv0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "lv_size": "7511998464",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "name": "ceph_lv0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "tags": {
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.cluster_name": "ceph",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.crush_device_class": "",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.encrypted": "0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.osd_id": "0",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.type": "block",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:                "ceph.vdo": "0"
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            },
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "type": "block",
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:            "vg_name": "ceph_vg0"
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:        }
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]:    ]
Jan 31 03:01:09 np0005603621 charming_heyrovsky[281002]: }
Jan 31 03:01:09 np0005603621 systemd[1]: libpod-c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa.scope: Deactivated successfully.
Jan 31 03:01:09 np0005603621 podman[280985]: 2026-01-31 08:01:09.354411651 +0000 UTC m=+1.247599294 container died c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:01:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 41 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 852 B/s wr, 5 op/s
Jan 31 03:01:09 np0005603621 nova_compute[247399]: 2026-01-31 08:01:09.427 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:01:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b4d6cb45e003c10aa766d7e73310fc243b791f9716b89c176d665187a94e0206-merged.mount: Deactivated successfully.
Jan 31 03:01:09 np0005603621 nova_compute[247399]: 2026-01-31 08:01:09.958 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:01:10 np0005603621 nova_compute[247399]: 2026-01-31 08:01:10.028 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] resizing rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:01:10 np0005603621 podman[280985]: 2026-01-31 08:01:10.052132426 +0000 UTC m=+1.945320089 container remove c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:01:10 np0005603621 systemd[1]: libpod-conmon-c14f38059bcc0f8c35dcdeb22697a7d1719cfefcb61be15e1da8ae85c56b36fa.scope: Deactivated successfully.
Jan 31 03:01:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:10.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:10 np0005603621 podman[281313]: 2026-01-31 08:01:10.508441802 +0000 UTC m=+0.018236235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:01:10 np0005603621 podman[281313]: 2026-01-31 08:01:10.67408573 +0000 UTC m=+0.183880143 container create 9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:01:10 np0005603621 systemd[1]: Started libpod-conmon-9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418.scope.
Jan 31 03:01:10 np0005603621 nova_compute[247399]: 2026-01-31 08:01:10.793 247403 DEBUG nova.objects.instance [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'migration_context' on Instance uuid 96050b75-1cb6-4b1d-86a1-b86a21570ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:10 np0005603621 podman[281313]: 2026-01-31 08:01:10.987298427 +0000 UTC m=+0.497092860 container init 9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:01:10 np0005603621 podman[281313]: 2026-01-31 08:01:10.992840151 +0000 UTC m=+0.502634564 container start 9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:01:10 np0005603621 musing_turing[281347]: 167 167
Jan 31 03:01:10 np0005603621 systemd[1]: libpod-9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418.scope: Deactivated successfully.
Jan 31 03:01:11 np0005603621 podman[281313]: 2026-01-31 08:01:11.110032086 +0000 UTC m=+0.619826499 container attach 9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:01:11 np0005603621 podman[281313]: 2026-01-31 08:01:11.111052898 +0000 UTC m=+0.620847311 container died 9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:01:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dbca594dc68cf55725f32762ccc609ad244b1a6cd05c6237006d2f84d3b77eaf-merged.mount: Deactivated successfully.
Jan 31 03:01:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 41 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 938 B/s wr, 6 op/s
Jan 31 03:01:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:11.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:11 np0005603621 podman[281313]: 2026-01-31 08:01:11.825367285 +0000 UTC m=+1.335161698 container remove 9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_turing, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:01:11 np0005603621 systemd[1]: libpod-conmon-9aa81bd626701a9154fc133c845de50fce9b1827ee7e1ab1047a4e7dbd007418.scope: Deactivated successfully.
Jan 31 03:01:12 np0005603621 podman[281372]: 2026-01-31 08:01:11.921072103 +0000 UTC m=+0.020005779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:01:12 np0005603621 podman[281372]: 2026-01-31 08:01:12.17252835 +0000 UTC m=+0.271462006 container create b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:01:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:12 np0005603621 systemd[1]: Started libpod-conmon-b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98.scope.
Jan 31 03:01:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b5ccee55c6f7501ed3fa4820eaf8ac9690a65155154ec0e9afe32d10433a56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b5ccee55c6f7501ed3fa4820eaf8ac9690a65155154ec0e9afe32d10433a56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b5ccee55c6f7501ed3fa4820eaf8ac9690a65155154ec0e9afe32d10433a56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b5ccee55c6f7501ed3fa4820eaf8ac9690a65155154ec0e9afe32d10433a56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.813 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.975 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.975 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Ensure instance console log exists: /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.976 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.976 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.976 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:12 np0005603621 nova_compute[247399]: 2026-01-31 08:01:12.992 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:12 np0005603621 podman[281372]: 2026-01-31 08:01:12.993484719 +0000 UTC m=+1.092418425 container init b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:01:12 np0005603621 podman[281372]: 2026-01-31 08:01:12.999678064 +0000 UTC m=+1.098611730 container start b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:01:13 np0005603621 nova_compute[247399]: 2026-01-31 08:01:13.108 247403 WARNING nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 31 03:01:13 np0005603621 nova_compute[247399]: 2026-01-31 08:01:13.108 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 96050b75-1cb6-4b1d-86a1-b86a21570ab2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:01:13 np0005603621 nova_compute[247399]: 2026-01-31 08:01:13.109 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:13 np0005603621 podman[281372]: 2026-01-31 08:01:13.222379526 +0000 UTC m=+1.321313182 container attach b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:01:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 65 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 930 KiB/s wr, 24 op/s
Jan 31 03:01:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:13.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:13 np0005603621 nova_compute[247399]: 2026-01-31 08:01:13.808 247403 DEBUG nova.network.neutron [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Successfully created port: b9f742f2-2918-4ef6-af08-382773a64e5c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]: {
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:        "osd_id": 0,
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:        "type": "bluestore"
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]:    }
Jan 31 03:01:13 np0005603621 elastic_lederberg[281388]: }
Jan 31 03:01:13 np0005603621 systemd[1]: libpod-b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98.scope: Deactivated successfully.
Jan 31 03:01:13 np0005603621 podman[281372]: 2026-01-31 08:01:13.88127005 +0000 UTC m=+1.980203756 container died b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:01:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:01:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3984448942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:01:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:01:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3984448942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:01:14 np0005603621 nova_compute[247399]: 2026-01-31 08:01:14.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e4b5ccee55c6f7501ed3fa4820eaf8ac9690a65155154ec0e9afe32d10433a56-merged.mount: Deactivated successfully.
Jan 31 03:01:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 03:01:15 np0005603621 podman[281372]: 2026-01-31 08:01:15.149579424 +0000 UTC m=+3.248513090 container remove b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lederberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:01:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:01:15 np0005603621 systemd[1]: libpod-conmon-b454380ee8710d08a158c9574043938c3bd72e554c96c802de2bdb575a7c0c98.scope: Deactivated successfully.
Jan 31 03:01:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:01:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:01:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:01:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:15.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:01:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f0188789-f7d8-44fe-b7ef-6819a31d8229 does not exist
Jan 31 03:01:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4e33f3a9-b183-4420-913c-2fb8a40c9a1d does not exist
Jan 31 03:01:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d85a3bcc-7793-41e6-a1f7-217069a74126 does not exist
Jan 31 03:01:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:01:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:01:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:01:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:17.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:17 np0005603621 nova_compute[247399]: 2026-01-31 08:01:17.732 247403 DEBUG nova.network.neutron [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Successfully updated port: b9f742f2-2918-4ef6-af08-382773a64e5c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:01:17 np0005603621 nova_compute[247399]: 2026-01-31 08:01:17.771 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "refresh_cache-96050b75-1cb6-4b1d-86a1-b86a21570ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:01:17 np0005603621 nova_compute[247399]: 2026-01-31 08:01:17.771 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquired lock "refresh_cache-96050b75-1cb6-4b1d-86a1-b86a21570ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:01:17 np0005603621 nova_compute[247399]: 2026-01-31 08:01:17.772 247403 DEBUG nova.network.neutron [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:01:17 np0005603621 nova_compute[247399]: 2026-01-31 08:01:17.814 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.152 247403 DEBUG nova.compute.manager [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received event network-changed-b9f742f2-2918-4ef6-af08-382773a64e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.153 247403 DEBUG nova.compute.manager [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Refreshing instance network info cache due to event network-changed-b9f742f2-2918-4ef6-af08-382773a64e5c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.153 247403 DEBUG oslo_concurrency.lockutils [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-96050b75-1cb6-4b1d-86a1-b86a21570ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.276 247403 DEBUG nova.network.neutron [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:01:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.315 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.315 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.315 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.477 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:01:18 np0005603621 nova_compute[247399]: 2026-01-31 08:01:18.477 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:01:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:01:19 np0005603621 nova_compute[247399]: 2026-01-31 08:01:19.433 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:19.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.163 247403 DEBUG nova.network.neutron [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Updating instance_info_cache with network_info: [{"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.238 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Releasing lock "refresh_cache-96050b75-1cb6-4b1d-86a1-b86a21570ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.239 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance network_info: |[{"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.239 247403 DEBUG oslo_concurrency.lockutils [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-96050b75-1cb6-4b1d-86a1-b86a21570ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.239 247403 DEBUG nova.network.neutron [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Refreshing network info cache for port b9f742f2-2918-4ef6-af08-382773a64e5c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.242 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Start _get_guest_xml network_info=[{"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.246 247403 WARNING nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.257 247403 DEBUG nova.virt.libvirt.host [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.257 247403 DEBUG nova.virt.libvirt.host [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.265 247403 DEBUG nova.virt.libvirt.host [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.265 247403 DEBUG nova.virt.libvirt.host [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.266 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.267 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.267 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.267 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.268 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.268 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.268 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.268 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.268 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.269 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.269 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.269 247403 DEBUG nova.virt.hardware [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.271 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:01:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2140119966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.692 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.718 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:20 np0005603621 nova_compute[247399]: 2026-01-31 08:01:20.723 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:01:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173955284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.127 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.129 247403 DEBUG nova.virt.libvirt.vif [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-387817935',display_name='tempest-ImagesTestJSON-server-387817935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-387817935',id=50,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-haq3thk8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:08Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=96050b75-1cb6-4b1d-86a1-b86a21570ab2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.129 247403 DEBUG nova.network.os_vif_util [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.130 247403 DEBUG nova.network.os_vif_util [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.131 247403 DEBUG nova.objects.instance [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96050b75-1cb6-4b1d-86a1-b86a21570ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.200 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <uuid>96050b75-1cb6-4b1d-86a1-b86a21570ab2</uuid>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <name>instance-00000032</name>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesTestJSON-server-387817935</nova:name>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:01:20</nova:creationTime>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:user uuid="46ffd64a348845fab6cdc53249353575">tempest-ImagesTestJSON-1780438391-project-member</nova:user>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:project uuid="521dcd459f144f2bb32de93d50ae0391">tempest-ImagesTestJSON-1780438391</nova:project>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <nova:port uuid="b9f742f2-2918-4ef6-af08-382773a64e5c">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <entry name="serial">96050b75-1cb6-4b1d-86a1-b86a21570ab2</entry>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <entry name="uuid">96050b75-1cb6-4b1d-86a1-b86a21570ab2</entry>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk.config">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1d:cb:ae"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <target dev="tapb9f742f2-29"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/console.log" append="off"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:01:21 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:01:21 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:01:21 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:01:21 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.201 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Preparing to wait for external event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.202 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.202 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.202 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.203 247403 DEBUG nova.virt.libvirt.vif [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:01:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-387817935',display_name='tempest-ImagesTestJSON-server-387817935',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-387817935',id=50,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-haq3thk8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:01:08Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=96050b75-1cb6-4b1d-86a1-b86a21570ab2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.204 247403 DEBUG nova.network.os_vif_util [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.204 247403 DEBUG nova.network.os_vif_util [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.205 247403 DEBUG os_vif [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.206 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.206 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.207 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.209 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.209 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9f742f2-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.210 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9f742f2-29, col_values=(('external_ids', {'iface-id': 'b9f742f2-2918-4ef6-af08-382773a64e5c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:cb:ae', 'vm-uuid': '96050b75-1cb6-4b1d-86a1-b86a21570ab2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:21 np0005603621 NetworkManager[49013]: <info>  [1769846481.2123] manager: (tapb9f742f2-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.214 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.216 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.217 247403 INFO os_vif [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29')#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.325 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.325 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.326 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No VIF found with MAC fa:16:3e:1d:cb:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.326 247403 INFO nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Using config drive#033[00m
Jan 31 03:01:21 np0005603621 nova_compute[247399]: 2026-01-31 08:01:21.351 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:01:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:21.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.527 247403 INFO nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Creating config drive at /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/disk.config#033[00m
Jan 31 03:01:22 np0005603621 podman[281612]: 2026-01-31 08:01:22.529439268 +0000 UTC m=+0.077072214 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 31 03:01:22 np0005603621 podman[281611]: 2026-01-31 08:01:22.52949936 +0000 UTC m=+0.077254670 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.531 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8k_dg8bc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.649 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8k_dg8bc" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.676 247403 DEBUG nova.storage.rbd_utils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.681 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/disk.config 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.860 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.959 247403 DEBUG oslo_concurrency.processutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/disk.config 96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:22 np0005603621 nova_compute[247399]: 2026-01-31 08:01:22.960 247403 INFO nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Deleting local config drive /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2/disk.config because it was imported into RBD.#033[00m
Jan 31 03:01:23 np0005603621 kernel: tapb9f742f2-29: entered promiscuous mode
Jan 31 03:01:23 np0005603621 NetworkManager[49013]: <info>  [1769846483.0035] manager: (tapb9f742f2-29): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Jan 31 03:01:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:23Z|00113|binding|INFO|Claiming lport b9f742f2-2918-4ef6-af08-382773a64e5c for this chassis.
Jan 31 03:01:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:23Z|00114|binding|INFO|b9f742f2-2918-4ef6-af08-382773a64e5c: Claiming fa:16:3e:1d:cb:ae 10.100.0.4
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.004 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:23Z|00115|binding|INFO|Setting lport b9f742f2-2918-4ef6-af08-382773a64e5c ovn-installed in OVS
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:23 np0005603621 systemd-udevd[281707]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:01:23 np0005603621 systemd-machined[212769]: New machine qemu-22-instance-00000032.
Jan 31 03:01:23 np0005603621 systemd[1]: Started Virtual Machine qemu-22-instance-00000032.
Jan 31 03:01:23 np0005603621 NetworkManager[49013]: <info>  [1769846483.0391] device (tapb9f742f2-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:01:23 np0005603621 NetworkManager[49013]: <info>  [1769846483.0397] device (tapb9f742f2-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:01:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:23Z|00116|binding|INFO|Setting lport b9f742f2-2918-4ef6-af08-382773a64e5c up in Southbound
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.096 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:cb:ae 10.100.0.4'], port_security=['fa:16:3e:1d:cb:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '96050b75-1cb6-4b1d-86a1-b86a21570ab2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b9f742f2-2918-4ef6-af08-382773a64e5c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.098 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b9f742f2-2918-4ef6-af08-382773a64e5c in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 bound to our chassis#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.099 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24914779-babc-4c55-b38b-adf9bfc5c103#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.106 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b6afaaed-0234-49ab-874a-77791fa67203]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.107 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24914779-b1 in ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.109 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24914779-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.109 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c0c0de-8a3c-4f09-89b2-0843b506f99f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.110 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3285cf-2d39-4efb-8522-7c2b31aadbe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.120 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[11664f70-5fcc-4326-9e9c-51c6e9875e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.138 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd4c3469-d278-465f-8c03-863e60a798ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.167 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2bcf1298-419c-4e35-a9ab-78222db1c28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.171 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3e585aac-8d05-4c32-a5db-32f62ed28592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 NetworkManager[49013]: <info>  [1769846483.1732] manager: (tap24914779-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.199 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0958ae42-368c-4e38-b16b-d2c76ed32f84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.202 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a19f64ef-3bd4-462b-ad75-f1e5df9080df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 NetworkManager[49013]: <info>  [1769846483.2196] device (tap24914779-b0): carrier: link connected
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.224 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7469abb6-23d8-4003-af2d-3e1d4abcbd69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.239 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5816b8ff-a5e4-4385-9693-173e3960165b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578873, 'reachable_time': 43364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281740, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.252 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1415f9-cec7-49cf-b665-9333d1ef040a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:baf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578873, 'tstamp': 578873}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281741, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.264 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[de759385-8b2e-426e-bc77-811020ba0982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578873, 'reachable_time': 43364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281742, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.286 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2767ac4-8daa-408f-8728-ba072ed79e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.332 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[00cf1297-6fe2-4ad7-ab3f-f6a5162aa787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.335 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.335 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.336 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24914779-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:23 np0005603621 NetworkManager[49013]: <info>  [1769846483.3383] manager: (tap24914779-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 31 03:01:23 np0005603621 kernel: tap24914779-b0: entered promiscuous mode
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.339 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24914779-b0, col_values=(('external_ids', {'iface-id': '23cfbf86-f443-4dea-a9ae-1c6f9be9ee53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:23Z|00117|binding|INFO|Releasing lport 23cfbf86-f443-4dea-a9ae-1c6f9be9ee53 from this chassis (sb_readonly=0)
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.342 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:01:23 np0005603621 nova_compute[247399]: 2026-01-31 08:01:23.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.347 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4792c748-4955-4d09-8524-9edec7a656a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.348 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:01:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:23.349 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'env', 'PROCESS_TAG=haproxy-24914779-babc-4c55-b38b-adf9bfc5c103', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24914779-babc-4c55-b38b-adf9bfc5c103.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:01:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 31 03:01:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:23.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:23 np0005603621 podman[281775]: 2026-01-31 08:01:23.689097637 +0000 UTC m=+0.072102728 container create 7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:01:23 np0005603621 systemd[1]: Started libpod-conmon-7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439.scope.
Jan 31 03:01:23 np0005603621 podman[281775]: 2026-01-31 08:01:23.638422043 +0000 UTC m=+0.021426974 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:01:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:01:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27088e6ca1d516efd4592d74a1a5fcd17ecf8173e1728ec416cd88c43ebeec1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:01:23 np0005603621 podman[281775]: 2026-01-31 08:01:23.76744754 +0000 UTC m=+0.150452471 container init 7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:01:23 np0005603621 podman[281775]: 2026-01-31 08:01:23.772613102 +0000 UTC m=+0.155618033 container start 7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:01:23 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [NOTICE]   (281795) : New worker (281797) forked
Jan 31 03:01:23 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [NOTICE]   (281795) : Loading success.
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.022 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846484.02178, 96050b75-1cb6-4b1d-86a1-b86a21570ab2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.023 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] VM Started (Lifecycle Event)#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.237 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.241 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846484.021955, 96050b75-1cb6-4b1d-86a1-b86a21570ab2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.241 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:01:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.454 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.455 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.455 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.456 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.457 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.511 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.514 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.596 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:01:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936612949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:24 np0005603621 nova_compute[247399]: 2026-01-31 08:01:24.864 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.135 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.135 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.290 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.292 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4519MB free_disk=20.967525482177734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.292 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.293 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s rd, 885 KiB/s wr, 4 op/s
Jan 31 03:01:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:01:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:25.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.705 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 96050b75-1cb6-4b1d-86a1-b86a21570ab2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.705 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.706 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.747 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.833 247403 DEBUG nova.compute.manager [req-7be6ccd9-ccf1-45e4-af5e-0c8e14b7f31d req-733b4153-6cae-43a4-9b20-3fecfa331c38 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.834 247403 DEBUG oslo_concurrency.lockutils [req-7be6ccd9-ccf1-45e4-af5e-0c8e14b7f31d req-733b4153-6cae-43a4-9b20-3fecfa331c38 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.834 247403 DEBUG oslo_concurrency.lockutils [req-7be6ccd9-ccf1-45e4-af5e-0c8e14b7f31d req-733b4153-6cae-43a4-9b20-3fecfa331c38 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.835 247403 DEBUG oslo_concurrency.lockutils [req-7be6ccd9-ccf1-45e4-af5e-0c8e14b7f31d req-733b4153-6cae-43a4-9b20-3fecfa331c38 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.835 247403 DEBUG nova.compute.manager [req-7be6ccd9-ccf1-45e4-af5e-0c8e14b7f31d req-733b4153-6cae-43a4-9b20-3fecfa331c38 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Processing event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.836 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.838 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846485.838632, 96050b75-1cb6-4b1d-86a1-b86a21570ab2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.839 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.841 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.844 247403 INFO nova.virt.libvirt.driver [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance spawned successfully.#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.844 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.884 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.889 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.895 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.896 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.897 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.897 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.898 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.898 247403 DEBUG nova.virt.libvirt.driver [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:01:25 np0005603621 nova_compute[247399]: 2026-01-31 08:01:25.981 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.038 247403 DEBUG nova.network.neutron [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Updated VIF entry in instance network info cache for port b9f742f2-2918-4ef6-af08-382773a64e5c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.040 247403 DEBUG nova.network.neutron [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Updating instance_info_cache with network_info: [{"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.101 247403 INFO nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Took 17.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.101 247403 DEBUG nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.103 247403 DEBUG oslo_concurrency.lockutils [req-67d2b35c-b194-458b-a51b-12a9428bd29d req-a7e3a1bd-32bf-4fc1-a3aa-d65f67afbd1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-96050b75-1cb6-4b1d-86a1-b86a21570ab2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:01:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:26.147 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:01:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:26.148 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:01:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963563610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.223 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.229 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.238 247403 INFO nova.compute.manager [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Took 20.26 seconds to build instance.#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.293 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:01:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.303 247403 DEBUG oslo_concurrency.lockutils [None req-5289cd94-034a-4d67-b45e-17ba8accff93 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.304 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.304 247403 INFO nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.304 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.344 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:01:26 np0005603621 nova_compute[247399]: 2026-01-31 08:01:26.345 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:27 np0005603621 nova_compute[247399]: 2026-01-31 08:01:27.341 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:27 np0005603621 nova_compute[247399]: 2026-01-31 08:01:27.341 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:27 np0005603621 nova_compute[247399]: 2026-01-31 08:01:27.342 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 893 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 31 03:01:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:27.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:27 np0005603621 nova_compute[247399]: 2026-01-31 08:01:27.862 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.218 247403 DEBUG nova.compute.manager [req-8bf382ee-cbff-49c0-b96c-37c3a97d7738 req-c6575d0f-eb15-41ad-b4fa-0dfd8e672dd2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.218 247403 DEBUG oslo_concurrency.lockutils [req-8bf382ee-cbff-49c0-b96c-37c3a97d7738 req-c6575d0f-eb15-41ad-b4fa-0dfd8e672dd2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.218 247403 DEBUG oslo_concurrency.lockutils [req-8bf382ee-cbff-49c0-b96c-37c3a97d7738 req-c6575d0f-eb15-41ad-b4fa-0dfd8e672dd2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.218 247403 DEBUG oslo_concurrency.lockutils [req-8bf382ee-cbff-49c0-b96c-37c3a97d7738 req-c6575d0f-eb15-41ad-b4fa-0dfd8e672dd2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.219 247403 DEBUG nova.compute.manager [req-8bf382ee-cbff-49c0-b96c-37c3a97d7738 req-c6575d0f-eb15-41ad-b4fa-0dfd8e672dd2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] No waiting events found dispatching network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.219 247403 WARNING nova.compute.manager [req-8bf382ee-cbff-49c0-b96c-37c3a97d7738 req-c6575d0f-eb15-41ad-b4fa-0dfd8e672dd2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received unexpected event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c for instance with vm_state active and task_state None.#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.273 247403 DEBUG oslo_concurrency.lockutils [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.274 247403 DEBUG oslo_concurrency.lockutils [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.274 247403 DEBUG nova.compute.manager [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.281 247403 DEBUG nova.compute.manager [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.283 247403 DEBUG nova.objects.instance [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'flavor' on Instance uuid 96050b75-1cb6-4b1d-86a1-b86a21570ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:28 np0005603621 nova_compute[247399]: 2026-01-31 08:01:28.330 247403 DEBUG nova.virt.libvirt.driver [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:01:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 88 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 893 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 31 03:01:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:29.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:30.482 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:30.483 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:30.483 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:31 np0005603621 nova_compute[247399]: 2026-01-31 08:01:31.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 104 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 266 KiB/s wr, 81 op/s
Jan 31 03:01:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:01:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:01:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:32.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:32 np0005603621 nova_compute[247399]: 2026-01-31 08:01:32.864 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:33.150 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 134 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 03:01:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:33.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:01:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:34.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:01:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 134 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 03:01:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:35.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:36 np0005603621 nova_compute[247399]: 2026-01-31 08:01:36.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 138 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 108 op/s
Jan 31 03:01:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:37 np0005603621 nova_compute[247399]: 2026-01-31 08:01:37.866 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:37Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:cb:ae 10.100.0.4
Jan 31 03:01:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:37Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:cb:ae 10.100.0.4
Jan 31 03:01:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:38 np0005603621 nova_compute[247399]: 2026-01-31 08:01:38.370 247403 DEBUG nova.virt.libvirt.driver [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:01:38
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'volumes', '.rgw.root', 'vms']
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:01:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:01:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 138 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 31 03:01:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:01:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:39.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:01:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:01:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:40.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603621 kernel: tapb9f742f2-29 (unregistering): left promiscuous mode
Jan 31 03:01:41 np0005603621 NetworkManager[49013]: <info>  [1769846501.3303] device (tapb9f742f2-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.330 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:41Z|00118|binding|INFO|Releasing lport b9f742f2-2918-4ef6-af08-382773a64e5c from this chassis (sb_readonly=0)
Jan 31 03:01:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:41Z|00119|binding|INFO|Setting lport b9f742f2-2918-4ef6-af08-382773a64e5c down in Southbound
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.335 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:01:41Z|00120|binding|INFO|Removing iface tapb9f742f2-29 ovn-installed in OVS
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.343 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:cb:ae 10.100.0.4'], port_security=['fa:16:3e:1d:cb:ae 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '96050b75-1cb6-4b1d-86a1-b86a21570ab2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b9f742f2-2918-4ef6-af08-382773a64e5c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.345 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b9f742f2-2918-4ef6-af08-382773a64e5c in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 unbound from our chassis#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.346 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24914779-babc-4c55-b38b-adf9bfc5c103, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.348 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[056eb133-8d88-41d3-b072-8c0ad7b12bb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.348 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace which is not needed anymore#033[00m
Jan 31 03:01:41 np0005603621 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 31 03:01:41 np0005603621 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000032.scope: Consumed 13.385s CPU time.
Jan 31 03:01:41 np0005603621 systemd-machined[212769]: Machine qemu-22-instance-00000032 terminated.
Jan 31 03:01:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 144 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 77 op/s
Jan 31 03:01:41 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [NOTICE]   (281795) : haproxy version is 2.8.14-c23fe91
Jan 31 03:01:41 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [NOTICE]   (281795) : path to executable is /usr/sbin/haproxy
Jan 31 03:01:41 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [WARNING]  (281795) : Exiting Master process...
Jan 31 03:01:41 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [ALERT]    (281795) : Current worker (281797) exited with code 143 (Terminated)
Jan 31 03:01:41 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[281790]: [WARNING]  (281795) : All workers exited. Exiting... (0)
Jan 31 03:01:41 np0005603621 systemd[1]: libpod-7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439.scope: Deactivated successfully.
Jan 31 03:01:41 np0005603621 podman[281974]: 2026-01-31 08:01:41.461271941 +0000 UTC m=+0.043971065 container died 7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:01:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439-userdata-shm.mount: Deactivated successfully.
Jan 31 03:01:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f27088e6ca1d516efd4592d74a1a5fcd17ecf8173e1728ec416cd88c43ebeec1-merged.mount: Deactivated successfully.
Jan 31 03:01:41 np0005603621 podman[281974]: 2026-01-31 08:01:41.496178928 +0000 UTC m=+0.078878042 container cleanup 7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:01:41 np0005603621 systemd[1]: libpod-conmon-7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439.scope: Deactivated successfully.
Jan 31 03:01:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:41.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:41 np0005603621 podman[282003]: 2026-01-31 08:01:41.562635847 +0000 UTC m=+0.051034325 container remove 7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.567 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fa67f64e-4526-4085-9acc-bdb9a5d0c905]: (4, ('Sat Jan 31 08:01:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439)\n7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439\nSat Jan 31 08:01:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439)\n7154547bfcdb931b37465685de56964b5f4f5905270e0ea09d84325fe7520439\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.569 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbd5732-e4bf-4931-a68e-7c1ccdba9c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.570 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.572 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603621 kernel: tap24914779-b0: left promiscuous mode
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.575 247403 INFO nova.virt.libvirt.driver [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.579 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.581 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1697607c-f3e5-49fa-a547-16522838d4cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.584 247403 INFO nova.virt.libvirt.driver [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance destroyed successfully.#033[00m
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.584 247403 DEBUG nova.objects.instance [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'numa_topology' on Instance uuid 96050b75-1cb6-4b1d-86a1-b86a21570ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.598 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[21ce9fc3-9dec-4dc0-96af-760f182836ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.599 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0bb49a-00af-4e36-adc7-5f7c897b1c8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.609 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[69982abb-1ca3-4b6b-9326-8b9e5ffe70de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578867, 'reachable_time': 32170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282034, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 systemd[1]: run-netns-ovnmeta\x2d24914779\x2dbabc\x2d4c55\x2db38b\x2dadf9bfc5c103.mount: Deactivated successfully.
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.614 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:01:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:01:41.614 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5501d0-07c7-488c-9e88-8b10060b420f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.615 247403 DEBUG nova.compute.manager [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:41 np0005603621 nova_compute[247399]: 2026-01-31 08:01:41.695 247403 DEBUG oslo_concurrency.lockutils [None req-bf2462ff-d03b-4651-a18b-a18926d4ae72 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:42 np0005603621 nova_compute[247399]: 2026-01-31 08:01:42.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 167 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 471 KiB/s rd, 3.7 MiB/s wr, 85 op/s
Jan 31 03:01:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:43.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:45 np0005603621 nova_compute[247399]: 2026-01-31 08:01:45.233 247403 DEBUG nova.compute.manager [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:45 np0005603621 nova_compute[247399]: 2026-01-31 08:01:45.357 247403 INFO nova.compute.manager [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] instance snapshotting#033[00m
Jan 31 03:01:45 np0005603621 nova_compute[247399]: 2026-01-31 08:01:45.358 247403 WARNING nova.compute.manager [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Jan 31 03:01:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 167 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 03:01:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:45.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:45 np0005603621 nova_compute[247399]: 2026-01-31 08:01:45.819 247403 INFO nova.virt.libvirt.driver [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Beginning cold snapshot process#033[00m
Jan 31 03:01:46 np0005603621 nova_compute[247399]: 2026-01-31 08:01:46.039 247403 DEBUG nova.virt.libvirt.imagebackend [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:01:46 np0005603621 nova_compute[247399]: 2026-01-31 08:01:46.220 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:46.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:46 np0005603621 nova_compute[247399]: 2026-01-31 08:01:46.359 247403 DEBUG nova.storage.rbd_utils [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(4d2e5ee9b9a04692bd297aa7c6996885) on rbd image(96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:01:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 31 03:01:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 31 03:01:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 31 03:01:46 np0005603621 nova_compute[247399]: 2026-01-31 08:01:46.640 247403 DEBUG nova.storage.rbd_utils [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] cloning vms/96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk@4d2e5ee9b9a04692bd297aa7c6996885 to images/9516f569-48fb-4a10-80ea-c3a613366509 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:01:46 np0005603621 nova_compute[247399]: 2026-01-31 08:01:46.757 247403 DEBUG nova.storage.rbd_utils [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] flattening images/9516f569-48fb-4a10-80ea-c3a613366509 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:01:47 np0005603621 nova_compute[247399]: 2026-01-31 08:01:47.150 247403 DEBUG nova.storage.rbd_utils [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] removing snapshot(4d2e5ee9b9a04692bd297aa7c6996885) on rbd image(96050b75-1cb6-4b1d-86a1-b86a21570ab2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:01:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 201 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.5 MiB/s wr, 192 op/s
Jan 31 03:01:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 31 03:01:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:47.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 31 03:01:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 31 03:01:47 np0005603621 nova_compute[247399]: 2026-01-31 08:01:47.638 247403 DEBUG nova.storage.rbd_utils [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(snap) on rbd image(9516f569-48fb-4a10-80ea-c3a613366509) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:01:47 np0005603621 nova_compute[247399]: 2026-01-31 08:01:47.869 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:48.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 31 03:01:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 31 03:01:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031654220174948816 of space, bias 1.0, pg target 0.9496266052484645 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003093265808207705 of space, bias 1.0, pg target 0.9279797424623115 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:01:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:01:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 201 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 3.9 MiB/s wr, 206 op/s
Jan 31 03:01:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:49.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:50.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.137 247403 DEBUG nova.compute.manager [req-bd8180af-3851-4a25-bce0-278156d05e19 req-0f2ccd47-34ad-4931-acc2-dde281c43429 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received event network-vif-unplugged-b9f742f2-2918-4ef6-af08-382773a64e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.137 247403 DEBUG oslo_concurrency.lockutils [req-bd8180af-3851-4a25-bce0-278156d05e19 req-0f2ccd47-34ad-4931-acc2-dde281c43429 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.137 247403 DEBUG oslo_concurrency.lockutils [req-bd8180af-3851-4a25-bce0-278156d05e19 req-0f2ccd47-34ad-4931-acc2-dde281c43429 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.138 247403 DEBUG oslo_concurrency.lockutils [req-bd8180af-3851-4a25-bce0-278156d05e19 req-0f2ccd47-34ad-4931-acc2-dde281c43429 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.138 247403 DEBUG nova.compute.manager [req-bd8180af-3851-4a25-bce0-278156d05e19 req-0f2ccd47-34ad-4931-acc2-dde281c43429 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] No waiting events found dispatching network-vif-unplugged-b9f742f2-2918-4ef6-af08-382773a64e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.138 247403 WARNING nova.compute.manager [req-bd8180af-3851-4a25-bce0-278156d05e19 req-0f2ccd47-34ad-4931-acc2-dde281c43429 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received unexpected event network-vif-unplugged-b9f742f2-2918-4ef6-af08-382773a64e5c for instance with vm_state stopped and task_state image_uploading.#033[00m
Jan 31 03:01:51 np0005603621 nova_compute[247399]: 2026-01-31 08:01:51.222 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 237 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 7.2 MiB/s wr, 273 op/s
Jan 31 03:01:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:51.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:52.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:52 np0005603621 nova_compute[247399]: 2026-01-31 08:01:52.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 246 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 258 op/s
Jan 31 03:01:53 np0005603621 podman[282181]: 2026-01-31 08:01:53.492437426 +0000 UTC m=+0.047964369 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 03:01:53 np0005603621 podman[282182]: 2026-01-31 08:01:53.51260338 +0000 UTC m=+0.067845594 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:01:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:54 np0005603621 nova_compute[247399]: 2026-01-31 08:01:54.193 247403 DEBUG nova.compute.manager [req-0176ddf7-eae1-43ff-9e01-9e831a7f0e14 req-46cebf59-fa33-42c4-9501-8b73fedabde7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:01:54 np0005603621 nova_compute[247399]: 2026-01-31 08:01:54.193 247403 DEBUG oslo_concurrency.lockutils [req-0176ddf7-eae1-43ff-9e01-9e831a7f0e14 req-46cebf59-fa33-42c4-9501-8b73fedabde7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:01:54 np0005603621 nova_compute[247399]: 2026-01-31 08:01:54.194 247403 DEBUG oslo_concurrency.lockutils [req-0176ddf7-eae1-43ff-9e01-9e831a7f0e14 req-46cebf59-fa33-42c4-9501-8b73fedabde7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:01:54 np0005603621 nova_compute[247399]: 2026-01-31 08:01:54.194 247403 DEBUG oslo_concurrency.lockutils [req-0176ddf7-eae1-43ff-9e01-9e831a7f0e14 req-46cebf59-fa33-42c4-9501-8b73fedabde7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:01:54 np0005603621 nova_compute[247399]: 2026-01-31 08:01:54.194 247403 DEBUG nova.compute.manager [req-0176ddf7-eae1-43ff-9e01-9e831a7f0e14 req-46cebf59-fa33-42c4-9501-8b73fedabde7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] No waiting events found dispatching network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:01:54 np0005603621 nova_compute[247399]: 2026-01-31 08:01:54.194 247403 WARNING nova.compute.manager [req-0176ddf7-eae1-43ff-9e01-9e831a7f0e14 req-46cebf59-fa33-42c4-9501-8b73fedabde7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received unexpected event network-vif-plugged-b9f742f2-2918-4ef6-af08-382773a64e5c for instance with vm_state stopped and task_state image_uploading.#033[00m
Jan 31 03:01:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:54.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:01:55 np0005603621 nova_compute[247399]: 2026-01-31 08:01:55.384 247403 INFO nova.virt.libvirt.driver [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Snapshot image upload complete#033[00m
Jan 31 03:01:55 np0005603621 nova_compute[247399]: 2026-01-31 08:01:55.385 247403 INFO nova.compute.manager [None req-3ac5f09e-ee4a-41cd-aaec-c36f5317ea12 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Took 10.03 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:01:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 246 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.9 MiB/s wr, 78 op/s
Jan 31 03:01:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:56 np0005603621 nova_compute[247399]: 2026-01-31 08:01:56.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:56 np0005603621 nova_compute[247399]: 2026-01-31 08:01:56.576 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846501.5749168, 96050b75-1cb6-4b1d-86a1-b86a21570ab2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:01:56 np0005603621 nova_compute[247399]: 2026-01-31 08:01:56.576 247403 INFO nova.compute.manager [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:01:56 np0005603621 nova_compute[247399]: 2026-01-31 08:01:56.605 247403 DEBUG nova.compute.manager [None req-97f6299a-6009-43d4-8713-9e11399f6c1a - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:01:56 np0005603621 nova_compute[247399]: 2026-01-31 08:01:56.609 247403 DEBUG nova.compute.manager [None req-97f6299a-6009-43d4-8713-9e11399f6c1a - - - - - -] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:01:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:01:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 31 03:01:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 31 03:01:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 31 03:01:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 260 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 90 op/s
Jan 31 03:01:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:57 np0005603621 nova_compute[247399]: 2026-01-31 08:01:57.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:01:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 31 03:01:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:01:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:01:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:01:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 31 03:01:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 31 03:01:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 260 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 31 03:01:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:01:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:01:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:01:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:00.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:02:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 18K writes, 74K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 5674 syncs, 3.32 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6157 writes, 22K keys, 6157 commit groups, 1.0 writes per commit group, ingest: 23.19 MB, 0.04 MB/s#012Interval WAL: 6158 writes, 2310 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:02:01 np0005603621 nova_compute[247399]: 2026-01-31 08:02:01.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 236 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 215 KiB/s rd, 3.1 MiB/s wr, 105 op/s
Jan 31 03:02:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.099 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.100 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.100 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.100 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.101 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.102 247403 INFO nova.compute.manager [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Terminating instance#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.103 247403 DEBUG nova.compute.manager [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.109 247403 INFO nova.virt.libvirt.driver [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Instance destroyed successfully.#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.109 247403 DEBUG nova.objects.instance [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'resources' on Instance uuid 96050b75-1cb6-4b1d-86a1-b86a21570ab2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.141 247403 DEBUG nova.virt.libvirt.vif [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:01:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-387817935',display_name='tempest-ImagesTestJSON-server-387817935',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-387817935',id=50,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:01:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-haq3thk8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:01:55Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=96050b75-1cb6-4b1d-86a1-b86a21570ab2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.142 247403 DEBUG nova.network.os_vif_util [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "b9f742f2-2918-4ef6-af08-382773a64e5c", "address": "fa:16:3e:1d:cb:ae", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9f742f2-29", "ovs_interfaceid": "b9f742f2-2918-4ef6-af08-382773a64e5c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.143 247403 DEBUG nova.network.os_vif_util [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.143 247403 DEBUG os_vif [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.146 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.146 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9f742f2-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.147 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.149 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.152 247403 INFO os_vif [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:cb:ae,bridge_name='br-int',has_traffic_filtering=True,id=b9f742f2-2918-4ef6-af08-382773a64e5c,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9f742f2-29')#033[00m
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.516796) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846522516828, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2537, "num_deletes": 516, "total_data_size": 3855676, "memory_usage": 3907528, "flush_reason": "Manual Compaction"}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846522560042, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3794221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29228, "largest_seqno": 31764, "table_properties": {"data_size": 3783169, "index_size": 6714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 26079, "raw_average_key_size": 20, "raw_value_size": 3758898, "raw_average_value_size": 2887, "num_data_blocks": 289, "num_entries": 1302, "num_filter_entries": 1302, "num_deletions": 516, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846327, "oldest_key_time": 1769846327, "file_creation_time": 1769846522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 43296 microseconds, and 5456 cpu microseconds.
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.560088) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3794221 bytes OK
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.560106) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.567730) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.567811) EVENT_LOG_v1 {"time_micros": 1769846522567803, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.567832) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3844205, prev total WAL file size 3844205, number of live WAL files 2.
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.568554) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3705KB)], [65(9013KB)]
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846522568611, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13024290, "oldest_snapshot_seqno": -1}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5751 keys, 10745376 bytes, temperature: kUnknown
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846522681060, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10745376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10704772, "index_size": 25134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 147833, "raw_average_key_size": 25, "raw_value_size": 10599419, "raw_average_value_size": 1843, "num_data_blocks": 1012, "num_entries": 5751, "num_filter_entries": 5751, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846522, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.681380) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10745376 bytes
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.683301) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.7 rd, 95.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.8 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 6800, records dropped: 1049 output_compression: NoCompression
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.683331) EVENT_LOG_v1 {"time_micros": 1769846522683320, "job": 36, "event": "compaction_finished", "compaction_time_micros": 112524, "compaction_time_cpu_micros": 18179, "output_level": 6, "num_output_files": 1, "total_output_size": 10745376, "num_input_records": 6800, "num_output_records": 5751, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846522684074, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846522685161, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.568446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.685280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.685285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.685287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.685289) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:02:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:02:02.685291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.745 247403 INFO nova.virt.libvirt.driver [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Deleting instance files /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2_del#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.746 247403 INFO nova.virt.libvirt.driver [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Deletion of /var/lib/nova/instances/96050b75-1cb6-4b1d-86a1-b86a21570ab2_del complete#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.901 247403 INFO nova.compute.manager [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.901 247403 DEBUG oslo.service.loopingcall [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.902 247403 DEBUG nova.compute.manager [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:02:02 np0005603621 nova_compute[247399]: 2026-01-31 08:02:02.902 247403 DEBUG nova.network.neutron [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:02:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 171 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 562 KiB/s rd, 3.2 MiB/s wr, 149 op/s
Jan 31 03:02:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:03.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.296 247403 DEBUG nova.network.neutron [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.334 247403 INFO nova.compute.manager [-] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Took 1.43 seconds to deallocate network for instance.#033[00m
Jan 31 03:02:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:02:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.436 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.437 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.467 247403 DEBUG nova.compute.manager [req-ac7ff2f2-a656-4e79-a0d9-33471feae237 req-d9c1d7f9-f523-41f6-ae30-bafbaa8c191b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 96050b75-1cb6-4b1d-86a1-b86a21570ab2] Received event network-vif-deleted-b9f742f2-2918-4ef6-af08-382773a64e5c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.525 247403 DEBUG oslo_concurrency.processutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595040429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.957 247403 DEBUG oslo_concurrency.processutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:02:04 np0005603621 nova_compute[247399]: 2026-01-31 08:02:04.961 247403 DEBUG nova.compute.provider_tree [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:02:05 np0005603621 nova_compute[247399]: 2026-01-31 08:02:05.053 247403 DEBUG nova.scheduler.client.report [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:02:05 np0005603621 nova_compute[247399]: 2026-01-31 08:02:05.145 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:05 np0005603621 nova_compute[247399]: 2026-01-31 08:02:05.216 247403 INFO nova.scheduler.client.report [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Deleted allocations for instance 96050b75-1cb6-4b1d-86a1-b86a21570ab2
Jan 31 03:02:05 np0005603621 nova_compute[247399]: 2026-01-31 08:02:05.386 247403 DEBUG oslo_concurrency.lockutils [None req-4c95990d-7ed3-40a1-8d35-b76ce3fe93bd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "96050b75-1cb6-4b1d-86a1-b86a21570ab2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 142 MiB data, 613 MiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 2.8 MiB/s wr, 152 op/s
Jan 31 03:02:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:05.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.355 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.356 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.765 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.949 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.949 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.955 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:02:06 np0005603621 nova_compute[247399]: 2026-01-31 08:02:06.955 247403 INFO nova.compute.claims [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.122 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 31 03:02:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 31 03:02:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 31 03:02:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 121 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 1.4 MiB/s wr, 136 op/s
Jan 31 03:02:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78346136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.547 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.555 247403 DEBUG nova.compute.provider_tree [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:02:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:07.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.604 247403 DEBUG nova.scheduler.client.report [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.651 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.651 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.768 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.768 247403 DEBUG nova.network.neutron [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.843 247403 INFO nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:07 np0005603621 nova_compute[247399]: 2026-01-31 08:02:07.934 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.249 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.250 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.251 247403 INFO nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Creating image(s)
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.277 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.306 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.515 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.519 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.540 247403 DEBUG nova.policy [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46ffd64a348845fab6cdc53249353575', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '521dcd459f144f2bb32de93d50ae0391', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.572 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.573 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.574 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.574 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.599 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:08 np0005603621 nova_compute[247399]: 2026-01-31 08:02:08.602 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e0363e14-d6b9-4bed-a555-4466ddb7790d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:02:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 121 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 435 KiB/s rd, 1.2 MiB/s wr, 122 op/s
Jan 31 03:02:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:09 np0005603621 nova_compute[247399]: 2026-01-31 08:02:09.979 247403 DEBUG nova.network.neutron [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Successfully created port: 63e84446-ba0d-492d-b5be-c6cb01818f14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.180 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e0363e14-d6b9-4bed-a555-4466ddb7790d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.256 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] resizing rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:02:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.441 247403 DEBUG nova.objects.instance [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'migration_context' on Instance uuid e0363e14-d6b9-4bed-a555-4466ddb7790d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.463 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.463 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Ensure instance console log exists: /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.464 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.464 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:10 np0005603621 nova_compute[247399]: 2026-01-31 08:02:10.465 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 121 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 74 KiB/s wr, 66 op/s
Jan 31 03:02:11 np0005603621 nova_compute[247399]: 2026-01-31 08:02:11.558 247403 DEBUG nova.network.neutron [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Successfully updated port: 63e84446-ba0d-492d-b5be-c6cb01818f14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:02:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:11.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:11 np0005603621 nova_compute[247399]: 2026-01-31 08:02:11.593 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:02:11 np0005603621 nova_compute[247399]: 2026-01-31 08:02:11.594 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquired lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:02:11 np0005603621 nova_compute[247399]: 2026-01-31 08:02:11.594 247403 DEBUG nova.network.neutron [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:02:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 03:02:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:11.674 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:02:11 np0005603621 nova_compute[247399]: 2026-01-31 08:02:11.675 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:11.676 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:02:11 np0005603621 nova_compute[247399]: 2026-01-31 08:02:11.927 247403 DEBUG nova.network.neutron [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:02:12 np0005603621 nova_compute[247399]: 2026-01-31 08:02:12.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:12 np0005603621 nova_compute[247399]: 2026-01-31 08:02:12.879 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 116 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 31 03:02:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:13.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:14.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:02:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339512345' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:02:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:02:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3339512345' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.435 247403 DEBUG nova.compute.manager [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received event network-changed-63e84446-ba0d-492d-b5be-c6cb01818f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.436 247403 DEBUG nova.compute.manager [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Refreshing instance network info cache due to event network-changed-63e84446-ba0d-492d-b5be-c6cb01818f14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.436 247403 DEBUG oslo_concurrency.lockutils [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:02:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:14.678 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.804 247403 DEBUG nova.network.neutron [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Updating instance_info_cache with network_info: [{"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.849 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Releasing lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.850 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Instance network_info: |[{"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.850 247403 DEBUG oslo_concurrency.lockutils [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.850 247403 DEBUG nova.network.neutron [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Refreshing network info cache for port 63e84446-ba0d-492d-b5be-c6cb01818f14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.853 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Start _get_guest_xml network_info=[{"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.856 247403 WARNING nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.862 247403 DEBUG nova.virt.libvirt.host [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.862 247403 DEBUG nova.virt.libvirt.host [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.866 247403 DEBUG nova.virt.libvirt.host [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.866 247403 DEBUG nova.virt.libvirt.host [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.867 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.867 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.868 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.868 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.868 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.868 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.869 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.869 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.869 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.869 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.869 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.869 247403 DEBUG nova.virt.hardware [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:02:14 np0005603621 nova_compute[247399]: 2026-01-31 08:02:14.872 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885924460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.412 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.434 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 88 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.437 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:15.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241894897' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.861 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.863 247403 DEBUG nova.virt.libvirt.vif [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-424778968',display_name='tempest-ImagesTestJSON-server-424778968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-424778968',id=52,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-ps9bcnlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:08Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=e0363e14-d6b9-4bed-a555-4466ddb7790d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.864 247403 DEBUG nova.network.os_vif_util [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.865 247403 DEBUG nova.network.os_vif_util [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.866 247403 DEBUG nova.objects.instance [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0363e14-d6b9-4bed-a555-4466ddb7790d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.893 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <uuid>e0363e14-d6b9-4bed-a555-4466ddb7790d</uuid>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <name>instance-00000034</name>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesTestJSON-server-424778968</nova:name>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:02:14</nova:creationTime>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:user uuid="46ffd64a348845fab6cdc53249353575">tempest-ImagesTestJSON-1780438391-project-member</nova:user>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:project uuid="521dcd459f144f2bb32de93d50ae0391">tempest-ImagesTestJSON-1780438391</nova:project>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <nova:port uuid="63e84446-ba0d-492d-b5be-c6cb01818f14">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <entry name="serial">e0363e14-d6b9-4bed-a555-4466ddb7790d</entry>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <entry name="uuid">e0363e14-d6b9-4bed-a555-4466ddb7790d</entry>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0363e14-d6b9-4bed-a555-4466ddb7790d_disk">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e0363e14-d6b9-4bed-a555-4466ddb7790d_disk.config">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:b0:27:8f"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <target dev="tap63e84446-ba"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/console.log" append="off"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:02:15 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:02:15 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:02:15 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:02:15 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.894 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Preparing to wait for external event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.895 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.895 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.895 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.896 247403 DEBUG nova.virt.libvirt.vif [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-424778968',display_name='tempest-ImagesTestJSON-server-424778968',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-424778968',id=52,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-ps9bcnlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:08Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=e0363e14-d6b9-4bed-a555-4466ddb7790d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.897 247403 DEBUG nova.network.os_vif_util [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.897 247403 DEBUG nova.network.os_vif_util [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.898 247403 DEBUG os_vif [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.898 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.899 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.899 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.901 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.902 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63e84446-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.902 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63e84446-ba, col_values=(('external_ids', {'iface-id': '63e84446-ba0d-492d-b5be-c6cb01818f14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:27:8f', 'vm-uuid': 'e0363e14-d6b9-4bed-a555-4466ddb7790d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.903 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:15 np0005603621 NetworkManager[49013]: <info>  [1769846535.9046] manager: (tap63e84446-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.906 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:15 np0005603621 nova_compute[247399]: 2026-01-31 08:02:15.909 247403 INFO os_vif [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba')#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.018 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.019 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.019 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No VIF found with MAC fa:16:3e:b0:27:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.019 247403 INFO nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Using config drive#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.045 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:16.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.714 247403 INFO nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Creating config drive at /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/disk.config#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.719 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6eu6jl3y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.843 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6eu6jl3y" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.873 247403 DEBUG nova.storage.rbd_utils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e0363e14-d6b9-4bed-a555-4466ddb7790d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.877 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/disk.config e0363e14-d6b9-4bed-a555-4466ddb7790d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:02:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2df727d9-a675-4c4f-a3ca-17620e856c9e does not exist
Jan 31 03:02:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c3d4a2ec-410a-43a1-b2c0-50168af63ea6 does not exist
Jan 31 03:02:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev af7c6fdf-b092-4231-bced-46fda897e69f does not exist
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:02:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.949 247403 DEBUG nova.network.neutron [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Updated VIF entry in instance network info cache for port 63e84446-ba0d-492d-b5be-c6cb01818f14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.950 247403 DEBUG nova.network.neutron [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Updating instance_info_cache with network_info: [{"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:16 np0005603621 nova_compute[247399]: 2026-01-31 08:02:16.969 247403 DEBUG oslo_concurrency.lockutils [req-22e49c82-8d07-4c9c-9420-cf64b50f6957 req-4ab4b839-62c5-43bd-84f9-1c40490a6f47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:02:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 88 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 03:02:17 np0005603621 podman[282908]: 2026-01-31 08:02:17.452049537 +0000 UTC m=+0.087899085 container create 758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:02:17 np0005603621 podman[282908]: 2026-01-31 08:02:17.385734522 +0000 UTC m=+0.021584090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:02:17 np0005603621 systemd[1]: Started libpod-conmon-758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505.scope.
Jan 31 03:02:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:17 np0005603621 podman[282908]: 2026-01-31 08:02:17.603466587 +0000 UTC m=+0.239316165 container init 758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:02:17 np0005603621 podman[282908]: 2026-01-31 08:02:17.609640321 +0000 UTC m=+0.245489869 container start 758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:02:17 np0005603621 cranky_wilbur[282928]: 167 167
Jan 31 03:02:17 np0005603621 systemd[1]: libpod-758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505.scope: Deactivated successfully.
Jan 31 03:02:17 np0005603621 podman[282908]: 2026-01-31 08:02:17.664435344 +0000 UTC m=+0.300284922 container attach 758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:02:17 np0005603621 podman[282908]: 2026-01-31 08:02:17.665480027 +0000 UTC m=+0.301329575 container died 758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:02:17 np0005603621 nova_compute[247399]: 2026-01-31 08:02:17.881 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:02:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:02:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:02:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ff3ff0066f86839c6f066b1165c0cc17c7424285854542268bbc91a0881b821d-merged.mount: Deactivated successfully.
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.208 247403 DEBUG oslo_concurrency.processutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/disk.config e0363e14-d6b9-4bed-a555-4466ddb7790d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.208 247403 INFO nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Deleting local config drive /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d/disk.config because it was imported into RBD.#033[00m
Jan 31 03:02:18 np0005603621 kernel: tap63e84446-ba: entered promiscuous mode
Jan 31 03:02:18 np0005603621 NetworkManager[49013]: <info>  [1769846538.2482] manager: (tap63e84446-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Jan 31 03:02:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:18Z|00121|binding|INFO|Claiming lport 63e84446-ba0d-492d-b5be-c6cb01818f14 for this chassis.
Jan 31 03:02:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:18Z|00122|binding|INFO|63e84446-ba0d-492d-b5be-c6cb01818f14: Claiming fa:16:3e:b0:27:8f 10.100.0.13
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:18Z|00123|binding|INFO|Setting lport 63e84446-ba0d-492d-b5be-c6cb01818f14 ovn-installed in OVS
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.255 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.256 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.258 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:18Z|00124|binding|INFO|Setting lport 63e84446-ba0d-492d-b5be-c6cb01818f14 up in Southbound
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.262 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:27:8f 10.100.0.13'], port_security=['fa:16:3e:b0:27:8f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e0363e14-d6b9-4bed-a555-4466ddb7790d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=63e84446-ba0d-492d-b5be-c6cb01818f14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.263 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 63e84446-ba0d-492d-b5be-c6cb01818f14 in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 bound to our chassis#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.264 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24914779-babc-4c55-b38b-adf9bfc5c103#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.274 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[15359184-cc8d-47e8-9e6c-2e213f78a503]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.275 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24914779-b1 in ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.277 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24914779-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.277 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a6d53f-60ad-4e24-801e-7e893d020029]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 systemd-machined[212769]: New machine qemu-23-instance-00000034.
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.278 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4226ede7-92d4-459f-8f78-2b9e8c56c062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 systemd-udevd[282997]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:18 np0005603621 systemd[1]: Started Virtual Machine qemu-23-instance-00000034.
Jan 31 03:02:18 np0005603621 NetworkManager[49013]: <info>  [1769846538.2964] device (tap63e84446-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:02:18 np0005603621 NetworkManager[49013]: <info>  [1769846538.2972] device (tap63e84446-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.294 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[21168781-ca7a-418d-8a38-c9bd799cb5c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.317 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39ec58f2-8a14-4511-88f1-be4ef4adaf0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.340 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[023bd8b5-74c8-413f-834f-083e469fabaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.344 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[823f0a63-d55d-4658-8a71-542bc8ac472e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 NetworkManager[49013]: <info>  [1769846538.3461] manager: (tap24914779-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Jan 31 03:02:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:18 np0005603621 podman[282908]: 2026-01-31 08:02:18.367538989 +0000 UTC m=+1.003388537 container remove 758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wilbur, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.373 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4deb89-7fd1-4253-affb-9b93485ff82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.379 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[676a4f54-ec99-44c7-8ecb-42c3c29487c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 systemd[1]: libpod-conmon-758662c30a117bbbe6d784bcbe3a38b57979a98e9c07641c29c57836bcca9505.scope: Deactivated successfully.
Jan 31 03:02:18 np0005603621 NetworkManager[49013]: <info>  [1769846538.4072] device (tap24914779-b0): carrier: link connected
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.422 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d3de3f42-cbc4-4847-b5ed-97e2e7188a67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.436 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4d258576-89fd-4054-a1d6-e9f091b0a5ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584392, 'reachable_time': 25184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283044, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.449 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[30c5ed5b-c044-4769-a130-19c6f114271d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:baf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 584392, 'tstamp': 584392}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283047, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.463 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9a79e5d9-c7f0-41a8-9769-e3965d33f17b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584392, 'reachable_time': 25184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283059, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.485 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c00368b4-622d-4dce-a1a7-6a9084b6b51b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.532 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f78e210-9008-4ded-8ec8-c4fa84b7b66e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.533 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.533 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.533 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24914779-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.535 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 NetworkManager[49013]: <info>  [1769846538.5356] manager: (tap24914779-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 31 03:02:18 np0005603621 kernel: tap24914779-b0: entered promiscuous mode
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.537 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.539 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24914779-b0, col_values=(('external_ids', {'iface-id': '23cfbf86-f443-4dea-a9ae-1c6f9be9ee53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.539 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:18Z|00125|binding|INFO|Releasing lport 23cfbf86-f443-4dea-a9ae-1c6f9be9ee53 from this chassis (sb_readonly=0)
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.541 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.542 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b001a46d-ae84-4ab8-b12a-c4106d1ee540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.542 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:02:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:18.543 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'env', 'PROCESS_TAG=haproxy-24914779-babc-4c55-b38b-adf9bfc5c103', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24914779-babc-4c55-b38b-adf9bfc5c103.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.546 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:18 np0005603621 podman[283054]: 2026-01-31 08:02:18.55809163 +0000 UTC m=+0.090630341 container create 001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.575 247403 DEBUG nova.compute.manager [req-fab686e1-843d-4a2f-9883-2b14519de9d1 req-33e1486b-acb9-4b28-badf-f31d87d74489 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.575 247403 DEBUG oslo_concurrency.lockutils [req-fab686e1-843d-4a2f-9883-2b14519de9d1 req-33e1486b-acb9-4b28-badf-f31d87d74489 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.576 247403 DEBUG oslo_concurrency.lockutils [req-fab686e1-843d-4a2f-9883-2b14519de9d1 req-33e1486b-acb9-4b28-badf-f31d87d74489 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.576 247403 DEBUG oslo_concurrency.lockutils [req-fab686e1-843d-4a2f-9883-2b14519de9d1 req-33e1486b-acb9-4b28-badf-f31d87d74489 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.576 247403 DEBUG nova.compute.manager [req-fab686e1-843d-4a2f-9883-2b14519de9d1 req-33e1486b-acb9-4b28-badf-f31d87d74489 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Processing event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:02:18 np0005603621 podman[283054]: 2026-01-31 08:02:18.489566765 +0000 UTC m=+0.022105496 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:02:18 np0005603621 systemd[1]: Started libpod-conmon-001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85.scope.
Jan 31 03:02:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc7c119a606ff7b893d9c1afd0f39dd44150e66849946ec528e39f3f9864af1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc7c119a606ff7b893d9c1afd0f39dd44150e66849946ec528e39f3f9864af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc7c119a606ff7b893d9c1afd0f39dd44150e66849946ec528e39f3f9864af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc7c119a606ff7b893d9c1afd0f39dd44150e66849946ec528e39f3f9864af1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fc7c119a606ff7b893d9c1afd0f39dd44150e66849946ec528e39f3f9864af1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.663 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846538.6632752, e0363e14-d6b9-4bed-a555-4466ddb7790d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.664 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] VM Started (Lifecycle Event)
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.667 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.670 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.674 247403 INFO nova.virt.libvirt.driver [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Instance spawned successfully.
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.675 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.691 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.696 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.724 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.725 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.725 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.726 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.726 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.727 247403 DEBUG nova.virt.libvirt.driver [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.731 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.731 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846538.6634464, e0363e14-d6b9-4bed-a555-4466ddb7790d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.732 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] VM Paused (Lifecycle Event)
Jan 31 03:02:18 np0005603621 podman[283054]: 2026-01-31 08:02:18.744121758 +0000 UTC m=+0.276660509 container init 001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 03:02:18 np0005603621 podman[283054]: 2026-01-31 08:02:18.750763557 +0000 UTC m=+0.283302268 container start 001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.775 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.780 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846538.6696787, e0363e14-d6b9-4bed-a555-4466ddb7790d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.781 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] VM Resumed (Lifecycle Event)
Jan 31 03:02:18 np0005603621 podman[283054]: 2026-01-31 08:02:18.784400854 +0000 UTC m=+0.316939565 container attach 001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.820 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.826 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.841 247403 INFO nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Took 10.59 seconds to spawn the instance on the hypervisor.
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.842 247403 DEBUG nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.858 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.936 247403 INFO nova.compute.manager [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Took 12.04 seconds to build instance.
Jan 31 03:02:18 np0005603621 podman[283146]: 2026-01-31 08:02:18.865091601 +0000 UTC m=+0.018841603 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:02:18 np0005603621 nova_compute[247399]: 2026-01-31 08:02:18.970 247403 DEBUG oslo_concurrency.lockutils [None req-60435c13-6507-4379-a67a-ed458c0191f6 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:19 np0005603621 podman[283146]: 2026-01-31 08:02:19.022101697 +0000 UTC m=+0.175851679 container create 827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:02:19 np0005603621 systemd[1]: Started libpod-conmon-827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93.scope.
Jan 31 03:02:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d38f4b46441fe0f2daa4607a31f6503b9d163fbca54b7f571effd41945127cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:19 np0005603621 podman[283146]: 2026-01-31 08:02:19.317986099 +0000 UTC m=+0.471736101 container init 827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:02:19 np0005603621 podman[283146]: 2026-01-31 08:02:19.323801212 +0000 UTC m=+0.477551194 container start 827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:02:19 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [NOTICE]   (283166) : New worker (283168) forked
Jan 31 03:02:19 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [NOTICE]   (283166) : Loading success.
Jan 31 03:02:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 88 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 03:02:19 np0005603621 eager_cerf[283119]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:02:19 np0005603621 eager_cerf[283119]: --> relative data size: 1.0
Jan 31 03:02:19 np0005603621 eager_cerf[283119]: --> All data devices are unavailable
Jan 31 03:02:19 np0005603621 systemd[1]: libpod-001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85.scope: Deactivated successfully.
Jan 31 03:02:19 np0005603621 podman[283054]: 2026-01-31 08:02:19.574456013 +0000 UTC m=+1.106994744 container died 001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:02:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:19.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.724 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.725 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.725 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:02:19 np0005603621 nova_compute[247399]: 2026-01-31 08:02:19.725 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e0363e14-d6b9-4bed-a555-4466ddb7790d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:02:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7fc7c119a606ff7b893d9c1afd0f39dd44150e66849946ec528e39f3f9864af1-merged.mount: Deactivated successfully.
Jan 31 03:02:19 np0005603621 podman[283054]: 2026-01-31 08:02:19.874803925 +0000 UTC m=+1.407342636 container remove 001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:02:19 np0005603621 systemd[1]: libpod-conmon-001eab0ea6e27af64e2f16f546a34750cd0328bdbb2787d211b8863bf6ad8a85.scope: Deactivated successfully.
Jan 31 03:02:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:02:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.437112984 +0000 UTC m=+0.063842649 container create 2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.392569794 +0000 UTC m=+0.019299449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:02:20 np0005603621 systemd[1]: Started libpod-conmon-2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb.scope.
Jan 31 03:02:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.589074841 +0000 UTC m=+0.215804516 container init 2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.594493721 +0000 UTC m=+0.221223376 container start 2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:02:20 np0005603621 systemd[1]: libpod-2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb.scope: Deactivated successfully.
Jan 31 03:02:20 np0005603621 cool_allen[283355]: 167 167
Jan 31 03:02:20 np0005603621 conmon[283355]: conmon 2b3ea631d95d1b96f883 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb.scope/container/memory.events
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.622145791 +0000 UTC m=+0.248875466 container attach 2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.622803011 +0000 UTC m=+0.249532676 container died 2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:02:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5e4063c415bc3e483080009f9f19e7e72500a7df8e70b76eb8d162faac869958-merged.mount: Deactivated successfully.
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.778 247403 DEBUG nova.compute.manager [req-8c4316f4-1cc6-4892-87f0-17ad564c0074 req-b71fd4fe-a852-4d30-9adb-c927ceae2b94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.780 247403 DEBUG oslo_concurrency.lockutils [req-8c4316f4-1cc6-4892-87f0-17ad564c0074 req-b71fd4fe-a852-4d30-9adb-c927ceae2b94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.781 247403 DEBUG oslo_concurrency.lockutils [req-8c4316f4-1cc6-4892-87f0-17ad564c0074 req-b71fd4fe-a852-4d30-9adb-c927ceae2b94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.781 247403 DEBUG oslo_concurrency.lockutils [req-8c4316f4-1cc6-4892-87f0-17ad564c0074 req-b71fd4fe-a852-4d30-9adb-c927ceae2b94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.781 247403 DEBUG nova.compute.manager [req-8c4316f4-1cc6-4892-87f0-17ad564c0074 req-b71fd4fe-a852-4d30-9adb-c927ceae2b94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] No waiting events found dispatching network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.782 247403 WARNING nova.compute.manager [req-8c4316f4-1cc6-4892-87f0-17ad564c0074 req-b71fd4fe-a852-4d30-9adb-c927ceae2b94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received unexpected event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 for instance with vm_state active and task_state None.
Jan 31 03:02:20 np0005603621 nova_compute[247399]: 2026-01-31 08:02:20.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:20 np0005603621 podman[283339]: 2026-01-31 08:02:20.973945701 +0000 UTC m=+0.600675356 container remove 2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 03:02:20 np0005603621 systemd[1]: libpod-conmon-2b3ea631d95d1b96f8831d3d794fc88f2f03b39a1536a652261654b5fe6130fb.scope: Deactivated successfully.
Jan 31 03:02:21 np0005603621 podman[283379]: 2026-01-31 08:02:21.158815633 +0000 UTC m=+0.080957126 container create 2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.163 247403 DEBUG nova.objects.instance [None req-17ca010c-afa3-4291-b586-48dd039462a2 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'pci_devices' on Instance uuid e0363e14-d6b9-4bed-a555-4466ddb7790d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:21 np0005603621 podman[283379]: 2026-01-31 08:02:21.097133994 +0000 UTC m=+0.019275517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.198 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846541.1976535, e0363e14-d6b9-4bed-a555-4466ddb7790d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.198 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:02:21 np0005603621 systemd[1]: Started libpod-conmon-2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227.scope.
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.228 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.232 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:02:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b67eda0f21f0046afd791a2990aecd9c21a8a70a46705bb77de44e463115d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b67eda0f21f0046afd791a2990aecd9c21a8a70a46705bb77de44e463115d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b67eda0f21f0046afd791a2990aecd9c21a8a70a46705bb77de44e463115d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68b67eda0f21f0046afd791a2990aecd9c21a8a70a46705bb77de44e463115d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.254 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:02:21 np0005603621 podman[283379]: 2026-01-31 08:02:21.2767139 +0000 UTC m=+0.198855403 container init 2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:02:21 np0005603621 podman[283379]: 2026-01-31 08:02:21.288488811 +0000 UTC m=+0.210630304 container start 2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:02:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 88 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 03:02:21 np0005603621 podman[283379]: 2026-01-31 08:02:21.447615783 +0000 UTC m=+0.369757276 container attach 2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:02:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:21 np0005603621 kernel: tap63e84446-ba (unregistering): left promiscuous mode
Jan 31 03:02:21 np0005603621 NetworkManager[49013]: <info>  [1769846541.9312] device (tap63e84446-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:02:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:21Z|00126|binding|INFO|Releasing lport 63e84446-ba0d-492d-b5be-c6cb01818f14 from this chassis (sb_readonly=0)
Jan 31 03:02:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:21Z|00127|binding|INFO|Setting lport 63e84446-ba0d-492d-b5be-c6cb01818f14 down in Southbound
Jan 31 03:02:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:21Z|00128|binding|INFO|Removing iface tap63e84446-ba ovn-installed in OVS
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.938 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.940 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:21 np0005603621 nova_compute[247399]: 2026-01-31 08:02:21.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:21.948 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:27:8f 10.100.0.13'], port_security=['fa:16:3e:b0:27:8f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e0363e14-d6b9-4bed-a555-4466ddb7790d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=63e84446-ba0d-492d-b5be-c6cb01818f14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:21.949 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 63e84446-ba0d-492d-b5be-c6cb01818f14 in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 unbound from our chassis#033[00m
Jan 31 03:02:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:21.954 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24914779-babc-4c55-b38b-adf9bfc5c103, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:02:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:21.956 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[33a4e077-09ab-4ed1-9330-667a88ae6188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:21.956 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace which is not needed anymore#033[00m
Jan 31 03:02:21 np0005603621 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000034.scope: Deactivated successfully.
Jan 31 03:02:21 np0005603621 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000034.scope: Consumed 2.964s CPU time.
Jan 31 03:02:21 np0005603621 systemd-machined[212769]: Machine qemu-23-instance-00000034 terminated.
Jan 31 03:02:22 np0005603621 busy_faraday[283399]: {
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:    "0": [
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:        {
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "devices": [
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "/dev/loop3"
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            ],
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "lv_name": "ceph_lv0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "lv_size": "7511998464",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "name": "ceph_lv0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "tags": {
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.cluster_name": "ceph",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.crush_device_class": "",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.encrypted": "0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.osd_id": "0",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.type": "block",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:                "ceph.vdo": "0"
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            },
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "type": "block",
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:            "vg_name": "ceph_vg0"
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:        }
Jan 31 03:02:22 np0005603621 busy_faraday[283399]:    ]
Jan 31 03:02:22 np0005603621 busy_faraday[283399]: }
Jan 31 03:02:22 np0005603621 systemd[1]: libpod-2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227.scope: Deactivated successfully.
Jan 31 03:02:22 np0005603621 podman[283379]: 2026-01-31 08:02:22.03722848 +0000 UTC m=+0.959369973 container died 2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.084 247403 DEBUG nova.compute.manager [None req-17ca010c-afa3-4291-b586-48dd039462a2 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-68b67eda0f21f0046afd791a2990aecd9c21a8a70a46705bb77de44e463115d7-merged.mount: Deactivated successfully.
Jan 31 03:02:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:22.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.658 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Updating instance_info_cache with network_info: [{"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:22 np0005603621 podman[283379]: 2026-01-31 08:02:22.669407495 +0000 UTC m=+1.591548988 container remove 2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:02:22 np0005603621 systemd[1]: libpod-conmon-2721ac4125fdf725fb5e36989fb471344bfa8e452f867591b01ee243d9374227.scope: Deactivated successfully.
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.723 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-e0363e14-d6b9-4bed-a555-4466ddb7790d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.724 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:22 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [NOTICE]   (283166) : haproxy version is 2.8.14-c23fe91
Jan 31 03:02:22 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [NOTICE]   (283166) : path to executable is /usr/sbin/haproxy
Jan 31 03:02:22 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [WARNING]  (283166) : Exiting Master process...
Jan 31 03:02:22 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [ALERT]    (283166) : Current worker (283168) exited with code 143 (Terminated)
Jan 31 03:02:22 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[283162]: [WARNING]  (283166) : All workers exited. Exiting... (0)
Jan 31 03:02:22 np0005603621 systemd[1]: libpod-827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93.scope: Deactivated successfully.
Jan 31 03:02:22 np0005603621 podman[283434]: 2026-01-31 08:02:22.913703655 +0000 UTC m=+0.880504393 container died 827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.929 247403 DEBUG nova.compute.manager [req-8ea5fd8b-5be8-4948-a75b-a14cb45c3736 req-1fe1e75b-a426-4d31-8994-2ed5ce2ab8c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received event network-vif-unplugged-63e84446-ba0d-492d-b5be-c6cb01818f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.929 247403 DEBUG oslo_concurrency.lockutils [req-8ea5fd8b-5be8-4948-a75b-a14cb45c3736 req-1fe1e75b-a426-4d31-8994-2ed5ce2ab8c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.930 247403 DEBUG oslo_concurrency.lockutils [req-8ea5fd8b-5be8-4948-a75b-a14cb45c3736 req-1fe1e75b-a426-4d31-8994-2ed5ce2ab8c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.930 247403 DEBUG oslo_concurrency.lockutils [req-8ea5fd8b-5be8-4948-a75b-a14cb45c3736 req-1fe1e75b-a426-4d31-8994-2ed5ce2ab8c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.930 247403 DEBUG nova.compute.manager [req-8ea5fd8b-5be8-4948-a75b-a14cb45c3736 req-1fe1e75b-a426-4d31-8994-2ed5ce2ab8c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] No waiting events found dispatching network-vif-unplugged-63e84446-ba0d-492d-b5be-c6cb01818f14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:22 np0005603621 nova_compute[247399]: 2026-01-31 08:02:22.930 247403 WARNING nova.compute.manager [req-8ea5fd8b-5be8-4948-a75b-a14cb45c3736 req-1fe1e75b-a426-4d31-8994-2ed5ce2ab8c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received unexpected event network-vif-unplugged-63e84446-ba0d-492d-b5be-c6cb01818f14 for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:02:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93-userdata-shm.mount: Deactivated successfully.
Jan 31 03:02:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9d38f4b46441fe0f2daa4607a31f6503b9d163fbca54b7f571effd41945127cd-merged.mount: Deactivated successfully.
Jan 31 03:02:23 np0005603621 podman[283434]: 2026-01-31 08:02:23.030510897 +0000 UTC m=+0.997311645 container cleanup 827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:02:23 np0005603621 systemd[1]: libpod-conmon-827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93.scope: Deactivated successfully.
Jan 31 03:02:23 np0005603621 podman[283591]: 2026-01-31 08:02:23.193304905 +0000 UTC m=+0.143189362 container remove 827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.197 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[036093c5-2005-4e6a-8a36-99a91001010e]: (4, ('Sat Jan 31 08:02:22 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93)\n827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93\nSat Jan 31 08:02:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93)\n827be8fc90b682c827d5bf4ea4500d74806d3220954ae867f54e0989b5d85d93\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.198 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d42d181c-b4b3-4fa5-afc5-1624314ea83a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.200 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:23 np0005603621 nova_compute[247399]: 2026-01-31 08:02:23.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:23 np0005603621 kernel: tap24914779-b0: left promiscuous mode
Jan 31 03:02:23 np0005603621 nova_compute[247399]: 2026-01-31 08:02:23.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.215 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[49ddebcf-4889-4162-b4e7-d7872cd5c215]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.227 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c0564d-786a-4813-b296-dafddf71508c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.229 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9c92f695-bd0a-4f9c-968c-d3ec4cce9337]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.242 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e330df85-b341-4036-8a6d-9ebd159a84d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 584385, 'reachable_time': 39890, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283642, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 systemd[1]: run-netns-ovnmeta\x2d24914779\x2dbabc\x2d4c55\x2db38b\x2dadf9bfc5c103.mount: Deactivated successfully.
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.245 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:02:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:23.245 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[425828f0-9d6a-404f-8c88-5dcbd8ec4a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.328694252 +0000 UTC m=+0.049324232 container create c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 03:02:23 np0005603621 systemd[1]: Started libpod-conmon-c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484.scope.
Jan 31 03:02:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.302065145 +0000 UTC m=+0.022695145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.403402101 +0000 UTC m=+0.124032101 container init c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.409257035 +0000 UTC m=+0.129887015 container start c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:02:23 np0005603621 distracted_tesla[283666]: 167 167
Jan 31 03:02:23 np0005603621 systemd[1]: libpod-c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484.scope: Deactivated successfully.
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.416190413 +0000 UTC m=+0.136820393 container attach c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.417345719 +0000 UTC m=+0.137975719 container died c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:02:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 88 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Jan 31 03:02:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-47713cf3b35e3e5f435aabcc3f58f3294e9ae84455fb5ce123edaf20b9382847-merged.mount: Deactivated successfully.
Jan 31 03:02:23 np0005603621 podman[283650]: 2026-01-31 08:02:23.482598991 +0000 UTC m=+0.203228971 container remove c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:02:23 np0005603621 systemd[1]: libpod-conmon-c9b7a5d1d8878800d2f4d1e3c88d15dff0f7b00eccc2f883afa9794cf5833484.scope: Deactivated successfully.
Jan 31 03:02:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:23.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:23 np0005603621 podman[283694]: 2026-01-31 08:02:23.645994017 +0000 UTC m=+0.067825542 container create d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:02:23 np0005603621 podman[283694]: 2026-01-31 08:02:23.603497012 +0000 UTC m=+0.025328557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:02:23 np0005603621 systemd[1]: Started libpod-conmon-d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952.scope.
Jan 31 03:02:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c396fed8bbd6896bc93cf549e0376e68de42777f75cba1f8224aaab463b13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c396fed8bbd6896bc93cf549e0376e68de42777f75cba1f8224aaab463b13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c396fed8bbd6896bc93cf549e0376e68de42777f75cba1f8224aaab463b13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1c396fed8bbd6896bc93cf549e0376e68de42777f75cba1f8224aaab463b13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:23 np0005603621 podman[283694]: 2026-01-31 08:02:23.751374431 +0000 UTC m=+0.173205986 container init d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:02:23 np0005603621 podman[283710]: 2026-01-31 08:02:23.755129088 +0000 UTC m=+0.073203291 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:02:23 np0005603621 podman[283694]: 2026-01-31 08:02:23.757680699 +0000 UTC m=+0.179512224 container start d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:02:23 np0005603621 podman[283694]: 2026-01-31 08:02:23.767599821 +0000 UTC m=+0.189431356 container attach d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:02:23 np0005603621 podman[283709]: 2026-01-31 08:02:23.769032865 +0000 UTC m=+0.086963194 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:02:24 np0005603621 nova_compute[247399]: 2026-01-31 08:02:24.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:24.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:24 np0005603621 nova_compute[247399]: 2026-01-31 08:02:24.434 247403 DEBUG nova.compute.manager [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:24 np0005603621 nova_compute[247399]: 2026-01-31 08:02:24.498 247403 INFO nova.compute.manager [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] instance snapshotting#033[00m
Jan 31 03:02:24 np0005603621 nova_compute[247399]: 2026-01-31 08:02:24.500 247403 WARNING nova.compute.manager [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]: {
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:        "osd_id": 0,
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:        "type": "bluestore"
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]:    }
Jan 31 03:02:24 np0005603621 condescending_chebyshev[283731]: }
Jan 31 03:02:24 np0005603621 systemd[1]: libpod-d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952.scope: Deactivated successfully.
Jan 31 03:02:24 np0005603621 podman[283694]: 2026-01-31 08:02:24.577188523 +0000 UTC m=+0.999020058 container died d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:02:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2f1c396fed8bbd6896bc93cf549e0376e68de42777f75cba1f8224aaab463b13-merged.mount: Deactivated successfully.
Jan 31 03:02:24 np0005603621 podman[283694]: 2026-01-31 08:02:24.830181056 +0000 UTC m=+1.252012611 container remove d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:02:24 np0005603621 systemd[1]: libpod-conmon-d93c56c777251f231b3708ef0968e803f76b03f7e654a5ae71e423bf6dd8e952.scope: Deactivated successfully.
Jan 31 03:02:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:02:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:02:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:02:24 np0005603621 nova_compute[247399]: 2026-01-31 08:02:24.929 247403 INFO nova.virt.libvirt.driver [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Beginning cold snapshot process#033[00m
Jan 31 03:02:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:02:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 30b7ec03-b1f6-42b2-aacc-99c90c5fd887 does not exist
Jan 31 03:02:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fbd31538-739e-4779-a436-eba594fc1b4c does not exist
Jan 31 03:02:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 58fd98cf-7063-435a-bb6d-ca01a3d28208 does not exist
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.096 247403 DEBUG nova.compute.manager [req-8ca914ee-ac9d-45ac-bb24-a9ee401f335f req-06c243a6-7c1d-4c5f-a39f-e2ce05baf72b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.096 247403 DEBUG oslo_concurrency.lockutils [req-8ca914ee-ac9d-45ac-bb24-a9ee401f335f req-06c243a6-7c1d-4c5f-a39f-e2ce05baf72b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.096 247403 DEBUG oslo_concurrency.lockutils [req-8ca914ee-ac9d-45ac-bb24-a9ee401f335f req-06c243a6-7c1d-4c5f-a39f-e2ce05baf72b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.097 247403 DEBUG oslo_concurrency.lockutils [req-8ca914ee-ac9d-45ac-bb24-a9ee401f335f req-06c243a6-7c1d-4c5f-a39f-e2ce05baf72b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.097 247403 DEBUG nova.compute.manager [req-8ca914ee-ac9d-45ac-bb24-a9ee401f335f req-06c243a6-7c1d-4c5f-a39f-e2ce05baf72b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] No waiting events found dispatching network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.097 247403 WARNING nova.compute.manager [req-8ca914ee-ac9d-45ac-bb24-a9ee401f335f req-06c243a6-7c1d-4c5f-a39f-e2ce05baf72b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received unexpected event network-vif-plugged-63e84446-ba0d-492d-b5be-c6cb01818f14 for instance with vm_state suspended and task_state image_uploading.#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.103 247403 DEBUG nova.virt.libvirt.imagebackend [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.239 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.239 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.239 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.239 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.240 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 88 MiB data, 556 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 98 op/s
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.560 247403 DEBUG nova.storage.rbd_utils [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(5895b37775544bc8a42395d881dbf82f) on rbd image(e0363e14-d6b9-4bed-a555-4466ddb7790d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:02:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596344122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.683 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.806 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.807 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:02:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.974 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.975 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4504MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.975 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:25 np0005603621 nova_compute[247399]: 2026-01-31 08:02:25.976 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 31 03:02:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 31 03:02:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.055 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e0363e14-d6b9-4bed-a555-4466ddb7790d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.055 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.056 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.105 247403 DEBUG nova.storage.rbd_utils [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] cloning vms/e0363e14-d6b9-4bed-a555-4466ddb7790d_disk@5895b37775544bc8a42395d881dbf82f to images/48732de6-33cc-4857-a389-34f02c69b538 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.141 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.269 247403 DEBUG nova.storage.rbd_utils [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] flattening images/48732de6-33cc-4857-a389-34f02c69b538 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:02:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:26.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675954731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.614 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.618 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.631 247403 DEBUG nova.storage.rbd_utils [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] removing snapshot(5895b37775544bc8a42395d881dbf82f) on rbd image(e0363e14-d6b9-4bed-a555-4466ddb7790d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.649 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.690 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:02:26 np0005603621 nova_compute[247399]: 2026-01-31 08:02:26.690 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 31 03:02:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 31 03:02:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 31 03:02:27 np0005603621 nova_compute[247399]: 2026-01-31 08:02:27.139 247403 DEBUG nova.storage.rbd_utils [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(snap) on rbd image(48732de6-33cc-4857-a389-34f02c69b538) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:02:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 113 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 31 03:02:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:27.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:27 np0005603621 nova_compute[247399]: 2026-01-31 08:02:27.885 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 31 03:02:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:28.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 31 03:02:28 np0005603621 nova_compute[247399]: 2026-01-31 08:02:28.692 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:28 np0005603621 nova_compute[247399]: 2026-01-31 08:02:28.724 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:28 np0005603621 nova_compute[247399]: 2026-01-31 08:02:28.724 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:02:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 31 03:02:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 113 MiB data, 557 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.4 MiB/s wr, 30 op/s
Jan 31 03:02:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:29.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:30.483 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:30.483 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:30.483 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:30 np0005603621 nova_compute[247399]: 2026-01-31 08:02:30.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 134 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 104 op/s
Jan 31 03:02:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:31.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:02:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:32.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:02:32 np0005603621 nova_compute[247399]: 2026-01-31 08:02:32.594 247403 INFO nova.virt.libvirt.driver [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Snapshot image upload complete#033[00m
Jan 31 03:02:32 np0005603621 nova_compute[247399]: 2026-01-31 08:02:32.595 247403 INFO nova.compute.manager [None req-db0971d3-6eba-4fce-8af6-7e9aa7ef24eb 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Took 8.09 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:02:32 np0005603621 nova_compute[247399]: 2026-01-31 08:02:32.916 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 134 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.9 MiB/s wr, 104 op/s
Jan 31 03:02:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:33.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:34.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 134 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 837 KiB/s wr, 88 op/s
Jan 31 03:02:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:35.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:35 np0005603621 nova_compute[247399]: 2026-01-31 08:02:35.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:36.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 31 03:02:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 31 03:02:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.089 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846542.0844471, e0363e14-d6b9-4bed-a555-4466ddb7790d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.090 247403 INFO nova.compute.manager [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.204 247403 DEBUG nova.compute.manager [None req-8db2df25-657d-4b2b-a585-0d26ec90e833 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.208 247403 DEBUG nova.compute.manager [None req-8db2df25-657d-4b2b-a585-0d26ec90e833 - - - - - -] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: suspended, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:02:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 31 03:02:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 31 03:02:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 31 03:02:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 130 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 860 KiB/s wr, 98 op/s
Jan 31 03:02:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:37.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.878 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.879 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.879 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.879 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.879 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.880 247403 INFO nova.compute.manager [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Terminating instance#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.881 247403 DEBUG nova.compute.manager [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.887 247403 INFO nova.virt.libvirt.driver [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Instance destroyed successfully.#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.888 247403 DEBUG nova.objects.instance [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'resources' on Instance uuid e0363e14-d6b9-4bed-a555-4466ddb7790d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.919 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.961 247403 DEBUG nova.virt.libvirt.vif [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:02:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-424778968',display_name='tempest-ImagesTestJSON-server-424778968',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-424778968',id=52,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-ps9bcnlz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:02:32Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=e0363e14-d6b9-4bed-a555-4466ddb7790d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.962 247403 DEBUG nova.network.os_vif_util [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "63e84446-ba0d-492d-b5be-c6cb01818f14", "address": "fa:16:3e:b0:27:8f", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63e84446-ba", "ovs_interfaceid": "63e84446-ba0d-492d-b5be-c6cb01818f14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.962 247403 DEBUG nova.network.os_vif_util [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.963 247403 DEBUG os_vif [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.965 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63e84446-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.967 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:37 np0005603621 nova_compute[247399]: 2026-01-31 08:02:37.973 247403 INFO os_vif [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b0:27:8f,bridge_name='br-int',has_traffic_filtering=True,id=63e84446-ba0d-492d-b5be-c6cb01818f14,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63e84446-ba')#033[00m
Jan 31 03:02:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:38.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:02:38
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:02:38 np0005603621 nova_compute[247399]: 2026-01-31 08:02:38.510 247403 INFO nova.virt.libvirt.driver [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Deleting instance files /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d_del#033[00m
Jan 31 03:02:38 np0005603621 nova_compute[247399]: 2026-01-31 08:02:38.511 247403 INFO nova.virt.libvirt.driver [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Deletion of /var/lib/nova/instances/e0363e14-d6b9-4bed-a555-4466ddb7790d_del complete#033[00m
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:02:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:02:38 np0005603621 nova_compute[247399]: 2026-01-31 08:02:38.769 247403 INFO nova.compute.manager [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:02:38 np0005603621 nova_compute[247399]: 2026-01-31 08:02:38.770 247403 DEBUG oslo.service.loopingcall [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:02:38 np0005603621 nova_compute[247399]: 2026-01-31 08:02:38.770 247403 DEBUG nova.compute.manager [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:02:38 np0005603621 nova_compute[247399]: 2026-01-31 08:02:38.770 247403 DEBUG nova.network.neutron [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:02:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 130 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.2 KiB/s wr, 43 op/s
Jan 31 03:02:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:40.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:41 np0005603621 nova_compute[247399]: 2026-01-31 08:02:41.411 247403 DEBUG nova.network.neutron [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:41 np0005603621 nova_compute[247399]: 2026-01-31 08:02:41.437 247403 INFO nova.compute.manager [-] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Took 2.67 seconds to deallocate network for instance.#033[00m
Jan 31 03:02:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 109 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 47 op/s
Jan 31 03:02:41 np0005603621 nova_compute[247399]: 2026-01-31 08:02:41.520 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:41 np0005603621 nova_compute[247399]: 2026-01-31 08:02:41.520 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:41 np0005603621 nova_compute[247399]: 2026-01-31 08:02:41.604 247403 DEBUG oslo_concurrency.processutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:41 np0005603621 nova_compute[247399]: 2026-01-31 08:02:41.784 247403 DEBUG nova.compute.manager [req-52ce2549-2cf7-4548-815b-85a835b76aae req-27a6bfae-8e21-4ceb-9d5c-22b61d906e49 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e0363e14-d6b9-4bed-a555-4466ddb7790d] Received event network-vif-deleted-63e84446-ba0d-492d-b5be-c6cb01818f14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184018640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.056 247403 DEBUG oslo_concurrency.processutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.061 247403 DEBUG nova.compute.provider_tree [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.096 247403 DEBUG nova.scheduler.client.report [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.151 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.289 247403 INFO nova.scheduler.client.report [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Deleted allocations for instance e0363e14-d6b9-4bed-a555-4466ddb7790d#033[00m
Jan 31 03:02:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.409 247403 DEBUG oslo_concurrency.lockutils [None req-59173a94-170c-4cab-b400-f2036aea1eba 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e0363e14-d6b9-4bed-a555-4466ddb7790d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.792 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.792 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.828 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.921 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.961 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.962 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.968 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.968 247403 INFO nova.compute.claims [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:02:42 np0005603621 nova_compute[247399]: 2026-01-31 08:02:42.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.169 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 71 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 1.5 MiB/s wr, 105 op/s
Jan 31 03:02:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:02:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580494141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.595 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.600 247403 DEBUG nova.compute.provider_tree [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:02:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:43.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.621 247403 DEBUG nova.scheduler.client.report [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.646 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.647 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.713 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.714 247403 DEBUG nova.network.neutron [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.745 247403 INFO nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.779 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.931 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.932 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.932 247403 INFO nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Creating image(s)
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.954 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:43 np0005603621 nova_compute[247399]: 2026-01-31 08:02:43.978 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.000 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.004 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.054 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.055 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.055 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.056 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:44.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.456 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.461 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e549bdcc-5097-4890-8754-89d9f1d15a13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:02:44 np0005603621 nova_compute[247399]: 2026-01-31 08:02:44.479 247403 DEBUG nova.policy [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '46ffd64a348845fab6cdc53249353575', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '521dcd459f144f2bb32de93d50ae0391', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:02:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 88 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 2.4 MiB/s wr, 99 op/s
Jan 31 03:02:45 np0005603621 nova_compute[247399]: 2026-01-31 08:02:45.464 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e549bdcc-5097-4890-8754-89d9f1d15a13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.003s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:02:45 np0005603621 nova_compute[247399]: 2026-01-31 08:02:45.539 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] resizing rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:02:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:45.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:02:45 np0005603621 nova_compute[247399]: 2026-01-31 08:02:45.975 247403 DEBUG nova.objects.instance [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'migration_context' on Instance uuid e549bdcc-5097-4890-8754-89d9f1d15a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:02:46 np0005603621 nova_compute[247399]: 2026-01-31 08:02:46.028 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:02:46 np0005603621 nova_compute[247399]: 2026-01-31 08:02:46.029 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Ensure instance console log exists: /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:02:46 np0005603621 nova_compute[247399]: 2026-01-31 08:02:46.029 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:02:46 np0005603621 nova_compute[247399]: 2026-01-31 08:02:46.030 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:02:46 np0005603621 nova_compute[247399]: 2026-01-31 08:02:46.030 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:02:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:02:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:46.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:02:46 np0005603621 nova_compute[247399]: 2026-01-31 08:02:46.422 247403 DEBUG nova.network.neutron [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Successfully created port: 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:02:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 31 03:02:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 31 03:02:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 31 03:02:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 120 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.1 MiB/s wr, 89 op/s
Jan 31 03:02:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:47.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:47 np0005603621 nova_compute[247399]: 2026-01-31 08:02:47.923 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:47 np0005603621 nova_compute[247399]: 2026-01-31 08:02:47.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:02:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:48.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014505760864507724 of space, bias 1.0, pg target 0.4351728259352317 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:02:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:02:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 120 MiB data, 566 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.1 MiB/s wr, 89 op/s
Jan 31 03:02:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:49 np0005603621 nova_compute[247399]: 2026-01-31 08:02:49.892 247403 DEBUG nova.network.neutron [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Successfully updated port: 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:02:49 np0005603621 nova_compute[247399]: 2026-01-31 08:02:49.936 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:02:49 np0005603621 nova_compute[247399]: 2026-01-31 08:02:49.936 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquired lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:02:49 np0005603621 nova_compute[247399]: 2026-01-31 08:02:49.936 247403 DEBUG nova.network.neutron [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:02:50 np0005603621 nova_compute[247399]: 2026-01-31 08:02:50.141 247403 DEBUG nova.compute.manager [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-changed-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:02:50 np0005603621 nova_compute[247399]: 2026-01-31 08:02:50.142 247403 DEBUG nova.compute.manager [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Refreshing instance network info cache due to event network-changed-5a6ed0c1-e054-4f76-8079-2f367ccf0d67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:02:50 np0005603621 nova_compute[247399]: 2026-01-31 08:02:50.142 247403 DEBUG oslo_concurrency.lockutils [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:02:50 np0005603621 nova_compute[247399]: 2026-01-31 08:02:50.223 247403 DEBUG nova.network.neutron [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:02:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:50.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 134 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 4.3 MiB/s wr, 86 op/s
Jan 31 03:02:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:02:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:02:51 np0005603621 nova_compute[247399]: 2026-01-31 08:02:51.880 247403 DEBUG nova.network.neutron [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Updating instance_info_cache with network_info: [{"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:02:51 np0005603621 nova_compute[247399]: 2026-01-31 08:02:51.998 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Releasing lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:02:51 np0005603621 nova_compute[247399]: 2026-01-31 08:02:51.999 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Instance network_info: |[{"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.000 247403 DEBUG oslo_concurrency.lockutils [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.000 247403 DEBUG nova.network.neutron [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Refreshing network info cache for port 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.005 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Start _get_guest_xml network_info=[{"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.011 247403 WARNING nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.017 247403 DEBUG nova.virt.libvirt.host [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.018 247403 DEBUG nova.virt.libvirt.host [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.022 247403 DEBUG nova.virt.libvirt.host [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.022 247403 DEBUG nova.virt.libvirt.host [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.024 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.024 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.024 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.024 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.025 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.025 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.025 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.025 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.025 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.025 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.026 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.026 247403 DEBUG nova.virt.hardware [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.028 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:52.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028097470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.447 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.471 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.476 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:02:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4083439868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.881 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.883 247403 DEBUG nova.virt.libvirt.vif [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:02:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1905191450',display_name='tempest-ImagesTestJSON-server-1905191450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1905191450',id=54,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-mophrfb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:43Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=e549bdcc-5097-4890-8754-89d9f1d15a13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.883 247403 DEBUG nova.network.os_vif_util [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.884 247403 DEBUG nova.network.os_vif_util [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.885 247403 DEBUG nova.objects.instance [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'pci_devices' on Instance uuid e549bdcc-5097-4890-8754-89d9f1d15a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.925 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <uuid>e549bdcc-5097-4890-8754-89d9f1d15a13</uuid>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <name>instance-00000036</name>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesTestJSON-server-1905191450</nova:name>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:02:52</nova:creationTime>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:user uuid="46ffd64a348845fab6cdc53249353575">tempest-ImagesTestJSON-1780438391-project-member</nova:user>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:project uuid="521dcd459f144f2bb32de93d50ae0391">tempest-ImagesTestJSON-1780438391</nova:project>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <nova:port uuid="5a6ed0c1-e054-4f76-8079-2f367ccf0d67">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <entry name="serial">e549bdcc-5097-4890-8754-89d9f1d15a13</entry>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <entry name="uuid">e549bdcc-5097-4890-8754-89d9f1d15a13</entry>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e549bdcc-5097-4890-8754-89d9f1d15a13_disk">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e549bdcc-5097-4890-8754-89d9f1d15a13_disk.config">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:b9:cb:e4"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <target dev="tap5a6ed0c1-e0"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/console.log" append="off"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:02:52 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:02:52 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:02:52 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:02:52 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.926 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Preparing to wait for external event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.927 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.927 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.927 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.928 247403 DEBUG nova.virt.libvirt.vif [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:02:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1905191450',display_name='tempest-ImagesTestJSON-server-1905191450',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1905191450',id=54,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-mophrfb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:02:43Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=e549bdcc-5097-4890-8754-89d9f1d15a13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.929 247403 DEBUG nova.network.os_vif_util [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.929 247403 DEBUG nova.network.os_vif_util [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.930 247403 DEBUG os_vif [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.932 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.932 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.935 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a6ed0c1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.936 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a6ed0c1-e0, col_values=(('external_ids', {'iface-id': '5a6ed0c1-e054-4f76-8079-2f367ccf0d67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:cb:e4', 'vm-uuid': 'e549bdcc-5097-4890-8754-89d9f1d15a13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:52 np0005603621 NetworkManager[49013]: <info>  [1769846572.9382] manager: (tap5a6ed0c1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.939 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:02:52 np0005603621 nova_compute[247399]: 2026-01-31 08:02:52.943 247403 INFO os_vif [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0')#033[00m
Jan 31 03:02:53 np0005603621 nova_compute[247399]: 2026-01-31 08:02:53.390 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:53 np0005603621 nova_compute[247399]: 2026-01-31 08:02:53.391 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:02:53 np0005603621 nova_compute[247399]: 2026-01-31 08:02:53.391 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No VIF found with MAC fa:16:3e:b9:cb:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:02:53 np0005603621 nova_compute[247399]: 2026-01-31 08:02:53.391 247403 INFO nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Using config drive#033[00m
Jan 31 03:02:53 np0005603621 nova_compute[247399]: 2026-01-31 08:02:53.417 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 134 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.1 MiB/s wr, 48 op/s
Jan 31 03:02:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:54.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:54 np0005603621 podman[284403]: 2026-01-31 08:02:54.495118005 +0000 UTC m=+0.051776148 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:02:54 np0005603621 podman[284404]: 2026-01-31 08:02:54.52485223 +0000 UTC m=+0.080164601 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:02:54 np0005603621 nova_compute[247399]: 2026-01-31 08:02:54.663 247403 INFO nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Creating config drive at /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/disk.config#033[00m
Jan 31 03:02:54 np0005603621 nova_compute[247399]: 2026-01-31 08:02:54.667 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpal3zqf5k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:54 np0005603621 nova_compute[247399]: 2026-01-31 08:02:54.791 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpal3zqf5k" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:54 np0005603621 nova_compute[247399]: 2026-01-31 08:02:54.858 247403 DEBUG nova.storage.rbd_utils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] rbd image e549bdcc-5097-4890-8754-89d9f1d15a13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:02:54 np0005603621 nova_compute[247399]: 2026-01-31 08:02:54.863 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/disk.config e549bdcc-5097-4890-8754-89d9f1d15a13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:02:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 143 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 84 op/s
Jan 31 03:02:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:55.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.708 247403 DEBUG nova.network.neutron [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Updated VIF entry in instance network info cache for port 5a6ed0c1-e054-4f76-8079-2f367ccf0d67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.709 247403 DEBUG nova.network.neutron [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Updating instance_info_cache with network_info: [{"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.779 247403 DEBUG oslo_concurrency.processutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/disk.config e549bdcc-5097-4890-8754-89d9f1d15a13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.916s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.780 247403 INFO nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Deleting local config drive /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13/disk.config because it was imported into RBD.#033[00m
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.782 247403 DEBUG oslo_concurrency.lockutils [req-bded5952-3a4b-4bde-b24a-8565e608f6af req-853c6832-111f-4d5d-aa74-97b891d80f1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:02:55 np0005603621 kernel: tap5a6ed0c1-e0: entered promiscuous mode
Jan 31 03:02:55 np0005603621 NetworkManager[49013]: <info>  [1769846575.8182] manager: (tap5a6ed0c1-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Jan 31 03:02:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:55Z|00129|binding|INFO|Claiming lport 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 for this chassis.
Jan 31 03:02:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:55Z|00130|binding|INFO|5a6ed0c1-e054-4f76-8079-2f367ccf0d67: Claiming fa:16:3e:b9:cb:e4 10.100.0.13
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:55Z|00131|binding|INFO|Setting lport 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 ovn-installed in OVS
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:55 np0005603621 nova_compute[247399]: 2026-01-31 08:02:55.830 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:55 np0005603621 systemd-udevd[284495]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:02:55 np0005603621 NetworkManager[49013]: <info>  [1769846575.8504] device (tap5a6ed0c1-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:02:55 np0005603621 NetworkManager[49013]: <info>  [1769846575.8514] device (tap5a6ed0c1-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:02:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:55Z|00132|binding|INFO|Setting lport 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 up in Southbound
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.873 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:cb:e4 10.100.0.13'], port_security=['fa:16:3e:b9:cb:e4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e549bdcc-5097-4890-8754-89d9f1d15a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5a6ed0c1-e054-4f76-8079-2f367ccf0d67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.874 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 bound to our chassis#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.876 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24914779-babc-4c55-b38b-adf9bfc5c103#033[00m
Jan 31 03:02:55 np0005603621 systemd-machined[212769]: New machine qemu-24-instance-00000036.
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.883 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa184c9-2ea8-46f8-9211-0a89b207507f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.884 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24914779-b1 in ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.885 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24914779-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.886 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[908971d8-33de-43a4-ba7b-6397b4390cc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.886 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea45be4-13ce-4841-a9c4-86c9be083b41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 systemd[1]: Started Virtual Machine qemu-24-instance-00000036.
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.896 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6719a27e-242a-467c-9f37-4a985b7f1493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.915 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7c93429d-ba08-49aa-8fcf-13a84177b150]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.938 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c771c819-8d5a-4e40-a6ee-b05626b7d287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 NetworkManager[49013]: <info>  [1769846575.9454] manager: (tap24914779-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.944 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[77951df6-f73a-441e-ac7f-345344dd5402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.968 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[103f5f74-cee6-47de-a15f-65113471c804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.973 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[83449101-f27c-4749-b7fb-b20e83e846a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:55 np0005603621 NetworkManager[49013]: <info>  [1769846575.9902] device (tap24914779-b0): carrier: link connected
Jan 31 03:02:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:55.996 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fe264cc3-45a3-4515-82bc-ab5c8732cd8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.007 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[094da30b-14b0-4555-8bed-64fcdc8f71eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588150, 'reachable_time': 27252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284531, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.017 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[34fe5bbe-b736-42c9-8f79-e799904077b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:baf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588150, 'tstamp': 588150}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284532, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.027 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4997d8-cb12-4807-a8c7-13907761c065]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24914779-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:0b:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588150, 'reachable_time': 27252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284533, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.052 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a79f6f-e068-4548-9256-e8695056ae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.091 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa040c7-3867-4f12-be61-906713253e74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.093 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.093 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.093 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24914779-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.095 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:56 np0005603621 NetworkManager[49013]: <info>  [1769846576.0956] manager: (tap24914779-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 31 03:02:56 np0005603621 kernel: tap24914779-b0: entered promiscuous mode
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.102 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24914779-b0, col_values=(('external_ids', {'iface-id': '23cfbf86-f443-4dea-a9ae-1c6f9be9ee53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.103 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:02:56Z|00133|binding|INFO|Releasing lport 23cfbf86-f443-4dea-a9ae-1c6f9be9ee53 from this chassis (sb_readonly=0)
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.110 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.113 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.114 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[806886ba-5c87-4737-b9df-7190ed835c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.114 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/24914779-babc-4c55-b38b-adf9bfc5c103.pid.haproxy
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 24914779-babc-4c55-b38b-adf9bfc5c103
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:02:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:02:56.115 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'env', 'PROCESS_TAG=haproxy-24914779-babc-4c55-b38b-adf9bfc5c103', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24914779-babc-4c55-b38b-adf9bfc5c103.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.362 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846576.3616276, e549bdcc-5097-4890-8754-89d9f1d15a13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.362 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] VM Started (Lifecycle Event)#033[00m
Jan 31 03:02:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:02:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.518 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.524 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846576.3620057, e549bdcc-5097-4890-8754-89d9f1d15a13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.525 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:02:56 np0005603621 podman[284604]: 2026-01-31 08:02:56.448171528 +0000 UTC m=+0.022177280 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.597 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.600 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:02:56 np0005603621 nova_compute[247399]: 2026-01-31 08:02:56.706 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:02:57 np0005603621 podman[284604]: 2026-01-31 08:02:57.024333521 +0000 UTC m=+0.598339253 container create 260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:02:57 np0005603621 systemd[1]: Started libpod-conmon-260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e.scope.
Jan 31 03:02:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:02:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:02:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92fc8125eada22b757092a0b41595928b13d246e1e5954c6875cdefbed99ebbe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:02:57 np0005603621 podman[284604]: 2026-01-31 08:02:57.380050795 +0000 UTC m=+0.954056547 container init 260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:02:57 np0005603621 podman[284604]: 2026-01-31 08:02:57.386194318 +0000 UTC m=+0.960200050 container start 260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:02:57 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [NOTICE]   (284623) : New worker (284625) forked
Jan 31 03:02:57 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [NOTICE]   (284623) : Loading success.
Jan 31 03:02:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 180 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 146 op/s
Jan 31 03:02:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:57.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:57 np0005603621 nova_compute[247399]: 2026-01-31 08:02:57.927 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:57 np0005603621 nova_compute[247399]: 2026-01-31 08:02:57.937 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.261 247403 DEBUG nova.compute.manager [req-1d83d4f6-335f-4472-8898-a17908a0855a req-ef3fc144-debd-414f-ab48-917947fdd109 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.261 247403 DEBUG oslo_concurrency.lockutils [req-1d83d4f6-335f-4472-8898-a17908a0855a req-ef3fc144-debd-414f-ab48-917947fdd109 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.261 247403 DEBUG oslo_concurrency.lockutils [req-1d83d4f6-335f-4472-8898-a17908a0855a req-ef3fc144-debd-414f-ab48-917947fdd109 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.262 247403 DEBUG oslo_concurrency.lockutils [req-1d83d4f6-335f-4472-8898-a17908a0855a req-ef3fc144-debd-414f-ab48-917947fdd109 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.262 247403 DEBUG nova.compute.manager [req-1d83d4f6-335f-4472-8898-a17908a0855a req-ef3fc144-debd-414f-ab48-917947fdd109 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Processing event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.263 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.267 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846578.2674792, e549bdcc-5097-4890-8754-89d9f1d15a13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.268 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.271 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.274 247403 INFO nova.virt.libvirt.driver [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Instance spawned successfully.#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.275 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.305 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.310 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.313 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.313 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.314 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.314 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.315 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.315 247403 DEBUG nova.virt.libvirt.driver [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.348 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:02:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:02:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:02:58.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.405 247403 INFO nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Took 14.47 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.406 247403 DEBUG nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.513 247403 INFO nova.compute.manager [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Took 15.59 seconds to build instance.#033[00m
Jan 31 03:02:58 np0005603621 nova_compute[247399]: 2026-01-31 08:02:58.549 247403 DEBUG oslo_concurrency.lockutils [None req-2d295d3d-1d24-43c3-8968-a04b23c227fd 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:02:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 31 03:02:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 31 03:02:59 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 31 03:02:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 180 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 141 op/s
Jan 31 03:02:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:02:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:02:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:02:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:00 np0005603621 nova_compute[247399]: 2026-01-31 08:03:00.432 247403 DEBUG nova.compute.manager [req-b8f2b87e-301f-4c54-82b9-c4dfe055839c req-e2bec137-7b80-4e71-934f-7ae72fd573ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:00 np0005603621 nova_compute[247399]: 2026-01-31 08:03:00.433 247403 DEBUG oslo_concurrency.lockutils [req-b8f2b87e-301f-4c54-82b9-c4dfe055839c req-e2bec137-7b80-4e71-934f-7ae72fd573ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:00 np0005603621 nova_compute[247399]: 2026-01-31 08:03:00.433 247403 DEBUG oslo_concurrency.lockutils [req-b8f2b87e-301f-4c54-82b9-c4dfe055839c req-e2bec137-7b80-4e71-934f-7ae72fd573ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:00 np0005603621 nova_compute[247399]: 2026-01-31 08:03:00.434 247403 DEBUG oslo_concurrency.lockutils [req-b8f2b87e-301f-4c54-82b9-c4dfe055839c req-e2bec137-7b80-4e71-934f-7ae72fd573ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:00 np0005603621 nova_compute[247399]: 2026-01-31 08:03:00.434 247403 DEBUG nova.compute.manager [req-b8f2b87e-301f-4c54-82b9-c4dfe055839c req-e2bec137-7b80-4e71-934f-7ae72fd573ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] No waiting events found dispatching network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:00 np0005603621 nova_compute[247399]: 2026-01-31 08:03:00.434 247403 WARNING nova.compute.manager [req-b8f2b87e-301f-4c54-82b9-c4dfe055839c req-e2bec137-7b80-4e71-934f-7ae72fd573ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received unexpected event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:03:01 np0005603621 nova_compute[247399]: 2026-01-31 08:03:01.339 247403 DEBUG nova.compute.manager [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:01 np0005603621 nova_compute[247399]: 2026-01-31 08:03:01.389 247403 INFO nova.compute.manager [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] instance snapshotting#033[00m
Jan 31 03:03:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 181 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 03:03:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:01.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:02 np0005603621 nova_compute[247399]: 2026-01-31 08:03:02.653 247403 INFO nova.virt.libvirt.driver [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Beginning live snapshot process#033[00m
Jan 31 03:03:03 np0005603621 nova_compute[247399]: 2026-01-31 08:03:03.225 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:03 np0005603621 nova_compute[247399]: 2026-01-31 08:03:03.235 247403 DEBUG nova.virt.libvirt.imagebackend [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:03:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 211 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.1 MiB/s wr, 215 op/s
Jan 31 03:03:03 np0005603621 nova_compute[247399]: 2026-01-31 08:03:03.626 247403 DEBUG nova.storage.rbd_utils [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(8de9e3cfeeae483b95746926751ef90c) on rbd image(e549bdcc-5097-4890-8754-89d9f1d15a13_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:03:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:03.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 31 03:03:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 31 03:03:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 31 03:03:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 227 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 178 op/s
Jan 31 03:03:05 np0005603621 nova_compute[247399]: 2026-01-31 08:03:05.596 247403 DEBUG nova.storage.rbd_utils [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] cloning vms/e549bdcc-5097-4890-8754-89d9f1d15a13_disk@8de9e3cfeeae483b95746926751ef90c to images/4f6468b7-082d-4995-a147-34ef0ccaa0d2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:03:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:05.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 31 03:03:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:06.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 31 03:03:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 31 03:03:06 np0005603621 nova_compute[247399]: 2026-01-31 08:03:06.838 247403 DEBUG nova.storage.rbd_utils [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] flattening images/4f6468b7-082d-4995-a147-34ef0ccaa0d2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:03:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 227 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 233 op/s
Jan 31 03:03:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:07 np0005603621 nova_compute[247399]: 2026-01-31 08:03:07.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:08 np0005603621 nova_compute[247399]: 2026-01-31 08:03:08.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:08.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:03:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 227 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.7 MiB/s wr, 204 op/s
Jan 31 03:03:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:09.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:09 np0005603621 nova_compute[247399]: 2026-01-31 08:03:09.972 247403 DEBUG nova.storage.rbd_utils [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] removing snapshot(8de9e3cfeeae483b95746926751ef90c) on rbd image(e549bdcc-5097-4890-8754-89d9f1d15a13_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:03:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 31 03:03:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 31 03:03:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 31 03:03:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 242 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 31 03:03:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 31 03:03:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:11.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 31 03:03:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 31 03:03:11 np0005603621 nova_compute[247399]: 2026-01-31 08:03:11.941 247403 DEBUG nova.storage.rbd_utils [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] creating snapshot(snap) on rbd image(4f6468b7-082d-4995-a147-34ef0ccaa0d2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:03:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:12 np0005603621 nova_compute[247399]: 2026-01-31 08:03:12.967 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:13 np0005603621 nova_compute[247399]: 2026-01-31 08:03:13.230 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 16 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 314 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 8.9 MiB/s wr, 299 op/s
Jan 31 03:03:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:13.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 31 03:03:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 31 03:03:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:03:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:14.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:03:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 31 03:03:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:03:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3781447090' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:03:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:03:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3781447090' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:03:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 16 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 285 active+clean; 322 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 318 op/s
Jan 31 03:03:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:16.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:03:17Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:cb:e4 10.100.0.13
Jan 31 03:03:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:03:17Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:cb:e4 10.100.0.13
Jan 31 03:03:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 31 03:03:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 12 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 291 active+clean; 324 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 8.4 MiB/s wr, 248 op/s
Jan 31 03:03:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 31 03:03:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 31 03:03:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:17 np0005603621 nova_compute[247399]: 2026-01-31 08:03:17.969 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:18 np0005603621 nova_compute[247399]: 2026-01-31 08:03:18.232 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:18 np0005603621 nova_compute[247399]: 2026-01-31 08:03:18.360 247403 INFO nova.virt.libvirt.driver [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Snapshot image upload complete#033[00m
Jan 31 03:03:18 np0005603621 nova_compute[247399]: 2026-01-31 08:03:18.360 247403 INFO nova.compute.manager [None req-001d3c00-80b3-441c-b735-c82e029eeae5 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Took 16.97 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:03:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 31 03:03:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 31 03:03:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 31 03:03:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 324 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 760 KiB/s rd, 1.6 MiB/s wr, 134 op/s
Jan 31 03:03:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:19.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:03:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.528 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.529 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.530 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:03:20 np0005603621 nova_compute[247399]: 2026-01-31 08:03:20.530 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e549bdcc-5097-4890-8754-89d9f1d15a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 317 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 956 KiB/s rd, 2.2 MiB/s wr, 192 op/s
Jan 31 03:03:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:21.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:03:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:03:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 31 03:03:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 31 03:03:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 31 03:03:22 np0005603621 nova_compute[247399]: 2026-01-31 08:03:22.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:22 np0005603621 nova_compute[247399]: 2026-01-31 08:03:22.974 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Updating instance_info_cache with network_info: [{"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:03:23 np0005603621 nova_compute[247399]: 2026-01-31 08:03:23.234 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 323 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.9 MiB/s wr, 345 op/s
Jan 31 03:03:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:23.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:24.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 31 03:03:24 np0005603621 nova_compute[247399]: 2026-01-31 08:03:24.789 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-e549bdcc-5097-4890-8754-89d9f1d15a13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:03:24 np0005603621 nova_compute[247399]: 2026-01-31 08:03:24.789 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:03:24 np0005603621 nova_compute[247399]: 2026-01-31 08:03:24.789 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 31 03:03:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 31 03:03:25 np0005603621 podman[284913]: 2026-01-31 08:03:25.450491836 +0000 UTC m=+0.047131773 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:03:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 326 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 850 KiB/s rd, 4.3 MiB/s wr, 283 op/s
Jan 31 03:03:25 np0005603621 podman[284914]: 2026-01-31 08:03:25.497517854 +0000 UTC m=+0.093336765 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:03:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:03:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:03:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.234 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.235 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.235 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798098702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.650 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.752 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.752 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000036 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.877 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.878 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4355MB free_disk=20.85219955444336GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.879 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:26 np0005603621 nova_compute[247399]: 2026-01-31 08:03:26.879 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 31 03:03:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 326 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 533 KiB/s rd, 3.5 MiB/s wr, 222 op/s
Jan 31 03:03:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 31 03:03:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:03:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:27 np0005603621 nova_compute[247399]: 2026-01-31 08:03:27.676 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e549bdcc-5097-4890-8754-89d9f1d15a13 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:03:27 np0005603621 nova_compute[247399]: 2026-01-31 08:03:27.676 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:03:27 np0005603621 nova_compute[247399]: 2026-01-31 08:03:27.676 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:03:27 np0005603621 nova_compute[247399]: 2026-01-31 08:03:27.731 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:27.820 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:27 np0005603621 nova_compute[247399]: 2026-01-31 08:03:27.821 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:27.821 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:03:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:27.822 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.004 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.271 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.275 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.306 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.378 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:03:28 np0005603621 nova_compute[247399]: 2026-01-31 08:03:28.378 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 31 03:03:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:28.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:29 np0005603621 nova_compute[247399]: 2026-01-31 08:03:29.379 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:29 np0005603621 nova_compute[247399]: 2026-01-31 08:03:29.380 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:29 np0005603621 nova_compute[247399]: 2026-01-31 08:03:29.380 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:03:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 326 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 34 KiB/s wr, 33 op/s
Jan 31 03:03:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:29.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 03:03:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:03:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:30.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:30.484 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:30.484 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:30.485 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:03:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 793361de-8bc2-43ef-a613-f75ee6eeb668 does not exist
Jan 31 03:03:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 002e4d4a-b5b2-4833-a0a3-8a1c58aaef79 does not exist
Jan 31 03:03:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1adb5908-6176-4079-9a43-7ad266e3fa4d does not exist
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:03:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 326 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 51 KiB/s wr, 40 op/s
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:03:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:31.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:31 np0005603621 podman[285374]: 2026-01-31 08:03:31.70757494 +0000 UTC m=+0.018129681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:03:31 np0005603621 podman[285374]: 2026-01-31 08:03:31.85958329 +0000 UTC m=+0.170138001 container create 22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:03:31 np0005603621 systemd[1]: Started libpod-conmon-22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b.scope.
Jan 31 03:03:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:03:32 np0005603621 podman[285374]: 2026-01-31 08:03:32.151407694 +0000 UTC m=+0.461962465 container init 22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:03:32 np0005603621 podman[285374]: 2026-01-31 08:03:32.157140864 +0000 UTC m=+0.467695575 container start 22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:03:32 np0005603621 systemd[1]: libpod-22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b.scope: Deactivated successfully.
Jan 31 03:03:32 np0005603621 nostalgic_dewdney[285390]: 167 167
Jan 31 03:03:32 np0005603621 conmon[285390]: conmon 22f2d4ccde1c3b78dd72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b.scope/container/memory.events
Jan 31 03:03:32 np0005603621 podman[285374]: 2026-01-31 08:03:32.297810437 +0000 UTC m=+0.608365168 container attach 22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:03:32 np0005603621 podman[285374]: 2026-01-31 08:03:32.298134287 +0000 UTC m=+0.608688998 container died 22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:03:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-618159b71da8acce3382b37b5c1036c886c80e1ca53894fd8d1fd9f2a6059114-merged.mount: Deactivated successfully.
Jan 31 03:03:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:33 np0005603621 nova_compute[247399]: 2026-01-31 08:03:33.040 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:33 np0005603621 nova_compute[247399]: 2026-01-31 08:03:33.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:33 np0005603621 podman[285374]: 2026-01-31 08:03:33.404134838 +0000 UTC m=+1.714689549 container remove 22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:03:33 np0005603621 systemd[1]: libpod-conmon-22f2d4ccde1c3b78dd72b7066a9efe1ea38b5215f94cb2e2f1fc33b54578ff9b.scope: Deactivated successfully.
Jan 31 03:03:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 347 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 117 op/s
Jan 31 03:03:33 np0005603621 podman[285413]: 2026-01-31 08:03:33.503617325 +0000 UTC m=+0.018530913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:03:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:33.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:33 np0005603621 podman[285413]: 2026-01-31 08:03:33.738588153 +0000 UTC m=+0.253501711 container create 41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:03:33 np0005603621 systemd[1]: Started libpod-conmon-41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6.scope.
Jan 31 03:03:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:03:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dbac5400fe94ecda098e44fc3a8186bc4118cb0c9fddad2e92d1c096e00ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dbac5400fe94ecda098e44fc3a8186bc4118cb0c9fddad2e92d1c096e00ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dbac5400fe94ecda098e44fc3a8186bc4118cb0c9fddad2e92d1c096e00ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dbac5400fe94ecda098e44fc3a8186bc4118cb0c9fddad2e92d1c096e00ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8dbac5400fe94ecda098e44fc3a8186bc4118cb0c9fddad2e92d1c096e00ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:34 np0005603621 podman[285413]: 2026-01-31 08:03:34.148673535 +0000 UTC m=+0.663587113 container init 41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:03:34 np0005603621 podman[285413]: 2026-01-31 08:03:34.155377506 +0000 UTC m=+0.670291064 container start 41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:03:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:34 np0005603621 clever_yalow[285431]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:03:34 np0005603621 clever_yalow[285431]: --> relative data size: 1.0
Jan 31 03:03:34 np0005603621 clever_yalow[285431]: --> All data devices are unavailable
Jan 31 03:03:34 np0005603621 podman[285413]: 2026-01-31 08:03:34.94401019 +0000 UTC m=+1.458923778 container attach 41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:03:34 np0005603621 systemd[1]: libpod-41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6.scope: Deactivated successfully.
Jan 31 03:03:34 np0005603621 podman[285413]: 2026-01-31 08:03:34.95390534 +0000 UTC m=+1.468818928 container died 41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:03:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 353 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 31 03:03:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8a8dbac5400fe94ecda098e44fc3a8186bc4118cb0c9fddad2e92d1c096e00ed-merged.mount: Deactivated successfully.
Jan 31 03:03:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:35.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 31 03:03:36 np0005603621 podman[285413]: 2026-01-31 08:03:36.877335492 +0000 UTC m=+3.392249080 container remove 41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:03:36 np0005603621 systemd[1]: libpod-conmon-41fa705b1d5193ac6509616fbb3bbc12048316094ed6c3af7cf16c87519b9bb6.scope: Deactivated successfully.
Jan 31 03:03:37 np0005603621 podman[285599]: 2026-01-31 08:03:37.362086262 +0000 UTC m=+0.020006930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:03:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 362 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Jan 31 03:03:37 np0005603621 podman[285599]: 2026-01-31 08:03:37.59932909 +0000 UTC m=+0.257249738 container create 360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:03:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:37.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 31 03:03:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 31 03:03:37 np0005603621 systemd[1]: Started libpod-conmon-360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6.scope.
Jan 31 03:03:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:03:37 np0005603621 podman[285599]: 2026-01-31 08:03:37.939724252 +0000 UTC m=+0.597644930 container init 360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:03:37 np0005603621 podman[285599]: 2026-01-31 08:03:37.94885976 +0000 UTC m=+0.606780408 container start 360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:03:37 np0005603621 condescending_banach[285616]: 167 167
Jan 31 03:03:37 np0005603621 systemd[1]: libpod-360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6.scope: Deactivated successfully.
Jan 31 03:03:37 np0005603621 podman[285599]: 2026-01-31 08:03:37.963481229 +0000 UTC m=+0.621401887 container attach 360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:03:37 np0005603621 podman[285599]: 2026-01-31 08:03:37.964066558 +0000 UTC m=+0.621987206 container died 360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 03:03:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-46f381abb426cb4c79058e85c1a3eff5b5a32c9e38b02a377a8e4ad73c5616dd-merged.mount: Deactivated successfully.
Jan 31 03:03:38 np0005603621 nova_compute[247399]: 2026-01-31 08:03:38.043 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:38 np0005603621 podman[285599]: 2026-01-31 08:03:38.051685322 +0000 UTC m=+0.709605970 container remove 360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:03:38 np0005603621 systemd[1]: libpod-conmon-360fa3f114987b7b19999c2ae4a7ffd9240de16e221ec4f08c2d566e08ea1af6.scope: Deactivated successfully.
Jan 31 03:03:38 np0005603621 podman[285642]: 2026-01-31 08:03:38.227857341 +0000 UTC m=+0.055773754 container create b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:03:38 np0005603621 nova_compute[247399]: 2026-01-31 08:03:38.240 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:38 np0005603621 systemd[1]: Started libpod-conmon-b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad.scope.
Jan 31 03:03:38 np0005603621 podman[285642]: 2026-01-31 08:03:38.203541727 +0000 UTC m=+0.031458170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:03:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:03:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56abe106950d684e94ae9e0956dfed2727b271490ffe304cfc6a1a7f97247676/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56abe106950d684e94ae9e0956dfed2727b271490ffe304cfc6a1a7f97247676/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56abe106950d684e94ae9e0956dfed2727b271490ffe304cfc6a1a7f97247676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56abe106950d684e94ae9e0956dfed2727b271490ffe304cfc6a1a7f97247676/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:38 np0005603621 podman[285642]: 2026-01-31 08:03:38.330994653 +0000 UTC m=+0.158911066 container init b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:03:38 np0005603621 podman[285642]: 2026-01-31 08:03:38.338939503 +0000 UTC m=+0.166855916 container start b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:03:38 np0005603621 podman[285642]: 2026-01-31 08:03:38.347457701 +0000 UTC m=+0.175374104 container attach b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:03:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:03:38
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'images', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:03:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:03:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 31 03:03:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 31 03:03:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]: {
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:    "0": [
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:        {
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "devices": [
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "/dev/loop3"
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            ],
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "lv_name": "ceph_lv0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "lv_size": "7511998464",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "name": "ceph_lv0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "tags": {
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.cluster_name": "ceph",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.crush_device_class": "",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.encrypted": "0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.osd_id": "0",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.type": "block",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:                "ceph.vdo": "0"
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            },
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "type": "block",
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:            "vg_name": "ceph_vg0"
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:        }
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]:    ]
Jan 31 03:03:39 np0005603621 sweet_shaw[285658]: }
Jan 31 03:03:39 np0005603621 systemd[1]: libpod-b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad.scope: Deactivated successfully.
Jan 31 03:03:39 np0005603621 podman[285642]: 2026-01-31 08:03:39.307225865 +0000 UTC m=+1.135142308 container died b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:03:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-56abe106950d684e94ae9e0956dfed2727b271490ffe304cfc6a1a7f97247676-merged.mount: Deactivated successfully.
Jan 31 03:03:39 np0005603621 podman[285642]: 2026-01-31 08:03:39.377571426 +0000 UTC m=+1.205487829 container remove b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_shaw, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:03:39 np0005603621 systemd[1]: libpod-conmon-b418a0761e60199b911ab76ea5b1c4dc89bc21151c9e4149cdf16d5ad2e59cad.scope: Deactivated successfully.
Jan 31 03:03:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 338 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 4.4 MiB/s wr, 190 op/s
Jan 31 03:03:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:39.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 31 03:03:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 31 03:03:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.008138771 +0000 UTC m=+0.024907934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.24825193 +0000 UTC m=+0.265021073 container create f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:03:40 np0005603621 systemd[1]: Started libpod-conmon-f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee.scope.
Jan 31 03:03:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.429835368 +0000 UTC m=+0.446604541 container init f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.437748698 +0000 UTC m=+0.454517831 container start f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:03:40 np0005603621 crazy_sanderson[285887]: 167 167
Jan 31 03:03:40 np0005603621 systemd[1]: libpod-f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee.scope: Deactivated successfully.
Jan 31 03:03:40 np0005603621 conmon[285887]: conmon f6c06b5fac5dfe74d7e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee.scope/container/memory.events
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.44389412 +0000 UTC m=+0.460663293 container attach f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.444917343 +0000 UTC m=+0.461686486 container died f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:03:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:40.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-efce49022cf512135dcc1bb42a50f02faa3407bd4669474d5df407e4733e84e4-merged.mount: Deactivated successfully.
Jan 31 03:03:40 np0005603621 podman[285870]: 2026-01-31 08:03:40.511794765 +0000 UTC m=+0.528563908 container remove f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sanderson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:03:40 np0005603621 systemd[1]: libpod-conmon-f6c06b5fac5dfe74d7e64254984c7b60bd29c02e11bba2893107fafea0eb81ee.scope: Deactivated successfully.
Jan 31 03:03:40 np0005603621 podman[285911]: 2026-01-31 08:03:40.672420926 +0000 UTC m=+0.050844030 container create c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:03:40 np0005603621 systemd[1]: Started libpod-conmon-c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835.scope.
Jan 31 03:03:40 np0005603621 podman[285911]: 2026-01-31 08:03:40.651686363 +0000 UTC m=+0.030109487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:03:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:03:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b7b4109aceb03b67f686bac0ac9ef1383c756cbb03cfb296d5e16eda11b691/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b7b4109aceb03b67f686bac0ac9ef1383c756cbb03cfb296d5e16eda11b691/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b7b4109aceb03b67f686bac0ac9ef1383c756cbb03cfb296d5e16eda11b691/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b7b4109aceb03b67f686bac0ac9ef1383c756cbb03cfb296d5e16eda11b691/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:03:40 np0005603621 podman[285911]: 2026-01-31 08:03:40.786648577 +0000 UTC m=+0.165071711 container init c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:03:40 np0005603621 podman[285911]: 2026-01-31 08:03:40.79214987 +0000 UTC m=+0.170572974 container start c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:03:40 np0005603621 podman[285911]: 2026-01-31 08:03:40.796801956 +0000 UTC m=+0.175225380 container attach c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:03:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 339 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.9 MiB/s wr, 212 op/s
Jan 31 03:03:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:41.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:41 np0005603621 silly_wilson[285927]: {
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:        "osd_id": 0,
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:        "type": "bluestore"
Jan 31 03:03:41 np0005603621 silly_wilson[285927]:    }
Jan 31 03:03:41 np0005603621 silly_wilson[285927]: }
Jan 31 03:03:41 np0005603621 systemd[1]: libpod-c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835.scope: Deactivated successfully.
Jan 31 03:03:41 np0005603621 conmon[285927]: conmon c71229535a59f58df2dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835.scope/container/memory.events
Jan 31 03:03:41 np0005603621 podman[285911]: 2026-01-31 08:03:41.741254949 +0000 UTC m=+1.119678063 container died c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:03:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-99b7b4109aceb03b67f686bac0ac9ef1383c756cbb03cfb296d5e16eda11b691-merged.mount: Deactivated successfully.
Jan 31 03:03:42 np0005603621 podman[285911]: 2026-01-31 08:03:42.076200728 +0000 UTC m=+1.454623832 container remove c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 03:03:42 np0005603621 systemd[1]: libpod-conmon-c71229535a59f58df2dd29414cae0abfa72043991274a1d18b48b5f17f03d835.scope: Deactivated successfully.
Jan 31 03:03:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:03:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.138 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "5667536c-3b20-4b38-b8e4-c85686a1eae2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.141 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "5667536c-3b20-4b38-b8e4-c85686a1eae2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a4d28323-12ea-4ac5-bf1d-0f99e86843f0 does not exist
Jan 31 03:03:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2a9c2856-3b6e-42a5-84ea-77a26ea3572e does not exist
Jan 31 03:03:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 363ed61f-d1d2-496c-906c-20645dcfaab8 does not exist
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.177 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.267 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.268 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.282 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.283 247403 INFO nova.compute.claims [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:03:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:42.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:42 np0005603621 nova_compute[247399]: 2026-01-31 08:03:42.484 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:03:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644346732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.030 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.037 247403 DEBUG nova.compute.provider_tree [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.044 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.066 247403 DEBUG nova.scheduler.client.report [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.120 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.121 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:03:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.179 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.180 247403 DEBUG nova.network.neutron [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.213 247403 INFO nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.230 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.342 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.343 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.343 247403 INFO nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Creating image(s)#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.371 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.401 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.430 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.433 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 325 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.7 MiB/s wr, 517 op/s
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.483 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.484 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.485 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.485 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.511 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.514 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:43.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.726 247403 DEBUG nova.network.neutron [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 31 03:03:43 np0005603621 nova_compute[247399]: 2026-01-31 08:03:43.727 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:03:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:44.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:44 np0005603621 nova_compute[247399]: 2026-01-31 08:03:44.881 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:44 np0005603621 nova_compute[247399]: 2026-01-31 08:03:44.969 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] resizing rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:03:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 337 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 501 op/s
Jan 31 03:03:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:45.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.829 247403 DEBUG nova.objects.instance [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lazy-loading 'migration_context' on Instance uuid 5667536c-3b20-4b38-b8e4-c85686a1eae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.843 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.844 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Ensure instance console log exists: /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.844 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.845 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.845 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.846 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.850 247403 WARNING nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.854 247403 DEBUG nova.virt.libvirt.host [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.855 247403 DEBUG nova.virt.libvirt.host [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.857 247403 DEBUG nova.virt.libvirt.host [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.858 247403 DEBUG nova.virt.libvirt.host [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.859 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.860 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.860 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.860 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.861 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.861 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.861 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.861 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.862 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.862 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.862 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.862 247403 DEBUG nova.virt.hardware [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:03:45 np0005603621 nova_compute[247399]: 2026-01-31 08:03:45.866 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/102503805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:46 np0005603621 nova_compute[247399]: 2026-01-31 08:03:46.372 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:03:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:46.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:03:46 np0005603621 nova_compute[247399]: 2026-01-31 08:03:46.503 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:46 np0005603621 nova_compute[247399]: 2026-01-31 08:03:46.508 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 31 03:03:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:03:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271794170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:03:46 np0005603621 nova_compute[247399]: 2026-01-31 08:03:46.974 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:46 np0005603621 nova_compute[247399]: 2026-01-31 08:03:46.975 247403 DEBUG nova.objects.instance [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5667536c-3b20-4b38-b8e4-c85686a1eae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.003 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <uuid>5667536c-3b20-4b38-b8e4-c85686a1eae2</uuid>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <name>instance-00000039</name>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:name>tempest-ListImageFiltersTestJSON-server-2023707036</nova:name>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:03:45</nova:creationTime>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:user uuid="fd3d70d97c394edaa70e32807d7a96ca">tempest-ListImageFiltersTestJSON-1012419265-project-member</nova:user>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <nova:project uuid="3d28270b439f4cb1aa201d46b9f8a843">tempest-ListImageFiltersTestJSON-1012419265</nova:project>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <entry name="serial">5667536c-3b20-4b38-b8e4-c85686a1eae2</entry>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <entry name="uuid">5667536c-3b20-4b38-b8e4-c85686a1eae2</entry>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/5667536c-3b20-4b38-b8e4-c85686a1eae2_disk">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/5667536c-3b20-4b38-b8e4-c85686a1eae2_disk.config">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/console.log" append="off"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:03:47 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:03:47 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:03:47 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:03:47 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:03:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.124 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.124 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.125 247403 INFO nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Using config drive#033[00m
Jan 31 03:03:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.185 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.403 247403 INFO nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Creating config drive at /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/disk.config#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.407 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnqbeyg55 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 356 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 467 op/s
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.529 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpnqbeyg55" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.557 247403 DEBUG nova.storage.rbd_utils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] rbd image 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:03:47 np0005603621 nova_compute[247399]: 2026-01-31 08:03:47.563 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/disk.config 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:03:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 31 03:03:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 31 03:03:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 31 03:03:48 np0005603621 nova_compute[247399]: 2026-01-31 08:03:48.045 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:48 np0005603621 nova_compute[247399]: 2026-01-31 08:03:48.242 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:48.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:48 np0005603621 nova_compute[247399]: 2026-01-31 08:03:48.863 247403 DEBUG oslo_concurrency.processutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/disk.config 5667536c-3b20-4b38-b8e4-c85686a1eae2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:03:48 np0005603621 nova_compute[247399]: 2026-01-31 08:03:48.863 247403 INFO nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Deleting local config drive /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2/disk.config because it was imported into RBD.#033[00m
Jan 31 03:03:48 np0005603621 systemd-machined[212769]: New machine qemu-25-instance-00000039.
Jan 31 03:03:48 np0005603621 systemd[1]: Started Virtual Machine qemu-25-instance-00000039.
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00518070677461381 of space, bias 1.0, pg target 1.554212032384143 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0050480265912897825 of space, bias 1.0, pg target 1.509359950795645 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:03:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.458 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846629.4579177, 5667536c-3b20-4b38-b8e4-c85686a1eae2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.459 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.463 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.464 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.468 247403 INFO nova.virt.libvirt.driver [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance spawned successfully.#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.469 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:03:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 412 MiB data, 739 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.1 MiB/s wr, 426 op/s
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.483 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.489 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.502 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.502 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.504 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.504 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.505 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.505 247403 DEBUG nova.virt.libvirt.driver [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.516 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.516 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846629.4592881, 5667536c-3b20-4b38-b8e4-c85686a1eae2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.516 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] VM Started (Lifecycle Event)#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.543 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.547 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.582 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.605 247403 INFO nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Took 6.26 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.605 247403 DEBUG nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.687 247403 INFO nova.compute.manager [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Took 7.46 seconds to build instance.#033[00m
Jan 31 03:03:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:49 np0005603621 nova_compute[247399]: 2026-01-31 08:03:49.712 247403 DEBUG oslo_concurrency.lockutils [None req-a54410d9-63b9-481c-ab21-357175da9974 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "5667536c-3b20-4b38-b8e4-c85686a1eae2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.102 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.103 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.103 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.103 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.104 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.105 247403 INFO nova.compute.manager [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Terminating instance#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.106 247403 DEBUG nova.compute.manager [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:03:50 np0005603621 kernel: tap5a6ed0c1-e0 (unregistering): left promiscuous mode
Jan 31 03:03:50 np0005603621 NetworkManager[49013]: <info>  [1769846630.1712] device (tap5a6ed0c1-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:03:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:03:50Z|00134|binding|INFO|Releasing lport 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 from this chassis (sb_readonly=0)
Jan 31 03:03:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:03:50Z|00135|binding|INFO|Setting lport 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 down in Southbound
Jan 31 03:03:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:03:50Z|00136|binding|INFO|Removing iface tap5a6ed0c1-e0 ovn-installed in OVS
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.177 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.185 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:cb:e4 10.100.0.13'], port_security=['fa:16:3e:b9:cb:e4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e549bdcc-5097-4890-8754-89d9f1d15a13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24914779-babc-4c55-b38b-adf9bfc5c103', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '521dcd459f144f2bb32de93d50ae0391', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e123a0a-7228-4656-b140-3fc3dfcfddda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0a17edd6-cd7f-4fcb-84f3-df8148e78cb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5a6ed0c1-e054-4f76-8079-2f367ccf0d67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.190 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6ed0c1-e054-4f76-8079-2f367ccf0d67 in datapath 24914779-babc-4c55-b38b-adf9bfc5c103 unbound from our chassis#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.192 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24914779-babc-4c55-b38b-adf9bfc5c103, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.193 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ba4db3-9b24-4ce0-803e-2c50ad5590c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.196 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 namespace which is not needed anymore#033[00m
Jan 31 03:03:50 np0005603621 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000036.scope: Deactivated successfully.
Jan 31 03:03:50 np0005603621 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000036.scope: Consumed 14.291s CPU time.
Jan 31 03:03:50 np0005603621 systemd-machined[212769]: Machine qemu-24-instance-00000036 terminated.
Jan 31 03:03:50 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [NOTICE]   (284623) : haproxy version is 2.8.14-c23fe91
Jan 31 03:03:50 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [NOTICE]   (284623) : path to executable is /usr/sbin/haproxy
Jan 31 03:03:50 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [WARNING]  (284623) : Exiting Master process...
Jan 31 03:03:50 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [ALERT]    (284623) : Current worker (284625) exited with code 143 (Terminated)
Jan 31 03:03:50 np0005603621 neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103[284619]: [WARNING]  (284623) : All workers exited. Exiting... (0)
Jan 31 03:03:50 np0005603621 systemd[1]: libpod-260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e.scope: Deactivated successfully.
Jan 31 03:03:50 np0005603621 NetworkManager[49013]: <info>  [1769846630.3201] manager: (tap5a6ed0c1-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 podman[286402]: 2026-01-31 08:03:50.324918638 +0000 UTC m=+0.044655965 container died 260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.335 247403 INFO nova.virt.libvirt.driver [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Instance destroyed successfully.#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.336 247403 DEBUG nova.objects.instance [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lazy-loading 'resources' on Instance uuid e549bdcc-5097-4890-8754-89d9f1d15a13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:03:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:03:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-92fc8125eada22b757092a0b41595928b13d246e1e5954c6875cdefbed99ebbe-merged.mount: Deactivated successfully.
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.366 247403 DEBUG nova.virt.libvirt.vif [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:02:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1905191450',display_name='tempest-ImagesTestJSON-server-1905191450',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1905191450',id=54,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:02:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='521dcd459f144f2bb32de93d50ae0391',ramdisk_id='',reservation_id='r-mophrfb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_mi
n_ram='0',owner_project_name='tempest-ImagesTestJSON-1780438391',owner_user_name='tempest-ImagesTestJSON-1780438391-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:03:18Z,user_data=None,user_id='46ffd64a348845fab6cdc53249353575',uuid=e549bdcc-5097-4890-8754-89d9f1d15a13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.367 247403 DEBUG nova.network.os_vif_util [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converting VIF {"id": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "address": "fa:16:3e:b9:cb:e4", "network": {"id": "24914779-babc-4c55-b38b-adf9bfc5c103", "bridge": "br-int", "label": "tempest-ImagesTestJSON-721321856-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "521dcd459f144f2bb32de93d50ae0391", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6ed0c1-e0", "ovs_interfaceid": "5a6ed0c1-e054-4f76-8079-2f367ccf0d67", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.368 247403 DEBUG nova.network.os_vif_util [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.368 247403 DEBUG os_vif [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.370 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.370 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a6ed0c1-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.373 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.375 247403 INFO os_vif [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:cb:e4,bridge_name='br-int',has_traffic_filtering=True,id=5a6ed0c1-e054-4f76-8079-2f367ccf0d67,network=Network(24914779-babc-4c55-b38b-adf9bfc5c103),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6ed0c1-e0')#033[00m
Jan 31 03:03:50 np0005603621 podman[286402]: 2026-01-31 08:03:50.38288179 +0000 UTC m=+0.102619117 container cleanup 260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:03:50 np0005603621 systemd[1]: libpod-conmon-260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e.scope: Deactivated successfully.
Jan 31 03:03:50 np0005603621 podman[286455]: 2026-01-31 08:03:50.447241483 +0000 UTC m=+0.046191932 container remove 260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.452 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[55bf8fa5-4918-41e7-a8f9-d64e64c9e5a2]: (4, ('Sat Jan 31 08:03:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e)\n260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e\nSat Jan 31 08:03:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 (260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e)\n260b555d8eda9e1c66de4f92db3fe683b313ac044b737f50e7ca22062539144e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.454 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[df5384ce-84b3-42bc-aa4e-08201ea5244d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.456 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24914779-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.458 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 kernel: tap24914779-b0: left promiscuous mode
Jan 31 03:03:50 np0005603621 nova_compute[247399]: 2026-01-31 08:03:50.467 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:03:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:50.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.473 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc860530-c595-4327-9811-63d7cc14bdb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.489 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ed101297-e9eb-4423-9747-8314b4745378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.492 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8aeee4-7c52-483a-8afa-68eecf7aded7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.507 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ec45684a-0d9a-4089-b7c9-74b705fbe43c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588145, 'reachable_time': 36212, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286474, 'error': None, 'target': 'ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.510 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24914779-babc-4c55-b38b-adf9bfc5c103 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:03:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:03:50.510 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[72e8b47c-f07e-4df3-baa6-8914561c3e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:03:50 np0005603621 systemd[1]: run-netns-ovnmeta\x2d24914779\x2dbabc\x2d4c55\x2db38b\x2dadf9bfc5c103.mount: Deactivated successfully.
Jan 31 03:03:51 np0005603621 nova_compute[247399]: 2026-01-31 08:03:51.146 247403 DEBUG nova.compute.manager [req-b5124cb1-433f-4960-bd23-f023c5cc7dd1 req-08c1f63d-3abc-426f-9573-cebb8a3baa70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-vif-unplugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:03:51 np0005603621 nova_compute[247399]: 2026-01-31 08:03:51.148 247403 DEBUG oslo_concurrency.lockutils [req-b5124cb1-433f-4960-bd23-f023c5cc7dd1 req-08c1f63d-3abc-426f-9573-cebb8a3baa70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:03:51 np0005603621 nova_compute[247399]: 2026-01-31 08:03:51.148 247403 DEBUG oslo_concurrency.lockutils [req-b5124cb1-433f-4960-bd23-f023c5cc7dd1 req-08c1f63d-3abc-426f-9573-cebb8a3baa70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:03:51 np0005603621 nova_compute[247399]: 2026-01-31 08:03:51.148 247403 DEBUG oslo_concurrency.lockutils [req-b5124cb1-433f-4960-bd23-f023c5cc7dd1 req-08c1f63d-3abc-426f-9573-cebb8a3baa70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:03:51 np0005603621 nova_compute[247399]: 2026-01-31 08:03:51.149 247403 DEBUG nova.compute.manager [req-b5124cb1-433f-4960-bd23-f023c5cc7dd1 req-08c1f63d-3abc-426f-9573-cebb8a3baa70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] No waiting events found dispatching network-vif-unplugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:03:51 np0005603621 nova_compute[247399]: 2026-01-31 08:03:51.149 247403 DEBUG nova.compute.manager [req-b5124cb1-433f-4960-bd23-f023c5cc7dd1 req-08c1f63d-3abc-426f-9573-cebb8a3baa70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-vif-unplugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:03:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 381 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.3 MiB/s wr, 211 op/s
Jan 31 03:03:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:52 np0005603621 nova_compute[247399]: 2026-01-31 08:03:52.143 247403 INFO nova.virt.libvirt.driver [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Deleting instance files /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13_del#033[00m
Jan 31 03:03:52 np0005603621 nova_compute[247399]: 2026-01-31 08:03:52.144 247403 INFO nova.virt.libvirt.driver [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Deletion of /var/lib/nova/instances/e549bdcc-5097-4890-8754-89d9f1d15a13_del complete#033[00m
Jan 31 03:03:52 np0005603621 nova_compute[247399]: 2026-01-31 08:03:52.220 247403 INFO nova.compute.manager [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Took 2.11 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:03:52 np0005603621 nova_compute[247399]: 2026-01-31 08:03:52.221 247403 DEBUG oslo.service.loopingcall [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:03:52 np0005603621 nova_compute[247399]: 2026-01-31 08:03:52.222 247403 DEBUG nova.compute.manager [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:03:52 np0005603621 nova_compute[247399]: 2026-01-31 08:03:52.222 247403 DEBUG nova.network.neutron [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:03:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:52.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.140 247403 DEBUG nova.network.neutron [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.248 247403 INFO nova.compute.manager [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Took 1.03 seconds to deallocate network for instance.
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.294 247403 DEBUG nova.compute.manager [req-063161ff-6c2e-43c3-adf9-8e52640db2d5 req-b527951e-e4ef-43f6-b6ba-1fe505550584 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.294 247403 DEBUG oslo_concurrency.lockutils [req-063161ff-6c2e-43c3-adf9-8e52640db2d5 req-b527951e-e4ef-43f6-b6ba-1fe505550584 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.295 247403 DEBUG oslo_concurrency.lockutils [req-063161ff-6c2e-43c3-adf9-8e52640db2d5 req-b527951e-e4ef-43f6-b6ba-1fe505550584 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.295 247403 DEBUG oslo_concurrency.lockutils [req-063161ff-6c2e-43c3-adf9-8e52640db2d5 req-b527951e-e4ef-43f6-b6ba-1fe505550584 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.295 247403 DEBUG nova.compute.manager [req-063161ff-6c2e-43c3-adf9-8e52640db2d5 req-b527951e-e4ef-43f6-b6ba-1fe505550584 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] No waiting events found dispatching network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.296 247403 WARNING nova.compute.manager [req-063161ff-6c2e-43c3-adf9-8e52640db2d5 req-b527951e-e4ef-43f6-b6ba-1fe505550584 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received unexpected event network-vif-plugged-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 for instance with vm_state active and task_state deleting.
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.378 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.379 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:03:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 178 MiB data, 662 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.6 MiB/s wr, 387 op/s
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.624 247403 DEBUG oslo_concurrency.processutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.649 247403 DEBUG nova.compute.manager [req-19df28b6-d00b-49d3-8a4d-5fef324e8aa4 req-cd2d408c-a8e8-47a4-87d6-321482fc62b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Received event network-vif-deleted-5a6ed0c1-e054-4f76-8079-2f367ccf0d67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:03:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:53.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.741 247403 DEBUG nova.compute.manager [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:03:53 np0005603621 nova_compute[247399]: 2026-01-31 08:03:53.800 247403 INFO nova.compute.manager [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] instance snapshotting
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.094 247403 DEBUG oslo_concurrency.processutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.099 247403 DEBUG nova.compute.provider_tree [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.155 247403 DEBUG nova.scheduler.client.report [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.196 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.240 247403 INFO nova.scheduler.client.report [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Deleted allocations for instance e549bdcc-5097-4890-8754-89d9f1d15a13
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.320 247403 DEBUG oslo_concurrency.lockutils [None req-5411a468-d873-4db0-b44f-a3c2d95cb161 46ffd64a348845fab6cdc53249353575 521dcd459f144f2bb32de93d50ae0391 - - default default] Lock "e549bdcc-5097-4890-8754-89d9f1d15a13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.341 247403 INFO nova.virt.libvirt.driver [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Beginning live snapshot process
Jan 31 03:03:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:54.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.503 247403 DEBUG nova.virt.libvirt.imagebackend [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 03:03:54 np0005603621 nova_compute[247399]: 2026-01-31 08:03:54.900 247403 DEBUG nova.storage.rbd_utils [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] creating snapshot(173d5b39d57b4af3a1619383e6bd9755) on rbd image(5667536c-3b20-4b38-b8e4-c85686a1eae2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 03:03:55 np0005603621 nova_compute[247399]: 2026-01-31 08:03:55.374 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 134 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.0 MiB/s wr, 390 op/s
Jan 31 03:03:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:55.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 31 03:03:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 31 03:03:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 31 03:03:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:03:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:56.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:03:56 np0005603621 podman[286552]: 2026-01-31 08:03:56.5399106 +0000 UTC m=+0.085939552 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:03:56 np0005603621 podman[286553]: 2026-01-31 08:03:56.578517864 +0000 UTC m=+0.124546296 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 03:03:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 134 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 344 op/s
Jan 31 03:03:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:03:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:57.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 31 03:03:58 np0005603621 nova_compute[247399]: 2026-01-31 08:03:58.050 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:03:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 31 03:03:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 31 03:03:58 np0005603621 nova_compute[247399]: 2026-01-31 08:03:58.331 247403 DEBUG nova.storage.rbd_utils [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] cloning vms/5667536c-3b20-4b38-b8e4-c85686a1eae2_disk@173d5b39d57b4af3a1619383e6bd9755 to images/58c2d049-2cc3-4799-9c9b-054364bb8a5e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 03:03:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:03:58.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:03:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 134 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 47 KiB/s wr, 362 op/s
Jan 31 03:03:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:03:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:03:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:03:59.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:00 np0005603621 nova_compute[247399]: 2026-01-31 08:04:00.201 247403 DEBUG nova.storage.rbd_utils [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] flattening images/58c2d049-2cc3-4799-9c9b-054364bb8a5e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 03:04:00 np0005603621 nova_compute[247399]: 2026-01-31 08:04:00.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:00.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 153 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.5 MiB/s wr, 115 op/s
Jan 31 03:04:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:01.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:02.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:03 np0005603621 nova_compute[247399]: 2026-01-31 08:04:03.052 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 167 MiB data, 609 MiB used, 20 GiB / 21 GiB avail; 856 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 31 03:04:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:04 np0005603621 nova_compute[247399]: 2026-01-31 08:04:04.437 247403 DEBUG nova.storage.rbd_utils [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] removing snapshot(173d5b39d57b4af3a1619383e6bd9755) on rbd image(5667536c-3b20-4b38-b8e4-c85686a1eae2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 31 03:04:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:04.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:05 np0005603621 nova_compute[247399]: 2026-01-31 08:04:05.334 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846630.3323452, e549bdcc-5097-4890-8754-89d9f1d15a13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:04:05 np0005603621 nova_compute[247399]: 2026-01-31 08:04:05.335 247403 INFO nova.compute.manager [-] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] VM Stopped (Lifecycle Event)
Jan 31 03:04:05 np0005603621 nova_compute[247399]: 2026-01-31 08:04:05.356 247403 DEBUG nova.compute.manager [None req-37030272-d013-45fb-b30e-95c183be8181 - - - - - -] [instance: e549bdcc-5097-4890-8754-89d9f1d15a13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:04:05 np0005603621 nova_compute[247399]: 2026-01-31 08:04:05.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 31 03:04:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 241 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.0 MiB/s wr, 131 op/s
Jan 31 03:04:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 31 03:04:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 31 03:04:05 np0005603621 nova_compute[247399]: 2026-01-31 08:04:05.515 247403 DEBUG nova.storage.rbd_utils [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] creating snapshot(snap) on rbd image(58c2d049-2cc3-4799-9c9b-054364bb8a5e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 03:04:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:06.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 31 03:04:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 31 03:04:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 31 03:04:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 261 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 9.1 MiB/s wr, 212 op/s
Jan 31 03:04:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:07.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.803852) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846647803894, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1709, "num_deletes": 262, "total_data_size": 2811874, "memory_usage": 2851464, "flush_reason": "Manual Compaction"}
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846647823836, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1881148, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31765, "largest_seqno": 33473, "table_properties": {"data_size": 1874307, "index_size": 3787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17237, "raw_average_key_size": 22, "raw_value_size": 1859739, "raw_average_value_size": 2375, "num_data_blocks": 164, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846523, "oldest_key_time": 1769846523, "file_creation_time": 1769846647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 20038 microseconds, and 4297 cpu microseconds.
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.823888) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1881148 bytes OK
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.823912) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.858732) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.858843) EVENT_LOG_v1 {"time_micros": 1769846647858833, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.858870) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2804341, prev total WAL file size 2804341, number of live WAL files 2.
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.859688) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1837KB)], [68(10MB)]
Jan 31 03:04:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846647859773, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 12626524, "oldest_snapshot_seqno": -1}
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6053 keys, 9605015 bytes, temperature: kUnknown
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846648045548, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9605015, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9564571, "index_size": 24206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 154767, "raw_average_key_size": 25, "raw_value_size": 9456110, "raw_average_value_size": 1562, "num_data_blocks": 977, "num_entries": 6053, "num_filter_entries": 6053, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.045880) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9605015 bytes
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.047573) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.9 rd, 51.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(11.8) write-amplify(5.1) OK, records in: 6534, records dropped: 481 output_compression: NoCompression
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.047616) EVENT_LOG_v1 {"time_micros": 1769846648047601, "job": 38, "event": "compaction_finished", "compaction_time_micros": 185885, "compaction_time_cpu_micros": 23872, "output_level": 6, "num_output_files": 1, "total_output_size": 9605015, "num_input_records": 6534, "num_output_records": 6053, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846648048450, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846648049397, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:07.859547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.049440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.049445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.049447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.049449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:08 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:08.049450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:08 np0005603621 nova_compute[247399]: 2026-01-31 08:04:08.054 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:04:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 283 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 9.5 MiB/s wr, 336 op/s
Jan 31 03:04:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:09.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:10 np0005603621 nova_compute[247399]: 2026-01-31 08:04:10.425 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 31 03:04:11 np0005603621 nova_compute[247399]: 2026-01-31 08:04:11.238 247403 INFO nova.virt.libvirt.driver [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Snapshot image upload complete
Jan 31 03:04:11 np0005603621 nova_compute[247399]: 2026-01-31 08:04:11.239 247403 INFO nova.compute.manager [None req-bf5d47e3-7e91-4d43-9ed5-59db09eec04b fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Took 17.44 seconds to snapshot the instance on the hypervisor.
Jan 31 03:04:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 292 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 9.5 MiB/s wr, 372 op/s
Jan 31 03:04:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 31 03:04:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 31 03:04:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:11.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 31 03:04:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 31 03:04:13 np0005603621 nova_compute[247399]: 2026-01-31 08:04:13.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 31 03:04:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 293 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.3 MiB/s wr, 360 op/s
Jan 31 03:04:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:13.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:14.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.749 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.749 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.777 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.875 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.876 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.888 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:04:14 np0005603621 nova_compute[247399]: 2026-01-31 08:04:14.889 247403 INFO nova.compute.claims [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.041 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 31 03:04:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 31 03:04:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 314 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 193 op/s
Jan 31 03:04:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723823175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.526 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.532 247403 DEBUG nova.compute.provider_tree [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.565 247403 DEBUG nova.scheduler.client.report [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.592 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.593 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.662 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.663 247403 DEBUG nova.network.neutron [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.687 247403 INFO nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.712 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:04:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:15.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.806 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.807 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.808 247403 INFO nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Creating image(s)
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.832 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.859 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.887 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.890 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.910 247403 DEBUG nova.policy [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '35d966201a0243c2a8f4c689350e8ddd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '506fa1d7269349f1aa48237dd82ac79e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.947 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.948 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.948 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.948 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.977 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:04:15 np0005603621 nova_compute[247399]: 2026-01-31 08:04:15.981 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 dd2c35b1-7956-49ad-a88a-21ea779d2720_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 31 03:04:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 31 03:04:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.269 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 dd2c35b1-7956-49ad-a88a-21ea779d2720_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.349 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] resizing rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:04:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:16.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.816 247403 DEBUG nova.objects.instance [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lazy-loading 'migration_context' on Instance uuid dd2c35b1-7956-49ad-a88a-21ea779d2720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.915 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.915 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Ensure instance console log exists: /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.916 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.916 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:16 np0005603621 nova_compute[247399]: 2026-01-31 08:04:16.917 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 31 03:04:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 31 03:04:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 31 03:04:17 np0005603621 nova_compute[247399]: 2026-01-31 08:04:17.321 247403 DEBUG nova.network.neutron [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Successfully created port: ff260644-03d4-48e6-8e4d-0ac43c3f9f60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:04:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 335 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.1 MiB/s wr, 152 op/s
Jan 31 03:04:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:17.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.058 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.303 247403 DEBUG nova.network.neutron [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Successfully updated port: ff260644-03d4-48e6-8e4d-0ac43c3f9f60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.332 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.332 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquired lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.333 247403 DEBUG nova.network.neutron [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:04:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:18.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.507 247403 DEBUG nova.compute.manager [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-changed-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.508 247403 DEBUG nova.compute.manager [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Refreshing instance network info cache due to event network-changed-ff260644-03d4-48e6-8e4d-0ac43c3f9f60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.508 247403 DEBUG oslo_concurrency.lockutils [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:18 np0005603621 nova_compute[247399]: 2026-01-31 08:04:18.625 247403 DEBUG nova.network.neutron [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:04:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 31 03:04:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 31 03:04:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 31 03:04:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 392 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 10 MiB/s wr, 234 op/s
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.694 247403 DEBUG nova.network.neutron [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Updating instance_info_cache with network_info: [{"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.719 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Releasing lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.719 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Instance network_info: |[{"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.720 247403 DEBUG oslo_concurrency.lockutils [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.720 247403 DEBUG nova.network.neutron [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Refreshing network info cache for port ff260644-03d4-48e6-8e4d-0ac43c3f9f60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.723 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Start _get_guest_xml network_info=[{"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.726 247403 WARNING nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:04:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:19.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.733 247403 DEBUG nova.virt.libvirt.host [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.735 247403 DEBUG nova.virt.libvirt.host [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.746 247403 DEBUG nova.virt.libvirt.host [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.747 247403 DEBUG nova.virt.libvirt.host [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.749 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.749 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.750 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.750 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.750 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.750 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.751 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.751 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.751 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.751 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.752 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.752 247403 DEBUG nova.virt.hardware [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:04:19 np0005603621 nova_compute[247399]: 2026-01-31 08:04:19.757 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:04:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/112776618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.199 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.220 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.224 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 31 03:04:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 31 03:04:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:04:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:20.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.688 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.691 247403 DEBUG nova.virt.libvirt.vif [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-715559348',display_name='tempest-ServersTestManualDisk-server-715559348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-715559348',id=60,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDClwd2i9PkdYOSRbbcRWxkdb71Vxdkd0xT2/G+Y4LjlmIwO0yq5QpD1L8zQrD1yQl7nScI5xmbhtYSRlIcyG0Am70fXaS7wg8TUhXshdKjm8ie5Ar8IpgPf6Q5tocpTg==',key_name='tempest-keypair-823043967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='506fa1d7269349f1aa48237dd82ac79e',ramdisk_id='',reservation_id='r-temvets0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1760577672',owner_user_name='tempest-ServersTestManualDisk-1760577672-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='35d966201a0243c2a8f4c689350e8ddd',uuid=dd2c35b1-7956-49ad-a88a-21ea779d2720,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.691 247403 DEBUG nova.network.os_vif_util [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Converting VIF {"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.692 247403 DEBUG nova.network.os_vif_util [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.693 247403 DEBUG nova.objects.instance [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lazy-loading 'pci_devices' on Instance uuid dd2c35b1-7956-49ad-a88a-21ea779d2720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.718 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <uuid>dd2c35b1-7956-49ad-a88a-21ea779d2720</uuid>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <name>instance-0000003c</name>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersTestManualDisk-server-715559348</nova:name>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:04:19</nova:creationTime>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:user uuid="35d966201a0243c2a8f4c689350e8ddd">tempest-ServersTestManualDisk-1760577672-project-member</nova:user>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:project uuid="506fa1d7269349f1aa48237dd82ac79e">tempest-ServersTestManualDisk-1760577672</nova:project>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <nova:port uuid="ff260644-03d4-48e6-8e4d-0ac43c3f9f60">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <entry name="serial">dd2c35b1-7956-49ad-a88a-21ea779d2720</entry>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <entry name="uuid">dd2c35b1-7956-49ad-a88a-21ea779d2720</entry>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/dd2c35b1-7956-49ad-a88a-21ea779d2720_disk">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/dd2c35b1-7956-49ad-a88a-21ea779d2720_disk.config">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:3c:4b:63"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <target dev="tapff260644-03"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/console.log" append="off"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:04:20 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:04:20 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:04:20 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:04:20 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.718 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Preparing to wait for external event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.718 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.719 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.719 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.720 247403 DEBUG nova.virt.libvirt.vif [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-715559348',display_name='tempest-ServersTestManualDisk-server-715559348',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-715559348',id=60,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDClwd2i9PkdYOSRbbcRWxkdb71Vxdkd0xT2/G+Y4LjlmIwO0yq5QpD1L8zQrD1yQl7nScI5xmbhtYSRlIcyG0Am70fXaS7wg8TUhXshdKjm8ie5Ar8IpgPf6Q5tocpTg==',key_name='tempest-keypair-823043967',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='506fa1d7269349f1aa48237dd82ac79e',ramdisk_id='',reservation_id='r-temvets0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1760577672',owner_user_name='tempest-ServersTestManualDisk-1760577672-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='35d966201a0243c2a8f4c689350e8ddd',uuid=dd2c35b1-7956-49ad-a88a-21ea779d2720,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.720 247403 DEBUG nova.network.os_vif_util [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Converting VIF {"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.720 247403 DEBUG nova.network.os_vif_util [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.721 247403 DEBUG os_vif [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.721 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.722 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.725 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.725 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff260644-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.726 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff260644-03, col_values=(('external_ids', {'iface-id': 'ff260644-03d4-48e6-8e4d-0ac43c3f9f60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:4b:63', 'vm-uuid': 'dd2c35b1-7956-49ad-a88a-21ea779d2720'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.727 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:20 np0005603621 NetworkManager[49013]: <info>  [1769846660.7288] manager: (tapff260644-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.730 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.733 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.734 247403 INFO os_vif [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03')
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.819 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.820 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.820 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] No VIF found with MAC fa:16:3e:3c:4b:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.821 247403 INFO nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Using config drive
Jan 31 03:04:20 np0005603621 nova_compute[247399]: 2026-01-31 08:04:20.845 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:04:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 447 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 17 MiB/s wr, 405 op/s
Jan 31 03:04:21 np0005603621 nova_compute[247399]: 2026-01-31 08:04:21.700 247403 INFO nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Creating config drive at /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/disk.config
Jan 31 03:04:21 np0005603621 nova_compute[247399]: 2026-01-31 08:04:21.704 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1nnirb2w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:21.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:21 np0005603621 nova_compute[247399]: 2026-01-31 08:04:21.827 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1nnirb2w" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:21 np0005603621 nova_compute[247399]: 2026-01-31 08:04:21.853 247403 DEBUG nova.storage.rbd_utils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] rbd image dd2c35b1-7956-49ad-a88a-21ea779d2720_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:04:21 np0005603621 nova_compute[247399]: 2026-01-31 08:04:21.859 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/disk.config dd2c35b1-7956-49ad-a88a-21ea779d2720_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.032 247403 DEBUG nova.network.neutron [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Updated VIF entry in instance network info cache for port ff260644-03d4-48e6-8e4d-0ac43c3f9f60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.032 247403 DEBUG nova.network.neutron [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Updating instance_info_cache with network_info: [{"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.054 247403 DEBUG oslo_concurrency.lockutils [req-6d0dcf31-3ea5-4939-abfe-7c4d00c55024 req-0360f26e-3280-4af1-bea6-568747466c41 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.076 247403 DEBUG oslo_concurrency.processutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/disk.config dd2c35b1-7956-49ad-a88a-21ea779d2720_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.077 247403 INFO nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Deleting local config drive /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720/disk.config because it was imported into RBD.
Jan 31 03:04:22 np0005603621 kernel: tapff260644-03: entered promiscuous mode
Jan 31 03:04:22 np0005603621 NetworkManager[49013]: <info>  [1769846662.1240] manager: (tapff260644-03): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.126 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:22Z|00137|binding|INFO|Claiming lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for this chassis.
Jan 31 03:04:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:22Z|00138|binding|INFO|ff260644-03d4-48e6-8e4d-0ac43c3f9f60: Claiming fa:16:3e:3c:4b:63 10.100.0.12
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.140 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:4b:63 10.100.0.12'], port_security=['fa:16:3e:3c:4b:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dd2c35b1-7956-49ad-a88a-21ea779d2720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9c71897-6017-4f84-a423-008c2828939d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '506fa1d7269349f1aa48237dd82ac79e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2f5b0c3b-16f9-4a94-9aaa-61306aa77fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca989f2f-5c28-4141-8302-9cf821f3fb7a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ff260644-03d4-48e6-8e4d-0ac43c3f9f60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.141 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ff260644-03d4-48e6-8e4d-0ac43c3f9f60 in datapath e9c71897-6017-4f84-a423-008c2828939d bound to our chassis#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.143 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e9c71897-6017-4f84-a423-008c2828939d#033[00m
Jan 31 03:04:22 np0005603621 systemd-udevd[287123]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:04:22 np0005603621 systemd-machined[212769]: New machine qemu-26-instance-0000003c.
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.153 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fd25c37f-79a7-4a98-b878-9ff36570ddcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.154 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape9c71897-61 in ovnmeta-e9c71897-6017-4f84-a423-008c2828939d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.157 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape9c71897-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.158 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ccb4dbbd-0bfa-4bc2-be07-08425f8466c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.159 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[79efc8dc-93d6-434c-8351-f3172393a2db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 systemd[1]: Started Virtual Machine qemu-26-instance-0000003c.
Jan 31 03:04:22 np0005603621 NetworkManager[49013]: <info>  [1769846662.1644] device (tapff260644-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:04:22 np0005603621 NetworkManager[49013]: <info>  [1769846662.1651] device (tapff260644-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.172 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[09db495b-fb4a-465c-9dd5-c17ff58d4a34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.177 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:22Z|00139|binding|INFO|Setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 ovn-installed in OVS
Jan 31 03:04:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:22Z|00140|binding|INFO|Setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 up in Southbound
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.198 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eafa2119-8ab3-43b4-a8fe-a7b228d63fc3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.230 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[46667518-9da3-4ddc-92d4-9c5cbc3b0c2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.232 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.237 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a67e027f-0530-4640-baa0-f0491fa8bc4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 systemd-udevd[287126]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:04:22 np0005603621 NetworkManager[49013]: <info>  [1769846662.2391] manager: (tape9c71897-60): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.270 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe1f444-920d-4ee8-a01f-9eca757e36ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.273 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1591d908-ad62-4e03-b170-43c087e0bd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 NetworkManager[49013]: <info>  [1769846662.2918] device (tape9c71897-60): carrier: link connected
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.297 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[77f65a87-88dd-4792-8d23-dbe509194289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.309 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2daa33-86ed-41de-8513-0a17fd0ecf6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9c71897-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:99:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596780, 'reachable_time': 44374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287156, 'error': None, 'target': 'ovnmeta-e9c71897-6017-4f84-a423-008c2828939d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.320 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5155622c-dfb7-47a9-95d1-d33fa9d746b4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:996f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596780, 'tstamp': 596780}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287157, 'error': None, 'target': 'ovnmeta-e9c71897-6017-4f84-a423-008c2828939d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.330 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9d9def-b998-45af-984a-111b17087874]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9c71897-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:99:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596780, 'reachable_time': 44374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287158, 'error': None, 'target': 'ovnmeta-e9c71897-6017-4f84-a423-008c2828939d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.355 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c497afc1-b627-429f-8421-d9ad46ecfe97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.403 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[734ff17d-095b-49cb-9e11-37075b4abfc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.406 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9c71897-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.406 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.407 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9c71897-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 kernel: tape9c71897-60: entered promiscuous mode
Jan 31 03:04:22 np0005603621 NetworkManager[49013]: <info>  [1769846662.4115] manager: (tape9c71897-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.413 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape9c71897-60, col_values=(('external_ids', {'iface-id': '7f44070e-60dd-45ea-9dac-6fdd7b73ad9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:22Z|00141|binding|INFO|Releasing lport 7f44070e-60dd-45ea-9dac-6fdd7b73ad9c from this chassis (sb_readonly=0)
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.417 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e9c71897-6017-4f84-a423-008c2828939d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e9c71897-6017-4f84-a423-008c2828939d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.419 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[89845072-2a60-4ee3-8d8b-51933b495600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.419 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-e9c71897-6017-4f84-a423-008c2828939d
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/e9c71897-6017-4f84-a423-008c2828939d.pid.haproxy
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID e9c71897-6017-4f84-a423-008c2828939d
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.420 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:22.420 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e9c71897-6017-4f84-a423-008c2828939d', 'env', 'PROCESS_TAG=haproxy-e9c71897-6017-4f84-a423-008c2828939d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e9c71897-6017-4f84-a423-008c2828939d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.423 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-5667536c-3b20-4b38-b8e4-c85686a1eae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.423 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-5667536c-3b20-4b38-b8e4-c85686a1eae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.423 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.423 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5667536c-3b20-4b38-b8e4-c85686a1eae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:22.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.571 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846662.5703006, dd2c35b1-7956-49ad-a88a-21ea779d2720 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.571 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] VM Started (Lifecycle Event)#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.605 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.609 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846662.5705848, dd2c35b1-7956-49ad-a88a-21ea779d2720 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.609 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.628 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.631 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.651 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:04:22 np0005603621 nova_compute[247399]: 2026-01-31 08:04:22.676 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:04:22 np0005603621 podman[287232]: 2026-01-31 08:04:22.764814447 +0000 UTC m=+0.054017279 container create 0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:04:22 np0005603621 systemd[1]: Started libpod-conmon-0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30.scope.
Jan 31 03:04:22 np0005603621 podman[287232]: 2026-01-31 08:04:22.730801859 +0000 UTC m=+0.020004711 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:04:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5fa848e6de7ebee787ef411316732cd452bcdd0686eef7361e875bd0b8d41f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:22 np0005603621 podman[287232]: 2026-01-31 08:04:22.859859406 +0000 UTC m=+0.149062268 container init 0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:04:22 np0005603621 podman[287232]: 2026-01-31 08:04:22.865522874 +0000 UTC m=+0.154725706 container start 0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:04:22 np0005603621 neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d[287247]: [NOTICE]   (287251) : New worker (287253) forked
Jan 31 03:04:22 np0005603621 neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d[287247]: [NOTICE]   (287251) : Loading success.
Jan 31 03:04:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 31 03:04:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 31 03:04:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.062 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.082 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-5667536c-3b20-4b38-b8e4-c85686a1eae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.083 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 03:04:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 435 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 16 MiB/s wr, 386 op/s
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.719 247403 DEBUG nova.compute.manager [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.720 247403 DEBUG oslo_concurrency.lockutils [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.720 247403 DEBUG oslo_concurrency.lockutils [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.721 247403 DEBUG oslo_concurrency.lockutils [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.721 247403 DEBUG nova.compute.manager [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Processing event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.721 247403 DEBUG nova.compute.manager [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.722 247403 DEBUG oslo_concurrency.lockutils [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.724 247403 DEBUG oslo_concurrency.lockutils [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.724 247403 DEBUG oslo_concurrency.lockutils [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.724 247403 DEBUG nova.compute.manager [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.724 247403 WARNING nova.compute.manager [req-ba694733-e85a-4765-8dd2-f293df4212bc req-cd0e9cee-f922-4fe2-9521-3ae93c3364ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received unexpected event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with vm_state building and task_state spawning.
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.725 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.734 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846663.7326226, dd2c35b1-7956-49ad-a88a-21ea779d2720 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.734 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] VM Resumed (Lifecycle Event)
Jan 31 03:04:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:23.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.736 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.742 247403 INFO nova.virt.libvirt.driver [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Instance spawned successfully.
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.742 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.772 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.777 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.791 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.791 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.792 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.792 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.792 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.793 247403 DEBUG nova.virt.libvirt.driver [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.804 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.917 247403 INFO nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Took 8.11 seconds to spawn the instance on the hypervisor.
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.918 247403 DEBUG nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:04:23 np0005603621 nova_compute[247399]: 2026-01-31 08:04:23.998 247403 INFO nova.compute.manager [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Took 9.16 seconds to build instance.
Jan 31 03:04:24 np0005603621 nova_compute[247399]: 2026-01-31 08:04:24.044 247403 DEBUG oslo_concurrency.lockutils [None req-2b04c5e5-6124-4a67-ac65-ebd28c51b6c8 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:24 np0005603621 nova_compute[247399]: 2026-01-31 08:04:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 372 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 9.4 MiB/s wr, 329 op/s
Jan 31 03:04:25 np0005603621 nova_compute[247399]: 2026-01-31 08:04:25.728 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:25 np0005603621 nova_compute[247399]: 2026-01-31 08:04:25.784 247403 DEBUG nova.compute.manager [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:04:25 np0005603621 nova_compute[247399]: 2026-01-31 08:04:25.859 247403 INFO nova.compute.manager [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] instance snapshotting
Jan 31 03:04:26 np0005603621 nova_compute[247399]: 2026-01-31 08:04:26.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:26 np0005603621 nova_compute[247399]: 2026-01-31 08:04:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:26 np0005603621 nova_compute[247399]: 2026-01-31 08:04:26.287 247403 INFO nova.virt.libvirt.driver [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Beginning live snapshot process
Jan 31 03:04:26 np0005603621 nova_compute[247399]: 2026-01-31 08:04:26.461 247403 DEBUG nova.virt.libvirt.imagebackend [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 03:04:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:26 np0005603621 nova_compute[247399]: 2026-01-31 08:04:26.805 247403 DEBUG nova.storage.rbd_utils [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] creating snapshot(ada02ab11e8c43bb899a9aa3267dd74d) on rbd image(5667536c-3b20-4b38-b8e4-c85686a1eae2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.221 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.223 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.224 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:04:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 372 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.3 MiB/s wr, 261 op/s
Jan 31 03:04:27 np0005603621 podman[287335]: 2026-01-31 08:04:27.500590625 +0000 UTC m=+0.055142624 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:04:27 np0005603621 podman[287336]: 2026-01-31 08:04:27.53064642 +0000 UTC m=+0.082852545 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2449267213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.729 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:04:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.820 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.821 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.828 247403 DEBUG nova.storage.rbd_utils [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] cloning vms/5667536c-3b20-4b38-b8e4-c85686a1eae2_disk@ada02ab11e8c43bb899a9aa3267dd74d to images/d9bfe40a-4757-4935-8bdd-50aa5c769418 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.875 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:04:27 np0005603621 nova_compute[247399]: 2026-01-31 08:04:27.876 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 31 03:04:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.010 247403 DEBUG nova.storage.rbd_utils [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] flattening images/d9bfe40a-4757-4935-8bdd-50aa5c769418 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.104 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.158 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.160 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4209MB free_disk=20.876415252685547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.160 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.161 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.429 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 5667536c-3b20-4b38-b8e4-c85686a1eae2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.430 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance dd2c35b1-7956-49ad-a88a-21ea779d2720 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.430 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.430 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:04:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.532 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:28 np0005603621 nova_compute[247399]: 2026-01-31 08:04:28.936 247403 DEBUG nova.storage.rbd_utils [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] removing snapshot(ada02ab11e8c43bb899a9aa3267dd74d) on rbd image(5667536c-3b20-4b38-b8e4-c85686a1eae2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:04:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069457065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.002 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.009 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.034 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.080 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.080 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:29 np0005603621 NetworkManager[49013]: <info>  [1769846669.3713] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 31 03:04:29 np0005603621 NetworkManager[49013]: <info>  [1769846669.3720] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.401 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:29Z|00142|binding|INFO|Releasing lport 7f44070e-60dd-45ea-9dac-6fdd7b73ad9c from this chassis (sb_readonly=0)
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.419 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 389 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 1.9 MiB/s wr, 193 op/s
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.678 247403 DEBUG nova.compute.manager [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-changed-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.678 247403 DEBUG nova.compute.manager [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Refreshing instance network info cache due to event network-changed-ff260644-03d4-48e6-8e4d-0ac43c3f9f60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.679 247403 DEBUG oslo_concurrency.lockutils [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.679 247403 DEBUG oslo_concurrency.lockutils [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:29 np0005603621 nova_compute[247399]: 2026-01-31 08:04:29.679 247403 DEBUG nova.network.neutron [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Refreshing network info cache for port ff260644-03d4-48e6-8e4d-0ac43c3f9f60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:04:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 31 03:04:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 31 03:04:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 31 03:04:30 np0005603621 nova_compute[247399]: 2026-01-31 08:04:30.081 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:30 np0005603621 nova_compute[247399]: 2026-01-31 08:04:30.081 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:30 np0005603621 nova_compute[247399]: 2026-01-31 08:04:30.167 247403 DEBUG nova.storage.rbd_utils [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] creating snapshot(snap) on rbd image(d9bfe40a-4757-4935-8bdd-50aa5c769418) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:04:30 np0005603621 nova_compute[247399]: 2026-01-31 08:04:30.209 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:30.485 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:30.486 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:30.486 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:30 np0005603621 nova_compute[247399]: 2026-01-31 08:04:30.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 31 03:04:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 31 03:04:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 31 03:04:31 np0005603621 nova_compute[247399]: 2026-01-31 08:04:31.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:04:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 436 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 15 MiB/s rd, 9.1 MiB/s wr, 356 op/s
Jan 31 03:04:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:31.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:32.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:32Z|00143|binding|INFO|Releasing lport 7f44070e-60dd-45ea-9dac-6fdd7b73ad9c from this chassis (sb_readonly=0)
Jan 31 03:04:32 np0005603621 nova_compute[247399]: 2026-01-31 08:04:32.549 247403 DEBUG nova.network.neutron [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Updated VIF entry in instance network info cache for port ff260644-03d4-48e6-8e4d-0ac43c3f9f60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:04:32 np0005603621 nova_compute[247399]: 2026-01-31 08:04:32.550 247403 DEBUG nova.network.neutron [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Updating instance_info_cache with network_info: [{"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:32 np0005603621 nova_compute[247399]: 2026-01-31 08:04:32.566 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:32 np0005603621 nova_compute[247399]: 2026-01-31 08:04:32.594 247403 DEBUG oslo_concurrency.lockutils [req-e7e578dd-f192-4efc-9032-b97279443e4e req-1961f4f3-aa2f-40b9-a9e2-61c770ed46f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-dd2c35b1-7956-49ad-a88a-21ea779d2720" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:04:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:33 np0005603621 nova_compute[247399]: 2026-01-31 08:04:33.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 453 MiB data, 830 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 8.2 MiB/s wr, 272 op/s
Jan 31 03:04:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:33.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:34.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:34 np0005603621 nova_compute[247399]: 2026-01-31 08:04:34.566 247403 INFO nova.virt.libvirt.driver [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Snapshot image upload complete#033[00m
Jan 31 03:04:34 np0005603621 nova_compute[247399]: 2026-01-31 08:04:34.566 247403 INFO nova.compute.manager [None req-21ab5f0f-a820-465b-80a5-75257f6eda87 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Took 8.71 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:04:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:35.034 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:35 np0005603621 nova_compute[247399]: 2026-01-31 08:04:35.035 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:35.036 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:04:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 490 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 8.6 MiB/s wr, 271 op/s
Jan 31 03:04:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:35.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:35 np0005603621 nova_compute[247399]: 2026-01-31 08:04:35.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 502 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.6 MiB/s wr, 183 op/s
Jan 31 03:04:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:37Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:4b:63 10.100.0.12
Jan 31 03:04:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:37Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:4b:63 10.100.0.12
Jan 31 03:04:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:37.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 31 03:04:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 31 03:04:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 31 03:04:38 np0005603621 nova_compute[247399]: 2026-01-31 08:04:38.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:04:38
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'volumes']
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:04:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:38.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:04:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:04:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 517 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Jan 31 03:04:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:39.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:40.038 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:40.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:40 np0005603621 nova_compute[247399]: 2026-01-31 08:04:40.773 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 530 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 158 op/s
Jan 31 03:04:41 np0005603621 nova_compute[247399]: 2026-01-31 08:04:41.585 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 31 03:04:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:41.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 31 03:04:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:43 np0005603621 nova_compute[247399]: 2026-01-31 08:04:43.109 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:04:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 63d05a53-7a79-48e5-b328-e630214dd4c6 does not exist
Jan 31 03:04:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5f50a407-8c71-4585-a88c-f44f436290ed does not exist
Jan 31 03:04:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5cd8b904-40fc-4ff6-978d-dec07bb255e1 does not exist
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:04:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:04:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 530 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 879 KiB/s rd, 4.6 MiB/s wr, 156 op/s
Jan 31 03:04:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:43.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:43 np0005603621 podman[287827]: 2026-01-31 08:04:43.916100476 +0000 UTC m=+0.048543198 container create 4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:04:43 np0005603621 systemd[1]: Started libpod-conmon-4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd.scope.
Jan 31 03:04:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:43 np0005603621 podman[287827]: 2026-01-31 08:04:43.895995373 +0000 UTC m=+0.028438125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:04:44 np0005603621 podman[287827]: 2026-01-31 08:04:44.003513528 +0000 UTC m=+0.135956280 container init 4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:04:44 np0005603621 podman[287827]: 2026-01-31 08:04:44.011416897 +0000 UTC m=+0.143859629 container start 4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:04:44 np0005603621 podman[287827]: 2026-01-31 08:04:44.015269828 +0000 UTC m=+0.147712550 container attach 4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:04:44 np0005603621 nervous_margulis[287844]: 167 167
Jan 31 03:04:44 np0005603621 systemd[1]: libpod-4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd.scope: Deactivated successfully.
Jan 31 03:04:44 np0005603621 podman[287827]: 2026-01-31 08:04:44.019433859 +0000 UTC m=+0.151876581 container died 4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:04:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-82385459e044c20f6761604b2648196880695cfc4c706f8ac948a734fc1be772-merged.mount: Deactivated successfully.
Jan 31 03:04:44 np0005603621 podman[287827]: 2026-01-31 08:04:44.06521189 +0000 UTC m=+0.197654602 container remove 4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_margulis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:04:44 np0005603621 systemd[1]: libpod-conmon-4d38a259ee40bf6db3e164405e310b158334f37468ce3a933e17bbf95f6309bd.scope: Deactivated successfully.
Jan 31 03:04:44 np0005603621 podman[287867]: 2026-01-31 08:04:44.212197107 +0000 UTC m=+0.042813699 container create 244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:04:44 np0005603621 systemd[1]: Started libpod-conmon-244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2.scope.
Jan 31 03:04:44 np0005603621 podman[287867]: 2026-01-31 08:04:44.190215034 +0000 UTC m=+0.020831646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:04:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7988d19dc2d88cce4d2893e0d26cff36945efbed3dcd19e745d6620761555e62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7988d19dc2d88cce4d2893e0d26cff36945efbed3dcd19e745d6620761555e62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7988d19dc2d88cce4d2893e0d26cff36945efbed3dcd19e745d6620761555e62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7988d19dc2d88cce4d2893e0d26cff36945efbed3dcd19e745d6620761555e62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7988d19dc2d88cce4d2893e0d26cff36945efbed3dcd19e745d6620761555e62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:44 np0005603621 podman[287867]: 2026-01-31 08:04:44.342524959 +0000 UTC m=+0.173141561 container init 244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:04:44 np0005603621 podman[287867]: 2026-01-31 08:04:44.34795436 +0000 UTC m=+0.178570942 container start 244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:04:44 np0005603621 podman[287867]: 2026-01-31 08:04:44.352154843 +0000 UTC m=+0.182771445 container attach 244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:04:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:44.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:45 np0005603621 stupefied_archimedes[287883]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:04:45 np0005603621 stupefied_archimedes[287883]: --> relative data size: 1.0
Jan 31 03:04:45 np0005603621 stupefied_archimedes[287883]: --> All data devices are unavailable
Jan 31 03:04:45 np0005603621 systemd[1]: libpod-244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2.scope: Deactivated successfully.
Jan 31 03:04:45 np0005603621 podman[287867]: 2026-01-31 08:04:45.191187493 +0000 UTC m=+1.021804095 container died 244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:04:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7988d19dc2d88cce4d2893e0d26cff36945efbed3dcd19e745d6620761555e62-merged.mount: Deactivated successfully.
Jan 31 03:04:45 np0005603621 podman[287867]: 2026-01-31 08:04:45.247050971 +0000 UTC m=+1.077667553 container remove 244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:04:45 np0005603621 systemd[1]: libpod-conmon-244927b284f1e1c15ea9a48d54360e876451f46f5e9969c9c024836fbdae64a2.scope: Deactivated successfully.
Jan 31 03:04:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 530 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 179 op/s
Jan 31 03:04:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:45.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:45 np0005603621 nova_compute[247399]: 2026-01-31 08:04:45.774 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.813877743 +0000 UTC m=+0.037649055 container create ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:04:45 np0005603621 systemd[1]: Started libpod-conmon-ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137.scope.
Jan 31 03:04:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.797314652 +0000 UTC m=+0.021085984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.89637358 +0000 UTC m=+0.120144912 container init ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.902552815 +0000 UTC m=+0.126324127 container start ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:04:45 np0005603621 awesome_faraday[288070]: 167 167
Jan 31 03:04:45 np0005603621 systemd[1]: libpod-ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137.scope: Deactivated successfully.
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.906652294 +0000 UTC m=+0.130423606 container attach ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:04:45 np0005603621 conmon[288070]: conmon ce0e0db78bc8f6422f55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137.scope/container/memory.events
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.908566134 +0000 UTC m=+0.132337487 container died ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:04:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7c324a99b45a64c48a76d28ce7f2b722b053faef69725484abc27872d9a192e9-merged.mount: Deactivated successfully.
Jan 31 03:04:45 np0005603621 podman[288054]: 2026-01-31 08:04:45.952667072 +0000 UTC m=+0.176438384 container remove ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:04:45 np0005603621 systemd[1]: libpod-conmon-ce0e0db78bc8f6422f55f94d87fb7679605437683d63327a19d0b93691297137.scope: Deactivated successfully.
Jan 31 03:04:46 np0005603621 podman[288094]: 2026-01-31 08:04:46.098521824 +0000 UTC m=+0.039837756 container create ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:04:46 np0005603621 systemd[1]: Started libpod-conmon-ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1.scope.
Jan 31 03:04:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10adada964131c3b92173ab1e82a98b3983bf11f7ef128510aa312f9f0658dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10adada964131c3b92173ab1e82a98b3983bf11f7ef128510aa312f9f0658dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10adada964131c3b92173ab1e82a98b3983bf11f7ef128510aa312f9f0658dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10adada964131c3b92173ab1e82a98b3983bf11f7ef128510aa312f9f0658dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:46 np0005603621 podman[288094]: 2026-01-31 08:04:46.172279455 +0000 UTC m=+0.113595397 container init ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:04:46 np0005603621 podman[288094]: 2026-01-31 08:04:46.080509826 +0000 UTC m=+0.021825768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:04:46 np0005603621 podman[288094]: 2026-01-31 08:04:46.178027516 +0000 UTC m=+0.119343438 container start ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:04:46 np0005603621 podman[288094]: 2026-01-31 08:04:46.181304709 +0000 UTC m=+0.122620661 container attach ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:04:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 31 03:04:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 31 03:04:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 31 03:04:47 np0005603621 zen_hoover[288111]: {
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:    "0": [
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:        {
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "devices": [
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "/dev/loop3"
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            ],
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "lv_name": "ceph_lv0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "lv_size": "7511998464",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "name": "ceph_lv0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "tags": {
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.cluster_name": "ceph",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.crush_device_class": "",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.encrypted": "0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.osd_id": "0",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.type": "block",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:                "ceph.vdo": "0"
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            },
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "type": "block",
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:            "vg_name": "ceph_vg0"
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:        }
Jan 31 03:04:47 np0005603621 zen_hoover[288111]:    ]
Jan 31 03:04:47 np0005603621 zen_hoover[288111]: }
Jan 31 03:04:47 np0005603621 systemd[1]: libpod-ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1.scope: Deactivated successfully.
Jan 31 03:04:47 np0005603621 podman[288120]: 2026-01-31 08:04:47.071925014 +0000 UTC m=+0.023863972 container died ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:04:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 515 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 178 op/s
Jan 31 03:04:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d10adada964131c3b92173ab1e82a98b3983bf11f7ef128510aa312f9f0658dc-merged.mount: Deactivated successfully.
Jan 31 03:04:47 np0005603621 podman[288120]: 2026-01-31 08:04:47.70690388 +0000 UTC m=+0.658842808 container remove ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hoover, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:04:47 np0005603621 systemd[1]: libpod-conmon-ce0c73bd4e798542ee4f116aaa67e9d4a7c93ea0c55c37207be5f7609f5da6d1.scope: Deactivated successfully.
Jan 31 03:04:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:04:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:47.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:04:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:48 np0005603621 nova_compute[247399]: 2026-01-31 08:04:48.111 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.211823254 +0000 UTC m=+0.037793411 container create fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:04:48 np0005603621 systemd[1]: Started libpod-conmon-fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c.scope.
Jan 31 03:04:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.28507986 +0000 UTC m=+0.111050037 container init fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.290420658 +0000 UTC m=+0.116390815 container start fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.196359297 +0000 UTC m=+0.022329474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.295829088 +0000 UTC m=+0.121799265 container attach fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:04:48 np0005603621 mystifying_mayer[288292]: 167 167
Jan 31 03:04:48 np0005603621 systemd[1]: libpod-fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c.scope: Deactivated successfully.
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.298476452 +0000 UTC m=+0.124446609 container died fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:04:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e14b7528c5045c32e78ec192e8b45378f73d561e5e757c82f180683485419465-merged.mount: Deactivated successfully.
Jan 31 03:04:48 np0005603621 podman[288276]: 2026-01-31 08:04:48.331997817 +0000 UTC m=+0.157967974 container remove fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:04:48 np0005603621 systemd[1]: libpod-conmon-fe45d1ba8e217fdb0ed266b0e43c83948b6d02c1aec53a8a8c424d8043def52c.scope: Deactivated successfully.
Jan 31 03:04:48 np0005603621 podman[288316]: 2026-01-31 08:04:48.449365081 +0000 UTC m=+0.035864460 container create 36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:04:48 np0005603621 systemd[1]: Started libpod-conmon-36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d.scope.
Jan 31 03:04:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:04:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b524264fafefee42d63c1828ad3d8b877b4e56e3682159ec33eb91c5895c996/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b524264fafefee42d63c1828ad3d8b877b4e56e3682159ec33eb91c5895c996/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b524264fafefee42d63c1828ad3d8b877b4e56e3682159ec33eb91c5895c996/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b524264fafefee42d63c1828ad3d8b877b4e56e3682159ec33eb91c5895c996/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:04:48 np0005603621 podman[288316]: 2026-01-31 08:04:48.432246242 +0000 UTC m=+0.018745641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:04:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:48.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:48 np0005603621 podman[288316]: 2026-01-31 08:04:48.537047162 +0000 UTC m=+0.123546551 container init 36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:04:48 np0005603621 podman[288316]: 2026-01-31 08:04:48.542422171 +0000 UTC m=+0.128921550 container start 36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:04:48 np0005603621 podman[288316]: 2026-01-31 08:04:48.546637703 +0000 UTC m=+0.133137102 container attach 36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:04:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 31 03:04:48 np0005603621 nova_compute[247399]: 2026-01-31 08:04:48.774 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 31 03:04:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007502791736460078 of space, bias 1.0, pg target 2.250837520938023 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006194710531825796 of space, bias 1.0, pg target 1.8460237384840872 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:04:48 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.091 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.092 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.092 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.092 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.093 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.095 247403 INFO nova.compute.manager [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Terminating instance
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.096 247403 DEBUG nova.compute.manager [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:04:49 np0005603621 kernel: tapff260644-03 (unregistering): left promiscuous mode
Jan 31 03:04:49 np0005603621 NetworkManager[49013]: <info>  [1769846689.1506] device (tapff260644-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.164 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00144|binding|INFO|Releasing lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 from this chassis (sb_readonly=0)
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00145|binding|INFO|Setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 down in Southbound
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00146|binding|INFO|Removing iface tapff260644-03 ovn-installed in OVS
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.169 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.175 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.179 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:4b:63 10.100.0.12'], port_security=['fa:16:3e:3c:4b:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dd2c35b1-7956-49ad-a88a-21ea779d2720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9c71897-6017-4f84-a423-008c2828939d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '506fa1d7269349f1aa48237dd82ac79e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f5b0c3b-16f9-4a94-9aaa-61306aa77fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca989f2f-5c28-4141-8302-9cf821f3fb7a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ff260644-03d4-48e6-8e4d-0ac43c3f9f60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.181 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ff260644-03d4-48e6-8e4d-0ac43c3f9f60 in datapath e9c71897-6017-4f84-a423-008c2828939d unbound from our chassis#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.183 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9c71897-6017-4f84-a423-008c2828939d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.185 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[04395ff6-c311-4a32-87ea-816947809cc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.187 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e9c71897-6017-4f84-a423-008c2828939d namespace which is not needed anymore#033[00m
Jan 31 03:04:49 np0005603621 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Jan 31 03:04:49 np0005603621 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000003c.scope: Consumed 13.487s CPU time.
Jan 31 03:04:49 np0005603621 systemd-machined[212769]: Machine qemu-26-instance-0000003c terminated.
Jan 31 03:04:49 np0005603621 neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d[287247]: [NOTICE]   (287251) : haproxy version is 2.8.14-c23fe91
Jan 31 03:04:49 np0005603621 neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d[287247]: [NOTICE]   (287251) : path to executable is /usr/sbin/haproxy
Jan 31 03:04:49 np0005603621 neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d[287247]: [ALERT]    (287251) : Current worker (287253) exited with code 143 (Terminated)
Jan 31 03:04:49 np0005603621 neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d[287247]: [WARNING]  (287251) : All workers exited. Exiting... (0)
Jan 31 03:04:49 np0005603621 systemd[1]: libpod-0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30.scope: Deactivated successfully.
Jan 31 03:04:49 np0005603621 podman[288367]: 2026-01-31 08:04:49.31346266 +0000 UTC m=+0.045905315 container died 0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:04:49 np0005603621 kernel: tapff260644-03: entered promiscuous mode
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00147|binding|INFO|Claiming lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for this chassis.
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.319 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 NetworkManager[49013]: <info>  [1769846689.3201] manager: (tapff260644-03): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00148|binding|INFO|ff260644-03d4-48e6-8e4d-0ac43c3f9f60: Claiming fa:16:3e:3c:4b:63 10.100.0.12
Jan 31 03:04:49 np0005603621 kernel: tapff260644-03 (unregistering): left promiscuous mode
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.332 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:4b:63 10.100.0.12'], port_security=['fa:16:3e:3c:4b:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dd2c35b1-7956-49ad-a88a-21ea779d2720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9c71897-6017-4f84-a423-008c2828939d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '506fa1d7269349f1aa48237dd82ac79e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f5b0c3b-16f9-4a94-9aaa-61306aa77fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca989f2f-5c28-4141-8302-9cf821f3fb7a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ff260644-03d4-48e6-8e4d-0ac43c3f9f60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00149|binding|INFO|Setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 ovn-installed in OVS
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00150|binding|INFO|Setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 up in Southbound
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00151|binding|INFO|Releasing lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 from this chassis (sb_readonly=1)
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00152|binding|INFO|Removing iface tapff260644-03 ovn-installed in OVS
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00153|if_status|INFO|Dropped 2 log messages in last 462 seconds (most recently, 462 seconds ago) due to excessive rate
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00154|if_status|INFO|Not setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 down as sb is readonly
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.344 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00155|binding|INFO|Releasing lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 from this chassis (sb_readonly=0)
Jan 31 03:04:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:04:49Z|00156|binding|INFO|Setting lport ff260644-03d4-48e6-8e4d-0ac43c3f9f60 down in Southbound
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.345 247403 INFO nova.virt.libvirt.driver [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Instance destroyed successfully.#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.346 247403 DEBUG nova.objects.instance [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lazy-loading 'resources' on Instance uuid dd2c35b1-7956-49ad-a88a-21ea779d2720 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30-userdata-shm.mount: Deactivated successfully.
Jan 31 03:04:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c5fa848e6de7ebee787ef411316732cd452bcdd0686eef7361e875bd0b8d41f5-merged.mount: Deactivated successfully.
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.352 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.354 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:4b:63 10.100.0.12'], port_security=['fa:16:3e:3c:4b:63 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'dd2c35b1-7956-49ad-a88a-21ea779d2720', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9c71897-6017-4f84-a423-008c2828939d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '506fa1d7269349f1aa48237dd82ac79e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2f5b0c3b-16f9-4a94-9aaa-61306aa77fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca989f2f-5c28-4141-8302-9cf821f3fb7a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ff260644-03d4-48e6-8e4d-0ac43c3f9f60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.375 247403 DEBUG nova.virt.libvirt.vif [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-715559348',display_name='tempest-ServersTestManualDisk-server-715559348',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-715559348',id=60,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFDClwd2i9PkdYOSRbbcRWxkdb71Vxdkd0xT2/G+Y4LjlmIwO0yq5QpD1L8zQrD1yQl7nScI5xmbhtYSRlIcyG0Am70fXaS7wg8TUhXshdKjm8ie5Ar8IpgPf6Q5tocpTg==',key_name='tempest-keypair-823043967',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:04:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='506fa1d7269349f1aa48237dd82ac79e',ramdisk_id='',reservation_id='r-temvets0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1760577672',owner_user_name='tempest-ServersTestManualDisk-1760577672-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:04:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='35d966201a0243c2a8f4c689350e8ddd',uuid=dd2c35b1-7956-49ad-a88a-21ea779d2720,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.376 247403 DEBUG nova.network.os_vif_util [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Converting VIF {"id": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "address": "fa:16:3e:3c:4b:63", "network": {"id": "e9c71897-6017-4f84-a423-008c2828939d", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-259310959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "506fa1d7269349f1aa48237dd82ac79e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff260644-03", "ovs_interfaceid": "ff260644-03d4-48e6-8e4d-0ac43c3f9f60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.377 247403 DEBUG nova.network.os_vif_util [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.378 247403 DEBUG os_vif [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.379 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.380 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff260644-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.384 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.387 247403 INFO os_vif [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:4b:63,bridge_name='br-int',has_traffic_filtering=True,id=ff260644-03d4-48e6-8e4d-0ac43c3f9f60,network=Network(e9c71897-6017-4f84-a423-008c2828939d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff260644-03')#033[00m
Jan 31 03:04:49 np0005603621 podman[288367]: 2026-01-31 08:04:49.392446797 +0000 UTC m=+0.124889452 container cleanup 0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:04:49 np0005603621 systemd[1]: libpod-conmon-0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30.scope: Deactivated successfully.
Jan 31 03:04:49 np0005603621 strange_germain[288332]: {
Jan 31 03:04:49 np0005603621 strange_germain[288332]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:04:49 np0005603621 strange_germain[288332]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:04:49 np0005603621 strange_germain[288332]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:04:49 np0005603621 strange_germain[288332]:        "osd_id": 0,
Jan 31 03:04:49 np0005603621 strange_germain[288332]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:04:49 np0005603621 strange_germain[288332]:        "type": "bluestore"
Jan 31 03:04:49 np0005603621 strange_germain[288332]:    }
Jan 31 03:04:49 np0005603621 strange_germain[288332]: }
Jan 31 03:04:49 np0005603621 systemd[1]: libpod-36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d.scope: Deactivated successfully.
Jan 31 03:04:49 np0005603621 conmon[288332]: conmon 36aca245c909e42bbf3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d.scope/container/memory.events
Jan 31 03:04:49 np0005603621 podman[288316]: 2026-01-31 08:04:49.430387442 +0000 UTC m=+1.016886821 container died 36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:04:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4b524264fafefee42d63c1828ad3d8b877b4e56e3682159ec33eb91c5895c996-merged.mount: Deactivated successfully.
Jan 31 03:04:49 np0005603621 podman[288415]: 2026-01-31 08:04:49.482344857 +0000 UTC m=+0.069047964 container remove 0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.486 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c20903-009f-49af-93e3-66661e3d73f2]: (4, ('Sat Jan 31 08:04:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d (0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30)\n0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30\nSat Jan 31 08:04:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e9c71897-6017-4f84-a423-008c2828939d (0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30)\n0d6f7988fc900f3f4a226fdcb4e65120f61d744b59ed26de451caab62c0dee30\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 podman[288316]: 2026-01-31 08:04:49.48817967 +0000 UTC m=+1.074679049 container remove 36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.488 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca5d207-ef9a-48ae-a536-45bda95c79cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.490 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9c71897-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.492 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 kernel: tape9c71897-60: left promiscuous mode
Jan 31 03:04:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 497 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 46 KiB/s wr, 118 op/s
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.500 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.505 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cf0c1d-8535-49e4-99d5-efc7d84c53b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 systemd[1]: libpod-conmon-36aca245c909e42bbf3e346b74755e71ba19a5c5754d388eb87eefce8856e69d.scope: Deactivated successfully.
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.516 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f6493e6b-74e7-44fc-8796-acf73d045af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.518 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0b78b4e9-a3dd-4e59-8e3c-ec5cab473852]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.534 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0deefb16-5099-47b7-98fd-faba0ab56ad7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596773, 'reachable_time': 30095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288453, 'error': None, 'target': 'ovnmeta-e9c71897-6017-4f84-a423-008c2828939d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 systemd[1]: run-netns-ovnmeta\x2de9c71897\x2d6017\x2d4f84\x2da423\x2d008c2828939d.mount: Deactivated successfully.
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.540 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e9c71897-6017-4f84-a423-008c2828939d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.540 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff1ec7c-08ea-4745-8c6d-1408bd84b6a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.541 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ff260644-03d4-48e6-8e4d-0ac43c3f9f60 in datapath e9c71897-6017-4f84-a423-008c2828939d unbound from our chassis#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.543 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9c71897-6017-4f84-a423-008c2828939d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.544 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8800aa-e7e0-486e-bd19-dab0f5e2a927]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.545 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ff260644-03d4-48e6-8e4d-0ac43c3f9f60 in datapath e9c71897-6017-4f84-a423-008c2828939d unbound from our chassis#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.546 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9c71897-6017-4f84-a423-008c2828939d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:04:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:04:49.547 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee6cd79-6092-491e-9f2a-13fd4d820b3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:04:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:04:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:04:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:04:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f0d0f1e1-f166-47e0-abbf-adbf932c1ffb does not exist
Jan 31 03:04:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 94977ee8-ed5b-4038-b757-8a3e164ee983 does not exist
Jan 31 03:04:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6241f7f1-9aa6-4fe4-8fce-a9e570684847 does not exist
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.744 247403 DEBUG nova.compute.manager [req-e8f5fc94-ebad-47db-92ee-23a06ed17c9e req-a92744bd-cbd1-4c1c-862e-faeac05e01a4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-unplugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.744 247403 DEBUG oslo_concurrency.lockutils [req-e8f5fc94-ebad-47db-92ee-23a06ed17c9e req-a92744bd-cbd1-4c1c-862e-faeac05e01a4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.745 247403 DEBUG oslo_concurrency.lockutils [req-e8f5fc94-ebad-47db-92ee-23a06ed17c9e req-a92744bd-cbd1-4c1c-862e-faeac05e01a4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.745 247403 DEBUG oslo_concurrency.lockutils [req-e8f5fc94-ebad-47db-92ee-23a06ed17c9e req-a92744bd-cbd1-4c1c-862e-faeac05e01a4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.745 247403 DEBUG nova.compute.manager [req-e8f5fc94-ebad-47db-92ee-23a06ed17c9e req-a92744bd-cbd1-4c1c-862e-faeac05e01a4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-unplugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.745 247403 DEBUG nova.compute.manager [req-e8f5fc94-ebad-47db-92ee-23a06ed17c9e req-a92744bd-cbd1-4c1c-862e-faeac05e01a4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-unplugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:04:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:49.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:04:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.858 247403 INFO nova.virt.libvirt.driver [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Deleting instance files /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720_del#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.859 247403 INFO nova.virt.libvirt.driver [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Deletion of /var/lib/nova/instances/dd2c35b1-7956-49ad-a88a-21ea779d2720_del complete#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.959 247403 INFO nova.compute.manager [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.960 247403 DEBUG oslo.service.loopingcall [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.960 247403 DEBUG nova.compute.manager [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:04:49 np0005603621 nova_compute[247399]: 2026-01-31 08:04:49.960 247403 DEBUG nova.network.neutron [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:04:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:50.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 31 03:04:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 31 03:04:50 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 31 03:04:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 405 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 12 KiB/s wr, 66 op/s
Jan 31 03:04:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:51.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.825 247403 DEBUG nova.network.neutron [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.875 247403 INFO nova.compute.manager [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Took 1.91 seconds to deallocate network for instance.#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.938 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.938 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.941 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.942 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.942 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.942 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.942 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.942 247403 WARNING nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received unexpected event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.943 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.943 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.943 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.943 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.944 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.944 247403 WARNING nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received unexpected event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.944 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.944 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.945 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.945 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.945 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.945 247403 WARNING nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received unexpected event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.946 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-unplugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.946 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.946 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.947 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.947 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-unplugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.947 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-unplugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.947 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.948 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.948 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.948 247403 DEBUG oslo_concurrency.lockutils [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.949 247403 DEBUG nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] No waiting events found dispatching network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:04:51 np0005603621 nova_compute[247399]: 2026-01-31 08:04:51.949 247403 WARNING nova.compute.manager [req-1a7a2ec0-9356-4aeb-82d6-cb7893e0b9d3 req-c5221d1c-d804-44e4-8e8e-9622370337b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received unexpected event network-vif-plugged-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.079 247403 DEBUG nova.compute.manager [req-81b24ad0-ad12-47c9-a16b-55407e60b927 req-ade63303-9fdd-4515-80b0-1244eb0353ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Received event network-vif-deleted-ff260644-03d4-48e6-8e4d-0ac43c3f9f60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.211 247403 DEBUG oslo_concurrency.processutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:52.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757188820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.614 247403 DEBUG oslo_concurrency.processutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.620 247403 DEBUG nova.compute.provider_tree [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.635 247403 DEBUG nova.scheduler.client.report [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.832 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:52 np0005603621 nova_compute[247399]: 2026-01-31 08:04:52.942 247403 INFO nova.scheduler.client.report [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Deleted allocations for instance dd2c35b1-7956-49ad-a88a-21ea779d2720#033[00m
Jan 31 03:04:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 31 03:04:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 31 03:04:52 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.047 247403 DEBUG oslo_concurrency.lockutils [None req-ba844baf-6673-494c-9bdb-4d644d7fa114 35d966201a0243c2a8f4c689350e8ddd 506fa1d7269349f1aa48237dd82ac79e - - default default] Lock "dd2c35b1-7956-49ad-a88a-21ea779d2720" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.113 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.476 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.477 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 379 MiB data, 850 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 1.3 MiB/s wr, 161 op/s
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.511 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.647 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.647 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.654 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.655 247403 INFO nova.compute.claims [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:04:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:53.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:53 np0005603621 nova_compute[247399]: 2026-01-31 08:04:53.876 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108194132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.305 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.311 247403 DEBUG nova.compute.provider_tree [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.329 247403 DEBUG nova.scheduler.client.report [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.365 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.366 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.384 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.441 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.441 247403 DEBUG nova.network.neutron [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.469 247403 INFO nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.489 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:04:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.642 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.643 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.643 247403 INFO nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Creating image(s)#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.666 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.692 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.724 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.729 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.788 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.789 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.789 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.790 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.811 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:04:54 np0005603621 nova_compute[247399]: 2026-01-31 08:04:54.814 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.052 247403 DEBUG nova.policy [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57fcb774fb574bf0beea4fb49adb0f80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44ad7f776f814675b2232eb023baacdd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:04:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 215 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 592 KiB/s rd, 3.8 MiB/s wr, 284 op/s
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.610 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "5667536c-3b20-4b38-b8e4-c85686a1eae2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.611 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "5667536c-3b20-4b38-b8e4-c85686a1eae2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.611 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "5667536c-3b20-4b38-b8e4-c85686a1eae2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.611 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "5667536c-3b20-4b38-b8e4-c85686a1eae2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.611 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "5667536c-3b20-4b38-b8e4-c85686a1eae2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.613 247403 INFO nova.compute.manager [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Terminating instance#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.613 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "refresh_cache-5667536c-3b20-4b38-b8e4-c85686a1eae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.614 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquired lock "refresh_cache-5667536c-3b20-4b38-b8e4-c85686a1eae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.614 247403 DEBUG nova.network.neutron [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.692 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.878s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.771 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] resizing rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:04:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:55.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:55 np0005603621 nova_compute[247399]: 2026-01-31 08:04:55.813 247403 DEBUG nova.network.neutron [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.041 247403 DEBUG nova.network.neutron [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Successfully created port: ee5bd84e-fd6b-46f1-bb1d-d034166fc33a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.048 247403 DEBUG nova.objects.instance [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'migration_context' on Instance uuid 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.065 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.065 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Ensure instance console log exists: /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.066 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.066 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.066 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.269 247403 DEBUG nova.network.neutron [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.293 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Releasing lock "refresh_cache-5667536c-3b20-4b38-b8e4-c85686a1eae2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.294 247403 DEBUG nova.compute.manager [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:04:56 np0005603621 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000039.scope: Deactivated successfully.
Jan 31 03:04:56 np0005603621 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000039.scope: Consumed 16.601s CPU time.
Jan 31 03:04:56 np0005603621 systemd-machined[212769]: Machine qemu-25-instance-00000039 terminated.
Jan 31 03:04:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.717 247403 INFO nova.virt.libvirt.driver [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance destroyed successfully.#033[00m
Jan 31 03:04:56 np0005603621 nova_compute[247399]: 2026-01-31 08:04:56.717 247403 DEBUG nova.objects.instance [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lazy-loading 'resources' on Instance uuid 5667536c-3b20-4b38-b8e4-c85686a1eae2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.355 247403 DEBUG nova.network.neutron [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Successfully updated port: ee5bd84e-fd6b-46f1-bb1d-d034166fc33a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.382 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.382 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquired lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.382 247403 DEBUG nova.network.neutron [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.497 247403 INFO nova.virt.libvirt.driver [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Deleting instance files /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2_del#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.498 247403 INFO nova.virt.libvirt.driver [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Deletion of /var/lib/nova/instances/5667536c-3b20-4b38-b8e4-c85686a1eae2_del complete#033[00m
Jan 31 03:04:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 193 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 612 KiB/s rd, 4.0 MiB/s wr, 289 op/s
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.550 247403 DEBUG nova.compute.manager [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-changed-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.550 247403 DEBUG nova.compute.manager [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Refreshing instance network info cache due to event network-changed-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.550 247403 DEBUG oslo_concurrency.lockutils [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.577 247403 INFO nova.compute.manager [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Took 1.28 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.578 247403 DEBUG oslo.service.loopingcall [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.578 247403 DEBUG nova.compute.manager [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.579 247403 DEBUG nova.network.neutron [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:04:57 np0005603621 nova_compute[247399]: 2026-01-31 08:04:57.678 247403 DEBUG nova.network.neutron [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:04:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:04:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:04:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:04:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 31 03:04:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 31 03:04:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:58 np0005603621 podman[288742]: 2026-01-31 08:04:58.509874829 +0000 UTC m=+0.061006411 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.530 247403 DEBUG nova.network.neutron [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:04:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:04:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:04:58 np0005603621 podman[288743]: 2026-01-31 08:04:58.548581068 +0000 UTC m=+0.099504623 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.556 247403 DEBUG nova.network.neutron [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.573 247403 INFO nova.compute.manager [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Took 0.99 seconds to deallocate network for instance.#033[00m
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.628 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.629 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:04:58 np0005603621 nova_compute[247399]: 2026-01-31 08:04:58.723 247403 DEBUG oslo_concurrency.processutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3086188274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.186 247403 DEBUG oslo_concurrency.processutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.193 247403 DEBUG nova.compute.provider_tree [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.256 247403 DEBUG nova.scheduler.client.report [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.333 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.374 247403 INFO nova.scheduler.client.report [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Deleted allocations for instance 5667536c-3b20-4b38-b8e4-c85686a1eae2#033[00m
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.386 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:04:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 170 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 619 KiB/s rd, 4.9 MiB/s wr, 290 op/s
Jan 31 03:04:59 np0005603621 nova_compute[247399]: 2026-01-31 08:04:59.507 247403 DEBUG oslo_concurrency.lockutils [None req-677b1ac7-088d-47d6-9926-7b8a8d36d905 fd3d70d97c394edaa70e32807d7a96ca 3d28270b439f4cb1aa201d46b9f8a843 - - default default] Lock "5667536c-3b20-4b38-b8e4-c85686a1eae2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.645273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846699645329, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 961, "num_deletes": 259, "total_data_size": 1227463, "memory_usage": 1250720, "flush_reason": "Manual Compaction"}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846699652536, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1210284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33474, "largest_seqno": 34434, "table_properties": {"data_size": 1205423, "index_size": 2385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11182, "raw_average_key_size": 20, "raw_value_size": 1195517, "raw_average_value_size": 2213, "num_data_blocks": 104, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846648, "oldest_key_time": 1769846648, "file_creation_time": 1769846699, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7299 microseconds, and 3514 cpu microseconds.
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.652576) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1210284 bytes OK
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.652596) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.654445) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.654460) EVENT_LOG_v1 {"time_micros": 1769846699654455, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.654483) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1222781, prev total WAL file size 1222781, number of live WAL files 2.
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.655107) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1181KB)], [71(9379KB)]
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846699655192, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10815299, "oldest_snapshot_seqno": -1}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6063 keys, 8861738 bytes, temperature: kUnknown
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846699742025, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8861738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8821535, "index_size": 23965, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 155862, "raw_average_key_size": 25, "raw_value_size": 8713222, "raw_average_value_size": 1437, "num_data_blocks": 958, "num_entries": 6063, "num_filter_entries": 6063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846699, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.742675) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8861738 bytes
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.744599) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.4 rd, 102.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.2 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(16.3) write-amplify(7.3) OK, records in: 6593, records dropped: 530 output_compression: NoCompression
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.744623) EVENT_LOG_v1 {"time_micros": 1769846699744612, "job": 40, "event": "compaction_finished", "compaction_time_micros": 86909, "compaction_time_cpu_micros": 22575, "output_level": 6, "num_output_files": 1, "total_output_size": 8861738, "num_input_records": 6593, "num_output_records": 6063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846699745104, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846699746475, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.654992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.746541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.746550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.746552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.746554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:59 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:04:59.746555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:04:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:04:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:04:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:04:59.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.292 247403 DEBUG nova.network.neutron [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updating instance_info_cache with network_info: [{"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.360 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Releasing lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.361 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Instance network_info: |[{"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.361 247403 DEBUG oslo_concurrency.lockutils [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.361 247403 DEBUG nova.network.neutron [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Refreshing network info cache for port ee5bd84e-fd6b-46f1-bb1d-d034166fc33a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.363 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Start _get_guest_xml network_info=[{"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.369 247403 WARNING nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.374 247403 DEBUG nova.virt.libvirt.host [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.375 247403 DEBUG nova.virt.libvirt.host [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.382 247403 DEBUG nova.virt.libvirt.host [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.383 247403 DEBUG nova.virt.libvirt.host [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
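[annotation] The two probes above first miss a cgroup-v1 CPU controller and then find one under cgroup v2. On a unified (v2) host that check reduces to a token lookup in a single file; a minimal sketch of the idea, using the standard cgroup-v2 `cgroup.controllers` convention rather than nova's actual `host.py` implementation:

```python
def has_cgroupv2_cpu_controller(controllers_file="/sys/fs/cgroup/cgroup.controllers"):
    """Return True if the 'cpu' controller is available on a cgroup-v2 host.

    Sketch only: the path and whitespace-separated token format are generic
    cgroup-v2 conventions, not nova's exact code.
    """
    try:
        with open(controllers_file) as f:
            # The file holds one line of space-separated controller names,
            # e.g. "cpuset cpu io memory pids".
            return "cpu" in f.read().split()
    except FileNotFoundError:
        # No unified hierarchy mounted here: not a cgroup-v2 host.
        return False
```

A v1 host would instead be probed through the controller directories under `/sys/fs/cgroup/`, which is roughly what the preceding "CGroups V1" search corresponds to.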
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.384 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.385 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.385 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.385 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.386 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.386 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.386 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.387 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.387 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.387 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.387 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.388 247403 DEBUG nova.virt.hardware [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.391 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:00.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:05:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3072574535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.802 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
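[annotation] The `ceph mon dump --format=json` call above is how the driver learns the monitor endpoints that later appear as `<host .../>` elements in the guest disk XML. A hedged sketch of that extraction; the sample JSON is hand-written to match the addresses seen in this log, not captured output from this cluster:

```python
import json

# Stand-in for the output of: ceph mon dump --format=json --id openstack
sample = json.dumps({
    "epoch": 3,
    "mons": [
        {"name": "a", "addr": "192.168.122.100:6789/0"},
        {"name": "b", "addr": "192.168.122.102:6789/0"},
        {"name": "c", "addr": "192.168.122.101:6789/0"},
    ],
})

def monitor_endpoints(mon_dump_json):
    """Return [(host, port), ...] parsed from a mon dump JSON blob."""
    monmap = json.loads(mon_dump_json)
    endpoints = []
    for mon in monmap["mons"]:
        hostport = mon["addr"].split("/")[0]   # strip the trailing /nonce
        host, port = hostport.rsplit(":", 1)
        endpoints.append((host, int(port)))
    return endpoints

print(monitor_endpoints(sample))
```

The three monitors this returns line up with the three `<host name="192.168.122.10x" port="6789"/>` entries emitted in the domain XML below.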
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.826 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:00 np0005603621 nova_compute[247399]: 2026-01-31 08:05:00.829 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:05:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701894870' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.256 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.258 247403 DEBUG nova.virt.libvirt.vif [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1172009567',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1172009567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1172009567',id=62,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-xtn1qmx0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:54Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.258 247403 DEBUG nova.network.os_vif_util [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.259 247403 DEBUG nova.network.os_vif_util [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.260 247403 DEBUG nova.objects.instance [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'pci_devices' on Instance uuid 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.324 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <uuid>2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701</uuid>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <name>instance-0000003e</name>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1172009567</nova:name>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:05:00</nova:creationTime>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:user uuid="57fcb774fb574bf0beea4fb49adb0f80">tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member</nova:user>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:project uuid="44ad7f776f814675b2232eb023baacdd">tempest-ImagesOneServerNegativeTestJSON-1383889839</nova:project>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <nova:port uuid="ee5bd84e-fd6b-46f1-bb1d-d034166fc33a">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <entry name="serial">2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701</entry>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <entry name="uuid">2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701</entry>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:2c:e5:a6"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <target dev="tapee5bd84e-fd"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/console.log" append="off"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:05:01 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:05:01 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:05:01 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:05:01 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
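[annotation] For post-mortem work it is often handy to pull the disk sources back out of a guest XML dump like the one `_get_guest_xml` just emitted. A small sketch with the standard-library XML parser, run here against a trimmed, hand-copied fragment of the domain above rather than the full document:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the domain XML logged above (only the two disks kept).
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
# Map each guest target device to its RBD source image.
for disk in root.iter("disk"):
    src = disk.find("source")
    tgt = disk.find("target")
    print(tgt.get("dev"), "->", src.get("protocol"), src.get("name"))
```

The same pattern works on a live guest via `virsh dumpxml <domain> | python script.py` with the XML read from stdin instead of the literal.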
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.326 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Preparing to wait for external event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.326 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.327 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.327 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.329 247403 DEBUG nova.virt.libvirt.vif [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1172009567',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1172009567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1172009567',id=62,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-xtn1qmx0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:04:54Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.330 247403 DEBUG nova.network.os_vif_util [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.331 247403 DEBUG nova.network.os_vif_util [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.331 247403 DEBUG os_vif [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.332 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.333 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.333 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.335 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.335 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee5bd84e-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.336 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee5bd84e-fd, col_values=(('external_ids', {'iface-id': 'ee5bd84e-fd6b-46f1-bb1d-d034166fc33a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2c:e5:a6', 'vm-uuid': '2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:01 np0005603621 NetworkManager[49013]: <info>  [1769846701.3388] manager: (tapee5bd84e-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.344 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.345 247403 INFO os_vif [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd')#033[00m
Jan 31 03:05:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 167 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 4.6 MiB/s wr, 227 op/s
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.564 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.564 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.565 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No VIF found with MAC fa:16:3e:2c:e5:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.565 247403 INFO nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Using config drive#033[00m
Jan 31 03:05:01 np0005603621 nova_compute[247399]: 2026-01-31 08:05:01.584 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 31 03:05:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 31 03:05:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.120 247403 INFO nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Creating config drive at /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/disk.config#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.124 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptrwma5ku execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.246 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptrwma5ku" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.277 247403 DEBUG nova.storage.rbd_utils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.282 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/disk.config 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.478 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.562 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.814 247403 DEBUG oslo_concurrency.processutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/disk.config 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.815 247403 INFO nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Deleting local config drive /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701/disk.config because it was imported into RBD.#033[00m
Jan 31 03:05:02 np0005603621 kernel: tapee5bd84e-fd: entered promiscuous mode
Jan 31 03:05:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:02Z|00157|binding|INFO|Claiming lport ee5bd84e-fd6b-46f1-bb1d-d034166fc33a for this chassis.
Jan 31 03:05:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:02Z|00158|binding|INFO|ee5bd84e-fd6b-46f1-bb1d-d034166fc33a: Claiming fa:16:3e:2c:e5:a6 10.100.0.9
Jan 31 03:05:02 np0005603621 NetworkManager[49013]: <info>  [1769846702.8609] manager: (tapee5bd84e-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.862 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.872 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:e5:a6 10.100.0.9'], port_security=['fa:16:3e:2c:e5:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c7c770f-1117-4714-b72a-35b15967e8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44ad7f776f814675b2232eb023baacdd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fc774b6-544e-41fe-a3b6-65cc9e40791e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f38317d-a32f-4f40-8398-767a8ccddb32, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.874 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ee5bd84e-fd6b-46f1-bb1d-d034166fc33a in datapath 4c7c770f-1117-4714-b72a-35b15967e8f7 bound to our chassis#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.875 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c7c770f-1117-4714-b72a-35b15967e8f7#033[00m
Jan 31 03:05:02 np0005603621 systemd-udevd[288995]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.882 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[df59aeb5-3c13-498d-957a-33078fab0a02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.883 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c7c770f-11 in ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.884 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c7c770f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.884 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[86881b92-dc52-4fc3-b165-5d79077da8d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.885 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f6be42c2-8062-4bc2-bbc5-dab10b8dc25d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 NetworkManager[49013]: <info>  [1769846702.8913] device (tapee5bd84e-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:05:02 np0005603621 NetworkManager[49013]: <info>  [1769846702.8920] device (tapee5bd84e-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:05:02 np0005603621 systemd-machined[212769]: New machine qemu-27-instance-0000003e.
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.894 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fed4d4fa-29f5-4fb1-b149-9b7cec3587ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:02Z|00159|binding|INFO|Setting lport ee5bd84e-fd6b-46f1-bb1d-d034166fc33a ovn-installed in OVS
Jan 31 03:05:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:02Z|00160|binding|INFO|Setting lport ee5bd84e-fd6b-46f1-bb1d-d034166fc33a up in Southbound
Jan 31 03:05:02 np0005603621 nova_compute[247399]: 2026-01-31 08:05:02.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:02 np0005603621 systemd[1]: Started Virtual Machine qemu-27-instance-0000003e.
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.914 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[09171d48-f0c8-4ed9-a45b-0dedefe77d3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.934 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e744217d-f71f-42de-b1c6-816821933d7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.937 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[436e3362-5961-46de-aa67-07e37323a476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 systemd-udevd[289002]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:05:02 np0005603621 NetworkManager[49013]: <info>  [1769846702.9389] manager: (tap4c7c770f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Jan 31 03:05:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.965 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[95ceca38-f9ef-48d0-8e51-78c652918a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.968 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aa7a72-3fd0-476a-b660-2ac5fc842408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:02 np0005603621 NetworkManager[49013]: <info>  [1769846702.9828] device (tap4c7c770f-10): carrier: link connected
Jan 31 03:05:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.984 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6811d5d7-4fc1-4fda-a2d9-92f2c1f9320b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:02.999 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[96ba338e-19fc-4d27-8b29-4b552ce14200]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c7c770f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600849, 'reachable_time': 42106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289031, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.010 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f3660443-23e5-4463-9af1-b465b41153c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:2d70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600849, 'tstamp': 600849}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289032, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.020 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e1660e49-8cc4-4dc9-8462-37a876065abc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c7c770f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600849, 'reachable_time': 42106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289033, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.040 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f17c6ca1-aa3b-422d-88a5-9d40967b376c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.083 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74665ee9-9baf-460f-a60b-620efe05ea6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.084 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c7c770f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.084 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:05:03 np0005603621 kernel: tap4c7c770f-10: entered promiscuous mode
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.085 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c7c770f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:03 np0005603621 NetworkManager[49013]: <info>  [1769846703.0871] manager: (tap4c7c770f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.086 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.089 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.089 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c7c770f-10, col_values=(('external_ids', {'iface-id': '831cf344-1b40-47d7-9209-66080414c7e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:03Z|00161|binding|INFO|Releasing lport 831cf344-1b40-47d7-9209-66080414c7e0 from this chassis (sb_readonly=0)
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.090 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.091 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.092 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5e7a42-db81-4505-89c4-8251f3817123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.092 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-4c7c770f-1117-4714-b72a-35b15967e8f7
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 4c7c770f-1117-4714-b72a-35b15967e8f7
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:05:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:03.093 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'env', 'PROCESS_TAG=haproxy-4c7c770f-1117-4714-b72a-35b15967e8f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c7c770f-1117-4714-b72a-35b15967e8f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.172 247403 DEBUG nova.network.neutron [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updated VIF entry in instance network info cache for port ee5bd84e-fd6b-46f1-bb1d-d034166fc33a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.172 247403 DEBUG nova.network.neutron [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updating instance_info_cache with network_info: [{"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.218 247403 DEBUG oslo_concurrency.lockutils [req-fa55767d-0101-479c-8cf8-ad41105d86ec req-341e7ea5-837a-4356-a9a4-fd2f127dd989 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.267 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846703.2663417, 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.268 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] VM Started (Lifecycle Event)#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.299 247403 DEBUG nova.compute.manager [req-38a06516-29f4-4999-8aea-96c6ebff4644 req-6b0c226c-d4cf-41df-a72d-593a8eb2ec0e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.300 247403 DEBUG oslo_concurrency.lockutils [req-38a06516-29f4-4999-8aea-96c6ebff4644 req-6b0c226c-d4cf-41df-a72d-593a8eb2ec0e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.301 247403 DEBUG oslo_concurrency.lockutils [req-38a06516-29f4-4999-8aea-96c6ebff4644 req-6b0c226c-d4cf-41df-a72d-593a8eb2ec0e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.301 247403 DEBUG oslo_concurrency.lockutils [req-38a06516-29f4-4999-8aea-96c6ebff4644 req-6b0c226c-d4cf-41df-a72d-593a8eb2ec0e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.301 247403 DEBUG nova.compute.manager [req-38a06516-29f4-4999-8aea-96c6ebff4644 req-6b0c226c-d4cf-41df-a72d-593a8eb2ec0e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Processing event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.303 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.303 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.307 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.308 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.312 247403 INFO nova.virt.libvirt.driver [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Instance spawned successfully.#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.312 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.341 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.342 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846703.2681816, 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.343 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.354 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.354 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.355 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.355 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.356 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.356 247403 DEBUG nova.virt.libvirt.driver [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.389 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.394 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846703.306649, 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.394 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.450 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.453 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:05:03 np0005603621 podman[289104]: 2026-01-31 08:05:03.454345528 +0000 UTC m=+0.079676989 container create 08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.482 247403 INFO nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Took 8.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.482 247403 DEBUG nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.490 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:05:03 np0005603621 systemd[1]: Started libpod-conmon-08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3.scope.
Jan 31 03:05:03 np0005603621 podman[289104]: 2026-01-31 08:05:03.400195434 +0000 UTC m=+0.025526915 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:05:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 167 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Jan 31 03:05:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeefbce0216dc4e6b6ee9bbb67c2b0f7e2c1536e6bd02dfbf8a594a48d3f54c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:03 np0005603621 podman[289104]: 2026-01-31 08:05:03.538147676 +0000 UTC m=+0.163479157 container init 08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:05:03 np0005603621 podman[289104]: 2026-01-31 08:05:03.542626577 +0000 UTC m=+0.167958038 container start 08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:05:03 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [NOTICE]   (289124) : New worker (289126) forked
Jan 31 03:05:03 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [NOTICE]   (289124) : Loading success.
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.594 247403 INFO nova.compute.manager [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Took 9.96 seconds to build instance.#033[00m
Jan 31 03:05:03 np0005603621 nova_compute[247399]: 2026-01-31 08:05:03.626 247403 DEBUG oslo_concurrency.lockutils [None req-6063e857-5fde-4203-ab2b-e853a9307a91 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:03.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:04 np0005603621 nova_compute[247399]: 2026-01-31 08:05:04.343 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846689.34102, dd2c35b1-7956-49ad-a88a-21ea779d2720 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:04 np0005603621 nova_compute[247399]: 2026-01-31 08:05:04.344 247403 INFO nova.compute.manager [-] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:05:04 np0005603621 nova_compute[247399]: 2026-01-31 08:05:04.373 247403 DEBUG nova.compute.manager [None req-f3d95665-7084-498f-b26a-a459f9787124 - - - - - -] [instance: dd2c35b1-7956-49ad-a88a-21ea779d2720] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 31 03:05:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 31 03:05:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 31 03:05:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 167 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 1012 KiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 31 03:05:05 np0005603621 nova_compute[247399]: 2026-01-31 08:05:05.720 247403 DEBUG nova.compute.manager [req-8b6b3f01-4408-420c-833b-f020212f6f06 req-094ef099-49b2-45ea-85aa-25a53c680cac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:05 np0005603621 nova_compute[247399]: 2026-01-31 08:05:05.721 247403 DEBUG oslo_concurrency.lockutils [req-8b6b3f01-4408-420c-833b-f020212f6f06 req-094ef099-49b2-45ea-85aa-25a53c680cac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:05 np0005603621 nova_compute[247399]: 2026-01-31 08:05:05.721 247403 DEBUG oslo_concurrency.lockutils [req-8b6b3f01-4408-420c-833b-f020212f6f06 req-094ef099-49b2-45ea-85aa-25a53c680cac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:05 np0005603621 nova_compute[247399]: 2026-01-31 08:05:05.721 247403 DEBUG oslo_concurrency.lockutils [req-8b6b3f01-4408-420c-833b-f020212f6f06 req-094ef099-49b2-45ea-85aa-25a53c680cac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:05 np0005603621 nova_compute[247399]: 2026-01-31 08:05:05.721 247403 DEBUG nova.compute.manager [req-8b6b3f01-4408-420c-833b-f020212f6f06 req-094ef099-49b2-45ea-85aa-25a53c680cac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] No waiting events found dispatching network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:05 np0005603621 nova_compute[247399]: 2026-01-31 08:05:05.721 247403 WARNING nova.compute.manager [req-8b6b3f01-4408-420c-833b-f020212f6f06 req-094ef099-49b2-45ea-85aa-25a53c680cac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received unexpected event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:05:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:05.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:06 np0005603621 nova_compute[247399]: 2026-01-31 08:05:06.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:06.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 31 03:05:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 31 03:05:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 31 03:05:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 167 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 52 KiB/s wr, 194 op/s
Jan 31 03:05:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:07.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:08 np0005603621 nova_compute[247399]: 2026-01-31 08:05:08.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:05:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:08.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 167 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 47 KiB/s wr, 188 op/s
Jan 31 03:05:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:09.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:10.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:11 np0005603621 nova_compute[247399]: 2026-01-31 08:05:11.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 167 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 185 op/s
Jan 31 03:05:11 np0005603621 nova_compute[247399]: 2026-01-31 08:05:11.716 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846696.7148097, 5667536c-3b20-4b38-b8e4-c85686a1eae2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:05:11 np0005603621 nova_compute[247399]: 2026-01-31 08:05:11.716 247403 INFO nova.compute.manager [-] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:05:11 np0005603621 nova_compute[247399]: 2026-01-31 08:05:11.769 247403 DEBUG nova.compute.manager [None req-7af75e98-fbac-4061-afbc-c2984ee1b711 - - - - - -] [instance: 5667536c-3b20-4b38-b8e4-c85686a1eae2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:12 np0005603621 nova_compute[247399]: 2026-01-31 08:05:12.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:12 np0005603621 nova_compute[247399]: 2026-01-31 08:05:12.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:05:12 np0005603621 nova_compute[247399]: 2026-01-31 08:05:12.226 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:05:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:12.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 31 03:05:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 31 03:05:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 31 03:05:13 np0005603621 nova_compute[247399]: 2026-01-31 08:05:13.220 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 167 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 11 KiB/s wr, 133 op/s
Jan 31 03:05:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:05:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643373114' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:05:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:05:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643373114' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:05:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:15Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2c:e5:a6 10.100.0.9
Jan 31 03:05:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:15Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2c:e5:a6 10.100.0.9
Jan 31 03:05:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 174 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 73 op/s
Jan 31 03:05:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:16 np0005603621 nova_compute[247399]: 2026-01-31 08:05:16.343 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:16.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 183 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 978 KiB/s rd, 2.0 MiB/s wr, 81 op/s
Jan 31 03:05:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:17.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 31 03:05:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 31 03:05:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 31 03:05:18 np0005603621 nova_compute[247399]: 2026-01-31 08:05:18.223 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:18.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 196 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 190 KiB/s rd, 3.2 MiB/s wr, 82 op/s
Jan 31 03:05:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:19.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:20 np0005603621 nova_compute[247399]: 2026-01-31 08:05:20.997 247403 DEBUG nova.compute.manager [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:05:21 np0005603621 nova_compute[247399]: 2026-01-31 08:05:21.088 247403 INFO nova.compute.manager [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] instance snapshotting#033[00m
Jan 31 03:05:21 np0005603621 nova_compute[247399]: 2026-01-31 08:05:21.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 200 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 414 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 31 03:05:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:21.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:21 np0005603621 nova_compute[247399]: 2026-01-31 08:05:21.895 247403 INFO nova.virt.libvirt.driver [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Beginning live snapshot process#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.237 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.238 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.238 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.244 247403 DEBUG nova.virt.libvirt.imagebackend [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.269 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.269 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.269 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:05:22 np0005603621 nova_compute[247399]: 2026-01-31 08:05:22.270 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:22.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:23 np0005603621 nova_compute[247399]: 2026-01-31 08:05:23.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:23 np0005603621 nova_compute[247399]: 2026-01-31 08:05:23.368 247403 DEBUG nova.storage.rbd_utils [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] creating snapshot(a252d720e3654ab98075d533080804b1) on rbd image(2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:05:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 200 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Jan 31 03:05:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 31 03:05:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 31 03:05:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 31 03:05:23 np0005603621 nova_compute[247399]: 2026-01-31 08:05:23.760 247403 DEBUG nova.storage.rbd_utils [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] cloning vms/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk@a252d720e3654ab98075d533080804b1 to images/6f41fe69-cbe5-422f-b305-4d86e60f48e0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:05:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:23.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:24 np0005603621 nova_compute[247399]: 2026-01-31 08:05:24.091 247403 DEBUG nova.storage.rbd_utils [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] flattening images/6f41fe69-cbe5-422f-b305-4d86e60f48e0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:05:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 157 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 113 op/s
Jan 31 03:05:25 np0005603621 nova_compute[247399]: 2026-01-31 08:05:25.762 247403 DEBUG nova.storage.rbd_utils [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] removing snapshot(a252d720e3654ab98075d533080804b1) on rbd image(2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:05:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:25.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:26 np0005603621 nova_compute[247399]: 2026-01-31 08:05:26.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 31 03:05:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 31 03:05:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 31 03:05:27 np0005603621 nova_compute[247399]: 2026-01-31 08:05:27.274 247403 DEBUG nova.storage.rbd_utils [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] creating snapshot(snap) on rbd image(6f41fe69-cbe5-422f-b305-4d86e60f48e0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:05:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 151 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.3 MiB/s wr, 143 op/s
Jan 31 03:05:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:27.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.225 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 31 03:05:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 6f41fe69-cbe5-422f-b305-4d86e60f48e0 could not be found.
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 6f41fe69-cbe5-422f-b305-4d86e60f48e0
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver 
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver 
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 6f41fe69-cbe5-422f-b305-4d86e60f48e0 could not be found.
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.512 247403 ERROR nova.virt.libvirt.driver #033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.571 247403 DEBUG nova.storage.rbd_utils [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] removing snapshot(snap) on rbd image(6f41fe69-cbe5-422f-b305-4d86e60f48e0) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:05:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.642 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updating instance_info_cache with network_info: [{"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.680 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.681 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.682 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.682 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.683 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.683 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.718 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.719 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.719 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.719 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:05:28 np0005603621 nova_compute[247399]: 2026-01-31 08:05:28.719 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479342444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:29 np0005603621 nova_compute[247399]: 2026-01-31 08:05:29.132 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 31 03:05:29 np0005603621 podman[289379]: 2026-01-31 08:05:29.254518959 +0000 UTC m=+0.072363209 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:05:29 np0005603621 podman[289380]: 2026-01-31 08:05:29.254733205 +0000 UTC m=+0.072121681 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:05:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 31 03:05:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 31 03:05:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 170 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 5.2 MiB/s wr, 166 op/s
Jan 31 03:05:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.096 247403 WARNING nova.compute.manager [None req-4310c7c0-5943-4107-b626-8f206cc3bc1f 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Image not found during snapshot: nova.exception.ImageNotFound: Image 6f41fe69-cbe5-422f-b305-4d86e60f48e0 could not be found.#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.104 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.105 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.230 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.231 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4321MB free_disk=20.942794799804688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.456 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.457 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.457 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:05:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:30.487 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:30.489 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:30.489 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.568 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:05:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3774564071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:30 np0005603621 nova_compute[247399]: 2026-01-31 08:05:30.997 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.001 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.100 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.134 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.134 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.135 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.375 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 150 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.3 MiB/s wr, 115 op/s
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.672 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.672 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.672 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.672 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:05:31 np0005603621 nova_compute[247399]: 2026-01-31 08:05:31.673 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:05:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:31.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.208 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.209 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.209 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.209 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.209 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.210 247403 INFO nova.compute.manager [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Terminating instance#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.211 247403 DEBUG nova.compute.manager [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:05:32 np0005603621 kernel: tapee5bd84e-fd (unregistering): left promiscuous mode
Jan 31 03:05:32 np0005603621 NetworkManager[49013]: <info>  [1769846732.2605] device (tapee5bd84e-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:05:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:32Z|00162|binding|INFO|Releasing lport ee5bd84e-fd6b-46f1-bb1d-d034166fc33a from this chassis (sb_readonly=0)
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.268 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:32Z|00163|binding|INFO|Setting lport ee5bd84e-fd6b-46f1-bb1d-d034166fc33a down in Southbound
Jan 31 03:05:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:05:32Z|00164|binding|INFO|Removing iface tapee5bd84e-fd ovn-installed in OVS
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.286 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2c:e5:a6 10.100.0.9'], port_security=['fa:16:3e:2c:e5:a6 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c7c770f-1117-4714-b72a-35b15967e8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44ad7f776f814675b2232eb023baacdd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fc774b6-544e-41fe-a3b6-65cc9e40791e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f38317d-a32f-4f40-8398-767a8ccddb32, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.287 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ee5bd84e-fd6b-46f1-bb1d-d034166fc33a in datapath 4c7c770f-1117-4714-b72a-35b15967e8f7 unbound from our chassis#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.288 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c7c770f-1117-4714-b72a-35b15967e8f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.289 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[472c03fb-660b-41e7-b9b1-5f212dc1603b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.289 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 namespace which is not needed anymore#033[00m
Jan 31 03:05:32 np0005603621 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Jan 31 03:05:32 np0005603621 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000003e.scope: Consumed 12.970s CPU time.
Jan 31 03:05:32 np0005603621 systemd-machined[212769]: Machine qemu-27-instance-0000003e terminated.
Jan 31 03:05:32 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [NOTICE]   (289124) : haproxy version is 2.8.14-c23fe91
Jan 31 03:05:32 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [NOTICE]   (289124) : path to executable is /usr/sbin/haproxy
Jan 31 03:05:32 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [WARNING]  (289124) : Exiting Master process...
Jan 31 03:05:32 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [ALERT]    (289124) : Current worker (289126) exited with code 143 (Terminated)
Jan 31 03:05:32 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[289119]: [WARNING]  (289124) : All workers exited. Exiting... (0)
Jan 31 03:05:32 np0005603621 systemd[1]: libpod-08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3.scope: Deactivated successfully.
Jan 31 03:05:32 np0005603621 podman[289489]: 2026-01-31 08:05:32.402949673 +0000 UTC m=+0.042740866 container died 08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:05:32 np0005603621 kernel: tapee5bd84e-fd: entered promiscuous mode
Jan 31 03:05:32 np0005603621 kernel: tapee5bd84e-fd (unregistering): left promiscuous mode
Jan 31 03:05:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3-userdata-shm.mount: Deactivated successfully.
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.436 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ebeefbce0216dc4e6b6ee9bbb67c2b0f7e2c1536e6bd02dfbf8a594a48d3f54c-merged.mount: Deactivated successfully.
Jan 31 03:05:32 np0005603621 podman[289489]: 2026-01-31 08:05:32.444297245 +0000 UTC m=+0.084088438 container cleanup 08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.448 247403 INFO nova.virt.libvirt.driver [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Instance destroyed successfully.#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.448 247403 DEBUG nova.objects.instance [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'resources' on Instance uuid 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:32 np0005603621 systemd[1]: libpod-conmon-08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3.scope: Deactivated successfully.
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.476 247403 DEBUG nova.virt.libvirt.vif [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1172009567',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1172009567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1172009567',id=62,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:05:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-xtn1qmx0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:05:30Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.476 247403 DEBUG nova.network.os_vif_util [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "address": "fa:16:3e:2c:e5:a6", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee5bd84e-fd", "ovs_interfaceid": "ee5bd84e-fd6b-46f1-bb1d-d034166fc33a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.477 247403 DEBUG nova.network.os_vif_util [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.477 247403 DEBUG os_vif [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.479 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.479 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee5bd84e-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.481 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.482 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.484 247403 INFO os_vif [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2c:e5:a6,bridge_name='br-int',has_traffic_filtering=True,id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee5bd84e-fd')#033[00m
Jan 31 03:05:32 np0005603621 podman[289533]: 2026-01-31 08:05:32.507108102 +0000 UTC m=+0.045670249 container remove 08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.510 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8a7515-334f-4286-a712-cba02a81540a]: (4, ('Sat Jan 31 08:05:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 (08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3)\n08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3\nSat Jan 31 08:05:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 (08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3)\n08ccc166df7f401b89b18895dcb9c89225be2c6c1f8ba8b654335b45bf95f3c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.512 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[994eef73-5bf7-4c46-a099-023941f84263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.513 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c7c770f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.514 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 kernel: tap4c7c770f-10: left promiscuous mode
Jan 31 03:05:32 np0005603621 nova_compute[247399]: 2026-01-31 08:05:32.521 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.523 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5af456c5-ba5d-45ed-8f46-04b82c77f114]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.544 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba91c2e-0315-46e0-a262-cbf3f95d2302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.545 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ba486ab0-abb8-49e6-b93e-c42cad62bdd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.556 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[94a089ea-09dd-419a-8551-d63902f5eb41]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600844, 'reachable_time': 35495, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289566, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 systemd[1]: run-netns-ovnmeta\x2d4c7c770f\x2d1117\x2d4714\x2db72a\x2d35b15967e8f7.mount: Deactivated successfully.
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.560 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:32.560 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d79f188f-9678-4a16-b9a7-4a80ef7f2993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:05:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.188 247403 DEBUG nova.compute.manager [req-1f605484-7a80-40bd-b055-3d1a8bb1f90e req-728e4060-4ed5-4b23-9dd9-e7f0e5956865 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-vif-unplugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.189 247403 DEBUG oslo_concurrency.lockutils [req-1f605484-7a80-40bd-b055-3d1a8bb1f90e req-728e4060-4ed5-4b23-9dd9-e7f0e5956865 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.189 247403 DEBUG oslo_concurrency.lockutils [req-1f605484-7a80-40bd-b055-3d1a8bb1f90e req-728e4060-4ed5-4b23-9dd9-e7f0e5956865 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.189 247403 DEBUG oslo_concurrency.lockutils [req-1f605484-7a80-40bd-b055-3d1a8bb1f90e req-728e4060-4ed5-4b23-9dd9-e7f0e5956865 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.189 247403 DEBUG nova.compute.manager [req-1f605484-7a80-40bd-b055-3d1a8bb1f90e req-728e4060-4ed5-4b23-9dd9-e7f0e5956865 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] No waiting events found dispatching network-vif-unplugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.190 247403 DEBUG nova.compute.manager [req-1f605484-7a80-40bd-b055-3d1a8bb1f90e req-728e4060-4ed5-4b23-9dd9-e7f0e5956865 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-vif-unplugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.227 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.257 247403 INFO nova.virt.libvirt.driver [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Deleting instance files /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_del#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.258 247403 INFO nova.virt.libvirt.driver [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Deletion of /var/lib/nova/instances/2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701_del complete#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.350 247403 INFO nova.compute.manager [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.350 247403 DEBUG oslo.service.loopingcall [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.351 247403 DEBUG nova.compute.manager [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:05:33 np0005603621 nova_compute[247399]: 2026-01-31 08:05:33.351 247403 DEBUG nova.network.neutron [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:05:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 140 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 97 op/s
Jan 31 03:05:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:35 np0005603621 nova_compute[247399]: 2026-01-31 08:05:35.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:05:35 np0005603621 nova_compute[247399]: 2026-01-31 08:05:35.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 31 03:05:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 65 MiB data, 643 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 109 op/s
Jan 31 03:05:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:35.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.165 247403 DEBUG nova.compute.manager [req-d4d043e6-23c6-4148-9c7a-68db35b7917a req-24905bce-1142-4aca-afe0-7d62e2411fe3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.166 247403 DEBUG oslo_concurrency.lockutils [req-d4d043e6-23c6-4148-9c7a-68db35b7917a req-24905bce-1142-4aca-afe0-7d62e2411fe3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.166 247403 DEBUG oslo_concurrency.lockutils [req-d4d043e6-23c6-4148-9c7a-68db35b7917a req-24905bce-1142-4aca-afe0-7d62e2411fe3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.166 247403 DEBUG oslo_concurrency.lockutils [req-d4d043e6-23c6-4148-9c7a-68db35b7917a req-24905bce-1142-4aca-afe0-7d62e2411fe3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.166 247403 DEBUG nova.compute.manager [req-d4d043e6-23c6-4148-9c7a-68db35b7917a req-24905bce-1142-4aca-afe0-7d62e2411fe3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] No waiting events found dispatching network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.166 247403 WARNING nova.compute.manager [req-d4d043e6-23c6-4148-9c7a-68db35b7917a req-24905bce-1142-4aca-afe0-7d62e2411fe3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received unexpected event network-vif-plugged-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a for instance with vm_state active and task_state deleting.
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.167 247403 DEBUG nova.compute.manager [req-bcbfa335-d97b-45b2-bf83-760f33b37f56 req-22a081fb-a9e7-4cf3-b0e5-f70a2f2307c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Received event network-vif-deleted-ee5bd84e-fd6b-46f1-bb1d-d034166fc33a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.167 247403 INFO nova.compute.manager [req-bcbfa335-d97b-45b2-bf83-760f33b37f56 req-22a081fb-a9e7-4cf3-b0e5-f70a2f2307c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Neutron deleted interface ee5bd84e-fd6b-46f1-bb1d-d034166fc33a; detaching it from the instance and deleting it from the info cache
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.168 247403 DEBUG nova.network.neutron [req-bcbfa335-d97b-45b2-bf83-760f33b37f56 req-22a081fb-a9e7-4cf3-b0e5-f70a2f2307c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.198 247403 DEBUG nova.network.neutron [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.211 247403 DEBUG nova.compute.manager [req-bcbfa335-d97b-45b2-bf83-760f33b37f56 req-22a081fb-a9e7-4cf3-b0e5-f70a2f2307c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Detach interface failed, port_id=ee5bd84e-fd6b-46f1-bb1d-d034166fc33a, reason: Instance 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.273 247403 INFO nova.compute.manager [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Took 2.92 seconds to deallocate network for instance.
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.362 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.362 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.513 247403 DEBUG oslo_concurrency.processutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:05:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3740249495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.921 247403 DEBUG oslo_concurrency.processutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.926 247403 DEBUG nova.compute.provider_tree [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:05:36 np0005603621 nova_compute[247399]: 2026-01-31 08:05:36.989 247403 DEBUG nova.scheduler.client.report [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:05:37 np0005603621 nova_compute[247399]: 2026-01-31 08:05:37.119 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:05:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:37.210 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:05:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:37.211 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:05:37 np0005603621 nova_compute[247399]: 2026-01-31 08:05:37.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:37 np0005603621 nova_compute[247399]: 2026-01-31 08:05:37.251 247403 INFO nova.scheduler.client.report [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Deleted allocations for instance 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701
Jan 31 03:05:37 np0005603621 nova_compute[247399]: 2026-01-31 08:05:37.330 247403 DEBUG oslo_concurrency.lockutils [None req-3068efc1-c35a-4756-ae60-fe2eb7c14ee5 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:05:37 np0005603621 nova_compute[247399]: 2026-01-31 08:05:37.481 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 47 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Jan 31 03:05:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:37.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 31 03:05:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 31 03:05:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 31 03:05:38 np0005603621 nova_compute[247399]: 2026-01-31 08:05:38.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:05:38
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', 'images']
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:05:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:38.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:05:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:05:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 70 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 2.8 MiB/s wr, 118 op/s
Jan 31 03:05:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:39.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:40.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 88 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 31 03:05:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:41.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:42 np0005603621 nova_compute[247399]: 2026-01-31 08:05:42.484 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:43 np0005603621 nova_compute[247399]: 2026-01-31 08:05:43.229 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 88 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 03:05:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:05:45.213 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:05:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 88 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.1 MiB/s wr, 40 op/s
Jan 31 03:05:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:46.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:47 np0005603621 nova_compute[247399]: 2026-01-31 08:05:47.443 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846732.442481, 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:05:47 np0005603621 nova_compute[247399]: 2026-01-31 08:05:47.444 247403 INFO nova.compute.manager [-] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] VM Stopped (Lifecycle Event)
Jan 31 03:05:47 np0005603621 nova_compute[247399]: 2026-01-31 08:05:47.487 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 88 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 31 03:05:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:47.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:48 np0005603621 nova_compute[247399]: 2026-01-31 08:05:48.133 247403 DEBUG nova.compute.manager [None req-3b625a46-4eb9-42c0-bd4e-db8614ec8bf8 - - - - - -] [instance: 2dd4a0d4-fd17-4933-9f6d-4a5e84ce8701] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:05:48 np0005603621 nova_compute[247399]: 2026-01-31 08:05:48.230 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:05:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:05:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 88 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.7 MiB/s wr, 17 op/s
Jan 31 03:05:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:50.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:05:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 069bdc6c-bc33-4829-a72d-634ee7ab67dc does not exist
Jan 31 03:05:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fc4de203-1827-4779-8395-0c7be04b326b does not exist
Jan 31 03:05:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 07c8cf33-5c1c-4447-ae13-6bb401e1acbe does not exist
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:05:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.21703174 +0000 UTC m=+0.081806827 container create b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.154234972 +0000 UTC m=+0.019010079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:05:51 np0005603621 systemd[1]: Started libpod-conmon-b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323.scope.
Jan 31 03:05:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.319168414 +0000 UTC m=+0.183943521 container init b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.324283785 +0000 UTC m=+0.189058872 container start b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:05:51 np0005603621 eloquent_varahamihira[289938]: 167 167
Jan 31 03:05:51 np0005603621 systemd[1]: libpod-b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323.scope: Deactivated successfully.
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.343125059 +0000 UTC m=+0.207900166 container attach b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.343594362 +0000 UTC m=+0.208369449 container died b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:05:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4997b94336d6d557814b2768bd9bdd7120a1ece16c3406d98ed95b5a925d9986-merged.mount: Deactivated successfully.
Jan 31 03:05:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 88 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 746 KiB/s wr, 2 op/s
Jan 31 03:05:51 np0005603621 podman[289921]: 2026-01-31 08:05:51.566395316 +0000 UTC m=+0.431170403 container remove b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:05:51 np0005603621 systemd[1]: libpod-conmon-b67660d1d1084e47f2f8086ae0658c8f14aa877b8f67b1924c5d4885f4d51323.scope: Deactivated successfully.
Jan 31 03:05:51 np0005603621 podman[289962]: 2026-01-31 08:05:51.683925726 +0000 UTC m=+0.035144858 container create b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:05:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:05:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:05:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:05:51 np0005603621 systemd[1]: Started libpod-conmon-b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3.scope.
Jan 31 03:05:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0119166e4d71ab5fe1cd974783854207609828d4f1191d4012e8400121111ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0119166e4d71ab5fe1cd974783854207609828d4f1191d4012e8400121111ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0119166e4d71ab5fe1cd974783854207609828d4f1191d4012e8400121111ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0119166e4d71ab5fe1cd974783854207609828d4f1191d4012e8400121111ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0119166e4d71ab5fe1cd974783854207609828d4f1191d4012e8400121111ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:51 np0005603621 podman[289962]: 2026-01-31 08:05:51.666948521 +0000 UTC m=+0.018167653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:05:51 np0005603621 podman[289962]: 2026-01-31 08:05:51.767723313 +0000 UTC m=+0.118942465 container init b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:05:51 np0005603621 podman[289962]: 2026-01-31 08:05:51.776702527 +0000 UTC m=+0.127921659 container start b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:05:51 np0005603621 podman[289962]: 2026-01-31 08:05:51.781439345 +0000 UTC m=+0.132658497 container attach b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:05:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:51.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:52 np0005603621 nova_compute[247399]: 2026-01-31 08:05:52.490 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:52 np0005603621 objective_maxwell[289979]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:05:52 np0005603621 objective_maxwell[289979]: --> relative data size: 1.0
Jan 31 03:05:52 np0005603621 objective_maxwell[289979]: --> All data devices are unavailable
Jan 31 03:05:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:52 np0005603621 systemd[1]: libpod-b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3.scope: Deactivated successfully.
Jan 31 03:05:52 np0005603621 podman[289962]: 2026-01-31 08:05:52.623216012 +0000 UTC m=+0.974435144 container died b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:05:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b0119166e4d71ab5fe1cd974783854207609828d4f1191d4012e8400121111ff-merged.mount: Deactivated successfully.
Jan 31 03:05:52 np0005603621 podman[289962]: 2026-01-31 08:05:52.666891997 +0000 UTC m=+1.018111129 container remove b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:05:52 np0005603621 systemd[1]: libpod-conmon-b291f92ab74431a61398ef3ca7ed6719899704ce373463078da6a4e532471ef3.scope: Deactivated successfully.
Jan 31 03:05:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.205693086 +0000 UTC m=+0.038571585 container create da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.231 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:53 np0005603621 systemd[1]: Started libpod-conmon-da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e.scope.
Jan 31 03:05:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.282451973 +0000 UTC m=+0.115330482 container init da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.188867657 +0000 UTC m=+0.021746156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.288324678 +0000 UTC m=+0.121203157 container start da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:05:53 np0005603621 elated_agnesi[290159]: 167 167
Jan 31 03:05:53 np0005603621 systemd[1]: libpod-da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e.scope: Deactivated successfully.
Jan 31 03:05:53 np0005603621 conmon[290159]: conmon da55ff24a1c6d67939f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e.scope/container/memory.events
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.293014805 +0000 UTC m=+0.125893284 container attach da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.29378392 +0000 UTC m=+0.126662419 container died da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:05:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ae1069e2b7b8bd234031dd74e20e8792a163c8fb3f402c1d6201febbf7a35c0d-merged.mount: Deactivated successfully.
Jan 31 03:05:53 np0005603621 podman[290143]: 2026-01-31 08:05:53.32811163 +0000 UTC m=+0.160990129 container remove da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_agnesi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:05:53 np0005603621 systemd[1]: libpod-conmon-da55ff24a1c6d67939f6b2fe4843074f3d61e17d6c6935d79239c0c435e1962e.scope: Deactivated successfully.
Jan 31 03:05:53 np0005603621 podman[290183]: 2026-01-31 08:05:53.480195237 +0000 UTC m=+0.048942641 container create fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mahavira, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:05:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 102 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 571 KiB/s wr, 2 op/s
Jan 31 03:05:53 np0005603621 systemd[1]: Started libpod-conmon-fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd.scope.
Jan 31 03:05:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:53 np0005603621 podman[290183]: 2026-01-31 08:05:53.458260747 +0000 UTC m=+0.027008231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:05:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cddb5ce0fb669b7f6644ded04883657c6ea391e57aea42e9d8239b66ee2fad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cddb5ce0fb669b7f6644ded04883657c6ea391e57aea42e9d8239b66ee2fad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cddb5ce0fb669b7f6644ded04883657c6ea391e57aea42e9d8239b66ee2fad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90cddb5ce0fb669b7f6644ded04883657c6ea391e57aea42e9d8239b66ee2fad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:53 np0005603621 podman[290183]: 2026-01-31 08:05:53.570228461 +0000 UTC m=+0.138975865 container init fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:05:53 np0005603621 podman[290183]: 2026-01-31 08:05:53.576989854 +0000 UTC m=+0.145737258 container start fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.576 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.578 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:53 np0005603621 podman[290183]: 2026-01-31 08:05:53.581348481 +0000 UTC m=+0.150095905 container attach fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mahavira, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.654 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.765 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.767 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.778 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.779 247403 INFO nova.compute.claims [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:05:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:53.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:53 np0005603621 nova_compute[247399]: 2026-01-31 08:05:53.961 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]: {
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:    "0": [
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:        {
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "devices": [
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "/dev/loop3"
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            ],
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "lv_name": "ceph_lv0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "lv_size": "7511998464",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "name": "ceph_lv0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "tags": {
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.cluster_name": "ceph",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.crush_device_class": "",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.encrypted": "0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.osd_id": "0",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.type": "block",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:                "ceph.vdo": "0"
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            },
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "type": "block",
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:            "vg_name": "ceph_vg0"
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:        }
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]:    ]
Jan 31 03:05:54 np0005603621 nice_mahavira[290200]: }
Jan 31 03:05:54 np0005603621 systemd[1]: libpod-fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd.scope: Deactivated successfully.
Jan 31 03:05:54 np0005603621 conmon[290200]: conmon fdd8817941e4166f3f0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd.scope/container/memory.events
Jan 31 03:05:54 np0005603621 podman[290183]: 2026-01-31 08:05:54.315948385 +0000 UTC m=+0.884695789 container died fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mahavira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:05:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-90cddb5ce0fb669b7f6644ded04883657c6ea391e57aea42e9d8239b66ee2fad-merged.mount: Deactivated successfully.
Jan 31 03:05:54 np0005603621 podman[290183]: 2026-01-31 08:05:54.374312632 +0000 UTC m=+0.943060036 container remove fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:05:54 np0005603621 systemd[1]: libpod-conmon-fdd8817941e4166f3f0a0a43c3a86665e42a4eb0e7b81ef7072cbee153022edd.scope: Deactivated successfully.
Jan 31 03:05:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:05:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4198702505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.409 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.416 247403 DEBUG nova.compute.provider_tree [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.435 247403 DEBUG nova.scheduler.client.report [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.496 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.497 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:05:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:54.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.839 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.839 247403 DEBUG nova.network.neutron [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.854651221 +0000 UTC m=+0.034643251 container create be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:05:54 np0005603621 systemd[1]: Started libpod-conmon-be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e.scope.
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.896 247403 INFO nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:05:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.919023817 +0000 UTC m=+0.099015887 container init be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.923621913 +0000 UTC m=+0.103613943 container start be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:05:54 np0005603621 nova_compute[247399]: 2026-01-31 08:05:54.925 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:05:54 np0005603621 affectionate_kowalevski[290400]: 167 167
Jan 31 03:05:54 np0005603621 systemd[1]: libpod-be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e.scope: Deactivated successfully.
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.92832006 +0000 UTC m=+0.108312110 container attach be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:05:54 np0005603621 conmon[290400]: conmon be5583e2716044e430c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e.scope/container/memory.events
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.928841117 +0000 UTC m=+0.108833157 container died be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.839863605 +0000 UTC m=+0.019855665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:05:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-149d7cb5db988dd517294b9266d9fdb36e2af7a04471d59b60249b771af6c51e-merged.mount: Deactivated successfully.
Jan 31 03:05:54 np0005603621 podman[290384]: 2026-01-31 08:05:54.967898636 +0000 UTC m=+0.147890666 container remove be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kowalevski, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:05:54 np0005603621 systemd[1]: libpod-conmon-be5583e2716044e430c7c81a8194965c1c47e205f5155ba36b50af16adff0c5e.scope: Deactivated successfully.
Jan 31 03:05:55 np0005603621 podman[290422]: 2026-01-31 08:05:55.113441118 +0000 UTC m=+0.042994185 container create 263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.119 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.120 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.121 247403 INFO nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Creating image(s)
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.147 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:05:55 np0005603621 systemd[1]: Started libpod-conmon-263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad.scope.
Jan 31 03:05:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:05:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd33cd86cbd969aabc5a140a8f1a4eca3d1f9128ac0860b2e3dc890f3bfc6558/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd33cd86cbd969aabc5a140a8f1a4eca3d1f9128ac0860b2e3dc890f3bfc6558/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd33cd86cbd969aabc5a140a8f1a4eca3d1f9128ac0860b2e3dc890f3bfc6558/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd33cd86cbd969aabc5a140a8f1a4eca3d1f9128ac0860b2e3dc890f3bfc6558/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.176 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:05:55 np0005603621 podman[290422]: 2026-01-31 08:05:55.094340166 +0000 UTC m=+0.023893243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:05:55 np0005603621 podman[290422]: 2026-01-31 08:05:55.208928213 +0000 UTC m=+0.138481300 container init 263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:05:55 np0005603621 podman[290422]: 2026-01-31 08:05:55.214432557 +0000 UTC m=+0.143985624 container start 263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:05:55 np0005603621 podman[290422]: 2026-01-31 08:05:55.218850205 +0000 UTC m=+0.148403302 container attach 263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.217 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.224 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.279 247403 DEBUG nova.policy [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57fcb774fb574bf0beea4fb49adb0f80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44ad7f776f814675b2232eb023baacdd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.298 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.300 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.301 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.301 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.340 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.345 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:05:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 93 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.663 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.731 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] resizing rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.841 247403 DEBUG nova.objects.instance [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'migration_context' on Instance uuid 56447338-cea0-4d74-b9e1-bac3b5d793a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:05:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:55.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.880 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.881 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Ensure instance console log exists: /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.881 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.882 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:05:55 np0005603621 nova_compute[247399]: 2026-01-31 08:05:55.882 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]: {
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:        "osd_id": 0,
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:        "type": "bluestore"
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]:    }
Jan 31 03:05:55 np0005603621 suspicious_mayer[290456]: }
Jan 31 03:05:56 np0005603621 systemd[1]: libpod-263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad.scope: Deactivated successfully.
Jan 31 03:05:56 np0005603621 conmon[290456]: conmon 263a7507febdbedc4da4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad.scope/container/memory.events
Jan 31 03:05:56 np0005603621 podman[290626]: 2026-01-31 08:05:56.057108992 +0000 UTC m=+0.031251435 container died 263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:05:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fd33cd86cbd969aabc5a140a8f1a4eca3d1f9128ac0860b2e3dc890f3bfc6558-merged.mount: Deactivated successfully.
Jan 31 03:05:56 np0005603621 podman[290626]: 2026-01-31 08:05:56.11042314 +0000 UTC m=+0.084565493 container remove 263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:05:56 np0005603621 systemd[1]: libpod-conmon-263a7507febdbedc4da4ddd7ce36067ed917328c407994ec144bb71a6bba43ad.scope: Deactivated successfully.
Jan 31 03:05:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:05:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:05:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:05:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:05:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c8bcb88a-bb85-4d1d-93b6-5d7100ee1261 does not exist
Jan 31 03:05:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 14e95386-6c09-4ec8-85fa-f61cf10334fd does not exist
Jan 31 03:05:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fce078fc-f537-43e1-b9eb-23e5b620fa36 does not exist
Jan 31 03:05:56 np0005603621 nova_compute[247399]: 2026-01-31 08:05:56.358 247403 DEBUG nova.network.neutron [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Successfully created port: 84f2b859-a58e-449a-8b25-3b66ed6a6e67 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:05:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:56.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:05:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:05:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:05:57 np0005603621 nova_compute[247399]: 2026-01-31 08:05:57.494 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 107 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.9 MiB/s wr, 124 op/s
Jan 31 03:05:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:57.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.244 247403 DEBUG nova.network.neutron [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Successfully updated port: 84f2b859-a58e-449a-8b25-3b66ed6a6e67 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.273 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "refresh_cache-56447338-cea0-4d74-b9e1-bac3b5d793a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.274 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquired lock "refresh_cache-56447338-cea0-4d74-b9e1-bac3b5d793a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.274 247403 DEBUG nova.network.neutron [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.397 247403 DEBUG nova.compute.manager [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-changed-84f2b859-a58e-449a-8b25-3b66ed6a6e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.398 247403 DEBUG nova.compute.manager [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Refreshing instance network info cache due to event network-changed-84f2b859-a58e-449a-8b25-3b66ed6a6e67. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:05:58 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.398 247403 DEBUG oslo_concurrency.lockutils [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-56447338-cea0-4d74-b9e1-bac3b5d793a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:05:58 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:05:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:05:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:05:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:05:58 np0005603621 nova_compute[247399]: 2026-01-31 08:05:58.711 247403 DEBUG nova.network.neutron [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:05:59 np0005603621 podman[290693]: 2026-01-31 08:05:59.515483731 +0000 UTC m=+0.062435226 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:05:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 134 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 31 03:05:59 np0005603621 podman[290694]: 2026-01-31 08:05:59.543448091 +0000 UTC m=+0.091413568 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller)
Jan 31 03:05:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:05:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:05:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:05:59.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.590 247403 DEBUG nova.network.neutron [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Updating instance_info_cache with network_info: [{"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.616 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Releasing lock "refresh_cache-56447338-cea0-4d74-b9e1-bac3b5d793a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.616 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Instance network_info: |[{"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.616 247403 DEBUG oslo_concurrency.lockutils [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-56447338-cea0-4d74-b9e1-bac3b5d793a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.617 247403 DEBUG nova.network.neutron [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Refreshing network info cache for port 84f2b859-a58e-449a-8b25-3b66ed6a6e67 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.619 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Start _get_guest_xml network_info=[{"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.624 247403 WARNING nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.629 247403 DEBUG nova.virt.libvirt.host [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.629 247403 DEBUG nova.virt.libvirt.host [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.634 247403 DEBUG nova.virt.libvirt.host [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.635 247403 DEBUG nova.virt.libvirt.host [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.636 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.636 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.636 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.637 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.637 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.637 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.637 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.637 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.638 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.638 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.638 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.638 247403 DEBUG nova.virt.hardware [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:06:00 np0005603621 nova_compute[247399]: 2026-01-31 08:06:00.640 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196143115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.068 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.094 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.099 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509164034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 134 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.551 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.553 247403 DEBUG nova.virt.libvirt.vif [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-778677047',display_name='tempest-ImagesOneServerNegativeTestJSON-server-778677047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-778677047',id=65,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-c37ubdsn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:54Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=56447338-cea0-4d74-b9e1-bac3b5d793a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.554 247403 DEBUG nova.network.os_vif_util [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.555 247403 DEBUG nova.network.os_vif_util [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.556 247403 DEBUG nova.objects.instance [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'pci_devices' on Instance uuid 56447338-cea0-4d74-b9e1-bac3b5d793a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.573 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <uuid>56447338-cea0-4d74-b9e1-bac3b5d793a0</uuid>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <name>instance-00000041</name>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-778677047</nova:name>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:06:00</nova:creationTime>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:user uuid="57fcb774fb574bf0beea4fb49adb0f80">tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member</nova:user>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:project uuid="44ad7f776f814675b2232eb023baacdd">tempest-ImagesOneServerNegativeTestJSON-1383889839</nova:project>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <nova:port uuid="84f2b859-a58e-449a-8b25-3b66ed6a6e67">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <entry name="serial">56447338-cea0-4d74-b9e1-bac3b5d793a0</entry>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <entry name="uuid">56447338-cea0-4d74-b9e1-bac3b5d793a0</entry>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/56447338-cea0-4d74-b9e1-bac3b5d793a0_disk">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/56447338-cea0-4d74-b9e1-bac3b5d793a0_disk.config">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:e7:c6:7b"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <target dev="tap84f2b859-a5"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/console.log" append="off"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:06:01 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:06:01 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:06:01 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:06:01 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.574 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Preparing to wait for external event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.575 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.575 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.575 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.576 247403 DEBUG nova.virt.libvirt.vif [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:05:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-778677047',display_name='tempest-ImagesOneServerNegativeTestJSON-server-778677047',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-778677047',id=65,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-c37ubdsn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:05:54Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=56447338-cea0-4d74-b9e1-bac3b5d793a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.576 247403 DEBUG nova.network.os_vif_util [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.577 247403 DEBUG nova.network.os_vif_util [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.577 247403 DEBUG os_vif [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.578 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.579 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.579 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.582 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.583 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84f2b859-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.583 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84f2b859-a5, col_values=(('external_ids', {'iface-id': '84f2b859-a58e-449a-8b25-3b66ed6a6e67', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:c6:7b', 'vm-uuid': '56447338-cea0-4d74-b9e1-bac3b5d793a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.587 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:06:01 np0005603621 NetworkManager[49013]: <info>  [1769846761.5873] manager: (tap84f2b859-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.593 247403 INFO os_vif [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5')#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.657 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.658 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.658 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No VIF found with MAC fa:16:3e:e7:c6:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.659 247403 INFO nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Using config drive#033[00m
Jan 31 03:06:01 np0005603621 nova_compute[247399]: 2026-01-31 08:06:01.689 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:01.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.400 247403 INFO nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Creating config drive at /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/disk.config#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.403 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzwxm9ujy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.527 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzwxm9ujy" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.563 247403 DEBUG nova.storage.rbd_utils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.568 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/disk.config 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.698 247403 DEBUG oslo_concurrency.processutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/disk.config 56447338-cea0-4d74-b9e1-bac3b5d793a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.699 247403 INFO nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Deleting local config drive /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0/disk.config because it was imported into RBD.#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.729 247403 DEBUG nova.network.neutron [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Updated VIF entry in instance network info cache for port 84f2b859-a58e-449a-8b25-3b66ed6a6e67. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.730 247403 DEBUG nova.network.neutron [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Updating instance_info_cache with network_info: [{"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:02 np0005603621 kernel: tap84f2b859-a5: entered promiscuous mode
Jan 31 03:06:02 np0005603621 NetworkManager[49013]: <info>  [1769846762.7448] manager: (tap84f2b859-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Jan 31 03:06:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:02Z|00165|binding|INFO|Claiming lport 84f2b859-a58e-449a-8b25-3b66ed6a6e67 for this chassis.
Jan 31 03:06:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:02Z|00166|binding|INFO|84f2b859-a58e-449a-8b25-3b66ed6a6e67: Claiming fa:16:3e:e7:c6:7b 10.100.0.6
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.746 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:02Z|00167|binding|INFO|Setting lport 84f2b859-a58e-449a-8b25-3b66ed6a6e67 ovn-installed in OVS
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.753 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.756 247403 DEBUG oslo_concurrency.lockutils [req-9888b6f5-da79-4cfa-b6b8-1fa635fcf26f req-433de737-e166-42e3-944f-58e3bc2303d0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-56447338-cea0-4d74-b9e1-bac3b5d793a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.756 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:02Z|00168|binding|INFO|Setting lport 84f2b859-a58e-449a-8b25-3b66ed6a6e67 up in Southbound
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.765 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c6:7b 10.100.0.6'], port_security=['fa:16:3e:e7:c6:7b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56447338-cea0-4d74-b9e1-bac3b5d793a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c7c770f-1117-4714-b72a-35b15967e8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44ad7f776f814675b2232eb023baacdd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fc774b6-544e-41fe-a3b6-65cc9e40791e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f38317d-a32f-4f40-8398-767a8ccddb32, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=84f2b859-a58e-449a-8b25-3b66ed6a6e67) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.766 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 84f2b859-a58e-449a-8b25-3b66ed6a6e67 in datapath 4c7c770f-1117-4714-b72a-35b15967e8f7 bound to our chassis#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.768 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c7c770f-1117-4714-b72a-35b15967e8f7#033[00m
Jan 31 03:06:02 np0005603621 systemd-udevd[290927]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:06:02 np0005603621 systemd-machined[212769]: New machine qemu-28-instance-00000041.
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.777 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7c261f5b-4597-47ec-b297-b30fdaf424e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.778 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c7c770f-11 in ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:06:02 np0005603621 NetworkManager[49013]: <info>  [1769846762.7806] device (tap84f2b859-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:06:02 np0005603621 NetworkManager[49013]: <info>  [1769846762.7812] device (tap84f2b859-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.781 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c7c770f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.781 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdea742-2fb6-4d08-815c-481ed196aff1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.783 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a66feabe-db2e-4319-85d5-2eb3faa135f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 systemd[1]: Started Virtual Machine qemu-28-instance-00000041.
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.793 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9312f5-9ae2-438b-86a5-bbbc030ee72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.803 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad04bab-6fdf-4d52-a125-0f53ad04b843]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.826 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3045eea0-78c7-478c-ae09-75826de64360]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.831 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f0dcf0-ac34-4851-8c56-3ef135b3caa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 systemd-udevd[290930]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:06:02 np0005603621 NetworkManager[49013]: <info>  [1769846762.8328] manager: (tap4c7c770f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.857 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[656ab945-c247-4413-9fba-9f0cd7133fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.860 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[43ec49f3-14b9-446f-b6f9-308ea84a9a90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 NetworkManager[49013]: <info>  [1769846762.8805] device (tap4c7c770f-10): carrier: link connected
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.884 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5baf3e-10ef-4889-bd95-4982a47eff98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.895 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b126b4-87d1-43af-8277-c57ddda5060d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c7c770f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606839, 'reachable_time': 22025, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290960, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.908 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[057cdb51-7fb3-4c06-933b-e58346dd798d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:2d70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606839, 'tstamp': 606839}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290961, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.920 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9c45032d-c53b-4632-8510-9ad46d0765d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c7c770f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606839, 'reachable_time': 22025, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290962, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.939 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[881c607a-ee23-486f-bd94-2b7a0c6095bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.976 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[269a0a32-5d8b-4141-84fc-ad44238f7bf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.978 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c7c770f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.978 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.979 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c7c770f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.980 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 NetworkManager[49013]: <info>  [1769846762.9815] manager: (tap4c7c770f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 31 03:06:02 np0005603621 kernel: tap4c7c770f-10: entered promiscuous mode
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.983 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c7c770f-10, col_values=(('external_ids', {'iface-id': '831cf344-1b40-47d7-9209-66080414c7e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.984 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:02Z|00169|binding|INFO|Releasing lport 831cf344-1b40-47d7-9209-66080414c7e0 from this chassis (sb_readonly=0)
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.985 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.986 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f1460d98-5380-43a2-9f38-8c21f990c89e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.987 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-4c7c770f-1117-4714-b72a-35b15967e8f7
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 4c7c770f-1117-4714-b72a-35b15967e8f7
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:02.988 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'env', 'PROCESS_TAG=haproxy-4c7c770f-1117-4714-b72a-35b15967e8f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c7c770f-1117-4714-b72a-35b15967e8f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:06:02 np0005603621 nova_compute[247399]: 2026-01-31 08:06:02.990 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.157 247403 DEBUG nova.compute.manager [req-11286842-f1be-4bce-b34d-f7f9e14d5450 req-28ae84c1-0d6d-4ffd-95e5-f1c95eed3686 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.157 247403 DEBUG oslo_concurrency.lockutils [req-11286842-f1be-4bce-b34d-f7f9e14d5450 req-28ae84c1-0d6d-4ffd-95e5-f1c95eed3686 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.157 247403 DEBUG oslo_concurrency.lockutils [req-11286842-f1be-4bce-b34d-f7f9e14d5450 req-28ae84c1-0d6d-4ffd-95e5-f1c95eed3686 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.157 247403 DEBUG oslo_concurrency.lockutils [req-11286842-f1be-4bce-b34d-f7f9e14d5450 req-28ae84c1-0d6d-4ffd-95e5-f1c95eed3686 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.157 247403 DEBUG nova.compute.manager [req-11286842-f1be-4bce-b34d-f7f9e14d5450 req-28ae84c1-0d6d-4ffd-95e5-f1c95eed3686 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Processing event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.219 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846763.2185814, 56447338-cea0-4d74-b9e1-bac3b5d793a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.219 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] VM Started (Lifecycle Event)#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.221 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.226 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.230 247403 INFO nova.virt.libvirt.driver [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Instance spawned successfully.#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.231 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.246 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.256 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.260 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.261 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.261 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.262 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.262 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.263 247403 DEBUG nova.virt.libvirt.driver [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.291 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.291 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846763.2187116, 56447338-cea0-4d74-b9e1-bac3b5d793a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.291 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:06:03 np0005603621 podman[291033]: 2026-01-31 08:06:03.321920598 +0000 UTC m=+0.051914206 container create 5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.328 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.335 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846763.2258062, 56447338-cea0-4d74-b9e1-bac3b5d793a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.336 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.352 247403 INFO nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Took 8.23 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.353 247403 DEBUG nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:03 np0005603621 systemd[1]: Started libpod-conmon-5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4.scope.
Jan 31 03:06:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:06:03 np0005603621 podman[291033]: 2026-01-31 08:06:03.292262084 +0000 UTC m=+0.022255712 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:06:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f34349d8733a84dbfdac4100ff2d0468070fa0756feeb7591079ceef0e79a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:03 np0005603621 podman[291033]: 2026-01-31 08:06:03.404005391 +0000 UTC m=+0.133999009 container init 5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.403 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.407 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:03 np0005603621 podman[291033]: 2026-01-31 08:06:03.410438434 +0000 UTC m=+0.140432042 container start 5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:06:03 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [NOTICE]   (291052) : New worker (291054) forked
Jan 31 03:06:03 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [NOTICE]   (291052) : Loading success.
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.457 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.500 247403 INFO nova.compute.manager [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Took 9.79 seconds to build instance.#033[00m
Jan 31 03:06:03 np0005603621 nova_compute[247399]: 2026-01-31 08:06:03.526 247403 DEBUG oslo_concurrency.lockutils [None req-728265ad-f69b-41c9-b9b9-2c88acf399c7 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 134 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Jan 31 03:06:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:03.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:05 np0005603621 nova_compute[247399]: 2026-01-31 08:06:05.372 247403 DEBUG nova.compute.manager [req-6e58187d-9be0-4cf1-9cbc-8d7119283d20 req-95e16a55-fcd5-4bcf-a330-75641ed4a09c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:05 np0005603621 nova_compute[247399]: 2026-01-31 08:06:05.372 247403 DEBUG oslo_concurrency.lockutils [req-6e58187d-9be0-4cf1-9cbc-8d7119283d20 req-95e16a55-fcd5-4bcf-a330-75641ed4a09c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:05 np0005603621 nova_compute[247399]: 2026-01-31 08:06:05.372 247403 DEBUG oslo_concurrency.lockutils [req-6e58187d-9be0-4cf1-9cbc-8d7119283d20 req-95e16a55-fcd5-4bcf-a330-75641ed4a09c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:05 np0005603621 nova_compute[247399]: 2026-01-31 08:06:05.373 247403 DEBUG oslo_concurrency.lockutils [req-6e58187d-9be0-4cf1-9cbc-8d7119283d20 req-95e16a55-fcd5-4bcf-a330-75641ed4a09c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:05 np0005603621 nova_compute[247399]: 2026-01-31 08:06:05.373 247403 DEBUG nova.compute.manager [req-6e58187d-9be0-4cf1-9cbc-8d7119283d20 req-95e16a55-fcd5-4bcf-a330-75641ed4a09c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] No waiting events found dispatching network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:05 np0005603621 nova_compute[247399]: 2026-01-31 08:06:05.373 247403 WARNING nova.compute.manager [req-6e58187d-9be0-4cf1-9cbc-8d7119283d20 req-95e16a55-fcd5-4bcf-a330-75641ed4a09c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received unexpected event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:06:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 151 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.0 MiB/s wr, 238 op/s
Jan 31 03:06:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:05.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:06 np0005603621 nova_compute[247399]: 2026-01-31 08:06:06.586 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:06.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:06 np0005603621 nova_compute[247399]: 2026-01-31 08:06:06.915 247403 DEBUG nova.compute.manager [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:06 np0005603621 nova_compute[247399]: 2026-01-31 08:06:06.986 247403 INFO nova.compute.manager [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] instance snapshotting#033[00m
Jan 31 03:06:07 np0005603621 nova_compute[247399]: 2026-01-31 08:06:07.510 247403 INFO nova.virt.libvirt.driver [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Beginning live snapshot process#033[00m
Jan 31 03:06:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 161 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 188 op/s
Jan 31 03:06:07 np0005603621 nova_compute[247399]: 2026-01-31 08:06:07.759 247403 DEBUG nova.virt.libvirt.imagebackend [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:06:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:07.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:08 np0005603621 nova_compute[247399]: 2026-01-31 08:06:08.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:08 np0005603621 nova_compute[247399]: 2026-01-31 08:06:08.400 247403 DEBUG nova.storage.rbd_utils [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] creating snapshot(cbef3e8e360e478e9ba5378d7f791ec9) on rbd image(56447338-cea0-4d74-b9e1-bac3b5d793a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:06:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 31 03:06:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 31 03:06:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 31 03:06:08 np0005603621 nova_compute[247399]: 2026-01-31 08:06:08.738 247403 DEBUG nova.storage.rbd_utils [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] cloning vms/56447338-cea0-4d74-b9e1-bac3b5d793a0_disk@cbef3e8e360e478e9ba5378d7f791ec9 to images/77b495b5-932c-4164-94ff-e9ef49f712d3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:06:08 np0005603621 nova_compute[247399]: 2026-01-31 08:06:08.914 247403 DEBUG nova.storage.rbd_utils [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] flattening images/77b495b5-932c-4164-94ff-e9ef49f712d3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:06:09 np0005603621 nova_compute[247399]: 2026-01-31 08:06:09.403 247403 DEBUG nova.storage.rbd_utils [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] removing snapshot(cbef3e8e360e478e9ba5378d7f791ec9) on rbd image(56447338-cea0-4d74-b9e1-bac3b5d793a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:06:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 211 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.9 MiB/s wr, 247 op/s
Jan 31 03:06:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 31 03:06:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 31 03:06:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 31 03:06:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:09.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:09 np0005603621 nova_compute[247399]: 2026-01-31 08:06:09.885 247403 DEBUG nova.storage.rbd_utils [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] creating snapshot(snap) on rbd image(77b495b5-932c-4164-94ff-e9ef49f712d3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:06:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:10.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 31 03:06:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 31 03:06:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 77b495b5-932c-4164-94ff-e9ef49f712d3 could not be found.
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 77b495b5-932c-4164-94ff-e9ef49f712d3
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver 
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver 
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 77b495b5-932c-4164-94ff-e9ef49f712d3 could not be found.
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.440 247403 ERROR nova.virt.libvirt.driver #033[00m
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.518 247403 DEBUG nova.storage.rbd_utils [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] removing snapshot(snap) on rbd image(77b495b5-932c-4164-94ff-e9ef49f712d3) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:06:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 211 MiB data, 697 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.5 MiB/s wr, 239 op/s
Jan 31 03:06:11 np0005603621 nova_compute[247399]: 2026-01-31 08:06:11.588 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:06:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:11.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:06:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 31 03:06:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 31 03:06:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 31 03:06:12 np0005603621 nova_compute[247399]: 2026-01-31 08:06:12.620 247403 WARNING nova.compute.manager [None req-af39071c-589b-4ce4-add7-d52f30a8e9e2 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Image not found during snapshot: nova.exception.ImageNotFound: Image 77b495b5-932c-4164-94ff-e9ef49f712d3 could not be found.#033[00m
Jan 31 03:06:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:12.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:13 np0005603621 nova_compute[247399]: 2026-01-31 08:06:13.239 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 215 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.2 MiB/s wr, 134 op/s
Jan 31 03:06:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 204 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.5 MiB/s wr, 287 op/s
Jan 31 03:06:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:15.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:16Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e7:c6:7b 10.100.0.6
Jan 31 03:06:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:16Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e7:c6:7b 10.100.0.6
Jan 31 03:06:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.718 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.718 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.718 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.719 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.719 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.720 247403 INFO nova.compute.manager [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Terminating instance#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.721 247403 DEBUG nova.compute.manager [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:06:16 np0005603621 kernel: tap84f2b859-a5 (unregistering): left promiscuous mode
Jan 31 03:06:16 np0005603621 NetworkManager[49013]: <info>  [1769846776.8292] device (tap84f2b859-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:06:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:16Z|00170|binding|INFO|Releasing lport 84f2b859-a58e-449a-8b25-3b66ed6a6e67 from this chassis (sb_readonly=0)
Jan 31 03:06:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:16Z|00171|binding|INFO|Setting lport 84f2b859-a58e-449a-8b25-3b66ed6a6e67 down in Southbound
Jan 31 03:06:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:16Z|00172|binding|INFO|Removing iface tap84f2b859-a5 ovn-installed in OVS
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.832 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:16.852 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:c6:7b 10.100.0.6'], port_security=['fa:16:3e:e7:c6:7b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '56447338-cea0-4d74-b9e1-bac3b5d793a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c7c770f-1117-4714-b72a-35b15967e8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44ad7f776f814675b2232eb023baacdd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fc774b6-544e-41fe-a3b6-65cc9e40791e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f38317d-a32f-4f40-8398-767a8ccddb32, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=84f2b859-a58e-449a-8b25-3b66ed6a6e67) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:16.854 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 84f2b859-a58e-449a-8b25-3b66ed6a6e67 in datapath 4c7c770f-1117-4714-b72a-35b15967e8f7 unbound from our chassis#033[00m
Jan 31 03:06:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:16.856 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c7c770f-1117-4714-b72a-35b15967e8f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:06:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:16.857 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[db518bbd-46e1-4cba-b22c-ac3025088a49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:16.857 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 namespace which is not needed anymore#033[00m
Jan 31 03:06:16 np0005603621 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 31 03:06:16 np0005603621 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000041.scope: Consumed 12.619s CPU time.
Jan 31 03:06:16 np0005603621 systemd-machined[212769]: Machine qemu-28-instance-00000041 terminated.
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.950 247403 INFO nova.virt.libvirt.driver [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Instance destroyed successfully.#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.952 247403 DEBUG nova.objects.instance [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'resources' on Instance uuid 56447338-cea0-4d74-b9e1-bac3b5d793a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.982 247403 DEBUG nova.virt.libvirt.vif [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:05:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-778677047',display_name='tempest-ImagesOneServerNegativeTestJSON-server-778677047',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-778677047',id=65,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-c37ubdsn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:12Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=56447338-cea0-4d74-b9e1-bac3b5d793a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.983 247403 DEBUG nova.network.os_vif_util [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "address": "fa:16:3e:e7:c6:7b", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84f2b859-a5", "ovs_interfaceid": "84f2b859-a58e-449a-8b25-3b66ed6a6e67", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.984 247403 DEBUG nova.network.os_vif_util [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.984 247403 DEBUG os_vif [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.986 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.986 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84f2b859-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:16 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [NOTICE]   (291052) : haproxy version is 2.8.14-c23fe91
Jan 31 03:06:16 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [NOTICE]   (291052) : path to executable is /usr/sbin/haproxy
Jan 31 03:06:16 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [WARNING]  (291052) : Exiting Master process...
Jan 31 03:06:16 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [ALERT]    (291052) : Current worker (291054) exited with code 143 (Terminated)
Jan 31 03:06:16 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291048]: [WARNING]  (291052) : All workers exited. Exiting... (0)
Jan 31 03:06:16 np0005603621 nova_compute[247399]: 2026-01-31 08:06:16.990 247403 INFO os_vif [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:c6:7b,bridge_name='br-int',has_traffic_filtering=True,id=84f2b859-a58e-449a-8b25-3b66ed6a6e67,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84f2b859-a5')#033[00m
Jan 31 03:06:16 np0005603621 systemd[1]: libpod-5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4.scope: Deactivated successfully.
Jan 31 03:06:17 np0005603621 podman[291272]: 2026-01-31 08:06:17.001112762 +0000 UTC m=+0.078406909 container died 5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:06:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4-userdata-shm.mount: Deactivated successfully.
Jan 31 03:06:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a8f34349d8733a84dbfdac4100ff2d0468070fa0756feeb7591079ceef0e79a5-merged.mount: Deactivated successfully.
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.379 247403 DEBUG nova.compute.manager [req-20c50d7d-5027-4380-b7eb-20b676673601 req-5a70bcd2-af69-4477-8e1f-0c2617dfb37f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-vif-unplugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.380 247403 DEBUG oslo_concurrency.lockutils [req-20c50d7d-5027-4380-b7eb-20b676673601 req-5a70bcd2-af69-4477-8e1f-0c2617dfb37f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.380 247403 DEBUG oslo_concurrency.lockutils [req-20c50d7d-5027-4380-b7eb-20b676673601 req-5a70bcd2-af69-4477-8e1f-0c2617dfb37f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.380 247403 DEBUG oslo_concurrency.lockutils [req-20c50d7d-5027-4380-b7eb-20b676673601 req-5a70bcd2-af69-4477-8e1f-0c2617dfb37f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.381 247403 DEBUG nova.compute.manager [req-20c50d7d-5027-4380-b7eb-20b676673601 req-5a70bcd2-af69-4477-8e1f-0c2617dfb37f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] No waiting events found dispatching network-vif-unplugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.381 247403 DEBUG nova.compute.manager [req-20c50d7d-5027-4380-b7eb-20b676673601 req-5a70bcd2-af69-4477-8e1f-0c2617dfb37f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-vif-unplugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:06:17 np0005603621 podman[291272]: 2026-01-31 08:06:17.392226793 +0000 UTC m=+0.469520940 container cleanup 5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:06:17 np0005603621 systemd[1]: libpod-conmon-5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4.scope: Deactivated successfully.
Jan 31 03:06:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 224 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 5.4 MiB/s wr, 307 op/s
Jan 31 03:06:17 np0005603621 podman[291332]: 2026-01-31 08:06:17.844666465 +0000 UTC m=+0.434456577 container remove 5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.850 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[69e2c057-3046-468b-9e14-e6bc62fdbfec]: (4, ('Sat Jan 31 08:06:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 (5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4)\n5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4\nSat Jan 31 08:06:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 (5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4)\n5e7d429bee717e2cb07e41d87e431315d79fa111feab8075bfe527473cdf32d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.852 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4525a4bb-94e6-4c58-86e0-c978ea199535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.853 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c7c770f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.854 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603621 kernel: tap4c7c770f-10: left promiscuous mode
Jan 31 03:06:17 np0005603621 nova_compute[247399]: 2026-01-31 08:06:17.863 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.865 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdce28f-3702-46eb-b872-e9c0b0005645]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.880 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[14bbb893-caad-46f0-81ee-af133c879996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.881 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b56e5c87-c459-4b18-ba80-26a4b047591b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:17.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.895 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[66858d1d-ade4-40e3-9278-5326acca8ab9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606833, 'reachable_time': 42350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291348, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 systemd[1]: run-netns-ovnmeta\x2d4c7c770f\x2d1117\x2d4714\x2db72a\x2d35b15967e8f7.mount: Deactivated successfully.
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.898 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:06:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:17.899 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6c841c0f-00d9-473b-b48a-a22611e4214c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 31 03:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 31 03:06:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.161 247403 INFO nova.virt.libvirt.driver [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Deleting instance files /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0_del#033[00m
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.163 247403 INFO nova.virt.libvirt.driver [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Deletion of /var/lib/nova/instances/56447338-cea0-4d74-b9e1-bac3b5d793a0_del complete#033[00m
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.232 247403 INFO nova.compute.manager [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.233 247403 DEBUG oslo.service.loopingcall [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.233 247403 DEBUG nova.compute.manager [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.234 247403 DEBUG nova.network.neutron [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:06:18 np0005603621 nova_compute[247399]: 2026-01-31 08:06:18.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:18.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.496 247403 DEBUG nova.compute.manager [req-15db6dfd-8170-41c0-91c8-fd874198bc30 req-0fa70e95-d6bd-4f36-b9e6-fa742efaac94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.497 247403 DEBUG oslo_concurrency.lockutils [req-15db6dfd-8170-41c0-91c8-fd874198bc30 req-0fa70e95-d6bd-4f36-b9e6-fa742efaac94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.497 247403 DEBUG oslo_concurrency.lockutils [req-15db6dfd-8170-41c0-91c8-fd874198bc30 req-0fa70e95-d6bd-4f36-b9e6-fa742efaac94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.497 247403 DEBUG oslo_concurrency.lockutils [req-15db6dfd-8170-41c0-91c8-fd874198bc30 req-0fa70e95-d6bd-4f36-b9e6-fa742efaac94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.497 247403 DEBUG nova.compute.manager [req-15db6dfd-8170-41c0-91c8-fd874198bc30 req-0fa70e95-d6bd-4f36-b9e6-fa742efaac94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] No waiting events found dispatching network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.498 247403 WARNING nova.compute.manager [req-15db6dfd-8170-41c0-91c8-fd874198bc30 req-0fa70e95-d6bd-4f36-b9e6-fa742efaac94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received unexpected event network-vif-plugged-84f2b859-a58e-449a-8b25-3b66ed6a6e67 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:06:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 199 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.9 MiB/s wr, 405 op/s
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.632 247403 DEBUG nova.network.neutron [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.675 247403 INFO nova.compute.manager [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Took 1.44 seconds to deallocate network for instance.#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.712 247403 DEBUG nova.compute.manager [req-30cfb7e8-4d58-49bf-91fc-aeedda6c99df req-c7cb0613-3506-4619-ab24-e5e5202fc8f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Received event network-vif-deleted-84f2b859-a58e-449a-8b25-3b66ed6a6e67 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.780 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:19 np0005603621 nova_compute[247399]: 2026-01-31 08:06:19.780 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.011 247403 DEBUG nova.scheduler.client.report [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.114 247403 DEBUG nova.scheduler.client.report [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.115 247403 DEBUG nova.compute.provider_tree [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.266 247403 DEBUG nova.scheduler.client.report [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.291 247403 DEBUG nova.scheduler.client.report [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.361 247403 DEBUG oslo_concurrency.processutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1618499287' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:20.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547306586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.785 247403 DEBUG oslo_concurrency.processutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.791 247403 DEBUG nova.compute.provider_tree [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.813 247403 DEBUG nova.scheduler.client.report [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.850 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:20 np0005603621 nova_compute[247399]: 2026-01-31 08:06:20.873 247403 INFO nova.scheduler.client.report [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Deleted allocations for instance 56447338-cea0-4d74-b9e1-bac3b5d793a0#033[00m
Jan 31 03:06:21 np0005603621 nova_compute[247399]: 2026-01-31 08:06:21.002 247403 DEBUG oslo_concurrency.lockutils [None req-f2b2e21f-a1b6-4e41-8a4f-813d32502f01 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "56447338-cea0-4d74-b9e1-bac3b5d793a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 199 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.6 MiB/s wr, 344 op/s
Jan 31 03:06:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:21.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:21 np0005603621 nova_compute[247399]: 2026-01-31 08:06:21.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:22.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:23 np0005603621 nova_compute[247399]: 2026-01-31 08:06:23.243 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 167 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 323 op/s
Jan 31 03:06:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:23.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:24 np0005603621 nova_compute[247399]: 2026-01-31 08:06:24.188 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:24 np0005603621 nova_compute[247399]: 2026-01-31 08:06:24.189 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:06:24 np0005603621 nova_compute[247399]: 2026-01-31 08:06:24.189 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:06:24 np0005603621 nova_compute[247399]: 2026-01-31 08:06:24.218 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:06:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:24.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 131 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 184 op/s
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.735 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.735 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.777 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:06:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.932 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.932 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.941 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:06:25 np0005603621 nova_compute[247399]: 2026-01-31 08:06:25.941 247403 INFO nova.compute.claims [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.173 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581239588' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.573 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.578 247403 DEBUG nova.compute.provider_tree [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.591 247403 DEBUG nova.scheduler.client.report [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.628 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.629 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:06:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:26.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.723 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.723 247403 DEBUG nova.network.neutron [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.747 247403 INFO nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.773 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.980 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.981 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:06:26 np0005603621 nova_compute[247399]: 2026-01-31 08:06:26.981 247403 INFO nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Creating image(s)#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.007 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.036 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.071 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.076 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.105 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.143 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.144 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.144 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.144 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.169 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.173 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:27 np0005603621 nova_compute[247399]: 2026-01-31 08:06:27.197 247403 DEBUG nova.policy [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57fcb774fb574bf0beea4fb49adb0f80', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44ad7f776f814675b2232eb023baacdd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:06:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 121 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 413 KiB/s rd, 1.4 MiB/s wr, 123 op/s
Jan 31 03:06:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:27.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.226 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.226 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.245 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.552 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.590 247403 DEBUG nova.network.neutron [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Successfully created port: e7683457-d53a-45f3-b358-ac25040aa018 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.636 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] resizing rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:06:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2202931541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.686 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.761 247403 DEBUG nova.objects.instance [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'migration_context' on Instance uuid 27fc8a5b-7228-4b8c-b437-72ae3144622c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.785 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.786 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Ensure instance console log exists: /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.786 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.787 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.787 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.899 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.900 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=20.94271469116211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.900 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:06:28 np0005603621 nova_compute[247399]: 2026-01-31 08:06:28.901 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.015 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 27fc8a5b-7228-4b8c-b437-72ae3144622c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.016 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.017 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.065 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:06:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 165 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 2.9 MiB/s wr, 122 op/s
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.561 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.566 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.590 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.617 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.618 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.898 247403 DEBUG nova.network.neutron [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Successfully updated port: e7683457-d53a-45f3-b358-ac25040aa018 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:06:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:06:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:29.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.942 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "refresh_cache-27fc8a5b-7228-4b8c-b437-72ae3144622c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.943 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquired lock "refresh_cache-27fc8a5b-7228-4b8c-b437-72ae3144622c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:06:29 np0005603621 nova_compute[247399]: 2026-01-31 08:06:29.943 247403 DEBUG nova.network.neutron [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:06:30 np0005603621 nova_compute[247399]: 2026-01-31 08:06:30.137 247403 DEBUG nova.compute.manager [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-changed-e7683457-d53a-45f3-b358-ac25040aa018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:06:30 np0005603621 nova_compute[247399]: 2026-01-31 08:06:30.138 247403 DEBUG nova.compute.manager [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Refreshing instance network info cache due to event network-changed-e7683457-d53a-45f3-b358-ac25040aa018. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:06:30 np0005603621 nova_compute[247399]: 2026-01-31 08:06:30.138 247403 DEBUG oslo_concurrency.lockutils [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-27fc8a5b-7228-4b8c-b437-72ae3144622c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:06:30 np0005603621 nova_compute[247399]: 2026-01-31 08:06:30.302 247403 DEBUG nova.network.neutron [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:06:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:30.487 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:06:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:30.488 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:06:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:30.488 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:06:30 np0005603621 podman[291661]: 2026-01-31 08:06:30.509844519 +0000 UTC m=+0.053374601 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:06:30 np0005603621 podman[291662]: 2026-01-31 08:06:30.53145247 +0000 UTC m=+0.071999788 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:06:30 np0005603621 nova_compute[247399]: 2026-01-31 08:06:30.618 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:06:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:06:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 165 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.7 MiB/s wr, 44 op/s
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.799 247403 DEBUG nova.network.neutron [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Updating instance_info_cache with network_info: [{"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.831 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Releasing lock "refresh_cache-27fc8a5b-7228-4b8c-b437-72ae3144622c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.831 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Instance network_info: |[{"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.831 247403 DEBUG oslo_concurrency.lockutils [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-27fc8a5b-7228-4b8c-b437-72ae3144622c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.832 247403 DEBUG nova.network.neutron [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Refreshing network info cache for port e7683457-d53a-45f3-b358-ac25040aa018 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.834 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Start _get_guest_xml network_info=[{"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.837 247403 WARNING nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.842 247403 DEBUG nova.virt.libvirt.host [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.842 247403 DEBUG nova.virt.libvirt.host [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.845 247403 DEBUG nova.virt.libvirt.host [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.846 247403 DEBUG nova.virt.libvirt.host [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.847 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.848 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.848 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.848 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.848 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.849 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.849 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.849 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.849 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.850 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.850 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.850 247403 DEBUG nova.virt.hardware [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.852 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:31.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.947 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846776.9470458, 56447338-cea0-4d74-b9e1-bac3b5d793a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.948 247403 INFO nova.compute.manager [-] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:06:31 np0005603621 nova_compute[247399]: 2026-01-31 08:06:31.965 247403 DEBUG nova.compute.manager [None req-41bf4340-3478-49d0-97b8-94c3a4034b6c - - - - - -] [instance: 56447338-cea0-4d74-b9e1-bac3b5d793a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.224 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:06:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751372398' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.264 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.288 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.292 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:32.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:06:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073042769' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.697 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.699 247403 DEBUG nova.virt.libvirt.vif [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1608592900',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1608592900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1608592900',id=67,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-dpjf80lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:26Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=27fc8a5b-7228-4b8c-b437-72ae3144622c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.700 247403 DEBUG nova.network.os_vif_util [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.700 247403 DEBUG nova.network.os_vif_util [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.701 247403 DEBUG nova.objects.instance [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'pci_devices' on Instance uuid 27fc8a5b-7228-4b8c-b437-72ae3144622c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.721 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <uuid>27fc8a5b-7228-4b8c-b437-72ae3144622c</uuid>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <name>instance-00000043</name>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1608592900</nova:name>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:06:31</nova:creationTime>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:user uuid="57fcb774fb574bf0beea4fb49adb0f80">tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member</nova:user>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:project uuid="44ad7f776f814675b2232eb023baacdd">tempest-ImagesOneServerNegativeTestJSON-1383889839</nova:project>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <nova:port uuid="e7683457-d53a-45f3-b358-ac25040aa018">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <entry name="serial">27fc8a5b-7228-4b8c-b437-72ae3144622c</entry>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <entry name="uuid">27fc8a5b-7228-4b8c-b437-72ae3144622c</entry>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/27fc8a5b-7228-4b8c-b437-72ae3144622c_disk">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/27fc8a5b-7228-4b8c-b437-72ae3144622c_disk.config">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:e0:15:48"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <target dev="tape7683457-d5"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/console.log" append="off"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:06:32 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:06:32 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:06:32 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:06:32 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.723 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Preparing to wait for external event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.723 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.723 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.723 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.724 247403 DEBUG nova.virt.libvirt.vif [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1608592900',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1608592900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1608592900',id=67,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-dpjf80lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:06:26Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=27fc8a5b-7228-4b8c-b437-72ae3144622c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.725 247403 DEBUG nova.network.os_vif_util [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.725 247403 DEBUG nova.network.os_vif_util [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.726 247403 DEBUG os_vif [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.727 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.727 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.731 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.731 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7683457-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.732 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape7683457-d5, col_values=(('external_ids', {'iface-id': 'e7683457-d53a-45f3-b358-ac25040aa018', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:15:48', 'vm-uuid': '27fc8a5b-7228-4b8c-b437-72ae3144622c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.734 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:32 np0005603621 NetworkManager[49013]: <info>  [1769846792.7351] manager: (tape7683457-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.735 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.739 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.740 247403 INFO os_vif [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5')#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.814 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.814 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.815 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] No VIF found with MAC fa:16:3e:e0:15:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.815 247403 INFO nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Using config drive#033[00m
Jan 31 03:06:32 np0005603621 nova_compute[247399]: 2026-01-31 08:06:32.848 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:33 np0005603621 nova_compute[247399]: 2026-01-31 08:06:33.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 167 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 31 03:06:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:33.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:34.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:35 np0005603621 nova_compute[247399]: 2026-01-31 08:06:35.338 247403 INFO nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Creating config drive at /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/disk.config#033[00m
Jan 31 03:06:35 np0005603621 nova_compute[247399]: 2026-01-31 08:06:35.345 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptdmyju19 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:35 np0005603621 nova_compute[247399]: 2026-01-31 08:06:35.481 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmptdmyju19" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:35 np0005603621 nova_compute[247399]: 2026-01-31 08:06:35.508 247403 DEBUG nova.storage.rbd_utils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] rbd image 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:06:35 np0005603621 nova_compute[247399]: 2026-01-31 08:06:35.513 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/disk.config 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 167 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 03:06:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:35.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.512 247403 DEBUG oslo_concurrency.processutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/disk.config 27fc8a5b-7228-4b8c-b437-72ae3144622c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.999s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.513 247403 INFO nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Deleting local config drive /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c/disk.config because it was imported into RBD.#033[00m
Jan 31 03:06:36 np0005603621 kernel: tape7683457-d5: entered promiscuous mode
Jan 31 03:06:36 np0005603621 NetworkManager[49013]: <info>  [1769846796.5745] manager: (tape7683457-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Jan 31 03:06:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:36Z|00173|binding|INFO|Claiming lport e7683457-d53a-45f3-b358-ac25040aa018 for this chassis.
Jan 31 03:06:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:36Z|00174|binding|INFO|e7683457-d53a-45f3-b358-ac25040aa018: Claiming fa:16:3e:e0:15:48 10.100.0.7
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.574 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:36Z|00175|binding|INFO|Setting lport e7683457-d53a-45f3-b358-ac25040aa018 ovn-installed in OVS
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.582 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.584 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:36Z|00176|binding|INFO|Setting lport e7683457-d53a-45f3-b358-ac25040aa018 up in Southbound
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.586 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:15:48 10.100.0.7'], port_security=['fa:16:3e:e0:15:48 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '27fc8a5b-7228-4b8c-b437-72ae3144622c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c7c770f-1117-4714-b72a-35b15967e8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44ad7f776f814675b2232eb023baacdd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fc774b6-544e-41fe-a3b6-65cc9e40791e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f38317d-a32f-4f40-8398-767a8ccddb32, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e7683457-d53a-45f3-b358-ac25040aa018) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.587 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.588 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e7683457-d53a-45f3-b358-ac25040aa018 in datapath 4c7c770f-1117-4714-b72a-35b15967e8f7 bound to our chassis#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.590 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c7c770f-1117-4714-b72a-35b15967e8f7#033[00m
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.591 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.601 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[875268e5-ab37-4695-b109-e265fc0c1376]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.602 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c7c770f-11 in ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.604 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c7c770f-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.604 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed43e40-ca83-4035-8a05-4fdb30a62339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.605 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[25979e1d-009d-4acf-a636-d4d403602a03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 systemd-udevd[291840]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.617 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4e62578a-1342-4433-a761-807b541402e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 systemd-machined[212769]: New machine qemu-29-instance-00000043.
Jan 31 03:06:36 np0005603621 NetworkManager[49013]: <info>  [1769846796.6268] device (tape7683457-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:06:36 np0005603621 NetworkManager[49013]: <info>  [1769846796.6278] device (tape7683457-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.629 247403 DEBUG nova.network.neutron [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Updated VIF entry in instance network info cache for port e7683457-d53a-45f3-b358-ac25040aa018. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.629 247403 DEBUG nova.network.neutron [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Updating instance_info_cache with network_info: [{"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:36 np0005603621 systemd[1]: Started Virtual Machine qemu-29-instance-00000043.
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.641 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b76572-83b7-4a32-a8c1-a61c9d07fc13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.660 247403 DEBUG oslo_concurrency.lockutils [req-cb6cec12-0489-4240-b71f-244ad1fa872e req-55482407-cbb4-4842-83ea-7c18927de1bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-27fc8a5b-7228-4b8c-b437-72ae3144622c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.676 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6091c49f-6f2a-47e9-9172-ef6e12841c1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.683 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0d33b7-8b27-4f32-8052-a0d29e2fce63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 NetworkManager[49013]: <info>  [1769846796.6843] manager: (tap4c7c770f-10): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.713 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[36a84f95-b180-4353-8de7-0864102886f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.719 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f9b7ae-c20c-44ee-910a-cc2f39af6f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 NetworkManager[49013]: <info>  [1769846796.7373] device (tap4c7c770f-10): carrier: link connected
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.739 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb86538-1703-4f05-843e-12c2e437bc54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.751 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a498fce4-2316-4bb0-a6a2-2b0d10f7845b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c7c770f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610225, 'reachable_time': 37700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291873, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.765 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[86e8d6e0-7a45-4ed4-8ec0-f930671bf3a4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:2d70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 610225, 'tstamp': 610225}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291874, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.785 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[24efba63-c185-43c3-a6f7-d05a14f73baf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c7c770f-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610225, 'reachable_time': 37700, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291875, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.814 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4b30e9-ff3f-435b-b7e4-ce8c88d35351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.867 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7026d89b-4eec-4428-9336-8e324061f048]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.868 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c7c770f-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.868 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.869 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c7c770f-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:36 np0005603621 kernel: tap4c7c770f-10: entered promiscuous mode
Jan 31 03:06:36 np0005603621 NetworkManager[49013]: <info>  [1769846796.8716] manager: (tap4c7c770f-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.876 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c7c770f-10, col_values=(('external_ids', {'iface-id': '831cf344-1b40-47d7-9209-66080414c7e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:36Z|00177|binding|INFO|Releasing lport 831cf344-1b40-47d7-9209-66080414c7e0 from this chassis (sb_readonly=0)
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.878 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.879 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[07b07654-975a-4f95-9c39-ca52a7eab413]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.879 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-4c7c770f-1117-4714-b72a-35b15967e8f7
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/4c7c770f-1117-4714-b72a-35b15967e8f7.pid.haproxy
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 4c7c770f-1117-4714-b72a-35b15967e8f7
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:06:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:36.880 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'env', 'PROCESS_TAG=haproxy-4c7c770f-1117-4714-b72a-35b15967e8f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c7c770f-1117-4714-b72a-35b15967e8f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:06:36 np0005603621 nova_compute[247399]: 2026-01-31 08:06:36.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.047 247403 DEBUG nova.compute.manager [req-d1922ea2-be3b-4ba8-aa83-e50bcf6d68af req-99ce1011-9969-4603-9b0f-daae96dd11e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.047 247403 DEBUG oslo_concurrency.lockutils [req-d1922ea2-be3b-4ba8-aa83-e50bcf6d68af req-99ce1011-9969-4603-9b0f-daae96dd11e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.047 247403 DEBUG oslo_concurrency.lockutils [req-d1922ea2-be3b-4ba8-aa83-e50bcf6d68af req-99ce1011-9969-4603-9b0f-daae96dd11e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.047 247403 DEBUG oslo_concurrency.lockutils [req-d1922ea2-be3b-4ba8-aa83-e50bcf6d68af req-99ce1011-9969-4603-9b0f-daae96dd11e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.047 247403 DEBUG nova.compute.manager [req-d1922ea2-be3b-4ba8-aa83-e50bcf6d68af req-99ce1011-9969-4603-9b0f-daae96dd11e4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Processing event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:06:37 np0005603621 podman[291943]: 2026-01-31 08:06:37.204794549 +0000 UTC m=+0.023403758 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.313 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846797.3126264, 27fc8a5b-7228-4b8c-b437-72ae3144622c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.314 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] VM Started (Lifecycle Event)#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.316 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.320 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.325 247403 INFO nova.virt.libvirt.driver [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Instance spawned successfully.#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.325 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.344 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.352 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.355 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.356 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.356 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.357 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.357 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.358 247403 DEBUG nova.virt.libvirt.driver [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.390 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.391 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846797.3143597, 27fc8a5b-7228-4b8c-b437-72ae3144622c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.391 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.415 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.419 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846797.3195543, 27fc8a5b-7228-4b8c-b437-72ae3144622c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.420 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.433 247403 INFO nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Took 10.45 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.434 247403 DEBUG nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.445 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.448 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.475 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.525 247403 INFO nova.compute.manager [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Took 11.63 seconds to build instance.#033[00m
Jan 31 03:06:37 np0005603621 podman[291943]: 2026-01-31 08:06:37.537853823 +0000 UTC m=+0.356463022 container create 4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 167 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.552 247403 DEBUG oslo_concurrency.lockutils [None req-e5400ad1-403a-4570-ab90-03faf973c021 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:37 np0005603621 systemd[1]: Started libpod-conmon-4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323.scope.
Jan 31 03:06:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:06:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6120980f98f37beee978e42be685e4b488c0124c1c68b1c546bb7f5cd35a00da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:06:37 np0005603621 podman[291943]: 2026-01-31 08:06:37.717519329 +0000 UTC m=+0.536128568 container init 4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:06:37 np0005603621 podman[291943]: 2026-01-31 08:06:37.72234243 +0000 UTC m=+0.540951639 container start 4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:37 np0005603621 nova_compute[247399]: 2026-01-31 08:06:37.734 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:37 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [NOTICE]   (291969) : New worker (291971) forked
Jan 31 03:06:37 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [NOTICE]   (291969) : Loading success.
Jan 31 03:06:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:37.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:38 np0005603621 nova_compute[247399]: 2026-01-31 08:06:38.248 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:06:38
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.meta']
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:06:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:38.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:06:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:06:38 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.396 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.397 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.398 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.398 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.399 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.400 247403 INFO nova.compute.manager [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Terminating instance#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.401 247403 DEBUG nova.compute.manager [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:06:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:39.527 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:39.529 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:06:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 179 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.651 247403 DEBUG nova.compute.manager [req-b346f57f-e2b3-4146-a644-596867aa23b3 req-5c854166-d55b-421f-bc41-0ef08b487c3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.651 247403 DEBUG oslo_concurrency.lockutils [req-b346f57f-e2b3-4146-a644-596867aa23b3 req-5c854166-d55b-421f-bc41-0ef08b487c3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.651 247403 DEBUG oslo_concurrency.lockutils [req-b346f57f-e2b3-4146-a644-596867aa23b3 req-5c854166-d55b-421f-bc41-0ef08b487c3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.652 247403 DEBUG oslo_concurrency.lockutils [req-b346f57f-e2b3-4146-a644-596867aa23b3 req-5c854166-d55b-421f-bc41-0ef08b487c3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.652 247403 DEBUG nova.compute.manager [req-b346f57f-e2b3-4146-a644-596867aa23b3 req-5c854166-d55b-421f-bc41-0ef08b487c3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] No waiting events found dispatching network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:39 np0005603621 nova_compute[247399]: 2026-01-31 08:06:39.652 247403 WARNING nova.compute.manager [req-b346f57f-e2b3-4146-a644-596867aa23b3 req-5c854166-d55b-421f-bc41-0ef08b487c3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received unexpected event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:06:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:06:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:06:40 np0005603621 kernel: tape7683457-d5 (unregistering): left promiscuous mode
Jan 31 03:06:40 np0005603621 NetworkManager[49013]: <info>  [1769846800.0856] device (tape7683457-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.092 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:40Z|00178|binding|INFO|Releasing lport e7683457-d53a-45f3-b358-ac25040aa018 from this chassis (sb_readonly=0)
Jan 31 03:06:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:40Z|00179|binding|INFO|Setting lport e7683457-d53a-45f3-b358-ac25040aa018 down in Southbound
Jan 31 03:06:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:06:40Z|00180|binding|INFO|Removing iface tape7683457-d5 ovn-installed in OVS
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.094 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.101 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:15:48 10.100.0.7'], port_security=['fa:16:3e:e0:15:48 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '27fc8a5b-7228-4b8c-b437-72ae3144622c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c7c770f-1117-4714-b72a-35b15967e8f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44ad7f776f814675b2232eb023baacdd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5fc774b6-544e-41fe-a3b6-65cc9e40791e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f38317d-a32f-4f40-8398-767a8ccddb32, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e7683457-d53a-45f3-b358-ac25040aa018) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.103 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e7683457-d53a-45f3-b358-ac25040aa018 in datapath 4c7c770f-1117-4714-b72a-35b15967e8f7 unbound from our chassis#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.107 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c7c770f-1117-4714-b72a-35b15967e8f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.109 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d65ba04b-d8fd-4cb0-a504-9ad37583b8c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.110 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 namespace which is not needed anymore#033[00m
Jan 31 03:06:40 np0005603621 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 31 03:06:40 np0005603621 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000043.scope: Consumed 2.748s CPU time.
Jan 31 03:06:40 np0005603621 systemd-machined[212769]: Machine qemu-29-instance-00000043 terminated.
Jan 31 03:06:40 np0005603621 kernel: tape7683457-d5: entered promiscuous mode
Jan 31 03:06:40 np0005603621 kernel: tape7683457-d5 (unregistering): left promiscuous mode
Jan 31 03:06:40 np0005603621 NetworkManager[49013]: <info>  [1769846800.2159] manager: (tape7683457-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/94)
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.237 247403 INFO nova.virt.libvirt.driver [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Instance destroyed successfully.#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.237 247403 DEBUG nova.objects.instance [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lazy-loading 'resources' on Instance uuid 27fc8a5b-7228-4b8c-b437-72ae3144622c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.261 247403 DEBUG nova.virt.libvirt.vif [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:06:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1608592900',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1608592900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1608592900',id=67,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:06:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44ad7f776f814675b2232eb023baacdd',ramdisk_id='',reservation_id='r-dpjf80lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1383889839',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1383889839-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:06:37Z,user_data=None,user_id='57fcb774fb574bf0beea4fb49adb0f80',uuid=27fc8a5b-7228-4b8c-b437-72ae3144622c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.262 247403 DEBUG nova.network.os_vif_util [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converting VIF {"id": "e7683457-d53a-45f3-b358-ac25040aa018", "address": "fa:16:3e:e0:15:48", "network": {"id": "4c7c770f-1117-4714-b72a-35b15967e8f7", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-427671175-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44ad7f776f814675b2232eb023baacdd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7683457-d5", "ovs_interfaceid": "e7683457-d53a-45f3-b358-ac25040aa018", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.262 247403 DEBUG nova.network.os_vif_util [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.263 247403 DEBUG os_vif [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.264 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.264 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7683457-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.266 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.268 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.269 247403 INFO os_vif [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:15:48,bridge_name='br-int',has_traffic_filtering=True,id=e7683457-d53a-45f3-b358-ac25040aa018,network=Network(4c7c770f-1117-4714-b72a-35b15967e8f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7683457-d5')#033[00m
Jan 31 03:06:40 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [NOTICE]   (291969) : haproxy version is 2.8.14-c23fe91
Jan 31 03:06:40 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [NOTICE]   (291969) : path to executable is /usr/sbin/haproxy
Jan 31 03:06:40 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [WARNING]  (291969) : Exiting Master process...
Jan 31 03:06:40 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [WARNING]  (291969) : Exiting Master process...
Jan 31 03:06:40 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [ALERT]    (291969) : Current worker (291971) exited with code 143 (Terminated)
Jan 31 03:06:40 np0005603621 neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7[291965]: [WARNING]  (291969) : All workers exited. Exiting... (0)
Jan 31 03:06:40 np0005603621 systemd[1]: libpod-4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323.scope: Deactivated successfully.
Jan 31 03:06:40 np0005603621 podman[292056]: 2026-01-31 08:06:40.350064973 +0000 UTC m=+0.170389244 container died 4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:06:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323-userdata-shm.mount: Deactivated successfully.
Jan 31 03:06:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6120980f98f37beee978e42be685e4b488c0124c1c68b1c546bb7f5cd35a00da-merged.mount: Deactivated successfully.
Jan 31 03:06:40 np0005603621 podman[292056]: 2026-01-31 08:06:40.591183293 +0000 UTC m=+0.411507554 container cleanup 4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:40 np0005603621 systemd[1]: libpod-conmon-4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323.scope: Deactivated successfully.
Jan 31 03:06:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:40 np0005603621 podman[292109]: 2026-01-31 08:06:40.858664383 +0000 UTC m=+0.240586194 container remove 4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.865 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc08542f-37df-4aee-8521-e3c3e9e15c62]: (4, ('Sat Jan 31 08:06:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 (4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323)\n4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323\nSat Jan 31 08:06:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 (4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323)\n4ee2cbda05b138d73f43dc9ac0d0a918dcc0313a77e6e2e626e4768e99bfb323\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.869 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[55c603cc-7fc2-4f76-a0ff-294f124025d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.871 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c7c770f-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.875 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 kernel: tap4c7c770f-10: left promiscuous mode
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.882 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[46451f1d-2d8d-40d4-8ca9-5404139b1db6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 nova_compute[247399]: 2026-01-31 08:06:40.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.900 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e568f395-2fcd-4597-9cee-609a514f8509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.902 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[72dc8a7b-ad51-4743-ae52-350659aa6062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.917 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9579f951-8722-41b0-9921-670d8ecb29d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 610218, 'reachable_time': 42424, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292124, 'error': None, 'target': 'ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:40 np0005603621 systemd[1]: run-netns-ovnmeta\x2d4c7c770f\x2d1117\x2d4714\x2db72a\x2d35b15967e8f7.mount: Deactivated successfully.
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.921 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c7c770f-1117-4714-b72a-35b15967e8f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:06:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:40.921 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[03c42acd-4ad2-4501-8eca-3974e2ca2ef3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:06:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:06:41.531 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:06:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 179 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 400 KiB/s wr, 88 op/s
Jan 31 03:06:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:41.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:42.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.171 247403 DEBUG nova.compute.manager [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-vif-unplugged-e7683457-d53a-45f3-b358-ac25040aa018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.172 247403 DEBUG oslo_concurrency.lockutils [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.172 247403 DEBUG oslo_concurrency.lockutils [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.172 247403 DEBUG oslo_concurrency.lockutils [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.172 247403 DEBUG nova.compute.manager [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] No waiting events found dispatching network-vif-unplugged-e7683457-d53a-45f3-b358-ac25040aa018 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.173 247403 DEBUG nova.compute.manager [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-vif-unplugged-e7683457-d53a-45f3-b358-ac25040aa018 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.173 247403 DEBUG nova.compute.manager [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.173 247403 DEBUG oslo_concurrency.lockutils [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.173 247403 DEBUG oslo_concurrency.lockutils [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.174 247403 DEBUG oslo_concurrency.lockutils [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.174 247403 DEBUG nova.compute.manager [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] No waiting events found dispatching network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.174 247403 WARNING nova.compute.manager [req-13e3b540-6b0a-4d5a-b4d7-a1542e8508ff req-eac85dd3-f03d-4556-88b4-ff4ae5ed5020 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received unexpected event network-vif-plugged-e7683457-d53a-45f3-b358-ac25040aa018 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:06:43 np0005603621 nova_compute[247399]: 2026-01-31 08:06:43.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 192 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 114 op/s
Jan 31 03:06:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:43.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:45 np0005603621 nova_compute[247399]: 2026-01-31 08:06:45.268 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Jan 31 03:06:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:45.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:46.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Jan 31 03:06:47 np0005603621 nova_compute[247399]: 2026-01-31 08:06:47.658 247403 INFO nova.virt.libvirt.driver [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Deleting instance files /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c_del#033[00m
Jan 31 03:06:47 np0005603621 nova_compute[247399]: 2026-01-31 08:06:47.659 247403 INFO nova.virt.libvirt.driver [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Deletion of /var/lib/nova/instances/27fc8a5b-7228-4b8c-b437-72ae3144622c_del complete#033[00m
Jan 31 03:06:47 np0005603621 nova_compute[247399]: 2026-01-31 08:06:47.714 247403 INFO nova.compute.manager [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Took 8.31 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:06:47 np0005603621 nova_compute[247399]: 2026-01-31 08:06:47.715 247403 DEBUG oslo.service.loopingcall [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:06:47 np0005603621 nova_compute[247399]: 2026-01-31 08:06:47.715 247403 DEBUG nova.compute.manager [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:06:47 np0005603621 nova_compute[247399]: 2026-01-31 08:06:47.715 247403 DEBUG nova.network.neutron [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:06:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:47.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:48 np0005603621 nova_compute[247399]: 2026-01-31 08:06:48.253 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:48.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031685118299832494 of space, bias 1.0, pg target 0.9505535489949748 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.191 247403 DEBUG nova.network.neutron [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.210 247403 INFO nova.compute.manager [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Took 1.49 seconds to deallocate network for instance.#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.259 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.260 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.303 247403 DEBUG oslo_concurrency.processutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.337 247403 DEBUG nova.compute.manager [req-2f2d0b2c-85b9-4c43-b4ee-a17d327e34b7 req-636129bd-cc87-46a5-a2a4-1d89b3ae77b1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Received event network-vif-deleted-e7683457-d53a-45f3-b358-ac25040aa018 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:06:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 31 03:06:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:06:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564175959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:06:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:49.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.944 247403 DEBUG oslo_concurrency.processutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.951 247403 DEBUG nova.compute.provider_tree [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.969 247403 DEBUG nova.scheduler.client.report [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:06:49 np0005603621 nova_compute[247399]: 2026-01-31 08:06:49.996 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:50 np0005603621 nova_compute[247399]: 2026-01-31 08:06:50.069 247403 INFO nova.scheduler.client.report [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Deleted allocations for instance 27fc8a5b-7228-4b8c-b437-72ae3144622c#033[00m
Jan 31 03:06:50 np0005603621 nova_compute[247399]: 2026-01-31 08:06:50.191 247403 DEBUG oslo_concurrency.lockutils [None req-fd8d9cf0-b8b0-49ea-be57-de6ada906061 57fcb774fb574bf0beea4fb49adb0f80 44ad7f776f814675b2232eb023baacdd - - default default] Lock "27fc8a5b-7228-4b8c-b437-72ae3144622c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:06:50 np0005603621 nova_compute[247399]: 2026-01-31 08:06:50.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:50.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.5 MiB/s wr, 52 op/s
Jan 31 03:06:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:51.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:52.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:53 np0005603621 nova_compute[247399]: 2026-01-31 08:06:53.257 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Jan 31 03:06:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:06:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:53.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:06:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:06:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:54.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:06:55 np0005603621 nova_compute[247399]: 2026-01-31 08:06:55.236 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846800.2350693, 27fc8a5b-7228-4b8c-b437-72ae3144622c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:06:55 np0005603621 nova_compute[247399]: 2026-01-31 08:06:55.237 247403 INFO nova.compute.manager [-] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:06:55 np0005603621 nova_compute[247399]: 2026-01-31 08:06:55.276 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:55 np0005603621 nova_compute[247399]: 2026-01-31 08:06:55.325 247403 DEBUG nova.compute.manager [None req-b161152b-328a-425c-9093-36d15d238fbb - - - - - -] [instance: 27fc8a5b-7228-4b8c-b437-72ae3144622c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:06:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 877 KiB/s wr, 78 op/s
Jan 31 03:06:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:55.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:56.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 86 op/s
Jan 31 03:06:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:57.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:06:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:06:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:06:58 np0005603621 nova_compute[247399]: 2026-01-31 08:06:58.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:06:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:06:58.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:06:58 np0005603621 nova_compute[247399]: 2026-01-31 08:06:58.853 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:06:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:06:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:06:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:06:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 86 op/s
Jan 31 03:06:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:06:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:06:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:06:59.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:07:00 np0005603621 nova_compute[247399]: 2026-01-31 08:07:00.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bea7ebdd-f034-4a11-a267-f7eaa5385cde does not exist
Jan 31 03:07:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 11139fec-d821-4533-978f-a59b282ded10 does not exist
Jan 31 03:07:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f15d454a-3a89-43f7-8abd-4d32218a172d does not exist
Jan 31 03:07:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:00.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:07:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:07:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:07:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:07:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:07:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:07:01 np0005603621 podman[292363]: 2026-01-31 08:07:01.448095333 +0000 UTC m=+0.078243354 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:07:01 np0005603621 podman[292364]: 2026-01-31 08:07:01.496324661 +0000 UTC m=+0.121223667 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:07:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 167 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 82 op/s
Jan 31 03:07:02 np0005603621 podman[292524]: 2026-01-31 08:07:01.943427445 +0000 UTC m=+0.028435647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:02 np0005603621 podman[292524]: 2026-01-31 08:07:02.0995618 +0000 UTC m=+0.184569942 container create a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:02 np0005603621 systemd[1]: Started libpod-conmon-a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c.scope.
Jan 31 03:07:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:02 np0005603621 podman[292524]: 2026-01-31 08:07:02.586408635 +0000 UTC m=+0.671416837 container init a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:02 np0005603621 podman[292524]: 2026-01-31 08:07:02.594857591 +0000 UTC m=+0.679865703 container start a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:07:02 np0005603621 cranky_solomon[292541]: 167 167
Jan 31 03:07:02 np0005603621 systemd[1]: libpod-a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c.scope: Deactivated successfully.
Jan 31 03:07:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:02 np0005603621 podman[292524]: 2026-01-31 08:07:02.752519753 +0000 UTC m=+0.837527855 container attach a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:07:02 np0005603621 podman[292524]: 2026-01-31 08:07:02.75338473 +0000 UTC m=+0.838392842 container died a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d84696c70b41db3b43a4fa3a6e5767e77e667622db139b4a84fc94afb381db49-merged.mount: Deactivated successfully.
Jan 31 03:07:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:03.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:03 np0005603621 nova_compute[247399]: 2026-01-31 08:07:03.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:07:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:07:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 151 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 88 op/s
Jan 31 03:07:03 np0005603621 podman[292524]: 2026-01-31 08:07:03.990659396 +0000 UTC m=+2.075667518 container remove a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:04 np0005603621 systemd[1]: libpod-conmon-a3ffd67860adfd77aeae5e356451b1d020a7738cd6f8dd87922662cc29aef42c.scope: Deactivated successfully.
Jan 31 03:07:04 np0005603621 podman[292566]: 2026-01-31 08:07:04.130834309 +0000 UTC m=+0.024239094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:04 np0005603621 podman[292566]: 2026-01-31 08:07:04.294197151 +0000 UTC m=+0.187601896 container create 97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:07:04 np0005603621 systemd[1]: Started libpod-conmon-97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18.scope.
Jan 31 03:07:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0add4d2bcf7f51d29f21dd7e0fbaa51e8db7d6c27d68c40ee86f6ac836f17df6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0add4d2bcf7f51d29f21dd7e0fbaa51e8db7d6c27d68c40ee86f6ac836f17df6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0add4d2bcf7f51d29f21dd7e0fbaa51e8db7d6c27d68c40ee86f6ac836f17df6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0add4d2bcf7f51d29f21dd7e0fbaa51e8db7d6c27d68c40ee86f6ac836f17df6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0add4d2bcf7f51d29f21dd7e0fbaa51e8db7d6c27d68c40ee86f6ac836f17df6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:04 np0005603621 podman[292566]: 2026-01-31 08:07:04.581192525 +0000 UTC m=+0.474597360 container init 97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 31 03:07:04 np0005603621 podman[292566]: 2026-01-31 08:07:04.590165757 +0000 UTC m=+0.483570502 container start 97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:07:04 np0005603621 podman[292566]: 2026-01-31 08:07:04.628289327 +0000 UTC m=+0.521694072 container attach 97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:07:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:04.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:05.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:05 np0005603621 nova_compute[247399]: 2026-01-31 08:07:05.282 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:05 np0005603621 dreamy_noyce[292582]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:07:05 np0005603621 dreamy_noyce[292582]: --> relative data size: 1.0
Jan 31 03:07:05 np0005603621 dreamy_noyce[292582]: --> All data devices are unavailable
Jan 31 03:07:05 np0005603621 systemd[1]: libpod-97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18.scope: Deactivated successfully.
Jan 31 03:07:05 np0005603621 podman[292566]: 2026-01-31 08:07:05.505320104 +0000 UTC m=+1.398724839 container died 97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 89 op/s
Jan 31 03:07:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0add4d2bcf7f51d29f21dd7e0fbaa51e8db7d6c27d68c40ee86f6ac836f17df6-merged.mount: Deactivated successfully.
Jan 31 03:07:06 np0005603621 podman[292566]: 2026-01-31 08:07:06.081040636 +0000 UTC m=+1.974445401 container remove 97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:07:06 np0005603621 systemd[1]: libpod-conmon-97b6337d9c5f58bd0a3bbeae5aa4da3c5dfec9960984f168d71d1d9b0a8eef18.scope: Deactivated successfully.
Jan 31 03:07:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:06.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:06 np0005603621 podman[292749]: 2026-01-31 08:07:06.788002869 +0000 UTC m=+0.112571694 container create 8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:07:06 np0005603621 podman[292749]: 2026-01-31 08:07:06.706644548 +0000 UTC m=+0.031213403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:06 np0005603621 systemd[1]: Started libpod-conmon-8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864.scope.
Jan 31 03:07:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:07 np0005603621 podman[292749]: 2026-01-31 08:07:07.047549039 +0000 UTC m=+0.372117874 container init 8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:07:07 np0005603621 podman[292749]: 2026-01-31 08:07:07.057241605 +0000 UTC m=+0.381810440 container start 8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:07:07 np0005603621 recursing_galileo[292766]: 167 167
Jan 31 03:07:07 np0005603621 systemd[1]: libpod-8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864.scope: Deactivated successfully.
Jan 31 03:07:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:07.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:07 np0005603621 podman[292749]: 2026-01-31 08:07:07.301143492 +0000 UTC m=+0.625712357 container attach 8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:07:07 np0005603621 podman[292749]: 2026-01-31 08:07:07.302779163 +0000 UTC m=+0.627348028 container died 8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 711 KiB/s rd, 14 KiB/s wr, 40 op/s
Jan 31 03:07:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e662fe340985c04bdf1ea7e38b387d31329f4e45a51c19654690d8b8faf97376-merged.mount: Deactivated successfully.
Jan 31 03:07:08 np0005603621 podman[292749]: 2026-01-31 08:07:08.065561963 +0000 UTC m=+1.390130778 container remove 8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_galileo, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:07:08 np0005603621 systemd[1]: libpod-conmon-8d8e149c5914da7206b270c29e80ef69cf6bc6e47e0383f9857d1408640c6864.scope: Deactivated successfully.
Jan 31 03:07:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:08 np0005603621 nova_compute[247399]: 2026-01-31 08:07:08.266 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:08 np0005603621 podman[292791]: 2026-01-31 08:07:08.196366411 +0000 UTC m=+0.026049131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:08 np0005603621 podman[292791]: 2026-01-31 08:07:08.309936336 +0000 UTC m=+0.139619046 container create 21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:08 np0005603621 systemd[1]: Started libpod-conmon-21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25.scope.
Jan 31 03:07:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb9a1226d7ffb917cc620d816e7edcaca911223632464e169fd6c6ec1deedc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb9a1226d7ffb917cc620d816e7edcaca911223632464e169fd6c6ec1deedc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb9a1226d7ffb917cc620d816e7edcaca911223632464e169fd6c6ec1deedc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1cb9a1226d7ffb917cc620d816e7edcaca911223632464e169fd6c6ec1deedc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:08 np0005603621 podman[292791]: 2026-01-31 08:07:08.581242876 +0000 UTC m=+0.410925596 container init 21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_meitner, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:08 np0005603621 podman[292791]: 2026-01-31 08:07:08.590113115 +0000 UTC m=+0.419795805 container start 21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_meitner, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:07:08 np0005603621 podman[292791]: 2026-01-31 08:07:08.674530232 +0000 UTC m=+0.504212922 container attach 21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:07:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:08.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:09.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]: {
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:    "0": [
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:        {
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "devices": [
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "/dev/loop3"
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            ],
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "lv_name": "ceph_lv0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "lv_size": "7511998464",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "name": "ceph_lv0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "tags": {
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.cluster_name": "ceph",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.crush_device_class": "",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.encrypted": "0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.osd_id": "0",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.type": "block",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:                "ceph.vdo": "0"
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            },
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "type": "block",
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:            "vg_name": "ceph_vg0"
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:        }
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]:    ]
Jan 31 03:07:09 np0005603621 stoic_meitner[292807]: }
Jan 31 03:07:09 np0005603621 systemd[1]: libpod-21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25.scope: Deactivated successfully.
Jan 31 03:07:09 np0005603621 podman[292791]: 2026-01-31 08:07:09.343418627 +0000 UTC m=+1.173101367 container died 21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c1cb9a1226d7ffb917cc620d816e7edcaca911223632464e169fd6c6ec1deedc-merged.mount: Deactivated successfully.
Jan 31 03:07:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 113 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Jan 31 03:07:09 np0005603621 podman[292791]: 2026-01-31 08:07:09.86452287 +0000 UTC m=+1.694205580 container remove 21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:07:09 np0005603621 systemd[1]: libpod-conmon-21470a3f8be8cc273a60d40f77a0072dac51a2027fbafa9523f0790742a09e25.scope: Deactivated successfully.
Jan 31 03:07:10 np0005603621 nova_compute[247399]: 2026-01-31 08:07:10.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:10 np0005603621 podman[292970]: 2026-01-31 08:07:10.412600732 +0000 UTC m=+0.022395026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:10 np0005603621 podman[292970]: 2026-01-31 08:07:10.665562025 +0000 UTC m=+0.275356289 container create 32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:07:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:10.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:10 np0005603621 systemd[1]: Started libpod-conmon-32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39.scope.
Jan 31 03:07:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:10 np0005603621 podman[292970]: 2026-01-31 08:07:10.950810794 +0000 UTC m=+0.560605078 container init 32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:10 np0005603621 podman[292970]: 2026-01-31 08:07:10.956641757 +0000 UTC m=+0.566436011 container start 32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:07:10 np0005603621 keen_zhukovsky[292986]: 167 167
Jan 31 03:07:10 np0005603621 systemd[1]: libpod-32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39.scope: Deactivated successfully.
Jan 31 03:07:10 np0005603621 conmon[292986]: conmon 32d36be2b81feeb258a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39.scope/container/memory.events
Jan 31 03:07:11 np0005603621 podman[292970]: 2026-01-31 08:07:11.100422882 +0000 UTC m=+0.710217136 container attach 32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:07:11 np0005603621 podman[292970]: 2026-01-31 08:07:11.100958219 +0000 UTC m=+0.710752473 container died 32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_zhukovsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:07:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:11.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a649eddde0707661b2dc70ff8d3ee154fa7d963305e9749ec2e674ff65ded761-merged.mount: Deactivated successfully.
Jan 31 03:07:11 np0005603621 podman[292970]: 2026-01-31 08:07:11.471853205 +0000 UTC m=+1.081647469 container remove 32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:07:11 np0005603621 systemd[1]: libpod-conmon-32d36be2b81feeb258a6ca9d3fe272ea4af748abb1915adeeb35fda312a78d39.scope: Deactivated successfully.
Jan 31 03:07:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 113 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Jan 31 03:07:11 np0005603621 podman[293010]: 2026-01-31 08:07:11.619390279 +0000 UTC m=+0.070542592 container create 9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:07:11 np0005603621 podman[293010]: 2026-01-31 08:07:11.570566591 +0000 UTC m=+0.021718914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:07:11 np0005603621 systemd[1]: Started libpod-conmon-9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50.scope.
Jan 31 03:07:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e02bd32052e80061d5700508b50ea4e562a1abb5c7fc8f28f37f6cb202f5c6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e02bd32052e80061d5700508b50ea4e562a1abb5c7fc8f28f37f6cb202f5c6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e02bd32052e80061d5700508b50ea4e562a1abb5c7fc8f28f37f6cb202f5c6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e02bd32052e80061d5700508b50ea4e562a1abb5c7fc8f28f37f6cb202f5c6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:11 np0005603621 podman[293010]: 2026-01-31 08:07:11.971800541 +0000 UTC m=+0.422952864 container init 9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:07:11 np0005603621 podman[293010]: 2026-01-31 08:07:11.977283274 +0000 UTC m=+0.428435587 container start 9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:12 np0005603621 podman[293010]: 2026-01-31 08:07:12.086865463 +0000 UTC m=+0.538018266 container attach 9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:07:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:12.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]: {
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:        "osd_id": 0,
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:        "type": "bluestore"
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]:    }
Jan 31 03:07:12 np0005603621 suspicious_bartik[293028]: }
Jan 31 03:07:12 np0005603621 systemd[1]: libpod-9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50.scope: Deactivated successfully.
Jan 31 03:07:12 np0005603621 podman[293049]: 2026-01-31 08:07:12.871839272 +0000 UTC m=+0.027351242 container died 9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:07:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:13.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8e02bd32052e80061d5700508b50ea4e562a1abb5c7fc8f28f37f6cb202f5c6a-merged.mount: Deactivated successfully.
Jan 31 03:07:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:13 np0005603621 nova_compute[247399]: 2026-01-31 08:07:13.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:13 np0005603621 podman[293049]: 2026-01-31 08:07:13.446990956 +0000 UTC m=+0.602502896 container remove 9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bartik, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 03:07:13 np0005603621 systemd[1]: libpod-conmon-9101ed0c200e96d66d432ef2894dc1ecb13448753415961f4996040f92739a50.scope: Deactivated successfully.
Jan 31 03:07:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:07:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:07:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 87 MiB data, 667 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.5 KiB/s wr, 28 op/s
Jan 31 03:07:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6f1e076d-82ac-4ccd-b347-17feb40fa393 does not exist
Jan 31 03:07:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8301251b-d9f7-45bd-ac98-6091bc890c9d does not exist
Jan 31 03:07:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 32f29d44-bee2-4796-a8b7-0b1d0d5cd666 does not exist
Jan 31 03:07:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:07:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:14.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:15.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:15 np0005603621 nova_compute[247399]: 2026-01-31 08:07:15.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4158518614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 41 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 5.1 KiB/s wr, 41 op/s
Jan 31 03:07:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:16.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:17.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 56 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 547 KiB/s wr, 37 op/s
Jan 31 03:07:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:18 np0005603621 nova_compute[247399]: 2026-01-31 08:07:18.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:18.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:19.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 88 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 03:07:20 np0005603621 nova_compute[247399]: 2026-01-31 08:07:20.297 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:20.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 88 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Jan 31 03:07:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:22.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:23.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:23 np0005603621 nova_compute[247399]: 2026-01-31 08:07:23.319 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 88 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Jan 31 03:07:24 np0005603621 nova_compute[247399]: 2026-01-31 08:07:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:24 np0005603621 nova_compute[247399]: 2026-01-31 08:07:24.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:07:24 np0005603621 nova_compute[247399]: 2026-01-31 08:07:24.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:07:24 np0005603621 nova_compute[247399]: 2026-01-31 08:07:24.391 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:07:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:24.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:25.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:25 np0005603621 nova_compute[247399]: 2026-01-31 08:07:25.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:25 np0005603621 nova_compute[247399]: 2026-01-31 08:07:25.301 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 88 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 03:07:26 np0005603621 nova_compute[247399]: 2026-01-31 08:07:26.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:26.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 88 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 89 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 03:07:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:28 np0005603621 nova_compute[247399]: 2026-01-31 08:07:28.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:28.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 88 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 83 op/s
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.246 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.246 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.247 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.247 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.247 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.305 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:30.488 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:30.489 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:30.489 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116374053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.678 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:30.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.880 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.882 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4506MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.883 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:30 np0005603621 nova_compute[247399]: 2026-01-31 08:07:30.883 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.173 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.174 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.175 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:07:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:31.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.225 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.346 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.347 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.383 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.502 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 88 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 57 op/s
Jan 31 03:07:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150623082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.699 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.706 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.722 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.785 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.786 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.787 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.794 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.795 247403 INFO nova.compute.claims [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:07:31 np0005603621 nova_compute[247399]: 2026-01-31 08:07:31.931 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:07:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071396587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.483 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.491 247403 DEBUG nova.compute.provider_tree [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.510 247403 DEBUG nova.scheduler.client.report [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:07:32 np0005603621 podman[293239]: 2026-01-31 08:07:32.51254408 +0000 UTC m=+0.059770803 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.533 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.534 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:07:32 np0005603621 podman[293240]: 2026-01-31 08:07:32.544775715 +0000 UTC m=+0.092499323 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.582 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.583 247403 DEBUG nova.network.neutron [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.609 247403 INFO nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.625 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:07:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:32.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.737 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.738 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.738 247403 INFO nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Creating image(s)#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.764 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.792 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.824 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.828 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.847 247403 DEBUG nova.policy [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f189bb192c164df8b0af4c5f50a1285f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.850 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.850 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.850 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.885 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.886 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.887 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.887 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.916 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:32 np0005603621 nova_compute[247399]: 2026-01-31 08:07:32.920 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:33 np0005603621 nova_compute[247399]: 2026-01-31 08:07:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:07:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:33 np0005603621 nova_compute[247399]: 2026-01-31 08:07:33.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 88 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:07:33 np0005603621 nova_compute[247399]: 2026-01-31 08:07:33.652 247403 DEBUG nova.network.neutron [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Successfully created port: 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.111 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.201 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] resizing rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.395 247403 DEBUG nova.objects.instance [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lazy-loading 'migration_context' on Instance uuid 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.415 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.416 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Ensure instance console log exists: /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.416 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.416 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:34 np0005603621 nova_compute[247399]: 2026-01-31 08:07:34.417 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:34.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:35.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:35 np0005603621 nova_compute[247399]: 2026-01-31 08:07:35.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 171 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 93 op/s
Jan 31 03:07:35 np0005603621 nova_compute[247399]: 2026-01-31 08:07:35.661 247403 DEBUG nova.network.neutron [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Successfully updated port: 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:07:35 np0005603621 nova_compute[247399]: 2026-01-31 08:07:35.682 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:35 np0005603621 nova_compute[247399]: 2026-01-31 08:07:35.683 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquired lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:35 np0005603621 nova_compute[247399]: 2026-01-31 08:07:35.684 247403 DEBUG nova.network.neutron [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:07:35 np0005603621 nova_compute[247399]: 2026-01-31 08:07:35.938 247403 DEBUG nova.network.neutron [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:07:36 np0005603621 nova_compute[247399]: 2026-01-31 08:07:36.682 247403 DEBUG nova.compute.manager [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-changed-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:36 np0005603621 nova_compute[247399]: 2026-01-31 08:07:36.683 247403 DEBUG nova.compute.manager [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Refreshing instance network info cache due to event network-changed-0648c44b-0b30-42b4-b493-6c55e4ec6ad5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:07:36 np0005603621 nova_compute[247399]: 2026-01-31 08:07:36.683 247403 DEBUG oslo_concurrency.lockutils [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:36.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 180 MiB data, 676 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 106 op/s
Jan 31 03:07:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:38 np0005603621 nova_compute[247399]: 2026-01-31 08:07:38.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:07:38
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root']
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:07:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:07:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:07:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:38.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.047 247403 DEBUG nova.network.neutron [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Updating instance_info_cache with network_info: [{"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.103 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Releasing lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.104 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance network_info: |[{"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.104 247403 DEBUG oslo_concurrency.lockutils [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.104 247403 DEBUG nova.network.neutron [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Refreshing network info cache for port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.107 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Start _get_guest_xml network_info=[{"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.112 247403 WARNING nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.118 247403 DEBUG nova.virt.libvirt.host [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.119 247403 DEBUG nova.virt.libvirt.host [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.125 247403 DEBUG nova.virt.libvirt.host [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.126 247403 DEBUG nova.virt.libvirt.host [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.128 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.128 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.129 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.129 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.130 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.130 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.131 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.131 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.132 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.132 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.132 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.133 247403 DEBUG nova.virt.hardware [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.137 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2137206006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 189 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 133 op/s
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.593 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.619 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:39 np0005603621 nova_compute[247399]: 2026-01-31 08:07:39.622 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190679369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.070 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.071 247403 DEBUG nova.virt.libvirt.vif [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-701828883',display_name='tempest-InstanceActionsTestJSON-server-701828883',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-701828883',id=71,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ccb16926f2c74ec3b393103a33e7fa3b',ramdisk_id='',reservation_id='r-0ijhyx8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-591935846',owner_user_name='tempest-InstanceActions
TestJSON-591935846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:32Z,user_data=None,user_id='f189bb192c164df8b0af4c5f50a1285f',uuid=0e67bd24-05c7-4caa-b0be-3e08f04a40f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.072 247403 DEBUG nova.network.os_vif_util [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converting VIF {"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.072 247403 DEBUG nova.network.os_vif_util [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.073 247403 DEBUG nova.objects.instance [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.112 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <uuid>0e67bd24-05c7-4caa-b0be-3e08f04a40f3</uuid>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <name>instance-00000047</name>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:name>tempest-InstanceActionsTestJSON-server-701828883</nova:name>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:07:39</nova:creationTime>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:user uuid="f189bb192c164df8b0af4c5f50a1285f">tempest-InstanceActionsTestJSON-591935846-project-member</nova:user>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:project uuid="ccb16926f2c74ec3b393103a33e7fa3b">tempest-InstanceActionsTestJSON-591935846</nova:project>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <nova:port uuid="0648c44b-0b30-42b4-b493-6c55e4ec6ad5">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <entry name="serial">0e67bd24-05c7-4caa-b0be-3e08f04a40f3</entry>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <entry name="uuid">0e67bd24-05c7-4caa-b0be-3e08f04a40f3</entry>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1d:f3:a0"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <target dev="tap0648c44b-0b"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/console.log" append="off"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:07:40 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:07:40 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:07:40 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:07:40 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.114 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Preparing to wait for external event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.114 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.115 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.115 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.117 247403 DEBUG nova.virt.libvirt.vif [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-701828883',display_name='tempest-InstanceActionsTestJSON-server-701828883',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-701828883',id=71,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ccb16926f2c74ec3b393103a33e7fa3b',ramdisk_id='',reservation_id='r-0ijhyx8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-591935846',owner_user_name='tempest-Insta
nceActionsTestJSON-591935846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:07:32Z,user_data=None,user_id='f189bb192c164df8b0af4c5f50a1285f',uuid=0e67bd24-05c7-4caa-b0be-3e08f04a40f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.117 247403 DEBUG nova.network.os_vif_util [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converting VIF {"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.118 247403 DEBUG nova.network.os_vif_util [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.118 247403 DEBUG os_vif [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.120 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.120 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.121 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.127 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.127 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0648c44b-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.128 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0648c44b-0b, col_values=(('external_ids', {'iface-id': '0648c44b-0b30-42b4-b493-6c55e4ec6ad5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:f3:a0', 'vm-uuid': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:40 np0005603621 NetworkManager[49013]: <info>  [1769846860.1305] manager: (tap0648c44b-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.135 247403 INFO os_vif [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b')#033[00m
Jan 31 03:07:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:40.199 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:40.200 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.313 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.314 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.314 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] No VIF found with MAC fa:16:3e:1d:f3:a0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.314 247403 INFO nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Using config drive#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.334 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:40.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.854 247403 INFO nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Creating config drive at /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/disk.config#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.857 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyitn3pop execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:40 np0005603621 nova_compute[247399]: 2026-01-31 08:07:40.986 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyitn3pop" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.014 247403 DEBUG nova.storage.rbd_utils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] rbd image 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.019 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/disk.config 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 189 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 576 KiB/s rd, 4.4 MiB/s wr, 88 op/s
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.832 247403 DEBUG oslo_concurrency.processutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/disk.config 0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.833 247403 INFO nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Deleting local config drive /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/disk.config because it was imported into RBD.#033[00m
Jan 31 03:07:41 np0005603621 kernel: tap0648c44b-0b: entered promiscuous mode
Jan 31 03:07:41 np0005603621 NetworkManager[49013]: <info>  [1769846861.8788] manager: (tap0648c44b-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Jan 31 03:07:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:41Z|00181|binding|INFO|Claiming lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for this chassis.
Jan 31 03:07:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:41Z|00182|binding|INFO|0648c44b-0b30-42b4-b493-6c55e4ec6ad5: Claiming fa:16:3e:1d:f3:a0 10.100.0.13
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.879 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.882 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.888 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.891 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.904 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f3:a0 10.100.0.13'], port_security=['fa:16:3e:1d:f3:a0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52453c5c-f64d-4d7b-8377-026c50d81413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec99ddda-267e-4cee-9cf8-108290b2b3a7, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=0648c44b-0b30-42b4-b493-6c55e4ec6ad5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:41 np0005603621 systemd-udevd[293639]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.905 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 in datapath 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 bound to our chassis#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.907 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559#033[00m
Jan 31 03:07:41 np0005603621 systemd-machined[212769]: New machine qemu-30-instance-00000047.
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.915 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37b43226-5df9-44a8-90dd-8a111aa98f9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.915 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02611fa3-d1 in ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:07:41 np0005603621 NetworkManager[49013]: <info>  [1769846861.9180] device (tap0648c44b-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:07:41 np0005603621 NetworkManager[49013]: <info>  [1769846861.9189] device (tap0648c44b-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.918 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02611fa3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.918 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[33b88b31-e12b-44d4-b798-3c6caf84de0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.920 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2e9810-bada-476d-835b-77e06ca1afae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 systemd[1]: Started Virtual Machine qemu-30-instance-00000047.
Jan 31 03:07:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:41Z|00183|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 ovn-installed in OVS
Jan 31 03:07:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:41Z|00184|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 up in Southbound
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.930 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e2996c5c-6580-4313-a419-384fc5ce108d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 nova_compute[247399]: 2026-01-31 08:07:41.931 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.941 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5d7cd7-7a2d-4697-bba9-fa185d5d0158]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.963 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c7b149-5879-40ad-9370-b30d240ea665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 NetworkManager[49013]: <info>  [1769846861.9681] manager: (tap02611fa3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.967 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9919e58f-14b1-46d3-960f-3f0066d1d957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.996 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1676e96b-f9b6-4fc8-a2d3-48062ede5b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:41.998 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6965c454-bf0f-41a0-9204-a8ed324ac6cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 NetworkManager[49013]: <info>  [1769846862.0143] device (tap02611fa3-d0): carrier: link connected
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.022 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[20c442b3-3f2b-4fcd-922a-85dd862d64df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.035 247403 DEBUG nova.network.neutron [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Updated VIF entry in instance network info cache for port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.035 247403 DEBUG nova.network.neutron [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Updating instance_info_cache with network_info: [{"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.039 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbe0c5e-c552-4f43-9f16-5bf86d1ff982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02611fa3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:bd:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616752, 'reachable_time': 16363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293673, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.055 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[64eef678-c30e-4ba2-9262-d48a873e8174]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:bd89'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616752, 'tstamp': 616752}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293674, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.056 247403 DEBUG oslo_concurrency.lockutils [req-8f3184f6-36dd-486b-b5d6-fe570e8383ce req-420c411d-ffb2-4e19-91be-ece2a7044a6b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.076 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3f8a10-8776-4134-93bf-9197fd2198a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02611fa3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:bd:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616752, 'reachable_time': 16363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293675, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.107 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9d3b7b-5c68-4fab-b3c9-ba85f4857527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.169 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5b254107-7eb8-41ae-a412-b112c667c279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.171 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02611fa3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.171 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.171 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02611fa3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:42 np0005603621 kernel: tap02611fa3-d0: entered promiscuous mode
Jan 31 03:07:42 np0005603621 NetworkManager[49013]: <info>  [1769846862.1897] manager: (tap02611fa3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.189 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.192 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.192 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02611fa3-d0, col_values=(('external_ids', {'iface-id': '987a3dad-b7aa-4f25-8828-62be2b0a5c8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:42 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:42Z|00185|binding|INFO|Releasing lport 987a3dad-b7aa-4f25-8828-62be2b0a5c8f from this chassis (sb_readonly=0)
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.195 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.196 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c287bc1d-83ab-4a56-b1dc-3ed2643cc0f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.198 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.pid.haproxy
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:07:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:42.199 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'env', 'PROCESS_TAG=haproxy-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.501 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846862.4995964, 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.502 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] VM Started (Lifecycle Event)#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.527 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.531 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846862.4997847, 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.531 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.553 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.557 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:42 np0005603621 nova_compute[247399]: 2026-01-31 08:07:42.576 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:07:42 np0005603621 podman[293748]: 2026-01-31 08:07:42.489263891 +0000 UTC m=+0.020489865 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:07:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:42.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:42 np0005603621 podman[293748]: 2026-01-31 08:07:42.968458926 +0000 UTC m=+0.499684870 container create bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:43 np0005603621 systemd[1]: Started libpod-conmon-bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e.scope.
Jan 31 03:07:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bf2a0560b08cdfa01f923ed1714ec8b9f55f7e72e1151e67323b45e68eeb4fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:43 np0005603621 podman[293748]: 2026-01-31 08:07:43.29390779 +0000 UTC m=+0.825133754 container init bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:07:43 np0005603621 podman[293748]: 2026-01-31 08:07:43.299249638 +0000 UTC m=+0.830475582 container start bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:07:43 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [NOTICE]   (293769) : New worker (293771) forked
Jan 31 03:07:43 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [NOTICE]   (293769) : Loading success.
Jan 31 03:07:43 np0005603621 nova_compute[247399]: 2026-01-31 08:07:43.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.445033) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846863445065, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1936, "num_deletes": 265, "total_data_size": 3103457, "memory_usage": 3152064, "flush_reason": "Manual Compaction"}
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846863547722, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3051600, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34435, "largest_seqno": 36370, "table_properties": {"data_size": 3042963, "index_size": 5259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18871, "raw_average_key_size": 20, "raw_value_size": 3025228, "raw_average_value_size": 3302, "num_data_blocks": 228, "num_entries": 916, "num_filter_entries": 916, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846700, "oldest_key_time": 1769846700, "file_creation_time": 1769846863, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 102824 microseconds, and 5734 cpu microseconds.
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:07:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 194 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 747 KiB/s rd, 5.2 MiB/s wr, 100 op/s
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.547845) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3051600 bytes OK
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.547878) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.602332) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.602422) EVENT_LOG_v1 {"time_micros": 1769846863602403, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.602466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3095352, prev total WAL file size 3095352, number of live WAL files 2.
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.603689) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2980KB)], [74(8654KB)]
Jan 31 03:07:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846863603784, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11913338, "oldest_snapshot_seqno": -1}
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6435 keys, 11756357 bytes, temperature: kUnknown
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846864038114, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11756357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11710547, "index_size": 28612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16133, "raw_key_size": 164727, "raw_average_key_size": 25, "raw_value_size": 11592582, "raw_average_value_size": 1801, "num_data_blocks": 1152, "num_entries": 6435, "num_filter_entries": 6435, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846863, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.038614) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11756357 bytes
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.139081) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 27.4 rd, 27.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.5 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.8) write-amplify(3.9) OK, records in: 6979, records dropped: 544 output_compression: NoCompression
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.139175) EVENT_LOG_v1 {"time_micros": 1769846864139122, "job": 42, "event": "compaction_finished", "compaction_time_micros": 434529, "compaction_time_cpu_micros": 31130, "output_level": 6, "num_output_files": 1, "total_output_size": 11756357, "num_input_records": 6979, "num_output_records": 6435, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846864139978, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846864140943, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:43.603551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.141028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.141040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.141042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.141044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:07:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:07:44.141045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:07:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:44.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.546 247403 DEBUG nova.compute.manager [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.546 247403 DEBUG oslo_concurrency.lockutils [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.547 247403 DEBUG oslo_concurrency.lockutils [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.547 247403 DEBUG oslo_concurrency.lockutils [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.547 247403 DEBUG nova.compute.manager [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Processing event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.547 247403 DEBUG nova.compute.manager [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.547 247403 DEBUG oslo_concurrency.lockutils [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.548 247403 DEBUG oslo_concurrency.lockutils [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.548 247403 DEBUG oslo_concurrency.lockutils [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.548 247403 DEBUG nova.compute.manager [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.548 247403 WARNING nova.compute.manager [req-6fbe981e-14b1-4d33-b815-41e33397d1cc req-72fd9cff-68be-4080-a856-18581f4febd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.549 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.553 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846865.5530417, 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.553 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.555 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.560 247403 INFO nova.virt.libvirt.driver [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance spawned successfully.#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.561 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.579 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 210 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 198 op/s
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.585 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.589 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.589 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.590 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.590 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.590 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.591 247403 DEBUG nova.virt.libvirt.driver [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.613 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.649 247403 INFO nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Took 12.91 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.649 247403 DEBUG nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.719 247403 INFO nova.compute.manager [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Took 14.24 seconds to build instance.#033[00m
Jan 31 03:07:45 np0005603621 nova_compute[247399]: 2026-01-31 08:07:45.746 247403 DEBUG oslo_concurrency.lockutils [None req-1b9b292a-f14f-4275-be67-8f4be7df4773 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:46.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:47 np0005603621 nova_compute[247399]: 2026-01-31 08:07:47.070 247403 DEBUG oslo_concurrency.lockutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:47 np0005603621 nova_compute[247399]: 2026-01-31 08:07:47.070 247403 DEBUG oslo_concurrency.lockutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:47 np0005603621 nova_compute[247399]: 2026-01-31 08:07:47.070 247403 INFO nova.compute.manager [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Rebooting instance#033[00m
Jan 31 03:07:47 np0005603621 nova_compute[247399]: 2026-01-31 08:07:47.104 247403 DEBUG oslo_concurrency.lockutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:07:47 np0005603621 nova_compute[247399]: 2026-01-31 08:07:47.105 247403 DEBUG oslo_concurrency.lockutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquired lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:07:47 np0005603621 nova_compute[247399]: 2026-01-31 08:07:47.105 247403 DEBUG nova.network.neutron [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:07:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 214 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 187 op/s
Jan 31 03:07:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:48 np0005603621 nova_compute[247399]: 2026-01-31 08:07:48.359 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:48.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004153071491717848 of space, bias 1.0, pg target 1.2459214475153544 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:07:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:49.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 214 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 223 op/s
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.134 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:50.203 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.283 247403 DEBUG nova.network.neutron [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Updating instance_info_cache with network_info: [{"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.305 247403 DEBUG oslo_concurrency.lockutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Releasing lock "refresh_cache-0e67bd24-05c7-4caa-b0be-3e08f04a40f3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.308 247403 DEBUG nova.compute.manager [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:50 np0005603621 kernel: tap0648c44b-0b (unregistering): left promiscuous mode
Jan 31 03:07:50 np0005603621 NetworkManager[49013]: <info>  [1769846870.5517] device (tap0648c44b-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:07:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:50Z|00186|binding|INFO|Releasing lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 from this chassis (sb_readonly=0)
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.554 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:50Z|00187|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 down in Southbound
Jan 31 03:07:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:50Z|00188|binding|INFO|Removing iface tap0648c44b-0b ovn-installed in OVS
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.557 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.563 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:50.567 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f3:a0 10.100.0.13'], port_security=['fa:16:3e:1d:f3:a0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52453c5c-f64d-4d7b-8377-026c50d81413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec99ddda-267e-4cee-9cf8-108290b2b3a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=0648c44b-0b30-42b4-b493-6c55e4ec6ad5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:50.569 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 in datapath 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 unbound from our chassis#033[00m
Jan 31 03:07:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:50.572 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:07:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:50.572 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fc3740-6e44-4d5c-9edb-c7f2a1ba8f23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:50.573 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 namespace which is not needed anymore#033[00m
Jan 31 03:07:50 np0005603621 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000047.scope: Deactivated successfully.
Jan 31 03:07:50 np0005603621 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000047.scope: Consumed 5.491s CPU time.
Jan 31 03:07:50 np0005603621 systemd-machined[212769]: Machine qemu-30-instance-00000047 terminated.
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.720 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [NOTICE]   (293769) : haproxy version is 2.8.14-c23fe91
Jan 31 03:07:50 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [NOTICE]   (293769) : path to executable is /usr/sbin/haproxy
Jan 31 03:07:50 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [WARNING]  (293769) : Exiting Master process...
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [ALERT]    (293769) : Current worker (293771) exited with code 143 (Terminated)
Jan 31 03:07:50 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[293765]: [WARNING]  (293769) : All workers exited. Exiting... (0)
Jan 31 03:07:50 np0005603621 systemd[1]: libpod-bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e.scope: Deactivated successfully.
Jan 31 03:07:50 np0005603621 podman[293809]: 2026-01-31 08:07:50.738467534 +0000 UTC m=+0.083045735 container died bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.738 247403 INFO nova.virt.libvirt.driver [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance destroyed successfully.#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.739 247403 DEBUG nova.objects.instance [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lazy-loading 'resources' on Instance uuid 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:50.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.794 247403 DEBUG nova.virt.libvirt.vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-701828883',display_name='tempest-InstanceActionsTestJSON-server-701828883',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-701828883',id=71,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ccb16926f2c74ec3b393103a33e7fa3b',ramdisk_id='',reservation_id='r-0ijhyx8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-591935846',owner_user_name='tempest-InstanceActionsTestJSON-591935846-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:50Z,user_data=None,user_id='f189bb192c164df8b0af4c5f50a1285f',uuid=0e67bd24-05c7-4caa-b0be-3e08f04a40f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.795 247403 DEBUG nova.network.os_vif_util [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converting VIF {"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.796 247403 DEBUG nova.network.os_vif_util [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.796 247403 DEBUG os_vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.797 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.800 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0648c44b-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.853 247403 DEBUG nova.compute.manager [req-4daf0c15-7d1f-47d7-adb7-5456d6799a6e req-d054bdce-22e2-4247-b4e0-06098dbf1bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.853 247403 DEBUG oslo_concurrency.lockutils [req-4daf0c15-7d1f-47d7-adb7-5456d6799a6e req-d054bdce-22e2-4247-b4e0-06098dbf1bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.853 247403 DEBUG oslo_concurrency.lockutils [req-4daf0c15-7d1f-47d7-adb7-5456d6799a6e req-d054bdce-22e2-4247-b4e0-06098dbf1bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.854 247403 DEBUG oslo_concurrency.lockutils [req-4daf0c15-7d1f-47d7-adb7-5456d6799a6e req-d054bdce-22e2-4247-b4e0-06098dbf1bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.854 247403 DEBUG nova.compute.manager [req-4daf0c15-7d1f-47d7-adb7-5456d6799a6e req-d054bdce-22e2-4247-b4e0-06098dbf1bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.854 247403 WARNING nova.compute.manager [req-4daf0c15-7d1f-47d7-adb7-5456d6799a6e req-d054bdce-22e2-4247-b4e0-06098dbf1bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.854 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.858 247403 INFO os_vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b')#033[00m
Jan 31 03:07:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1bf2a0560b08cdfa01f923ed1714ec8b9f55f7e72e1151e67323b45e68eeb4fa-merged.mount: Deactivated successfully.
Jan 31 03:07:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.866 247403 DEBUG nova.virt.libvirt.driver [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Start _get_guest_xml network_info=[{"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.871 247403 WARNING nova.virt.libvirt.driver [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.877 247403 DEBUG nova.virt.libvirt.host [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.878 247403 DEBUG nova.virt.libvirt.host [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.881 247403 DEBUG nova.virt.libvirt.host [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.881 247403 DEBUG nova.virt.libvirt.host [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.882 247403 DEBUG nova.virt.libvirt.driver [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.882 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.883 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.883 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.883 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.884 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.884 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.884 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.884 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.884 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.885 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.885 247403 DEBUG nova.virt.hardware [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.885 247403 DEBUG nova.objects.instance [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:50 np0005603621 nova_compute[247399]: 2026-01-31 08:07:50.927 247403 DEBUG oslo_concurrency.processutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:51 np0005603621 podman[293809]: 2026-01-31 08:07:51.112622291 +0000 UTC m=+0.457200492 container cleanup bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:07:51 np0005603621 systemd[1]: libpod-conmon-bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e.scope: Deactivated successfully.
Jan 31 03:07:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1753775790' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.391 247403 DEBUG oslo_concurrency.processutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:51 np0005603621 podman[293870]: 2026-01-31 08:07:51.436977911 +0000 UTC m=+0.300920513 container remove bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.439 247403 DEBUG oslo_concurrency.processutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.441 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[444a5c57-3f37-4c47-8f50-4283e5243028]: (4, ('Sat Jan 31 08:07:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 (bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e)\nbde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e\nSat Jan 31 08:07:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 (bde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e)\nbde7339f243e9bb67762c6d383d8bf18bfa04146365a186247b4d30f47f24a0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.443 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80198da4-3a33-4519-8dd4-46edb24ced31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.444 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02611fa3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:51 np0005603621 kernel: tap02611fa3-d0: left promiscuous mode
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.451 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea04d94-4504-47ec-b602-8a307b956b18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.462 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.467 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e132c0b-c791-4893-afa4-b57e36151e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.469 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6c366938-53cf-4ead-afc0-84c3c87e99fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.487 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[248f521a-1d32-43f2-a622-1cfafd8ea29e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616747, 'reachable_time': 18693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293906, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.490 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:07:51 np0005603621 systemd[1]: run-netns-ovnmeta\x2d02611fa3\x2dd78b\x2d4e18\x2d9ba5\x2da1c1ebbc7559.mount: Deactivated successfully.
Jan 31 03:07:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:51.490 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb29eb3-733c-464e-9542-e056f0932f79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 214 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.3 MiB/s wr, 193 op/s
Jan 31 03:07:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:07:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527518832' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.910 247403 DEBUG oslo_concurrency.processutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.912 247403 DEBUG nova.virt.libvirt.vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-701828883',display_name='tempest-InstanceActionsTestJSON-server-701828883',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-701828883',id=71,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ccb16926f2c74ec3b393103a33e7fa3b',ramdisk_id='',reservation_id='r-0ijhyx8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-591935846',owner_user_name='tempest-InstanceActionsTestJSON-591935846-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:50Z,user_data=None,user_id='f189bb192c164df8b0af4c5f50a1285f',uuid=0e67bd24-05c7-4caa-b0be-3e08f04a40f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.913 247403 DEBUG nova.network.os_vif_util [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converting VIF {"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.913 247403 DEBUG nova.network.os_vif_util [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.915 247403 DEBUG nova.objects.instance [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.957 247403 DEBUG nova.virt.libvirt.driver [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <uuid>0e67bd24-05c7-4caa-b0be-3e08f04a40f3</uuid>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <name>instance-00000047</name>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:name>tempest-InstanceActionsTestJSON-server-701828883</nova:name>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:07:50</nova:creationTime>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:user uuid="f189bb192c164df8b0af4c5f50a1285f">tempest-InstanceActionsTestJSON-591935846-project-member</nova:user>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:project uuid="ccb16926f2c74ec3b393103a33e7fa3b">tempest-InstanceActionsTestJSON-591935846</nova:project>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <nova:port uuid="0648c44b-0b30-42b4-b493-6c55e4ec6ad5">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <entry name="serial">0e67bd24-05c7-4caa-b0be-3e08f04a40f3</entry>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <entry name="uuid">0e67bd24-05c7-4caa-b0be-3e08f04a40f3</entry>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0e67bd24-05c7-4caa-b0be-3e08f04a40f3_disk.config">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1d:f3:a0"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <target dev="tap0648c44b-0b"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3/console.log" append="off"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:07:51 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:07:51 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:07:51 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:07:51 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.958 247403 DEBUG nova.virt.libvirt.driver [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.958 247403 DEBUG nova.virt.libvirt.driver [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] skipping disk for instance-00000047 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.959 247403 DEBUG nova.virt.libvirt.vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-701828883',display_name='tempest-InstanceActionsTestJSON-server-701828883',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-701828883',id=71,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='ccb16926f2c74ec3b393103a33e7fa3b',ramdisk_id='',reservation_id='r-0ijhyx8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-591935846',owner_user_name='tempest-InstanceActionsTestJSON-591935846-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:50Z,user_data=None,user_id='f189bb192c164df8b0af4c5f50a1285f',uuid=0e67bd24-05c7-4caa-b0be-3e08f04a40f3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.959 247403 DEBUG nova.network.os_vif_util [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converting VIF {"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.960 247403 DEBUG nova.network.os_vif_util [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.960 247403 DEBUG os_vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.961 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.962 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.964 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0648c44b-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.965 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0648c44b-0b, col_values=(('external_ids', {'iface-id': '0648c44b-0b30-42b4-b493-6c55e4ec6ad5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:f3:a0', 'vm-uuid': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:51 np0005603621 NetworkManager[49013]: <info>  [1769846871.9668] manager: (tap0648c44b-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.969 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:51 np0005603621 nova_compute[247399]: 2026-01-31 08:07:51.972 247403 INFO os_vif [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b')#033[00m
Jan 31 03:07:52 np0005603621 systemd-udevd[293787]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:52 np0005603621 kernel: tap0648c44b-0b: entered promiscuous mode
Jan 31 03:07:52 np0005603621 NetworkManager[49013]: <info>  [1769846872.0517] manager: (tap0648c44b-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/100)
Jan 31 03:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:52Z|00189|binding|INFO|Claiming lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for this chassis.
Jan 31 03:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:52Z|00190|binding|INFO|0648c44b-0b30-42b4-b493-6c55e4ec6ad5: Claiming fa:16:3e:1d:f3:a0 10.100.0.13
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.053 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:52Z|00191|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 ovn-installed in OVS
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:52Z|00192|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 up in Southbound
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.064 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f3:a0 10.100.0.13'], port_security=['fa:16:3e:1d:f3:a0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '52453c5c-f64d-4d7b-8377-026c50d81413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec99ddda-267e-4cee-9cf8-108290b2b3a7, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=0648c44b-0b30-42b4-b493-6c55e4ec6ad5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.066 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 in datapath 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 bound to our chassis#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.068 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559#033[00m
Jan 31 03:07:52 np0005603621 NetworkManager[49013]: <info>  [1769846872.0711] device (tap0648c44b-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:07:52 np0005603621 NetworkManager[49013]: <info>  [1769846872.0719] device (tap0648c44b-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.081 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[703b387c-5b24-49f3-b022-354acb52b7a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.082 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02611fa3-d1 in ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.084 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02611fa3-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.084 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f713b67c-f795-4d50-966a-c2b0888fb42f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.085 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9d250c-9d47-4ca4-a8fc-097a87c06f5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 systemd-machined[212769]: New machine qemu-31-instance-00000047.
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.095 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4e49d99c-484c-4473-98c2-898d1669e89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 systemd[1]: Started Virtual Machine qemu-31-instance-00000047.
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.108 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[20ce4f33-6940-4d70-9de0-3e8435f18914]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.138 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9da8f313-b119-4df5-8091-bc509feaf3bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.143 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5aac8fbd-8206-4b76-8fcf-35fafb5fcdec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 NetworkManager[49013]: <info>  [1769846872.1439] manager: (tap02611fa3-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/101)
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.174 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6caa7654-1f6e-4ac0-a001-fa888b803b67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.177 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[eb70a3ee-023b-4e5a-8e30-1cabbfa6d04a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 NetworkManager[49013]: <info>  [1769846872.2019] device (tap02611fa3-d0): carrier: link connected
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.208 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2455c554-6119-4e6e-96fc-fd1c26df8d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.228 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8b32f2-8d60-4bdc-81de-2afa296817f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02611fa3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:bd:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617771, 'reachable_time': 40635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293973, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.242 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[98c992f6-fbe6-40bb-869b-0b557619cbbc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:bd89'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 617771, 'tstamp': 617771}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293974, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.258 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[231a071f-aedb-48cd-b9d6-ccdc301a849f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02611fa3-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:bd:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617771, 'reachable_time': 40635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293975, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.292 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4b451d57-2e1a-41bf-8ee2-1f8274345eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.345 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[be6c6af4-b291-4fc4-916b-d4bd54298b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.347 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02611fa3-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.347 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.348 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02611fa3-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.350 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 NetworkManager[49013]: <info>  [1769846872.3512] manager: (tap02611fa3-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 31 03:07:52 np0005603621 kernel: tap02611fa3-d0: entered promiscuous mode
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.353 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02611fa3-d0, col_values=(('external_ids', {'iface-id': '987a3dad-b7aa-4f25-8828-62be2b0a5c8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:52Z|00193|binding|INFO|Releasing lport 987a3dad-b7aa-4f25-8828-62be2b0a5c8f from this chassis (sb_readonly=0)
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.355 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.355 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.357 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[034dd893-3f2e-45bf-bf81-6b85303812e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.358 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.pid.haproxy
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:52.359 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'env', 'PROCESS_TAG=haproxy-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02611fa3-d78b-4e18-9ba5-a1c1ebbc7559.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.360 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.639 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.640 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846872.638628, 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.641 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.643 247403 DEBUG nova.compute.manager [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.648 247403 INFO nova.virt.libvirt.driver [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance rebooted successfully.#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.648 247403 DEBUG nova.compute.manager [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.685 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.689 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.716 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.717 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846872.640016, 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.718 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] VM Started (Lifecycle Event)#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.753 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.756 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:07:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:52.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:52 np0005603621 podman[294049]: 2026-01-31 08:07:52.760879374 +0000 UTC m=+0.062427396 container create c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.769 247403 DEBUG oslo_concurrency.lockutils [None req-93995fbc-6723-42f0-a203-4e650a431e4c f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:52 np0005603621 systemd[1]: Started libpod-conmon-c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20.scope.
Jan 31 03:07:52 np0005603621 podman[294049]: 2026-01-31 08:07:52.722554678 +0000 UTC m=+0.024102700 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:07:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:07:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e50596b2857ccc59abe0ed3ba368cc835d4200065c51e55940c231685dc9de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:07:52 np0005603621 podman[294049]: 2026-01-31 08:07:52.875680957 +0000 UTC m=+0.177228999 container init c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:07:52 np0005603621 podman[294049]: 2026-01-31 08:07:52.881204071 +0000 UTC m=+0.182752083 container start c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:07:52 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [NOTICE]   (294066) : New worker (294068) forked
Jan 31 03:07:52 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [NOTICE]   (294066) : Loading success.
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.971 247403 DEBUG nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.971 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.972 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.972 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.972 247403 DEBUG nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.972 247403 WARNING nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.972 247403 DEBUG nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.973 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.973 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.973 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.973 247403 DEBUG nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.974 247403 WARNING nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.974 247403 DEBUG nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.974 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.974 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.975 247403 DEBUG oslo_concurrency.lockutils [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.975 247403 DEBUG nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:52 np0005603621 nova_compute[247399]: 2026-01-31 08:07:52.975 247403 WARNING nova.compute.manager [req-103c876e-df6b-4f4c-85e3-6752120c3694 req-4e5ff6c6-bec5-4dbf-b5c4-a87163e20c87 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:07:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:53 np0005603621 nova_compute[247399]: 2026-01-31 08:07:53.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 242 MiB data, 740 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Jan 31 03:07:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:54.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.801 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.802 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.802 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.803 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.803 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.805 247403 INFO nova.compute.manager [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Terminating instance#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.806 247403 DEBUG nova.compute.manager [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:07:54 np0005603621 kernel: tap0648c44b-0b (unregistering): left promiscuous mode
Jan 31 03:07:54 np0005603621 NetworkManager[49013]: <info>  [1769846874.8520] device (tap0648c44b-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.852 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:54Z|00194|binding|INFO|Releasing lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 from this chassis (sb_readonly=0)
Jan 31 03:07:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:54Z|00195|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 down in Southbound
Jan 31 03:07:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:54Z|00196|binding|INFO|Removing iface tap0648c44b-0b ovn-installed in OVS
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.863 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.864 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603621 nova_compute[247399]: 2026-01-31 08:07:54.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:54.876 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f3:a0 10.100.0.13'], port_security=['fa:16:3e:1d:f3:a0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '52453c5c-f64d-4d7b-8377-026c50d81413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec99ddda-267e-4cee-9cf8-108290b2b3a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=0648c44b-0b30-42b4-b493-6c55e4ec6ad5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:54.878 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 in datapath 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 unbound from our chassis#033[00m
Jan 31 03:07:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:54.880 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:07:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:54.881 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c40c5c15-158e-4a9d-869c-8bbc7b9ddb08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:54.881 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 namespace which is not needed anymore#033[00m
Jan 31 03:07:54 np0005603621 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000047.scope: Deactivated successfully.
Jan 31 03:07:54 np0005603621 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000047.scope: Consumed 2.845s CPU time.
Jan 31 03:07:54 np0005603621 systemd-machined[212769]: Machine qemu-31-instance-00000047 terminated.
Jan 31 03:07:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 31 03:07:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 31 03:07:54 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 31 03:07:55 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [NOTICE]   (294066) : haproxy version is 2.8.14-c23fe91
Jan 31 03:07:55 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [NOTICE]   (294066) : path to executable is /usr/sbin/haproxy
Jan 31 03:07:55 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [WARNING]  (294066) : Exiting Master process...
Jan 31 03:07:55 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [ALERT]    (294066) : Current worker (294068) exited with code 143 (Terminated)
Jan 31 03:07:55 np0005603621 neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559[294062]: [WARNING]  (294066) : All workers exited. Exiting... (0)
Jan 31 03:07:55 np0005603621 systemd[1]: libpod-c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20.scope: Deactivated successfully.
Jan 31 03:07:55 np0005603621 podman[294103]: 2026-01-31 08:07:55.019614234 +0000 UTC m=+0.060003651 container died c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:07:55 np0005603621 kernel: tap0648c44b-0b: entered promiscuous mode
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00197|binding|INFO|Claiming lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for this chassis.
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00198|binding|INFO|0648c44b-0b30-42b4-b493-6c55e4ec6ad5: Claiming fa:16:3e:1d:f3:a0 10.100.0.13
Jan 31 03:07:55 np0005603621 NetworkManager[49013]: <info>  [1769846875.0296] manager: (tap0648c44b-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.030 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 systemd-udevd[294081]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:07:55 np0005603621 kernel: tap0648c44b-0b (unregistering): left promiscuous mode
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.039 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f3:a0 10.100.0.13'], port_security=['fa:16:3e:1d:f3:a0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '52453c5c-f64d-4d7b-8377-026c50d81413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec99ddda-267e-4cee-9cf8-108290b2b3a7, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=0648c44b-0b30-42b4-b493-6c55e4ec6ad5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00199|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 ovn-installed in OVS
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00200|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 up in Southbound
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00201|binding|INFO|Releasing lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 from this chassis (sb_readonly=1)
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.045 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00202|binding|INFO|Removing iface tap0648c44b-0b ovn-installed in OVS
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00203|if_status|INFO|Not setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 down as sb is readonly
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00204|binding|INFO|Releasing lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 from this chassis (sb_readonly=0)
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:07:55Z|00205|binding|INFO|Setting lport 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 down in Southbound
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.059 247403 INFO nova.virt.libvirt.driver [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Instance destroyed successfully.#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.060 247403 DEBUG nova.objects.instance [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lazy-loading 'resources' on Instance uuid 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.062 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:f3:a0 10.100.0.13'], port_security=['fa:16:3e:1d:f3:a0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0e67bd24-05c7-4caa-b0be-3e08f04a40f3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ccb16926f2c74ec3b393103a33e7fa3b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '52453c5c-f64d-4d7b-8377-026c50d81413', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec99ddda-267e-4cee-9cf8-108290b2b3a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=0648c44b-0b30-42b4-b493-6c55e4ec6ad5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:07:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20-userdata-shm.mount: Deactivated successfully.
Jan 31 03:07:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-95e50596b2857ccc59abe0ed3ba368cc835d4200065c51e55940c231685dc9de-merged.mount: Deactivated successfully.
Jan 31 03:07:55 np0005603621 podman[294103]: 2026-01-31 08:07:55.085196997 +0000 UTC m=+0.125586414 container cleanup c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:07:55 np0005603621 systemd[1]: libpod-conmon-c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20.scope: Deactivated successfully.
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.097 247403 DEBUG nova.virt.libvirt.vif [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:07:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-701828883',display_name='tempest-InstanceActionsTestJSON-server-701828883',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-701828883',id=71,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:07:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ccb16926f2c74ec3b393103a33e7fa3b',ramdisk_id='',reservation_id='r-0ijhyx8k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-591935846',owner_user_name='tempest-InstanceActionsTestJSON-591935846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:07:52Z,user_data=None,user_id='f189bb192c164df8b0af4c5f50a1285f',uuid=0e67bd24-05c7-4caa-b0be-3e08f04a40f3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.097 247403 DEBUG nova.network.os_vif_util [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converting VIF {"id": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "address": "fa:16:3e:1d:f3:a0", "network": {"id": "02611fa3-d78b-4e18-9ba5-a1c1ebbc7559", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1174245912-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ccb16926f2c74ec3b393103a33e7fa3b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0648c44b-0b", "ovs_interfaceid": "0648c44b-0b30-42b4-b493-6c55e4ec6ad5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.098 247403 DEBUG nova.network.os_vif_util [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.098 247403 DEBUG os_vif [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.102 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0648c44b-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.105 247403 DEBUG nova.compute.manager [req-5c801509-5627-4ce7-92cc-716c8765eb87 req-3de05933-dd07-40bc-90a0-43c6f9e30c66 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.105 247403 DEBUG oslo_concurrency.lockutils [req-5c801509-5627-4ce7-92cc-716c8765eb87 req-3de05933-dd07-40bc-90a0-43c6f9e30c66 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.105 247403 DEBUG oslo_concurrency.lockutils [req-5c801509-5627-4ce7-92cc-716c8765eb87 req-3de05933-dd07-40bc-90a0-43c6f9e30c66 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.105 247403 DEBUG oslo_concurrency.lockutils [req-5c801509-5627-4ce7-92cc-716c8765eb87 req-3de05933-dd07-40bc-90a0-43c6f9e30c66 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.105 247403 DEBUG nova.compute.manager [req-5c801509-5627-4ce7-92cc-716c8765eb87 req-3de05933-dd07-40bc-90a0-43c6f9e30c66 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.106 247403 DEBUG nova.compute.manager [req-5c801509-5627-4ce7-92cc-716c8765eb87 req-3de05933-dd07-40bc-90a0-43c6f9e30c66 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.110 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.112 247403 INFO os_vif [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:f3:a0,bridge_name='br-int',has_traffic_filtering=True,id=0648c44b-0b30-42b4-b493-6c55e4ec6ad5,network=Network(02611fa3-d78b-4e18-9ba5-a1c1ebbc7559),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0648c44b-0b')#033[00m
Jan 31 03:07:55 np0005603621 podman[294136]: 2026-01-31 08:07:55.165200536 +0000 UTC m=+0.058950426 container remove c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.170 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[760205a3-0941-49f6-90e6-12f2767d7336]: (4, ('Sat Jan 31 08:07:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 (c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20)\nc34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20\nSat Jan 31 08:07:55 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 (c34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20)\nc34d73930b2d16bf2ada65db6d21282ac97fc733afbd143850db4396ac0dfc20\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.172 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcdb084-336d-472a-8b0f-7a12355ef227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.173 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02611fa3-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.174 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 kernel: tap02611fa3-d0: left promiscuous mode
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.176 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.179 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0f73c1f7-796f-42b8-903f-81b76376e5f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.193 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6a56bf32-e54a-4acd-a9a9-039b50307cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.195 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8b714c6e-9f4d-47b2-8a0c-ec7920c24558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.209 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb19474-8d18-41e5-b0c2-080133b6f97d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 617764, 'reachable_time': 17998, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294167, 'error': None, 'target': 'ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:55.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:55 np0005603621 systemd[1]: run-netns-ovnmeta\x2d02611fa3\x2dd78b\x2d4e18\x2d9ba5\x2da1c1ebbc7559.mount: Deactivated successfully.
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.213 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.213 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d610fa-2165-463e-a3e8-e1830c1074d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.215 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 in datapath 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 unbound from our chassis#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.217 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.218 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5615ecf8-0455-45b7-bc52-7853b81cb746]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.220 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 0648c44b-0b30-42b4-b493-6c55e4ec6ad5 in datapath 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559 unbound from our chassis#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.221 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02611fa3-d78b-4e18-9ba5-a1c1ebbc7559, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:07:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:07:55.222 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dba64ff8-690a-46f0-8540-041ed161e1f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:07:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 330 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.3 MiB/s wr, 296 op/s
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.763 247403 INFO nova.virt.libvirt.driver [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Deleting instance files /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3_del#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.764 247403 INFO nova.virt.libvirt.driver [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Deletion of /var/lib/nova/instances/0e67bd24-05c7-4caa-b0be-3e08f04a40f3_del complete#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.859 247403 INFO nova.compute.manager [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.860 247403 DEBUG oslo.service.loopingcall [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.860 247403 DEBUG nova.compute.manager [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:07:55 np0005603621 nova_compute[247399]: 2026-01-31 08:07:55.860 247403 DEBUG nova.network.neutron [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:07:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 31 03:07:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 31 03:07:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 31 03:07:56 np0005603621 nova_compute[247399]: 2026-01-31 08:07:56.576 247403 DEBUG nova.network.neutron [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:07:56 np0005603621 nova_compute[247399]: 2026-01-31 08:07:56.606 247403 INFO nova.compute.manager [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Took 0.75 seconds to deallocate network for instance.#033[00m
Jan 31 03:07:56 np0005603621 nova_compute[247399]: 2026-01-31 08:07:56.689 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:56 np0005603621 nova_compute[247399]: 2026-01-31 08:07:56.690 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:56 np0005603621 nova_compute[247399]: 2026-01-31 08:07:56.743 247403 DEBUG oslo_concurrency.processutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:07:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:56.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:56 np0005603621 nova_compute[247399]: 2026-01-31 08:07:56.868 247403 DEBUG nova.compute.manager [req-d21f24a9-605a-49d6-95bf-e969d54ccb0f req-1eb01af3-00dd-4bb7-bb5d-9f3ce9fef567 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-deleted-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.203 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.203 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.204 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.204 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.204 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.204 247403 WARNING nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.204 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.204 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.205 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.205 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.205 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.205 247403 WARNING nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.205 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.205 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.206 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.206 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.206 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.206 247403 WARNING nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.206 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.206 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 WARNING nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-unplugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.207 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.208 247403 DEBUG oslo_concurrency.lockutils [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.208 247403 DEBUG nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] No waiting events found dispatching network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.208 247403 WARNING nova.compute.manager [req-1a40d832-b3ef-4321-ac34-1479341c1b73 req-ff0e2731-3238-42e1-bd93-53b7556bdb5c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Received unexpected event network-vif-plugged-0648c44b-0b30-42b4-b493-6c55e4ec6ad5 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:07:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:07:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:57.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.214 247403 DEBUG oslo_concurrency.processutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:07:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.219 247403 DEBUG nova.compute.provider_tree [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:07:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.239 247403 DEBUG nova.scheduler.client.report [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.274 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.332 247403 INFO nova.scheduler.client.report [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Deleted allocations for instance 0e67bd24-05c7-4caa-b0be-3e08f04a40f3#033[00m
Jan 31 03:07:57 np0005603621 nova_compute[247399]: 2026-01-31 08:07:57.462 247403 DEBUG oslo_concurrency.lockutils [None req-c7a3965c-7ef3-46a5-a6ae-ae6f836e6df9 f189bb192c164df8b0af4c5f50a1285f ccb16926f2c74ec3b393103a33e7fa3b - - default default] Lock "0e67bd24-05c7-4caa-b0be-3e08f04a40f3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:07:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 358 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 15 MiB/s wr, 514 op/s
Jan 31 03:07:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:07:58 np0005603621 nova_compute[247399]: 2026-01-31 08:07:58.364 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:07:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:07:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:07:58.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:07:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:07:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:07:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:07:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:07:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 17 MiB/s wr, 590 op/s
Jan 31 03:08:00 np0005603621 nova_compute[247399]: 2026-01-31 08:08:00.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:01.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:01 np0005603621 nova_compute[247399]: 2026-01-31 08:08:01.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.9 MiB/s wr, 216 op/s
Jan 31 03:08:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:03.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:03.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 31 03:08:03 np0005603621 nova_compute[247399]: 2026-01-31 08:08:03.364 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 31 03:08:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 31 03:08:03 np0005603621 podman[294246]: 2026-01-31 08:08:03.513482708 +0000 UTC m=+0.069191610 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:08:03 np0005603621 podman[294245]: 2026-01-31 08:08:03.522385867 +0000 UTC m=+0.077867382 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 03:08:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.9 MiB/s wr, 310 op/s
Jan 31 03:08:05 np0005603621 nova_compute[247399]: 2026-01-31 08:08:05.110 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:05.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.0 MiB/s wr, 241 op/s
Jan 31 03:08:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:08:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:08:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:07.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 372 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.4 MiB/s wr, 235 op/s
Jan 31 03:08:08 np0005603621 nova_compute[247399]: 2026-01-31 08:08:08.366 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:09.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 340 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 200 op/s
Jan 31 03:08:10 np0005603621 nova_compute[247399]: 2026-01-31 08:08:10.057 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846875.0569148, 0e67bd24-05c7-4caa-b0be-3e08f04a40f3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:10 np0005603621 nova_compute[247399]: 2026-01-31 08:08:10.058 247403 INFO nova.compute.manager [-] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:08:10 np0005603621 nova_compute[247399]: 2026-01-31 08:08:10.074 247403 DEBUG nova.compute.manager [None req-a7edcd8a-e946-41e0-907f-28baeecea279 - - - - - -] [instance: 0e67bd24-05c7-4caa-b0be-3e08f04a40f3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:10 np0005603621 nova_compute[247399]: 2026-01-31 08:08:10.113 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:11.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:11.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 340 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 33 KiB/s wr, 200 op/s
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.589229) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846892589294, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 550, "num_deletes": 252, "total_data_size": 589262, "memory_usage": 600936, "flush_reason": "Manual Compaction"}
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846892658016, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 582597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36371, "largest_seqno": 36920, "table_properties": {"data_size": 579587, "index_size": 982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7321, "raw_average_key_size": 19, "raw_value_size": 573451, "raw_average_value_size": 1525, "num_data_blocks": 43, "num_entries": 376, "num_filter_entries": 376, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846864, "oldest_key_time": 1769846864, "file_creation_time": 1769846892, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 68840 microseconds, and 3993 cpu microseconds.
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.658069) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 582597 bytes OK
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.658100) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.707283) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.707329) EVENT_LOG_v1 {"time_micros": 1769846892707321, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.707352) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 586168, prev total WAL file size 586168, number of live WAL files 2.
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.707966) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(568KB)], [77(11MB)]
Jan 31 03:08:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846892708014, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12338954, "oldest_snapshot_seqno": -1}
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6295 keys, 10405449 bytes, temperature: kUnknown
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846893112006, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 10405449, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361839, "index_size": 26796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 162592, "raw_average_key_size": 25, "raw_value_size": 10247534, "raw_average_value_size": 1627, "num_data_blocks": 1068, "num_entries": 6295, "num_filter_entries": 6295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769846892, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.112327) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10405449 bytes
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.132843) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.5 rd, 25.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(39.0) write-amplify(17.9) OK, records in: 6811, records dropped: 516 output_compression: NoCompression
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.132919) EVENT_LOG_v1 {"time_micros": 1769846893132904, "job": 44, "event": "compaction_finished", "compaction_time_micros": 404093, "compaction_time_cpu_micros": 16532, "output_level": 6, "num_output_files": 1, "total_output_size": 10405449, "num_input_records": 6811, "num_output_records": 6295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846893133490, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769846893135591, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:12.707797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.135628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.135632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.135634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.135635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:08:13.135637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:08:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:13.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:13.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:13 np0005603621 nova_compute[247399]: 2026-01-31 08:08:13.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 293 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 KiB/s wr, 135 op/s
Jan 31 03:08:15 np0005603621 podman[294466]: 2026-01-31 08:08:15.07110698 +0000 UTC m=+0.520655110 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:15 np0005603621 nova_compute[247399]: 2026-01-31 08:08:15.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:15 np0005603621 podman[294484]: 2026-01-31 08:08:15.292010473 +0000 UTC m=+0.092976347 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:15.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:15 np0005603621 podman[294466]: 2026-01-31 08:08:15.35002121 +0000 UTC m=+0.799569330 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:08:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 301 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 760 KiB/s wr, 118 op/s
Jan 31 03:08:16 np0005603621 podman[294620]: 2026-01-31 08:08:16.372187264 +0000 UTC m=+0.183229558 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:08:16 np0005603621 podman[294620]: 2026-01-31 08:08:16.432334118 +0000 UTC m=+0.243376382 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:08:17 np0005603621 podman[294685]: 2026-01-31 08:08:17.22059567 +0000 UTC m=+0.323035489 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, release=1793, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, architecture=x86_64)
Jan 31 03:08:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:17.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:17.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:17 np0005603621 podman[294705]: 2026-01-31 08:08:17.393031038 +0000 UTC m=+0.144220611 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, release=1793, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived)
Jan 31 03:08:17 np0005603621 podman[294685]: 2026-01-31 08:08:17.466272024 +0000 UTC m=+0.568711833 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 03:08:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 297 MiB data, 811 MiB used, 20 GiB / 21 GiB avail; 502 KiB/s rd, 4.8 MiB/s wr, 152 op/s
Jan 31 03:08:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:08:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:08:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.369 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 34e9410b-521f-4c59-a616-1410e29a8b88 does not exist
Jan 31 03:08:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8ee86dcb-a09a-4a74-9330-9598969ca432 does not exist
Jan 31 03:08:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f256906c-33f5-4a4c-8095-bab999712cb1 does not exist
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.671 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.672 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.730 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.889 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.890 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.899 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:08:18 np0005603621 nova_compute[247399]: 2026-01-31 08:08:18.900 247403 INFO nova.compute.claims [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.060 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.223124994 +0000 UTC m=+0.105891784 container create 6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.140563406 +0000 UTC m=+0.023330216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:08:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:19.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:19.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:19 np0005603621 systemd[1]: Started libpod-conmon-6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10.scope.
Jan 31 03:08:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.376826492 +0000 UTC m=+0.259593302 container init 6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.382040046 +0000 UTC m=+0.264806836 container start 6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:19 np0005603621 adoring_hodgkin[295027]: 167 167
Jan 31 03:08:19 np0005603621 systemd[1]: libpod-6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10.scope: Deactivated successfully.
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.477899923 +0000 UTC m=+0.360666713 container attach 6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.478372379 +0000 UTC m=+0.361139169 container died 6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310153952' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.506 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.511 247403 DEBUG nova.compute.provider_tree [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.543 247403 DEBUG nova.scheduler.client.report [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.583 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.584 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:08:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 262 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 738 KiB/s rd, 6.1 MiB/s wr, 193 op/s
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.656 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.657 247403 DEBUG nova.network.neutron [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:08:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-687a057e343a788fcb21fc38f2abf3e304c5aa68f32e93236e3c5da248abd859-merged.mount: Deactivated successfully.
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.746 247403 INFO nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.816 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.957 247403 DEBUG nova.policy [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '496d66c21c524cd193afd4289fccd421', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e1b9f4d9a424402a969a75d0e1a09aa4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:08:19 np0005603621 podman[294992]: 2026-01-31 08:08:19.958004416 +0000 UTC m=+0.840771216 container remove 6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.996 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.997 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:08:19 np0005603621 nova_compute[247399]: 2026-01-31 08:08:19.998 247403 INFO nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Creating image(s)#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.035 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:20 np0005603621 systemd[1]: libpod-conmon-6b48ec6f2e55bd6c1f6491953ff7bec66946053794d3464b9816ecb26b772a10.scope: Deactivated successfully.
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.064 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.093 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.097 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.120 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.153 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.154 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.154 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.155 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:20 np0005603621 podman[295072]: 2026-01-31 08:08:20.077705874 +0000 UTC m=+0.026146934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.186 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:20 np0005603621 nova_compute[247399]: 2026-01-31 08:08:20.190 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0442f276-6f69-4c83-85ea-e2053526fc53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:20 np0005603621 podman[295072]: 2026-01-31 08:08:20.191756454 +0000 UTC m=+0.140197494 container create 93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:08:20 np0005603621 systemd[1]: Started libpod-conmon-93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691.scope.
Jan 31 03:08:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb329b77c235dea2511ae1d99b69172c5d53e68f6eb0d1fd159a8e474fb1c86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb329b77c235dea2511ae1d99b69172c5d53e68f6eb0d1fd159a8e474fb1c86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb329b77c235dea2511ae1d99b69172c5d53e68f6eb0d1fd159a8e474fb1c86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb329b77c235dea2511ae1d99b69172c5d53e68f6eb0d1fd159a8e474fb1c86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb329b77c235dea2511ae1d99b69172c5d53e68f6eb0d1fd159a8e474fb1c86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:20 np0005603621 podman[295072]: 2026-01-31 08:08:20.406660018 +0000 UTC m=+0.355101088 container init 93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:08:20 np0005603621 podman[295072]: 2026-01-31 08:08:20.414838775 +0000 UTC m=+0.363279835 container start 93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:20 np0005603621 podman[295072]: 2026-01-31 08:08:20.536259538 +0000 UTC m=+0.484700608 container attach 93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:08:21 np0005603621 trusting_nobel[295161]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:08:21 np0005603621 trusting_nobel[295161]: --> relative data size: 1.0
Jan 31 03:08:21 np0005603621 trusting_nobel[295161]: --> All data devices are unavailable
Jan 31 03:08:21 np0005603621 systemd[1]: libpod-93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691.scope: Deactivated successfully.
Jan 31 03:08:21 np0005603621 podman[295072]: 2026-01-31 08:08:21.189277263 +0000 UTC m=+1.137718303 container died 93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:08:21 np0005603621 nova_compute[247399]: 2026-01-31 08:08:21.294 247403 DEBUG nova.network.neutron [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Successfully created port: a96fedd3-52a4-4d09-ac57-c314370a6310 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:08:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:21.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:21 np0005603621 nova_compute[247399]: 2026-01-31 08:08:21.422 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0442f276-6f69-4c83-85ea-e2053526fc53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7eb329b77c235dea2511ae1d99b69172c5d53e68f6eb0d1fd159a8e474fb1c86-merged.mount: Deactivated successfully.
Jan 31 03:08:21 np0005603621 nova_compute[247399]: 2026-01-31 08:08:21.502 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] resizing rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:08:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 262 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 5.3 MiB/s wr, 160 op/s
Jan 31 03:08:21 np0005603621 podman[295072]: 2026-01-31 08:08:21.856686571 +0000 UTC m=+1.805127651 container remove 93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:21 np0005603621 systemd[1]: libpod-conmon-93f3c6fab8c6d60a05ba083d4774a87b38fa07a74cc831d98844e6bf81108691.scope: Deactivated successfully.
Jan 31 03:08:21 np0005603621 nova_compute[247399]: 2026-01-31 08:08:21.963 247403 DEBUG nova.objects.instance [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lazy-loading 'migration_context' on Instance uuid 0442f276-6f69-4c83-85ea-e2053526fc53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.017 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.017 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Ensure instance console log exists: /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.018 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.018 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.018 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:22 np0005603621 podman[295455]: 2026-01-31 08:08:22.360329095 +0000 UTC m=+0.018366600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:08:22 np0005603621 podman[295455]: 2026-01-31 08:08:22.477926437 +0000 UTC m=+0.135963922 container create c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.488 247403 DEBUG nova.network.neutron [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Successfully updated port: a96fedd3-52a4-4d09-ac57-c314370a6310 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.548 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "refresh_cache-0442f276-6f69-4c83-85ea-e2053526fc53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.548 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquired lock "refresh_cache-0442f276-6f69-4c83-85ea-e2053526fc53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.549 247403 DEBUG nova.network.neutron [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:08:22 np0005603621 systemd[1]: Started libpod-conmon-c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d.scope.
Jan 31 03:08:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:22 np0005603621 podman[295455]: 2026-01-31 08:08:22.688803194 +0000 UTC m=+0.346840759 container init c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_germain, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:08:22 np0005603621 podman[295455]: 2026-01-31 08:08:22.695542477 +0000 UTC m=+0.353579962 container start c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:08:22 np0005603621 systemd[1]: libpod-c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d.scope: Deactivated successfully.
Jan 31 03:08:22 np0005603621 vigilant_germain[295472]: 167 167
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.748 247403 DEBUG nova.network.neutron [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:08:22 np0005603621 podman[295455]: 2026-01-31 08:08:22.768266016 +0000 UTC m=+0.426303521 container attach c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:08:22 np0005603621 podman[295455]: 2026-01-31 08:08:22.769571336 +0000 UTC m=+0.427608861 container died c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_germain, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.796 247403 DEBUG nova.compute.manager [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-changed-a96fedd3-52a4-4d09-ac57-c314370a6310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.797 247403 DEBUG nova.compute.manager [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Refreshing instance network info cache due to event network-changed-a96fedd3-52a4-4d09-ac57-c314370a6310. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:08:22 np0005603621 nova_compute[247399]: 2026-01-31 08:08:22.797 247403 DEBUG oslo_concurrency.lockutils [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0442f276-6f69-4c83-85ea-e2053526fc53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:08:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-92cfae0b483b2003f9398e07b09c4cb495b4c0b76d667bb8535c49b1d3117237-merged.mount: Deactivated successfully.
Jan 31 03:08:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:23.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:23.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:23 np0005603621 nova_compute[247399]: 2026-01-31 08:08:23.371 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:23 np0005603621 podman[295455]: 2026-01-31 08:08:23.590875939 +0000 UTC m=+1.248913424 container remove c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:08:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 295 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 706 KiB/s rd, 5.5 MiB/s wr, 173 op/s
Jan 31 03:08:23 np0005603621 systemd[1]: libpod-conmon-c783af9462531d2da170cd17e046d01534283aed869d906951f6629222805f4d.scope: Deactivated successfully.
Jan 31 03:08:23 np0005603621 podman[295498]: 2026-01-31 08:08:23.711153705 +0000 UTC m=+0.025262657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:08:23 np0005603621 podman[295498]: 2026-01-31 08:08:23.813856647 +0000 UTC m=+0.127965579 container create 9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:08:23 np0005603621 systemd[1]: Started libpod-conmon-9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7.scope.
Jan 31 03:08:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9877fe4ba6cf7fc082ed33aa6d88764e6927ad9ec214c541367ea00d655f1f13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9877fe4ba6cf7fc082ed33aa6d88764e6927ad9ec214c541367ea00d655f1f13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9877fe4ba6cf7fc082ed33aa6d88764e6927ad9ec214c541367ea00d655f1f13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9877fe4ba6cf7fc082ed33aa6d88764e6927ad9ec214c541367ea00d655f1f13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:24 np0005603621 podman[295498]: 2026-01-31 08:08:24.127259592 +0000 UTC m=+0.441368544 container init 9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:08:24 np0005603621 podman[295498]: 2026-01-31 08:08:24.133053804 +0000 UTC m=+0.447162736 container start 9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:24 np0005603621 podman[295498]: 2026-01-31 08:08:24.199313731 +0000 UTC m=+0.513422663 container attach 9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.202 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.202 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.250 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.250 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.691 247403 DEBUG nova.network.neutron [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Updating instance_info_cache with network_info: [{"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.774 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Releasing lock "refresh_cache-0442f276-6f69-4c83-85ea-e2053526fc53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.775 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Instance network_info: |[{"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.775 247403 DEBUG oslo_concurrency.lockutils [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0442f276-6f69-4c83-85ea-e2053526fc53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.776 247403 DEBUG nova.network.neutron [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Refreshing network info cache for port a96fedd3-52a4-4d09-ac57-c314370a6310 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.779 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Start _get_guest_xml network_info=[{"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.785 247403 WARNING nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.792 247403 DEBUG nova.virt.libvirt.host [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.794 247403 DEBUG nova.virt.libvirt.host [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.797 247403 DEBUG nova.virt.libvirt.host [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.797 247403 DEBUG nova.virt.libvirt.host [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.798 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.799 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.799 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.799 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.800 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.800 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.800 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.800 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.800 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.801 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.801 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.801 247403 DEBUG nova.virt.hardware [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:08:24 np0005603621 nova_compute[247399]: 2026-01-31 08:08:24.805 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]: {
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:    "0": [
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:        {
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "devices": [
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "/dev/loop3"
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            ],
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "lv_name": "ceph_lv0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "lv_size": "7511998464",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "name": "ceph_lv0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "tags": {
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.cluster_name": "ceph",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.crush_device_class": "",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.encrypted": "0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.osd_id": "0",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.type": "block",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:                "ceph.vdo": "0"
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            },
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "type": "block",
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:            "vg_name": "ceph_vg0"
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:        }
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]:    ]
Jan 31 03:08:24 np0005603621 sweet_volhard[295517]: }
Jan 31 03:08:24 np0005603621 systemd[1]: libpod-9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7.scope: Deactivated successfully.
Jan 31 03:08:24 np0005603621 podman[295498]: 2026-01-31 08:08:24.940997847 +0000 UTC m=+1.255106809 container died 9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.124 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:25.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:25.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9877fe4ba6cf7fc082ed33aa6d88764e6927ad9ec214c541367ea00d655f1f13-merged.mount: Deactivated successfully.
Jan 31 03:08:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:08:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079046342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.386 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.412 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.416 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 326 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 733 KiB/s rd, 6.5 MiB/s wr, 181 op/s
Jan 31 03:08:25 np0005603621 podman[295498]: 2026-01-31 08:08:25.656447847 +0000 UTC m=+1.970556789 container remove 9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_volhard, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:25 np0005603621 systemd[1]: libpod-conmon-9b0a10f7595bed5f1db163304ea0cdd7fccc26e22fe07ae1af21b0f4511395e7.scope: Deactivated successfully.
Jan 31 03:08:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:08:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003198972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.905 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.906 247403 DEBUG nova.virt.libvirt.vif [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1234589465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1234589465',id=74,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1b9f4d9a424402a969a75d0e1a09aa4',ramdisk_id='',reservation_id='r-d6yyt09i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1206598227',owner_user_name='tempest-InstanceActionsV221TestJSON-1206598227-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:08:19Z,user_data=None,user_id='496d66c21c524cd193afd4289fccd421',uuid=0442f276-6f69-4c83-85ea-e2053526fc53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.907 247403 DEBUG nova.network.os_vif_util [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Converting VIF {"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.907 247403 DEBUG nova.network.os_vif_util [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.908 247403 DEBUG nova.objects.instance [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0442f276-6f69-4c83-85ea-e2053526fc53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.944 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <uuid>0442f276-6f69-4c83-85ea-e2053526fc53</uuid>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <name>instance-0000004a</name>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1234589465</nova:name>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:08:24</nova:creationTime>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:user uuid="496d66c21c524cd193afd4289fccd421">tempest-InstanceActionsV221TestJSON-1206598227-project-member</nova:user>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:project uuid="e1b9f4d9a424402a969a75d0e1a09aa4">tempest-InstanceActionsV221TestJSON-1206598227</nova:project>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <nova:port uuid="a96fedd3-52a4-4d09-ac57-c314370a6310">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <entry name="serial">0442f276-6f69-4c83-85ea-e2053526fc53</entry>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <entry name="uuid">0442f276-6f69-4c83-85ea-e2053526fc53</entry>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0442f276-6f69-4c83-85ea-e2053526fc53_disk">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0442f276-6f69-4c83-85ea-e2053526fc53_disk.config">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:00:42:ca"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <target dev="tapa96fedd3-52"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/console.log" append="off"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:08:25 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:08:25 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:08:25 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:08:25 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.945 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Preparing to wait for external event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.946 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.946 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.946 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.947 247403 DEBUG nova.virt.libvirt.vif [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1234589465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1234589465',id=74,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e1b9f4d9a424402a969a75d0e1a09aa4',ramdisk_id='',reservation_id='r-d6yyt09i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-1206598227',owner_user_name='tempest-InstanceActionsV221TestJSON-12065982
27-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:08:19Z,user_data=None,user_id='496d66c21c524cd193afd4289fccd421',uuid=0442f276-6f69-4c83-85ea-e2053526fc53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.947 247403 DEBUG nova.network.os_vif_util [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Converting VIF {"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.948 247403 DEBUG nova.network.os_vif_util [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.949 247403 DEBUG os_vif [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.949 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.949 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.950 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.958 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.958 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa96fedd3-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:25 np0005603621 nova_compute[247399]: 2026-01-31 08:08:25.958 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa96fedd3-52, col_values=(('external_ids', {'iface-id': 'a96fedd3-52a4-4d09-ac57-c314370a6310', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:42:ca', 'vm-uuid': '0442f276-6f69-4c83-85ea-e2053526fc53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:26 np0005603621 NetworkManager[49013]: <info>  [1769846906.0023] manager: (tapa96fedd3-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.015 247403 INFO os_vif [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52')#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.168 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.170 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.170 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] No VIF found with MAC fa:16:3e:00:42:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.171 247403 INFO nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Using config drive#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.205 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.281097489 +0000 UTC m=+0.084662776 container create 2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.21981525 +0000 UTC m=+0.023380557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:08:26 np0005603621 systemd[1]: Started libpod-conmon-2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a.scope.
Jan 31 03:08:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.379780666 +0000 UTC m=+0.183346103 container init 2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_volhard, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.387418836 +0000 UTC m=+0.190984123 container start 2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:08:26 np0005603621 objective_volhard[295779]: 167 167
Jan 31 03:08:26 np0005603621 systemd[1]: libpod-2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a.scope: Deactivated successfully.
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.431873025 +0000 UTC m=+0.235438342 container attach 2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.432639389 +0000 UTC m=+0.236204686 container died 2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-09b28b8a10d7674f15007727ada2d77aed4c776ee650db3b190815938f5a6ede-merged.mount: Deactivated successfully.
Jan 31 03:08:26 np0005603621 podman[295752]: 2026-01-31 08:08:26.50541517 +0000 UTC m=+0.308980457 container remove 2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:08:26 np0005603621 systemd[1]: libpod-conmon-2e910b9fb92a606ffc4e806f0b380ace1e49650d0d133063c2895f89bf80990a.scope: Deactivated successfully.
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.697 247403 INFO nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Creating config drive at /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/disk.config#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.701 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpa61ib2e1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:26 np0005603621 podman[295804]: 2026-01-31 08:08:26.710468385 +0000 UTC m=+0.080183606 container create 748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:08:26 np0005603621 podman[295804]: 2026-01-31 08:08:26.665809509 +0000 UTC m=+0.035524700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:08:26 np0005603621 systemd[1]: Started libpod-conmon-748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6.scope.
Jan 31 03:08:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2634863f531cb99974cf8223f18efbcdc3f8f67f8e8dac86ce2a6be0d54352e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2634863f531cb99974cf8223f18efbcdc3f8f67f8e8dac86ce2a6be0d54352e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2634863f531cb99974cf8223f18efbcdc3f8f67f8e8dac86ce2a6be0d54352e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2634863f531cb99974cf8223f18efbcdc3f8f67f8e8dac86ce2a6be0d54352e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.837 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpa61ib2e1" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.872 247403 DEBUG nova.storage.rbd_utils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] rbd image 0442f276-6f69-4c83-85ea-e2053526fc53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:08:26 np0005603621 nova_compute[247399]: 2026-01-31 08:08:26.876 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/disk.config 0442f276-6f69-4c83-85ea-e2053526fc53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:26 np0005603621 podman[295804]: 2026-01-31 08:08:26.88314341 +0000 UTC m=+0.252858621 container init 748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:08:26 np0005603621 podman[295804]: 2026-01-31 08:08:26.892252717 +0000 UTC m=+0.261967918 container start 748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:08:26 np0005603621 podman[295804]: 2026-01-31 08:08:26.948646692 +0000 UTC m=+0.318361893 container attach 748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.137 247403 DEBUG oslo_concurrency.processutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/disk.config 0442f276-6f69-4c83-85ea-e2053526fc53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.138 247403 INFO nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Deleting local config drive /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53/disk.config because it was imported into RBD.#033[00m
Jan 31 03:08:27 np0005603621 virtqemud[247123]: End of file while reading data: Input/output error
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:27 np0005603621 kernel: tapa96fedd3-52: entered promiscuous mode
Jan 31 03:08:27 np0005603621 NetworkManager[49013]: <info>  [1769846907.2039] manager: (tapa96fedd3-52): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.208 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:27Z|00206|binding|INFO|Claiming lport a96fedd3-52a4-4d09-ac57-c314370a6310 for this chassis.
Jan 31 03:08:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:27Z|00207|binding|INFO|a96fedd3-52a4-4d09-ac57-c314370a6310: Claiming fa:16:3e:00:42:ca 10.100.0.6
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.214 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.229 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:42:ca 10.100.0.6'], port_security=['fa:16:3e:00:42:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0442f276-6f69-4c83-85ea-e2053526fc53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1b9f4d9a424402a969a75d0e1a09aa4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '99b9eaca-1328-4099-84ca-1bb798562a48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4772c6ba-d74a-4a03-878d-c116e093f252, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a96fedd3-52a4-4d09-ac57-c314370a6310) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.231 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a96fedd3-52a4-4d09-ac57-c314370a6310 in datapath 5520a690-6fdf-49fe-ad4a-fc586cc36ad6 bound to our chassis#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.239 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5520a690-6fdf-49fe-ad4a-fc586cc36ad6#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.248 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[565224c1-a938-4d5f-9ace-baf4728177a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.250 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5520a690-61 in ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:08:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:27Z|00208|binding|INFO|Setting lport a96fedd3-52a4-4d09-ac57-c314370a6310 ovn-installed in OVS
Jan 31 03:08:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:27Z|00209|binding|INFO|Setting lport a96fedd3-52a4-4d09-ac57-c314370a6310 up in Southbound
Jan 31 03:08:27 np0005603621 systemd-machined[212769]: New machine qemu-32-instance-0000004a.
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.254 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5520a690-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.254 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[75d55a87-8708-4e02-8551-dd8f514d627d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.254 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.257 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc92bd82-3d76-475e-9282-21a6cbb0fb5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 systemd[1]: Started Virtual Machine qemu-32-instance-0000004a.
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.272 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e0797b5c-40b2-4174-8d31-ea15de94a6ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 systemd-udevd[295881]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:08:27 np0005603621 NetworkManager[49013]: <info>  [1769846907.2992] device (tapa96fedd3-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:08:27 np0005603621 NetworkManager[49013]: <info>  [1769846907.3000] device (tapa96fedd3-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.301 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b1f659-a479-40aa-8c79-1b4500537e23]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:27.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:27.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.331 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[62ec1ba0-3235-405a-bafc-9e44c45b9638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 systemd-udevd[295887]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.336 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[879ae65b-6578-405e-8b5b-76ce327348eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 NetworkManager[49013]: <info>  [1769846907.3391] manager: (tap5520a690-60): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.374 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[30b275fe-14d7-4d7e-9b55-d42a402b8ef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.378 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8d396f-b751-494d-b6cc-fd5781b8f69d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 NetworkManager[49013]: <info>  [1769846907.4038] device (tap5520a690-60): carrier: link connected
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.406 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1140b2-8d30-48ac-add2-23a790170e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.423 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f467923d-95d3-4da2-92b3-bdfd10fe126c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5520a690-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fd:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621291, 'reachable_time': 44056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295911, 'error': None, 'target': 'ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.443 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[73710c6a-3e6a-458f-8241-261e09e90c4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:fdd7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 621291, 'tstamp': 621291}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295912, 'error': None, 'target': 'ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.459 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[330d5d35-2a85-4708-b025-b44966e1aee8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5520a690-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:fd:d7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621291, 'reachable_time': 44056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295913, 'error': None, 'target': 'ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.490 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5305152e-a1de-48bd-b77b-b659890f2010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.548 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[847b170c-7bea-4aac-b338-f582624a9551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.550 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5520a690-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.550 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.550 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5520a690-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.552 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 NetworkManager[49013]: <info>  [1769846907.5532] manager: (tap5520a690-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 31 03:08:27 np0005603621 kernel: tap5520a690-60: entered promiscuous mode
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.556 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.557 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5520a690-60, col_values=(('external_ids', {'iface-id': '5ed423b9-2397-4963-9a80-b961df10cff0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.558 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:27Z|00210|binding|INFO|Releasing lport 5ed423b9-2397-4963-9a80-b961df10cff0 from this chassis (sb_readonly=0)
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.561 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5520a690-6fdf-49fe-ad4a-fc586cc36ad6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5520a690-6fdf-49fe-ad4a-fc586cc36ad6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.562 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[04632ac2-0b80-4d38-922e-fe91ec178983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.562 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-5520a690-6fdf-49fe-ad4a-fc586cc36ad6
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/5520a690-6fdf-49fe-ad4a-fc586cc36ad6.pid.haproxy
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 5520a690-6fdf-49fe-ad4a-fc586cc36ad6
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:08:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:27.563 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'env', 'PROCESS_TAG=haproxy-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5520a690-6fdf-49fe-ad4a-fc586cc36ad6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.566 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 355 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]: {
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:        "osd_id": 0,
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:        "type": "bluestore"
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]:    }
Jan 31 03:08:27 np0005603621 xenodochial_liskov[295823]: }
Jan 31 03:08:27 np0005603621 systemd[1]: libpod-748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6.scope: Deactivated successfully.
Jan 31 03:08:27 np0005603621 podman[295804]: 2026-01-31 08:08:27.797686737 +0000 UTC m=+1.167401948 container died 748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.841 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846907.8408134, 0442f276-6f69-4c83-85ea-e2053526fc53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.842 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] VM Started (Lifecycle Event)#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.866 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.876 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846907.8419557, 0442f276-6f69-4c83-85ea-e2053526fc53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.877 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:08:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2634863f531cb99974cf8223f18efbcdc3f8f67f8e8dac86ce2a6be0d54352e1-merged.mount: Deactivated successfully.
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.904 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.909 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:08:27 np0005603621 nova_compute[247399]: 2026-01-31 08:08:27.931 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:08:27 np0005603621 podman[295804]: 2026-01-31 08:08:27.973055038 +0000 UTC m=+1.342770229 container remove 748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:08:28 np0005603621 systemd[1]: libpod-conmon-748fa33f62cd5d15b37168565855b6a9e1890d3d5ed026557c8473e3d7fff7c6.scope: Deactivated successfully.
Jan 31 03:08:28 np0005603621 podman[296018]: 2026-01-31 08:08:27.955259027 +0000 UTC m=+0.031811043 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:08:28 np0005603621 nova_compute[247399]: 2026-01-31 08:08:28.088 247403 DEBUG nova.network.neutron [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Updated VIF entry in instance network info cache for port a96fedd3-52a4-4d09-ac57-c314370a6310. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:08:28 np0005603621 nova_compute[247399]: 2026-01-31 08:08:28.089 247403 DEBUG nova.network.neutron [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Updating instance_info_cache with network_info: [{"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:08:28 np0005603621 podman[296018]: 2026-01-31 08:08:28.112731134 +0000 UTC m=+0.189283100 container create 85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:08:28 np0005603621 nova_compute[247399]: 2026-01-31 08:08:28.162 247403 DEBUG oslo_concurrency.lockutils [req-b5770d79-0f21-4daf-8385-9430534618cb req-88b4d248-8097-4fa5-a086-361a170241f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0442f276-6f69-4c83-85ea-e2053526fc53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:08:28 np0005603621 systemd[1]: Started libpod-conmon-85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac.scope.
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a0a21833-7c59-4c6a-b789-3a36441251c7 does not exist
Jan 31 03:08:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5280b720-063e-4245-8405-06244fdf1975 does not exist
Jan 31 03:08:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8f54478b-34cd-471c-9eed-33577da816ee does not exist
Jan 31 03:08:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:08:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a08d7fdc8ce021c6cef7fa3db8a8ed2e372159cda2441d38954320029dc0d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:08:28 np0005603621 podman[296018]: 2026-01-31 08:08:28.238019098 +0000 UTC m=+0.314571084 container init 85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:08:28 np0005603621 podman[296018]: 2026-01-31 08:08:28.24505487 +0000 UTC m=+0.321606836 container start 85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:08:28 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [NOTICE]   (296061) : New worker (296067) forked
Jan 31 03:08:28 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [NOTICE]   (296061) : Loading success.
Jan 31 03:08:28 np0005603621 nova_compute[247399]: 2026-01-31 08:08:28.373 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.038 247403 DEBUG nova.compute.manager [req-6809971e-7b82-4a34-b803-95624d031f02 req-d07dad16-a3f4-4703-b594-dcc2cf9ec4a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.039 247403 DEBUG oslo_concurrency.lockutils [req-6809971e-7b82-4a34-b803-95624d031f02 req-d07dad16-a3f4-4703-b594-dcc2cf9ec4a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.039 247403 DEBUG oslo_concurrency.lockutils [req-6809971e-7b82-4a34-b803-95624d031f02 req-d07dad16-a3f4-4703-b594-dcc2cf9ec4a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.040 247403 DEBUG oslo_concurrency.lockutils [req-6809971e-7b82-4a34-b803-95624d031f02 req-d07dad16-a3f4-4703-b594-dcc2cf9ec4a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.040 247403 DEBUG nova.compute.manager [req-6809971e-7b82-4a34-b803-95624d031f02 req-d07dad16-a3f4-4703-b594-dcc2cf9ec4a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Processing event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.042 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.048 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.048 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846909.0476878, 0442f276-6f69-4c83-85ea-e2053526fc53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.049 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.053 247403 INFO nova.virt.libvirt.driver [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Instance spawned successfully.#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.053 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.081 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.088 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.088 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.089 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.089 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.090 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.090 247403 DEBUG nova.virt.libvirt.driver [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.093 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.127 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.161 247403 INFO nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Took 9.17 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.162 247403 DEBUG nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.238 247403 INFO nova.compute.manager [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Took 10.39 seconds to build instance.#033[00m
Jan 31 03:08:29 np0005603621 nova_compute[247399]: 2026-01-31 08:08:29.277 247403 DEBUG oslo_concurrency.lockutils [None req-183bf09a-0ccc-4a82-a446-50ce87b7bd97 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:29.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:29.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 372 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 300 KiB/s rd, 4.1 MiB/s wr, 122 op/s
Jan 31 03:08:30 np0005603621 nova_compute[247399]: 2026-01-31 08:08:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:30.490 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:30.491 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:30.491 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.004 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.153 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.154 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.154 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.154 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.154 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.155 247403 INFO nova.compute.manager [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Terminating instance#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.157 247403 DEBUG nova.compute.manager [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:08:31 np0005603621 kernel: tapa96fedd3-52 (unregistering): left promiscuous mode
Jan 31 03:08:31 np0005603621 NetworkManager[49013]: <info>  [1769846911.1992] device (tapa96fedd3-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.209 247403 DEBUG nova.compute.manager [req-f5a66487-47b7-4bbc-bf3f-08a66c08d66a req-d424200c-bcf7-485d-aadf-b94adf335573 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.209 247403 DEBUG oslo_concurrency.lockutils [req-f5a66487-47b7-4bbc-bf3f-08a66c08d66a req-d424200c-bcf7-485d-aadf-b94adf335573 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.209 247403 DEBUG oslo_concurrency.lockutils [req-f5a66487-47b7-4bbc-bf3f-08a66c08d66a req-d424200c-bcf7-485d-aadf-b94adf335573 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.210 247403 DEBUG oslo_concurrency.lockutils [req-f5a66487-47b7-4bbc-bf3f-08a66c08d66a req-d424200c-bcf7-485d-aadf-b94adf335573 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.210 247403 DEBUG nova.compute.manager [req-f5a66487-47b7-4bbc-bf3f-08a66c08d66a req-d424200c-bcf7-485d-aadf-b94adf335573 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] No waiting events found dispatching network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.210 247403 WARNING nova.compute.manager [req-f5a66487-47b7-4bbc-bf3f-08a66c08d66a req-d424200c-bcf7-485d-aadf-b94adf335573 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received unexpected event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:08:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:31Z|00211|binding|INFO|Releasing lport a96fedd3-52a4-4d09-ac57-c314370a6310 from this chassis (sb_readonly=0)
Jan 31 03:08:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:31Z|00212|binding|INFO|Setting lport a96fedd3-52a4-4d09-ac57-c314370a6310 down in Southbound
Jan 31 03:08:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:08:31Z|00213|binding|INFO|Removing iface tapa96fedd3-52 ovn-installed in OVS
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.210 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.218 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:42:ca 10.100.0.6'], port_security=['fa:16:3e:00:42:ca 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0442f276-6f69-4c83-85ea-e2053526fc53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e1b9f4d9a424402a969a75d0e1a09aa4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '99b9eaca-1328-4099-84ca-1bb798562a48', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4772c6ba-d74a-4a03-878d-c116e093f252, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a96fedd3-52a4-4d09-ac57-c314370a6310) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.219 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a96fedd3-52a4-4d09-ac57-c314370a6310 in datapath 5520a690-6fdf-49fe-ad4a-fc586cc36ad6 unbound from our chassis#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.222 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5520a690-6fdf-49fe-ad4a-fc586cc36ad6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.223 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b16d208e-e5af-4b4d-a8da-9a6939d4b084]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.224 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6 namespace which is not needed anymore#033[00m
Jan 31 03:08:31 np0005603621 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Jan 31 03:08:31 np0005603621 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000004a.scope: Consumed 2.729s CPU time.
Jan 31 03:08:31 np0005603621 systemd-machined[212769]: Machine qemu-32-instance-0000004a terminated.
Jan 31 03:08:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:31.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:31.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:31 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [NOTICE]   (296061) : haproxy version is 2.8.14-c23fe91
Jan 31 03:08:31 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [NOTICE]   (296061) : path to executable is /usr/sbin/haproxy
Jan 31 03:08:31 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [WARNING]  (296061) : Exiting Master process...
Jan 31 03:08:31 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [ALERT]    (296061) : Current worker (296067) exited with code 143 (Terminated)
Jan 31 03:08:31 np0005603621 neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6[296032]: [WARNING]  (296061) : All workers exited. Exiting... (0)
Jan 31 03:08:31 np0005603621 systemd[1]: libpod-85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac.scope: Deactivated successfully.
Jan 31 03:08:31 np0005603621 podman[296123]: 2026-01-31 08:08:31.35082508 +0000 UTC m=+0.044706158 container died 85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:08:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac-userdata-shm.mount: Deactivated successfully.
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b2a08d7fdc8ce021c6cef7fa3db8a8ed2e372159cda2441d38954320029dc0d3-merged.mount: Deactivated successfully.
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.394 247403 INFO nova.virt.libvirt.driver [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Instance destroyed successfully.#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.396 247403 DEBUG nova.objects.instance [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lazy-loading 'resources' on Instance uuid 0442f276-6f69-4c83-85ea-e2053526fc53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:08:31 np0005603621 podman[296123]: 2026-01-31 08:08:31.402173297 +0000 UTC m=+0.096054365 container cleanup 85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:08:31 np0005603621 systemd[1]: libpod-conmon-85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac.scope: Deactivated successfully.
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.420 247403 DEBUG nova.virt.libvirt.vif [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:08:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1234589465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1234589465',id=74,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:08:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e1b9f4d9a424402a969a75d0e1a09aa4',ramdisk_id='',reservation_id='r-d6yyt09i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-1206598227',owner_user_name='tempest-InstanceActionsV221TestJSON-1206598227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:08:29Z,user_data=None,user_id='496d66c21c524cd193afd4289fccd421',uuid=0442f276-6f69-4c83-85ea-e2053526fc53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.421 247403 DEBUG nova.network.os_vif_util [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Converting VIF {"id": "a96fedd3-52a4-4d09-ac57-c314370a6310", "address": "fa:16:3e:00:42:ca", "network": {"id": "5520a690-6fdf-49fe-ad4a-fc586cc36ad6", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1196415480-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e1b9f4d9a424402a969a75d0e1a09aa4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa96fedd3-52", "ovs_interfaceid": "a96fedd3-52a4-4d09-ac57-c314370a6310", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.422 247403 DEBUG nova.network.os_vif_util [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.423 247403 DEBUG os_vif [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.425 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa96fedd3-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.426 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.431 247403 INFO os_vif [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:42:ca,bridge_name='br-int',has_traffic_filtering=True,id=a96fedd3-52a4-4d09-ac57-c314370a6310,network=Network(5520a690-6fdf-49fe-ad4a-fc586cc36ad6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa96fedd3-52')#033[00m
Jan 31 03:08:31 np0005603621 podman[296165]: 2026-01-31 08:08:31.469777185 +0000 UTC m=+0.046259548 container remove 85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.473 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6dfbeeee-d6cd-4259-adf5-819c752fc2c2]: (4, ('Sat Jan 31 08:08:31 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6 (85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac)\n85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac\nSat Jan 31 08:08:31 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6 (85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac)\n85f3cb053a7c6878b3bdd2f2613d84bbb07dca60c085ad31007f193cc9e334ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.475 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[28942ae9-7d4e-4e1c-b9de-2c7fca7edb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.476 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5520a690-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.478 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 kernel: tap5520a690-60: left promiscuous mode
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.483 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eacaf02c-0089-4b92-882a-1bcf42ad9061]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.498 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[625703fe-d21a-4496-912f-4a52615cf162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.500 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f97845-0274-43ed-afd4-cf94004870cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.513 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e3868615-82fb-4dea-9540-b9039bb16249]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 621284, 'reachable_time': 16538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296198, 'error': None, 'target': 'ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.516 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5520a690-6fdf-49fe-ad4a-fc586cc36ad6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:08:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:31.517 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d471bb69-51d3-44fc-8010-97b59c1acec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:08:31 np0005603621 systemd[1]: run-netns-ovnmeta\x2d5520a690\x2d6fdf\x2d49fe\x2dad4a\x2dfc586cc36ad6.mount: Deactivated successfully.
Jan 31 03:08:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 372 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 3.8 MiB/s wr, 99 op/s
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.884 247403 INFO nova.virt.libvirt.driver [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Deleting instance files /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53_del#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.885 247403 INFO nova.virt.libvirt.driver [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Deletion of /var/lib/nova/instances/0442f276-6f69-4c83-85ea-e2053526fc53_del complete#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.962 247403 INFO nova.compute.manager [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.963 247403 DEBUG oslo.service.loopingcall [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.964 247403 DEBUG nova.compute.manager [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:08:31 np0005603621 nova_compute[247399]: 2026-01-31 08:08:31.964 247403 DEBUG nova.network.neutron [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.247 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.248 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.248 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.249 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.249 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978173346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.700 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.838 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.839 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4496MB free_disk=20.809898376464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.840 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:32 np0005603621 nova_compute[247399]: 2026-01-31 08:08:32.840 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.045 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0442f276-6f69-4c83-85ea-e2053526fc53 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.046 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.047 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.092 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.003000096s ======
Jan 31 03:08:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:33.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000096s
Jan 31 03:08:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:33.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.377 247403 DEBUG nova.compute.manager [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-vif-unplugged-a96fedd3-52a4-4d09-ac57-c314370a6310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.379 247403 DEBUG oslo_concurrency.lockutils [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.380 247403 DEBUG oslo_concurrency.lockutils [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.382 247403 DEBUG oslo_concurrency.lockutils [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.382 247403 DEBUG nova.compute.manager [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] No waiting events found dispatching network-vif-unplugged-a96fedd3-52a4-4d09-ac57-c314370a6310 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.382 247403 DEBUG nova.compute.manager [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-vif-unplugged-a96fedd3-52a4-4d09-ac57-c314370a6310 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.383 247403 DEBUG nova.compute.manager [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.383 247403 DEBUG oslo_concurrency.lockutils [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.384 247403 DEBUG oslo_concurrency.lockutils [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.384 247403 DEBUG oslo_concurrency.lockutils [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.384 247403 DEBUG nova.compute.manager [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] No waiting events found dispatching network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.385 247403 WARNING nova.compute.manager [req-22c3beb7-efbb-4062-bd3c-22c4981782f7 req-8fa72f23-ab73-46a9-86cd-7a73427783ca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received unexpected event network-vif-plugged-a96fedd3-52a4-4d09-ac57-c314370a6310 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.386 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.425 247403 DEBUG nova.network.neutron [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.467 247403 INFO nova.compute.manager [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Took 1.50 seconds to deallocate network for instance.#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.540 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.542 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.547 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.562 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.591 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.592 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.593 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 353 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.604 247403 DEBUG nova.compute.manager [req-7d4370dc-2b25-4b2f-b0a0-d314e61e8e73 req-4ab1d8fd-3aff-49be-8e01-aecdec2eba74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Received event network-vif-deleted-a96fedd3-52a4-4d09-ac57-c314370a6310 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:08:33 np0005603621 nova_compute[247399]: 2026-01-31 08:08:33.642 247403 DEBUG oslo_concurrency.processutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999282831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.109 247403 DEBUG oslo_concurrency.processutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.114 247403 DEBUG nova.compute.provider_tree [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.128 247403 DEBUG nova.scheduler.client.report [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.156 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.189 247403 INFO nova.scheduler.client.report [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Deleted allocations for instance 0442f276-6f69-4c83-85ea-e2053526fc53#033[00m
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.281 247403 DEBUG oslo_concurrency.lockutils [None req-8d482b02-70b9-48e2-bdeb-764024a32b9d 496d66c21c524cd193afd4289fccd421 e1b9f4d9a424402a969a75d0e1a09aa4 - - default default] Lock "0442f276-6f69-4c83-85ea-e2053526fc53" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:34 np0005603621 podman[296269]: 2026-01-31 08:08:34.499525983 +0000 UTC m=+0.050165680 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:08:34 np0005603621 podman[296270]: 2026-01-31 08:08:34.523464437 +0000 UTC m=+0.073776944 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.588 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:34 np0005603621 nova_compute[247399]: 2026-01-31 08:08:34.621 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:35 np0005603621 nova_compute[247399]: 2026-01-31 08:08:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:08:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:35.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:35.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 326 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 162 op/s
Jan 31 03:08:36 np0005603621 nova_compute[247399]: 2026-01-31 08:08:36.427 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:37.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:37.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 326 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Jan 31 03:08:38 np0005603621 nova_compute[247399]: 2026-01-31 08:08:38.379 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:08:38
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr']
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:08:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:08:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:08:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:08:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 314 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 623 KiB/s wr, 189 op/s
Jan 31 03:08:40 np0005603621 nova_compute[247399]: 2026-01-31 08:08:40.691 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:41 np0005603621 nova_compute[247399]: 2026-01-31 08:08:41.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 314 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 178 op/s
Jan 31 03:08:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:41.770 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:08:41 np0005603621 nova_compute[247399]: 2026-01-31 08:08:41.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:41.771 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:08:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:08:42.773 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:08:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:08:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:08:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:43 np0005603621 nova_compute[247399]: 2026-01-31 08:08:43.381 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 181 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 19 KiB/s wr, 209 op/s
Jan 31 03:08:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:08:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 167 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 26 KiB/s wr, 196 op/s
Jan 31 03:08:46 np0005603621 nova_compute[247399]: 2026-01-31 08:08:46.393 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846911.391341, 0442f276-6f69-4c83-85ea-e2053526fc53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:08:46 np0005603621 nova_compute[247399]: 2026-01-31 08:08:46.393 247403 INFO nova.compute.manager [-] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:08:46 np0005603621 nova_compute[247399]: 2026-01-31 08:08:46.430 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:46 np0005603621 nova_compute[247399]: 2026-01-31 08:08:46.458 247403 DEBUG nova.compute.manager [None req-624ce889-2bb0-4947-851e-0e54fee59f6e - - - - - -] [instance: 0442f276-6f69-4c83-85ea-e2053526fc53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:08:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:47.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 190 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 167 op/s
Jan 31 03:08:48 np0005603621 nova_compute[247399]: 2026-01-31 08:08:48.383 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0038786234412804764 of space, bias 1.0, pg target 1.1635870323841428 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:08:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:49.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:08:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:49.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:08:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 168 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 786 KiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 31 03:08:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:51.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:51 np0005603621 nova_compute[247399]: 2026-01-31 08:08:51.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 168 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 250 KiB/s rd, 2.2 MiB/s wr, 107 op/s
Jan 31 03:08:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:53.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:53.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:53 np0005603621 nova_compute[247399]: 2026-01-31 08:08:53.385 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 121 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 269 KiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 31 03:08:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:55.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:55.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 94 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 258 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 31 03:08:56 np0005603621 nova_compute[247399]: 2026-01-31 08:08:56.434 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:57.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:08:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:57.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:08:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 258 KiB/s rd, 2.2 MiB/s wr, 116 op/s
Jan 31 03:08:58 np0005603621 nova_compute[247399]: 2026-01-31 08:08:58.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:08:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.141 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.141 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.160 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.253 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.254 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.263 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.264 247403 INFO nova.compute.claims [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:08:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:08:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:08:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:08:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:08:59.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.382 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:08:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 41 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 869 KiB/s wr, 68 op/s
Jan 31 03:08:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:08:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2158069570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.824 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.831 247403 DEBUG nova.compute.provider_tree [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.849 247403 DEBUG nova.scheduler.client.report [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.901 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.902 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.970 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:08:59 np0005603621 nova_compute[247399]: 2026-01-31 08:08:59.970 247403 DEBUG nova.network.neutron [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.028 247403 INFO nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.061 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.229 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.231 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.231 247403 INFO nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Creating image(s)#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.259 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.288 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.318 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.323 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.390 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.391 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.392 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.392 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.423 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.427 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 6d5ca702-c83c-4316-9a9f-3f627701e299_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:00 np0005603621 nova_compute[247399]: 2026-01-31 08:09:00.448 247403 DEBUG nova.policy [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16d731f5875748ca9b8036b2ba061042', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3469c253459e40e39dcf5bcb6a32008f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.208 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 6d5ca702-c83c-4316-9a9f-3f627701e299_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.281 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] resizing rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:09:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:01.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:09:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:01.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.385 247403 DEBUG nova.network.neutron [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Successfully created port: 8f82fbae-72f2-4300-8d43-a3c9613718bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.392 247403 DEBUG nova.objects.instance [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'migration_context' on Instance uuid 6d5ca702-c83c-4316-9a9f-3f627701e299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.409 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.409 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Ensure instance console log exists: /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.410 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.410 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.410 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:01 np0005603621 nova_compute[247399]: 2026-01-31 08:09:01.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 41 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 KiB/s wr, 51 op/s
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.561 247403 DEBUG nova.network.neutron [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Successfully updated port: 8f82fbae-72f2-4300-8d43-a3c9613718bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.756 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "refresh_cache-6d5ca702-c83c-4316-9a9f-3f627701e299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.756 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquired lock "refresh_cache-6d5ca702-c83c-4316-9a9f-3f627701e299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.756 247403 DEBUG nova.network.neutron [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.940 247403 DEBUG nova.compute.manager [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received event network-changed-8f82fbae-72f2-4300-8d43-a3c9613718bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.941 247403 DEBUG nova.compute.manager [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Refreshing instance network info cache due to event network-changed-8f82fbae-72f2-4300-8d43-a3c9613718bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:09:02 np0005603621 nova_compute[247399]: 2026-01-31 08:09:02.941 247403 DEBUG oslo_concurrency.lockutils [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-6d5ca702-c83c-4316-9a9f-3f627701e299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:03 np0005603621 nova_compute[247399]: 2026-01-31 08:09:03.006 247403 DEBUG nova.network.neutron [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:09:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:03.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:03.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:03 np0005603621 nova_compute[247399]: 2026-01-31 08:09:03.389 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 76 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.845 247403 DEBUG nova.network.neutron [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Updating instance_info_cache with network_info: [{"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.942 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Releasing lock "refresh_cache-6d5ca702-c83c-4316-9a9f-3f627701e299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.943 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Instance network_info: |[{"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.943 247403 DEBUG oslo_concurrency.lockutils [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-6d5ca702-c83c-4316-9a9f-3f627701e299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.943 247403 DEBUG nova.network.neutron [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Refreshing network info cache for port 8f82fbae-72f2-4300-8d43-a3c9613718bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.946 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Start _get_guest_xml network_info=[{"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.950 247403 WARNING nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.955 247403 DEBUG nova.virt.libvirt.host [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.955 247403 DEBUG nova.virt.libvirt.host [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.959 247403 DEBUG nova.virt.libvirt.host [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.960 247403 DEBUG nova.virt.libvirt.host [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.961 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.961 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.961 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.961 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.962 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.962 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.962 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.962 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.963 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.963 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.963 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.963 247403 DEBUG nova.virt.hardware [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:09:04 np0005603621 nova_compute[247399]: 2026-01-31 08:09:04.966 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:05.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:05.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443543777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.381 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.404 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.407 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:05 np0005603621 podman[296664]: 2026-01-31 08:09:05.483539658 +0000 UTC m=+0.044294836 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:09:05 np0005603621 podman[296665]: 2026-01-31 08:09:05.532268292 +0000 UTC m=+0.090468639 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:09:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 99 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 55 op/s
Jan 31 03:09:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051476204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.832 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.834 247403 DEBUG nova.virt.libvirt.vif [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1629229421',display_name='tempest-DeleteServersTestJSON-server-1629229421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1629229421',id=76,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-ewvpgxiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServersTestJSO
N-808715310-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:00Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=6d5ca702-c83c-4316-9a9f-3f627701e299,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.834 247403 DEBUG nova.network.os_vif_util [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.835 247403 DEBUG nova.network.os_vif_util [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.836 247403 DEBUG nova.objects.instance [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d5ca702-c83c-4316-9a9f-3f627701e299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.912 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <uuid>6d5ca702-c83c-4316-9a9f-3f627701e299</uuid>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <name>instance-0000004c</name>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:name>tempest-DeleteServersTestJSON-server-1629229421</nova:name>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:09:04</nova:creationTime>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:user uuid="16d731f5875748ca9b8036b2ba061042">tempest-DeleteServersTestJSON-808715310-project-member</nova:user>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:project uuid="3469c253459e40e39dcf5bcb6a32008f">tempest-DeleteServersTestJSON-808715310</nova:project>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <nova:port uuid="8f82fbae-72f2-4300-8d43-a3c9613718bf">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <entry name="serial">6d5ca702-c83c-4316-9a9f-3f627701e299</entry>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <entry name="uuid">6d5ca702-c83c-4316-9a9f-3f627701e299</entry>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/6d5ca702-c83c-4316-9a9f-3f627701e299_disk">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/6d5ca702-c83c-4316-9a9f-3f627701e299_disk.config">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:40:c7:92"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <target dev="tap8f82fbae-72"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/console.log" append="off"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:09:05 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:09:05 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:09:05 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:09:05 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.913 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Preparing to wait for external event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.914 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.914 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.914 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.915 247403 DEBUG nova.virt.libvirt.vif [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1629229421',display_name='tempest-DeleteServersTestJSON-server-1629229421',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1629229421',id=76,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-ewvpgxiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServ
ersTestJSON-808715310-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:00Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=6d5ca702-c83c-4316-9a9f-3f627701e299,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.915 247403 DEBUG nova.network.os_vif_util [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.916 247403 DEBUG nova.network.os_vif_util [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.916 247403 DEBUG os_vif [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.916 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.917 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.917 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.920 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.920 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f82fbae-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.921 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f82fbae-72, col_values=(('external_ids', {'iface-id': '8f82fbae-72f2-4300-8d43-a3c9613718bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:c7:92', 'vm-uuid': '6d5ca702-c83c-4316-9a9f-3f627701e299'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.922 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:05 np0005603621 NetworkManager[49013]: <info>  [1769846945.9232] manager: (tap8f82fbae-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.926 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:09:05 np0005603621 nova_compute[247399]: 2026-01-31 08:09:05.927 247403 INFO os_vif [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72')#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.158 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.159 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.159 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] No VIF found with MAC fa:16:3e:40:c7:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.160 247403 INFO nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Using config drive#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.183 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.552 247403 DEBUG nova.network.neutron [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Updated VIF entry in instance network info cache for port 8f82fbae-72f2-4300-8d43-a3c9613718bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.553 247403 DEBUG nova.network.neutron [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Updating instance_info_cache with network_info: [{"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.599 247403 DEBUG oslo_concurrency.lockutils [req-d21b506d-0939-4bce-9f65-55aaa3a61eee req-87efbcc5-629a-4dde-ae83-159e4a3376bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-6d5ca702-c83c-4316-9a9f-3f627701e299" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.765 247403 INFO nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Creating config drive at /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/disk.config#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.769 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_q0pk7el execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.890 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_q0pk7el" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.919 247403 DEBUG nova.storage.rbd_utils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image 6d5ca702-c83c-4316-9a9f-3f627701e299_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:06 np0005603621 nova_compute[247399]: 2026-01-31 08:09:06.924 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/disk.config 6d5ca702-c83c-4316-9a9f-3f627701e299_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:07.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:07.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 134 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Jan 31 03:09:08 np0005603621 nova_compute[247399]: 2026-01-31 08:09:08.391 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:09.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:09.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 142 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.9 MiB/s wr, 66 op/s
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.105 247403 DEBUG oslo_concurrency.processutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/disk.config 6d5ca702-c83c-4316-9a9f-3f627701e299_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.106 247403 INFO nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Deleting local config drive /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299/disk.config because it was imported into RBD.#033[00m
Jan 31 03:09:10 np0005603621 kernel: tap8f82fbae-72: entered promiscuous mode
Jan 31 03:09:10 np0005603621 NetworkManager[49013]: <info>  [1769846950.1483] manager: (tap8f82fbae-72): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.149 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.151 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:10Z|00214|binding|INFO|Claiming lport 8f82fbae-72f2-4300-8d43-a3c9613718bf for this chassis.
Jan 31 03:09:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:10Z|00215|binding|INFO|8f82fbae-72f2-4300-8d43-a3c9613718bf: Claiming fa:16:3e:40:c7:92 10.100.0.6
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.154 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 systemd-udevd[296802]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:09:10 np0005603621 systemd-machined[212769]: New machine qemu-33-instance-0000004c.
Jan 31 03:09:10 np0005603621 NetworkManager[49013]: <info>  [1769846950.1803] device (tap8f82fbae-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:09:10 np0005603621 NetworkManager[49013]: <info>  [1769846950.1814] device (tap8f82fbae-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.184 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:c7:92 10.100.0.6'], port_security=['fa:16:3e:40:c7:92 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6d5ca702-c83c-4316-9a9f-3f627701e299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3469c253459e40e39dcf5bcb6a32008f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e42c06e8-2644-4a21-adfb-06ef74de77bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298bbe2a-1faa-4c77-b3c3-4633e58f5921, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=8f82fbae-72f2-4300-8d43-a3c9613718bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:10Z|00216|binding|INFO|Setting lport 8f82fbae-72f2-4300-8d43-a3c9613718bf ovn-installed in OVS
Jan 31 03:09:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:10Z|00217|binding|INFO|Setting lport 8f82fbae-72f2-4300-8d43-a3c9613718bf up in Southbound
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.186 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 8f82fbae-72f2-4300-8d43-a3c9613718bf in datapath c1c6810e-ec8f-43f3-a3c6-22606d9416b6 bound to our chassis#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.187 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.187 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1c6810e-ec8f-43f3-a3c6-22606d9416b6#033[00m
Jan 31 03:09:10 np0005603621 systemd[1]: Started Virtual Machine qemu-33-instance-0000004c.
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.196 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f7b305-6384-4004-b46b-14a5277f7e33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.197 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1c6810e-e1 in ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.199 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1c6810e-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.199 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5a11eea3-28b2-4359-b650-ae8327f208e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.200 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aafd63f7-e210-4f11-ba95-5c6d6c85c2c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.208 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c18c93e0-fcf5-409b-832b-62a3aafde385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.217 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd8e53b-6efd-428c-85f8-e8d1e92e0e94]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.239 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[59473749-342f-4ccb-b0bb-8307d286da39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.243 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a2c6a5-2190-430b-8aa0-fba8b7d23fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 NetworkManager[49013]: <info>  [1769846950.2447] manager: (tapc1c6810e-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.265 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ce118852-0069-415c-934c-9405d9cef0eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.267 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fe867028-c877-4dea-971a-897105742e03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 NetworkManager[49013]: <info>  [1769846950.2835] device (tapc1c6810e-e0): carrier: link connected
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.287 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1d102d77-e981-4bf7-a2f0-1dcef7db92b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.300 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9626f6-8f13-4758-99ae-f1c9c5960d5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1c6810e-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625579, 'reachable_time': 27672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296836, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.313 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b60aa9f0-5328-491c-bb99-f28186335c9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:9781'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625579, 'tstamp': 625579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296837, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.326 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b92dc0e8-f72c-41da-a626-1c4eb4d2b460]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1c6810e-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625579, 'reachable_time': 27672, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296838, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.349 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[67f6e97c-4106-402e-8b93-9850ea984867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.387 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c57c697b-e4f4-49da-ba7e-ca9678902460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.389 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1c6810e-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.389 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.390 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1c6810e-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:10 np0005603621 NetworkManager[49013]: <info>  [1769846950.3924] manager: (tapc1c6810e-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 kernel: tapc1c6810e-e0: entered promiscuous mode
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.395 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.395 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1c6810e-e0, col_values=(('external_ids', {'iface-id': '937542c1-ab1e-4312-ab3a-ee4483fcdf7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:10Z|00218|binding|INFO|Releasing lport 937542c1-ab1e-4312-ab3a-ee4483fcdf7b from this chassis (sb_readonly=0)
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.403 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.404 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.405 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[42b4362d-3839-4b54-b1e0-c4a4690d3d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.406 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-c1c6810e-ec8f-43f3-a3c6-22606d9416b6
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.pid.haproxy
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID c1c6810e-ec8f-43f3-a3c6-22606d9416b6
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:09:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:10.406 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'env', 'PROCESS_TAG=haproxy-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:09:10 np0005603621 podman[296905]: 2026-01-31 08:09:10.690378725 +0000 UTC m=+0.019296468 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.807 247403 DEBUG nova.compute.manager [req-fbaf23ab-8d09-44a8-b9c5-8d93aa0f8a4d req-cef35eb7-add7-4150-8c76-03374168342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.808 247403 DEBUG oslo_concurrency.lockutils [req-fbaf23ab-8d09-44a8-b9c5-8d93aa0f8a4d req-cef35eb7-add7-4150-8c76-03374168342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.808 247403 DEBUG oslo_concurrency.lockutils [req-fbaf23ab-8d09-44a8-b9c5-8d93aa0f8a4d req-cef35eb7-add7-4150-8c76-03374168342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.808 247403 DEBUG oslo_concurrency.lockutils [req-fbaf23ab-8d09-44a8-b9c5-8d93aa0f8a4d req-cef35eb7-add7-4150-8c76-03374168342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.809 247403 DEBUG nova.compute.manager [req-fbaf23ab-8d09-44a8-b9c5-8d93aa0f8a4d req-cef35eb7-add7-4150-8c76-03374168342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Processing event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.845 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.846 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846950.84615, 6d5ca702-c83c-4316-9a9f-3f627701e299 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.847 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] VM Started (Lifecycle Event)#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.851 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.856 247403 INFO nova.virt.libvirt.driver [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Instance spawned successfully.#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.857 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.902 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.909 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.914 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.915 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.915 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.916 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.916 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.916 247403 DEBUG nova.virt.libvirt.driver [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.923 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.974 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.975 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846950.8465183, 6d5ca702-c83c-4316-9a9f-3f627701e299 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:10 np0005603621 nova_compute[247399]: 2026-01-31 08:09:10.975 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.035 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.040 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846950.85184, 6d5ca702-c83c-4316-9a9f-3f627701e299 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.040 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.070 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.074 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.096 247403 INFO nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Took 10.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.097 247403 DEBUG nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.110 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.229 247403 INFO nova.compute.manager [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Took 12.02 seconds to build instance.#033[00m
Jan 31 03:09:11 np0005603621 podman[296905]: 2026-01-31 08:09:11.248559325 +0000 UTC m=+0.577477058 container create cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:09:11 np0005603621 systemd[1]: Started libpod-conmon-cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c.scope.
Jan 31 03:09:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:11.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:11.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419026cdc358173916bfb1518a2d9aa2bc17e6c4dac74fd4e8322551cd85ed30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:11 np0005603621 podman[296905]: 2026-01-31 08:09:11.399778595 +0000 UTC m=+0.728696358 container init cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:09:11 np0005603621 podman[296905]: 2026-01-31 08:09:11.405375051 +0000 UTC m=+0.734292804 container start cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:09:11 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [NOTICE]   (296930) : New worker (296932) forked
Jan 31 03:09:11 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [NOTICE]   (296930) : Loading success.
Jan 31 03:09:11 np0005603621 nova_compute[247399]: 2026-01-31 08:09:11.442 247403 DEBUG oslo_concurrency.lockutils [None req-734a9b7c-142b-4100-81fb-2b68ec60d3ea 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 142 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.9 MiB/s wr, 66 op/s
Jan 31 03:09:12 np0005603621 nova_compute[247399]: 2026-01-31 08:09:12.977 247403 DEBUG nova.compute.manager [req-840f6c70-3957-4f5c-a401-3c3bd734dccf req-54e762fe-a3ba-44ba-858f-690c9503a95a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:12 np0005603621 nova_compute[247399]: 2026-01-31 08:09:12.978 247403 DEBUG oslo_concurrency.lockutils [req-840f6c70-3957-4f5c-a401-3c3bd734dccf req-54e762fe-a3ba-44ba-858f-690c9503a95a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:12 np0005603621 nova_compute[247399]: 2026-01-31 08:09:12.979 247403 DEBUG oslo_concurrency.lockutils [req-840f6c70-3957-4f5c-a401-3c3bd734dccf req-54e762fe-a3ba-44ba-858f-690c9503a95a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:12 np0005603621 nova_compute[247399]: 2026-01-31 08:09:12.979 247403 DEBUG oslo_concurrency.lockutils [req-840f6c70-3957-4f5c-a401-3c3bd734dccf req-54e762fe-a3ba-44ba-858f-690c9503a95a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:12 np0005603621 nova_compute[247399]: 2026-01-31 08:09:12.979 247403 DEBUG nova.compute.manager [req-840f6c70-3957-4f5c-a401-3c3bd734dccf req-54e762fe-a3ba-44ba-858f-690c9503a95a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] No waiting events found dispatching network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:12 np0005603621 nova_compute[247399]: 2026-01-31 08:09:12.979 247403 WARNING nova.compute.manager [req-840f6c70-3957-4f5c-a401-3c3bd734dccf req-54e762fe-a3ba-44ba-858f-690c9503a95a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received unexpected event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf for instance with vm_state active and task_state None.#033[00m
Jan 31 03:09:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:13.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:13.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:13 np0005603621 nova_compute[247399]: 2026-01-31 08:09:13.394 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 181 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 140 op/s
Jan 31 03:09:14 np0005603621 nova_compute[247399]: 2026-01-31 08:09:14.733 247403 DEBUG nova.objects.instance [None req-0b7ecc2a-ed23-43e0-a571-d21ede3b41d5 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d5ca702-c83c-4316-9a9f-3f627701e299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:14 np0005603621 nova_compute[247399]: 2026-01-31 08:09:14.831 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846954.8310957, 6d5ca702-c83c-4316-9a9f-3f627701e299 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:14 np0005603621 nova_compute[247399]: 2026-01-31 08:09:14.831 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:09:14 np0005603621 nova_compute[247399]: 2026-01-31 08:09:14.879 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:14 np0005603621 nova_compute[247399]: 2026-01-31 08:09:14.881 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:14 np0005603621 nova_compute[247399]: 2026-01-31 08:09:14.977 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 31 03:09:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:15.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:15 np0005603621 kernel: tap8f82fbae-72 (unregistering): left promiscuous mode
Jan 31 03:09:15 np0005603621 NetworkManager[49013]: <info>  [1769846955.5083] device (tap8f82fbae-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00219|binding|INFO|Releasing lport 8f82fbae-72f2-4300-8d43-a3c9613718bf from this chassis (sb_readonly=0)
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00220|binding|INFO|Setting lport 8f82fbae-72f2-4300-8d43-a3c9613718bf down in Southbound
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00221|binding|INFO|Removing iface tap8f82fbae-72 ovn-installed in OVS
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.515 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.526 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 31 03:09:15 np0005603621 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000004c.scope: Consumed 4.567s CPU time.
Jan 31 03:09:15 np0005603621 systemd-machined[212769]: Machine qemu-33-instance-0000004c terminated.
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.586 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:c7:92 10.100.0.6'], port_security=['fa:16:3e:40:c7:92 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6d5ca702-c83c-4316-9a9f-3f627701e299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3469c253459e40e39dcf5bcb6a32008f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e42c06e8-2644-4a21-adfb-06ef74de77bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298bbe2a-1faa-4c77-b3c3-4633e58f5921, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=8f82fbae-72f2-4300-8d43-a3c9613718bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.588 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 8f82fbae-72f2-4300-8d43-a3c9613718bf in datapath c1c6810e-ec8f-43f3-a3c6-22606d9416b6 unbound from our chassis#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.589 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.590 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[10820095-8d22-425b-a3c7-47706e393064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.590 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 namespace which is not needed anymore#033[00m
Jan 31 03:09:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 181 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 162 op/s
Jan 31 03:09:15 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [NOTICE]   (296930) : haproxy version is 2.8.14-c23fe91
Jan 31 03:09:15 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [NOTICE]   (296930) : path to executable is /usr/sbin/haproxy
Jan 31 03:09:15 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [WARNING]  (296930) : Exiting Master process...
Jan 31 03:09:15 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [WARNING]  (296930) : Exiting Master process...
Jan 31 03:09:15 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [ALERT]    (296930) : Current worker (296932) exited with code 143 (Terminated)
Jan 31 03:09:15 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[296926]: [WARNING]  (296930) : All workers exited. Exiting... (0)
Jan 31 03:09:15 np0005603621 systemd[1]: libpod-cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c.scope: Deactivated successfully.
Jan 31 03:09:15 np0005603621 podman[296972]: 2026-01-31 08:09:15.72337107 +0000 UTC m=+0.055926192 container died cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:09:15 np0005603621 kernel: tap8f82fbae-72: entered promiscuous mode
Jan 31 03:09:15 np0005603621 NetworkManager[49013]: <info>  [1769846955.7434] manager: (tap8f82fbae-72): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Jan 31 03:09:15 np0005603621 kernel: tap8f82fbae-72 (unregistering): left promiscuous mode
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00222|binding|INFO|Claiming lport 8f82fbae-72f2-4300-8d43-a3c9613718bf for this chassis.
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00223|binding|INFO|8f82fbae-72f2-4300-8d43-a3c9613718bf: Claiming fa:16:3e:40:c7:92 10.100.0.6
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00224|binding|INFO|Setting lport 8f82fbae-72f2-4300-8d43-a3c9613718bf ovn-installed in OVS
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.762 247403 DEBUG nova.compute.manager [None req-0b7ecc2a-ed23-43e0-a571-d21ede3b41d5 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00225|if_status|INFO|Dropped 3 log messages in last 80 seconds (most recently, 80 seconds ago) due to excessive rate
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00226|if_status|INFO|Not setting lport 8f82fbae-72f2-4300-8d43-a3c9613718bf down as sb is readonly
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c-userdata-shm.mount: Deactivated successfully.
Jan 31 03:09:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-419026cdc358173916bfb1518a2d9aa2bc17e6c4dac74fd4e8322551cd85ed30-merged.mount: Deactivated successfully.
Jan 31 03:09:15 np0005603621 podman[296972]: 2026-01-31 08:09:15.813417664 +0000 UTC m=+0.145972766 container cleanup cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:09:15 np0005603621 systemd[1]: libpod-conmon-cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c.scope: Deactivated successfully.
Jan 31 03:09:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:15Z|00227|binding|INFO|Releasing lport 8f82fbae-72f2-4300-8d43-a3c9613718bf from this chassis (sb_readonly=0)
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.890 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:c7:92 10.100.0.6'], port_security=['fa:16:3e:40:c7:92 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6d5ca702-c83c-4316-9a9f-3f627701e299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3469c253459e40e39dcf5bcb6a32008f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e42c06e8-2644-4a21-adfb-06ef74de77bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298bbe2a-1faa-4c77-b3c3-4633e58f5921, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=8f82fbae-72f2-4300-8d43-a3c9613718bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.925 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.930 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:c7:92 10.100.0.6'], port_security=['fa:16:3e:40:c7:92 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6d5ca702-c83c-4316-9a9f-3f627701e299', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3469c253459e40e39dcf5bcb6a32008f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e42c06e8-2644-4a21-adfb-06ef74de77bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298bbe2a-1faa-4c77-b3c3-4633e58f5921, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=8f82fbae-72f2-4300-8d43-a3c9613718bf) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:15 np0005603621 podman[297012]: 2026-01-31 08:09:15.935679563 +0000 UTC m=+0.101672271 container remove cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.939 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e690e2-31d9-4704-91b5-506feca00186]: (4, ('Sat Jan 31 08:09:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 (cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c)\ncd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c\nSat Jan 31 08:09:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 (cd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c)\ncd83620c1ac5829077101ded69db11c7a3db6a4c1c3fbc638b426e769aea8d8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.942 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[af29ae9e-5260-4cd4-9977-f9226f289750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.944 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1c6810e-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 kernel: tapc1c6810e-e0: left promiscuous mode
Jan 31 03:09:15 np0005603621 nova_compute[247399]: 2026-01-31 08:09:15.956 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.960 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b840953-0235-4774-bd48-1491fa4cd945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.973 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[488e1663-5c73-4da8-b224-b763b9a1b776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.975 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[03b27592-1cdf-41eb-8634-657f37f07e2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.993 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1df917-33f7-4636-8f79-23c880bc9670]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625574, 'reachable_time': 25195, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297032, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.997 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:09:15 np0005603621 systemd[1]: run-netns-ovnmeta\x2dc1c6810e\x2dec8f\x2d43f3\x2da3c6\x2d22606d9416b6.mount: Deactivated successfully.
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.997 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[95d269dd-9406-4d97-99ed-fefa006adaf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.998 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 8f82fbae-72f2-4300-8d43-a3c9613718bf in datapath c1c6810e-ec8f-43f3-a3c6-22606d9416b6 unbound from our chassis#033[00m
Jan 31 03:09:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:15.999 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:09:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:16.000 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[920f1386-9620-4a76-95f6-b56b9534342c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:16.001 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 8f82fbae-72f2-4300-8d43-a3c9613718bf in datapath c1c6810e-ec8f-43f3-a3c6-22606d9416b6 unbound from our chassis#033[00m
Jan 31 03:09:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:16.002 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:09:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:16.002 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[53c93e14-81f0-4209-a191-1ee552a2b892]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:16 np0005603621 nova_compute[247399]: 2026-01-31 08:09:16.925 247403 DEBUG nova.compute.manager [req-0c1d3c78-19c5-4f08-85a3-c96ccb76d08d req-1a323e80-2d5a-4732-85aa-04de79c2b4c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received event network-vif-unplugged-8f82fbae-72f2-4300-8d43-a3c9613718bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:16 np0005603621 nova_compute[247399]: 2026-01-31 08:09:16.925 247403 DEBUG oslo_concurrency.lockutils [req-0c1d3c78-19c5-4f08-85a3-c96ccb76d08d req-1a323e80-2d5a-4732-85aa-04de79c2b4c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:16 np0005603621 nova_compute[247399]: 2026-01-31 08:09:16.926 247403 DEBUG oslo_concurrency.lockutils [req-0c1d3c78-19c5-4f08-85a3-c96ccb76d08d req-1a323e80-2d5a-4732-85aa-04de79c2b4c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:16 np0005603621 nova_compute[247399]: 2026-01-31 08:09:16.926 247403 DEBUG oslo_concurrency.lockutils [req-0c1d3c78-19c5-4f08-85a3-c96ccb76d08d req-1a323e80-2d5a-4732-85aa-04de79c2b4c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:16 np0005603621 nova_compute[247399]: 2026-01-31 08:09:16.926 247403 DEBUG nova.compute.manager [req-0c1d3c78-19c5-4f08-85a3-c96ccb76d08d req-1a323e80-2d5a-4732-85aa-04de79c2b4c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] No waiting events found dispatching network-vif-unplugged-8f82fbae-72f2-4300-8d43-a3c9613718bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:16 np0005603621 nova_compute[247399]: 2026-01-31 08:09:16.926 247403 WARNING nova.compute.manager [req-0c1d3c78-19c5-4f08-85a3-c96ccb76d08d req-1a323e80-2d5a-4732-85aa-04de79c2b4c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received unexpected event network-vif-unplugged-8f82fbae-72f2-4300-8d43-a3c9613718bf for instance with vm_state suspended and task_state None.#033[00m
Jan 31 03:09:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:17.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:09:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:09:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 181 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.0 MiB/s wr, 178 op/s
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.438 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.559 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.560 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.560 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.560 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.560 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.561 247403 INFO nova.compute.manager [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Terminating instance#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.562 247403 DEBUG nova.compute.manager [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.566 247403 INFO nova.virt.libvirt.driver [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Instance destroyed successfully.#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.566 247403 DEBUG nova.objects.instance [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'resources' on Instance uuid 6d5ca702-c83c-4316-9a9f-3f627701e299 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.605 247403 DEBUG nova.virt.libvirt.vif [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:08:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1629229421',display_name='tempest-DeleteServersTestJSON-server-1629229421',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1629229421',id=76,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-ewvpgxiy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServersTestJSON-808715310-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:09:15Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=6d5ca702-c83c-4316-9a9f-3f627701e299,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.605 247403 DEBUG nova.network.os_vif_util [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "address": "fa:16:3e:40:c7:92", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f82fbae-72", "ovs_interfaceid": "8f82fbae-72f2-4300-8d43-a3c9613718bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.606 247403 DEBUG nova.network.os_vif_util [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.606 247403 DEBUG os_vif [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.608 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.608 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f82fbae-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.609 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.610 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:18 np0005603621 nova_compute[247399]: 2026-01-31 08:09:18.613 247403 INFO os_vif [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:c7:92,bridge_name='br-int',has_traffic_filtering=True,id=8f82fbae-72f2-4300-8d43-a3c9613718bf,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f82fbae-72')#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.044 247403 INFO nova.virt.libvirt.driver [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Deleting instance files /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299_del#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.044 247403 INFO nova.virt.libvirt.driver [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Deletion of /var/lib/nova/instances/6d5ca702-c83c-4316-9a9f-3f627701e299_del complete#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.122 247403 DEBUG nova.compute.manager [req-09613eaf-15b5-4b7c-bee3-b7b4cda49ff7 req-3ec238f1-fe8b-40cd-8d09-86eabbe3552b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.122 247403 DEBUG oslo_concurrency.lockutils [req-09613eaf-15b5-4b7c-bee3-b7b4cda49ff7 req-3ec238f1-fe8b-40cd-8d09-86eabbe3552b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.123 247403 DEBUG oslo_concurrency.lockutils [req-09613eaf-15b5-4b7c-bee3-b7b4cda49ff7 req-3ec238f1-fe8b-40cd-8d09-86eabbe3552b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.123 247403 DEBUG oslo_concurrency.lockutils [req-09613eaf-15b5-4b7c-bee3-b7b4cda49ff7 req-3ec238f1-fe8b-40cd-8d09-86eabbe3552b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.123 247403 DEBUG nova.compute.manager [req-09613eaf-15b5-4b7c-bee3-b7b4cda49ff7 req-3ec238f1-fe8b-40cd-8d09-86eabbe3552b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] No waiting events found dispatching network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.123 247403 WARNING nova.compute.manager [req-09613eaf-15b5-4b7c-bee3-b7b4cda49ff7 req-3ec238f1-fe8b-40cd-8d09-86eabbe3552b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received unexpected event network-vif-plugged-8f82fbae-72f2-4300-8d43-a3c9613718bf for instance with vm_state suspended and task_state deleting.#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.184 247403 INFO nova.compute.manager [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.185 247403 DEBUG oslo.service.loopingcall [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.185 247403 DEBUG nova.compute.manager [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:09:19 np0005603621 nova_compute[247399]: 2026-01-31 08:09:19.185 247403 DEBUG nova.network.neutron [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:09:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:19.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 157 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 225 op/s
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.234 247403 DEBUG nova.network.neutron [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.427 247403 INFO nova.compute.manager [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Took 1.24 seconds to deallocate network for instance.#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.447 247403 DEBUG nova.compute.manager [req-7d79bd0c-4a0a-4d69-ac3f-719fa2f140f8 req-0af6e0e0-0de1-4511-9f33-fcd454db1379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Received event network-vif-deleted-8f82fbae-72f2-4300-8d43-a3c9613718bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.447 247403 INFO nova.compute.manager [req-7d79bd0c-4a0a-4d69-ac3f-719fa2f140f8 req-0af6e0e0-0de1-4511-9f33-fcd454db1379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Neutron deleted interface 8f82fbae-72f2-4300-8d43-a3c9613718bf; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.447 247403 DEBUG nova.network.neutron [req-7d79bd0c-4a0a-4d69-ac3f-719fa2f140f8 req-0af6e0e0-0de1-4511-9f33-fcd454db1379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.527 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.528 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.530 247403 DEBUG nova.compute.manager [req-7d79bd0c-4a0a-4d69-ac3f-719fa2f140f8 req-0af6e0e0-0de1-4511-9f33-fcd454db1379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Detach interface failed, port_id=8f82fbae-72f2-4300-8d43-a3c9613718bf, reason: Instance 6d5ca702-c83c-4316-9a9f-3f627701e299 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:09:20 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.582 247403 DEBUG oslo_concurrency.processutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536630612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:21 np0005603621 nova_compute[247399]: 2026-01-31 08:09:20.999 247403 DEBUG oslo_concurrency.processutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:21 np0005603621 nova_compute[247399]: 2026-01-31 08:09:21.004 247403 DEBUG nova.compute.provider_tree [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:21 np0005603621 nova_compute[247399]: 2026-01-31 08:09:21.031 247403 DEBUG nova.scheduler.client.report [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:21 np0005603621 nova_compute[247399]: 2026-01-31 08:09:21.060 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:21 np0005603621 nova_compute[247399]: 2026-01-31 08:09:21.106 247403 INFO nova.scheduler.client.report [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Deleted allocations for instance 6d5ca702-c83c-4316-9a9f-3f627701e299#033[00m
Jan 31 03:09:21 np0005603621 nova_compute[247399]: 2026-01-31 08:09:21.222 247403 DEBUG oslo_concurrency.lockutils [None req-ba31cd64-9eef-4c6d-af9e-b3158f21d093 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "6d5ca702-c83c-4316-9a9f-3f627701e299" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:21.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:21.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 157 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 204 op/s
Jan 31 03:09:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:23 np0005603621 nova_compute[247399]: 2026-01-31 08:09:23.441 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:23 np0005603621 nova_compute[247399]: 2026-01-31 08:09:23.610 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 88 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.5 MiB/s wr, 276 op/s
Jan 31 03:09:25 np0005603621 nova_compute[247399]: 2026-01-31 08:09:25.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:25 np0005603621 nova_compute[247399]: 2026-01-31 08:09:25.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:09:25 np0005603621 nova_compute[247399]: 2026-01-31 08:09:25.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:09:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:25.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:09:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:25.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:09:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 88 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 15 KiB/s wr, 214 op/s
Jan 31 03:09:25 np0005603621 nova_compute[247399]: 2026-01-31 08:09:25.776 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:09:27 np0005603621 nova_compute[247399]: 2026-01-31 08:09:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:27.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 88 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 15 KiB/s wr, 190 op/s
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.443 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.715 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.716 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.758 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.904 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.905 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.913 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:09:28 np0005603621 nova_compute[247399]: 2026-01-31 08:09:28.914 247403 INFO nova.compute.claims [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.068 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:29.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:29.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 88 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 14 KiB/s wr, 145 op/s
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346609043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.705 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.712 247403 DEBUG nova.compute.provider_tree [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.752 247403 DEBUG nova.scheduler.client.report [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:09:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6a6da8d7-5599-4207-9507-882c83045d11 does not exist
Jan 31 03:09:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c43fa664-756b-4c79-80e1-b79ed1a9e115 does not exist
Jan 31 03:09:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2ff274fe-2cbd-40cc-972a-432afa0f7e09 does not exist
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.787 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.788 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.867 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.867 247403 DEBUG nova.network.neutron [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:09:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.890 247403 INFO nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:09:29 np0005603621 nova_compute[247399]: 2026-01-31 08:09:29.913 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.040 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.041 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.041 247403 INFO nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Creating image(s)#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.063 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.090 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.118 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.124 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.208 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.209 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.210 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.210 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.241 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.244 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e528a53a-8ada-4966-912c-1f15ed61e649_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:30 np0005603621 podman[297492]: 2026-01-31 08:09:30.255526183 +0000 UTC m=+0.022335974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.417 247403 DEBUG nova.policy [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '16d731f5875748ca9b8036b2ba061042', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3469c253459e40e39dcf5bcb6a32008f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:09:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:30.490 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:30.491 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:30.492 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:30 np0005603621 podman[297492]: 2026-01-31 08:09:30.712458595 +0000 UTC m=+0.479268336 container create 33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.764 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769846955.761556, 6d5ca702-c83c-4316-9a9f-3f627701e299 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.765 247403 INFO nova.compute.manager [-] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:09:30 np0005603621 nova_compute[247399]: 2026-01-31 08:09:30.865 247403 DEBUG nova.compute.manager [None req-b6664c3c-c67e-4644-af11-e7243cc44806 - - - - - -] [instance: 6d5ca702-c83c-4316-9a9f-3f627701e299] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:31 np0005603621 systemd[1]: Started libpod-conmon-33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd.scope.
Jan 31 03:09:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:09:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:09:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:31.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:31 np0005603621 podman[297492]: 2026-01-31 08:09:31.376393284 +0000 UTC m=+1.143203045 container init 33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:09:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:31.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:31 np0005603621 podman[297492]: 2026-01-31 08:09:31.38483719 +0000 UTC m=+1.151646941 container start 33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:09:31 np0005603621 keen_tharp[297528]: 167 167
Jan 31 03:09:31 np0005603621 systemd[1]: libpod-33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd.scope: Deactivated successfully.
Jan 31 03:09:31 np0005603621 conmon[297528]: conmon 33d6950e5c9c39b271d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd.scope/container/memory.events
Jan 31 03:09:31 np0005603621 nova_compute[247399]: 2026-01-31 08:09:31.417 247403 DEBUG nova.network.neutron [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Successfully created port: 22e87c0f-c5ce-4700-94a2-0d70f1b9655f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:09:31 np0005603621 podman[297492]: 2026-01-31 08:09:31.461216125 +0000 UTC m=+1.228025896 container attach 33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:09:31 np0005603621 podman[297492]: 2026-01-31 08:09:31.462248768 +0000 UTC m=+1.229058519 container died 33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:09:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 88 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 84 op/s
Jan 31 03:09:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0d44b507d4e352b7ad2a37d31a5beb76ed7b8ce249e9fcc37d32771646ac002d-merged.mount: Deactivated successfully.
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:32 np0005603621 podman[297492]: 2026-01-31 08:09:32.254456824 +0000 UTC m=+2.021266595 container remove 33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:09:32 np0005603621 systemd[1]: libpod-conmon-33d6950e5c9c39b271d742c1b4d557d514d35da9eab6f3b9b515b479539435fd.scope: Deactivated successfully.
Jan 31 03:09:32 np0005603621 podman[297556]: 2026-01-31 08:09:32.408503983 +0000 UTC m=+0.028588661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:09:32 np0005603621 podman[297556]: 2026-01-31 08:09:32.60380971 +0000 UTC m=+0.223894368 container create 50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:09:32 np0005603621 systemd[1]: Started libpod-conmon-50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382.scope.
Jan 31 03:09:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f029c70296f6b73f67ef9b6d2e6c28e3ae812f0213290337924f349ddfd68e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f029c70296f6b73f67ef9b6d2e6c28e3ae812f0213290337924f349ddfd68e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f029c70296f6b73f67ef9b6d2e6c28e3ae812f0213290337924f349ddfd68e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f029c70296f6b73f67ef9b6d2e6c28e3ae812f0213290337924f349ddfd68e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f029c70296f6b73f67ef9b6d2e6c28e3ae812f0213290337924f349ddfd68e2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.748 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e528a53a-8ada-4966-912c-1f15ed61e649_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.855 247403 DEBUG nova.network.neutron [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Successfully updated port: 22e87c0f-c5ce-4700-94a2-0d70f1b9655f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:09:32 np0005603621 podman[297556]: 2026-01-31 08:09:32.895948015 +0000 UTC m=+0.516032683 container init 50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.902 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.903 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquired lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:32 np0005603621 podman[297556]: 2026-01-31 08:09:32.904070371 +0000 UTC m=+0.524155029 container start 50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.903 247403 DEBUG nova.network.neutron [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:32 np0005603621 nova_compute[247399]: 2026-01-31 08:09:32.915 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] resizing rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:09:32 np0005603621 podman[297556]: 2026-01-31 08:09:32.992570327 +0000 UTC m=+0.612654985 container attach 50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.042 247403 DEBUG nova.compute.manager [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-changed-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.042 247403 DEBUG nova.compute.manager [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Refreshing instance network info cache due to event network-changed-22e87c0f-c5ce-4700-94a2-0d70f1b9655f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.043 247403 DEBUG oslo_concurrency.lockutils [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:09:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:33.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:33.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.444 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.562 247403 DEBUG nova.network.neutron [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.614 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 115 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 108 op/s
Jan 31 03:09:33 np0005603621 loving_kalam[297572]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:09:33 np0005603621 loving_kalam[297572]: --> relative data size: 1.0
Jan 31 03:09:33 np0005603621 loving_kalam[297572]: --> All data devices are unavailable
Jan 31 03:09:33 np0005603621 systemd[1]: libpod-50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382.scope: Deactivated successfully.
Jan 31 03:09:33 np0005603621 podman[297556]: 2026-01-31 08:09:33.731722844 +0000 UTC m=+1.351807502 container died 50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:33 np0005603621 nova_compute[247399]: 2026-01-31 08:09:33.977 247403 DEBUG nova.objects.instance [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'migration_context' on Instance uuid e528a53a-8ada-4966-912c-1f15ed61e649 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.007 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.008 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Ensure instance console log exists: /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.008 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.008 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.009 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.201 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.202 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.202 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f029c70296f6b73f67ef9b6d2e6c28e3ae812f0213290337924f349ddfd68e2f-merged.mount: Deactivated successfully.
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.236 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.236 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.236 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.236 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:34 np0005603621 podman[297556]: 2026-01-31 08:09:34.731320269 +0000 UTC m=+2.351404927 container remove 50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_kalam, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/111847710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.758 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:34 np0005603621 systemd[1]: libpod-conmon-50e6daa2d4647c39a7a4eb577fd072136948fcca39c31d7b8b531017923b6382.scope: Deactivated successfully.
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.978 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.980 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=20.94390869140625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:34 np0005603621 nova_compute[247399]: 2026-01-31 08:09:34.981 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.055 247403 DEBUG nova.network.neutron [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updating instance_info_cache with network_info: [{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.127 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e528a53a-8ada-4966-912c-1f15ed61e649 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.127 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.128 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.172 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:35.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:35.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:35 np0005603621 podman[297849]: 2026-01-31 08:09:35.332914395 +0000 UTC m=+0.027472166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.465 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Releasing lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.466 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance network_info: |[{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.466 247403 DEBUG oslo_concurrency.lockutils [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.467 247403 DEBUG nova.network.neutron [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Refreshing network info cache for port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.470 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Start _get_guest_xml network_info=[{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.475 247403 WARNING nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.480 247403 DEBUG nova.virt.libvirt.host [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.481 247403 DEBUG nova.virt.libvirt.host [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.488 247403 DEBUG nova.virt.libvirt.host [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.488 247403 DEBUG nova.virt.libvirt.host [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.490 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.490 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.491 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.491 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.491 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.491 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.492 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.493 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.493 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.493 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.494 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.494 247403 DEBUG nova.virt.hardware [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.497 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:35 np0005603621 podman[297849]: 2026-01-31 08:09:35.510426523 +0000 UTC m=+0.204984244 container create 95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:09:35 np0005603621 systemd[1]: Started libpod-conmon-95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2.scope.
Jan 31 03:09:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:09:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496045816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:09:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.606 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.611 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:09:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 141 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 60 op/s
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.632 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.656 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:09:35 np0005603621 nova_compute[247399]: 2026-01-31 08:09:35.656 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:35 np0005603621 podman[297849]: 2026-01-31 08:09:35.747166824 +0000 UTC m=+0.441724605 container init 95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:09:35 np0005603621 podman[297849]: 2026-01-31 08:09:35.754910908 +0000 UTC m=+0.449468629 container start 95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:35 np0005603621 podman[297866]: 2026-01-31 08:09:35.755231228 +0000 UTC m=+0.211663513 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:09:35 np0005603621 brave_germain[297886]: 167 167
Jan 31 03:09:35 np0005603621 systemd[1]: libpod-95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2.scope: Deactivated successfully.
Jan 31 03:09:36 np0005603621 podman[297849]: 2026-01-31 08:09:35.999788227 +0000 UTC m=+0.694345978 container attach 95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:09:36 np0005603621 podman[297849]: 2026-01-31 08:09:36.001513871 +0000 UTC m=+0.696071602 container died 95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536736988' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.139 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.167 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.172 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:09:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278275723' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.609 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.612 247403 DEBUG nova.virt.libvirt.vif [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1368556693',display_name='tempest-DeleteServersTestJSON-server-1368556693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1368556693',id=79,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-n1ah7sd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServersTestJSO
N-808715310-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:29Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=e528a53a-8ada-4966-912c-1f15ed61e649,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.612 247403 DEBUG nova.network.os_vif_util [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.614 247403 DEBUG nova.network.os_vif_util [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.616 247403 DEBUG nova.objects.instance [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'pci_devices' on Instance uuid e528a53a-8ada-4966-912c-1f15ed61e649 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:09:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-098376dce03cbe3965de6ed9e67e1d22a63c2d1c67e9bdb10a6f349e456d4f9d-merged.mount: Deactivated successfully.
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.661 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <uuid>e528a53a-8ada-4966-912c-1f15ed61e649</uuid>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <name>instance-0000004f</name>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:name>tempest-DeleteServersTestJSON-server-1368556693</nova:name>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:09:35</nova:creationTime>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:user uuid="16d731f5875748ca9b8036b2ba061042">tempest-DeleteServersTestJSON-808715310-project-member</nova:user>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:project uuid="3469c253459e40e39dcf5bcb6a32008f">tempest-DeleteServersTestJSON-808715310</nova:project>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <nova:port uuid="22e87c0f-c5ce-4700-94a2-0d70f1b9655f">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <entry name="serial">e528a53a-8ada-4966-912c-1f15ed61e649</entry>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <entry name="uuid">e528a53a-8ada-4966-912c-1f15ed61e649</entry>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e528a53a-8ada-4966-912c-1f15ed61e649_disk">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e528a53a-8ada-4966-912c-1f15ed61e649_disk.config">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1c:4e:08"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <target dev="tap22e87c0f-c5"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/console.log" append="off"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:09:36 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:09:36 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:09:36 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:09:36 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.662 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Preparing to wait for external event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.662 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.662 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.663 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.663 247403 DEBUG nova.virt.libvirt.vif [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1368556693',display_name='tempest-DeleteServersTestJSON-server-1368556693',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1368556693',id=79,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-n1ah7sd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServersTestJSON-808715310-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:09:29Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=e528a53a-8ada-4966-912c-1f15ed61e649,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.663 247403 DEBUG nova.network.os_vif_util [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.664 247403 DEBUG nova.network.os_vif_util [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.664 247403 DEBUG os_vif [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.665 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.665 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.665 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.668 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.668 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22e87c0f-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.668 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22e87c0f-c5, col_values=(('external_ids', {'iface-id': '22e87c0f-c5ce-4700-94a2-0d70f1b9655f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:4e:08', 'vm-uuid': 'e528a53a-8ada-4966-912c-1f15ed61e649'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:36 np0005603621 NetworkManager[49013]: <info>  [1769846976.6709] manager: (tap22e87c0f-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.672 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.680 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.680 247403 INFO os_vif [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5')#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.852 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.853 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.853 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] No VIF found with MAC fa:16:3e:1c:4e:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.853 247403 INFO nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Using config drive#033[00m
Jan 31 03:09:36 np0005603621 nova_compute[247399]: 2026-01-31 08:09:36.923 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.021 247403 DEBUG nova.network.neutron [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updated VIF entry in instance network info cache for port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.021 247403 DEBUG nova.network.neutron [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updating instance_info_cache with network_info: [{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.037 247403 DEBUG oslo_concurrency.lockutils [req-3405925e-c10c-4160-8254-b0b8eaebc445 req-a9f32a2d-54e8-4964-8591-66bcd371757d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:37 np0005603621 podman[297849]: 2026-01-31 08:09:37.125674417 +0000 UTC m=+1.820232138 container remove 95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:37 np0005603621 podman[297887]: 2026-01-31 08:09:37.136183617 +0000 UTC m=+1.543107944 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:09:37 np0005603621 systemd[1]: libpod-conmon-95dd400fda3214ee0b3dca1af5c9b69d3561afb0eb157910e23befb35e4f16b2.scope: Deactivated successfully.
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.324 247403 INFO nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Creating config drive at /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/disk.config#033[00m
Jan 31 03:09:37 np0005603621 podman[298023]: 2026-01-31 08:09:37.231781366 +0000 UTC m=+0.021695434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.327 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp39uvzi0i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:37.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:37.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.457 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp39uvzi0i" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:37 np0005603621 podman[298023]: 2026-01-31 08:09:37.458424231 +0000 UTC m=+0.248338309 container create 3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.497 247403 DEBUG nova.storage.rbd_utils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] rbd image e528a53a-8ada-4966-912c-1f15ed61e649_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:09:37 np0005603621 nova_compute[247399]: 2026-01-31 08:09:37.501 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/disk.config e528a53a-8ada-4966-912c-1f15ed61e649_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 108 MiB data, 702 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 69 op/s
Jan 31 03:09:37 np0005603621 systemd[1]: Started libpod-conmon-3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892.scope.
Jan 31 03:09:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400cbf9350a93ea3e4735c63ae7f6462109009d6fdbaaf7841fdf61377b34858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400cbf9350a93ea3e4735c63ae7f6462109009d6fdbaaf7841fdf61377b34858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400cbf9350a93ea3e4735c63ae7f6462109009d6fdbaaf7841fdf61377b34858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400cbf9350a93ea3e4735c63ae7f6462109009d6fdbaaf7841fdf61377b34858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:37 np0005603621 podman[298023]: 2026-01-31 08:09:37.962311162 +0000 UTC m=+0.752225260 container init 3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:09:37 np0005603621 podman[298023]: 2026-01-31 08:09:37.971319845 +0000 UTC m=+0.761233913 container start 3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:09:38 np0005603621 podman[298023]: 2026-01-31 08:09:38.071472468 +0000 UTC m=+0.861386586 container attach 3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:38 np0005603621 nova_compute[247399]: 2026-01-31 08:09:38.446 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:09:38
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms']
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:09:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:38 np0005603621 nova_compute[247399]: 2026-01-31 08:09:38.652 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]: {
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:    "0": [
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:        {
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "devices": [
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "/dev/loop3"
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            ],
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "lv_name": "ceph_lv0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "lv_size": "7511998464",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "name": "ceph_lv0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "tags": {
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.cluster_name": "ceph",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.crush_device_class": "",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.encrypted": "0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.osd_id": "0",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.type": "block",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:                "ceph.vdo": "0"
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            },
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "type": "block",
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:            "vg_name": "ceph_vg0"
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:        }
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]:    ]
Jan 31 03:09:38 np0005603621 vibrant_goldstine[298078]: }
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:09:38 np0005603621 systemd[1]: libpod-3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892.scope: Deactivated successfully.
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:09:38 np0005603621 podman[298023]: 2026-01-31 08:09:38.74662302 +0000 UTC m=+1.536537098 container died 3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:09:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:09:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:09:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:09:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:39.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Jan 31 03:09:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-400cbf9350a93ea3e4735c63ae7f6462109009d6fdbaaf7841fdf61377b34858-merged.mount: Deactivated successfully.
Jan 31 03:09:40 np0005603621 podman[298023]: 2026-01-31 08:09:40.126032931 +0000 UTC m=+2.915946999 container remove 3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:09:40 np0005603621 systemd[1]: libpod-conmon-3d8f291a72b94adc4685341bdb37ad300a1367b885067ef7797089a2896b2892.scope: Deactivated successfully.
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.415 247403 DEBUG oslo_concurrency.processutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/disk.config e528a53a-8ada-4966-912c-1f15ed61e649_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.914s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.417 247403 INFO nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Deleting local config drive /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/disk.config because it was imported into RBD.#033[00m
Jan 31 03:09:40 np0005603621 kernel: tap22e87c0f-c5: entered promiscuous mode
Jan 31 03:09:40 np0005603621 NetworkManager[49013]: <info>  [1769846980.4804] manager: (tap22e87c0f-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/114)
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:40Z|00228|binding|INFO|Claiming lport 22e87c0f-c5ce-4700-94a2-0d70f1b9655f for this chassis.
Jan 31 03:09:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:40Z|00229|binding|INFO|22e87c0f-c5ce-4700-94a2-0d70f1b9655f: Claiming fa:16:3e:1c:4e:08 10.100.0.7
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.488 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:40Z|00230|binding|INFO|Setting lport 22e87c0f-c5ce-4700-94a2-0d70f1b9655f ovn-installed in OVS
Jan 31 03:09:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:40Z|00231|binding|INFO|Setting lport 22e87c0f-c5ce-4700-94a2-0d70f1b9655f up in Southbound
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.497 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.497 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.494 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:4e:08 10.100.0.7'], port_security=['fa:16:3e:1c:4e:08 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e528a53a-8ada-4966-912c-1f15ed61e649', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3469c253459e40e39dcf5bcb6a32008f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e42c06e8-2644-4a21-adfb-06ef74de77bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298bbe2a-1faa-4c77-b3c3-4633e58f5921, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=22e87c0f-c5ce-4700-94a2-0d70f1b9655f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.496 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f in datapath c1c6810e-ec8f-43f3-a3c6-22606d9416b6 bound to our chassis#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.497 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1c6810e-ec8f-43f3-a3c6-22606d9416b6#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.507 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9aca3b86-4a7f-4d06-9435-c5a05b7fa430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.508 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1c6810e-e1 in ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:09:40 np0005603621 systemd-udevd[298218]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.512 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1c6810e-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.513 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6569e0-5b37-4400-9816-3f424b59099e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.514 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c58ee08e-562b-4aca-955a-32c809ea4760]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.522 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[5953f8a5-6182-486a-a2f1-9ba0cde1536f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 systemd-machined[212769]: New machine qemu-34-instance-0000004f.
Jan 31 03:09:40 np0005603621 NetworkManager[49013]: <info>  [1769846980.5289] device (tap22e87c0f-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:09:40 np0005603621 NetworkManager[49013]: <info>  [1769846980.5298] device (tap22e87c0f-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:09:40 np0005603621 systemd[1]: Started Virtual Machine qemu-34-instance-0000004f.
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.536 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37bb6a54-841f-4e2a-b501-6f07c1c9a00b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.559 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[37fcd9e2-0056-4221-b36f-6eda906d0792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 NetworkManager[49013]: <info>  [1769846980.5663] manager: (tapc1c6810e-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/115)
Jan 31 03:09:40 np0005603621 systemd-udevd[298225]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.564 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd3a0c1-c068-4040-94c4-7f0d1258d2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.609 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3e4590-d1ad-4aaf-8654-d831904b40bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.614 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a1203908-da8a-4a42-a1bf-9b3ff4f62c5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 NetworkManager[49013]: <info>  [1769846980.6363] device (tapc1c6810e-e0): carrier: link connected
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.639 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2d886c9c-9557-4f7e-9710-b1e4a561d087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.656 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6497cf02-b9cd-4fff-93d5-696f6f38061f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1c6810e-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628615, 'reachable_time': 41435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298279, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.668 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[90de7e49-f9ef-41bf-80ca-5a4f789e1361]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:9781'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628615, 'tstamp': 628615}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298287, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.688 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f843cb9d-6e2c-4027-82eb-2f5a20c1791a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1c6810e-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:97:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628615, 'reachable_time': 41435, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298289, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.716 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c09f1779-437d-45bc-a259-8ec66e4be6ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.778 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[99c628e4-bda6-4203-b004-35961fd90afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.780 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1c6810e-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.780 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.781 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1c6810e-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:40 np0005603621 kernel: tapc1c6810e-e0: entered promiscuous mode
Jan 31 03:09:40 np0005603621 NetworkManager[49013]: <info>  [1769846980.7831] manager: (tapc1c6810e-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.784 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.785 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1c6810e-e0, col_values=(('external_ids', {'iface-id': '937542c1-ab1e-4312-ab3a-ee4483fcdf7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:40Z|00232|binding|INFO|Releasing lport 937542c1-ab1e-4312-ab3a-ee4483fcdf7b from this chassis (sb_readonly=0)
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.788 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.789 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0f147032-23ea-4548-86a4-393b1fa45fd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.790 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-c1c6810e-ec8f-43f3-a3c6-22606d9416b6
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.pid.haproxy
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID c1c6810e-ec8f-43f3-a3c6-22606d9416b6
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:09:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:40.791 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'env', 'PROCESS_TAG=haproxy-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1c6810e-ec8f-43f3-a3c6-22606d9416b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.794 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:40 np0005603621 podman[298293]: 2026-01-31 08:09:40.728003448 +0000 UTC m=+0.024810542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.915 247403 DEBUG nova.compute.manager [req-10cc6100-39c5-480a-9d97-e514baec70ae req-766d0c2e-7947-4b6f-876d-03edcc516985 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.916 247403 DEBUG oslo_concurrency.lockutils [req-10cc6100-39c5-480a-9d97-e514baec70ae req-766d0c2e-7947-4b6f-876d-03edcc516985 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.916 247403 DEBUG oslo_concurrency.lockutils [req-10cc6100-39c5-480a-9d97-e514baec70ae req-766d0c2e-7947-4b6f-876d-03edcc516985 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.916 247403 DEBUG oslo_concurrency.lockutils [req-10cc6100-39c5-480a-9d97-e514baec70ae req-766d0c2e-7947-4b6f-876d-03edcc516985 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:40 np0005603621 nova_compute[247399]: 2026-01-31 08:09:40.916 247403 DEBUG nova.compute.manager [req-10cc6100-39c5-480a-9d97-e514baec70ae req-766d0c2e-7947-4b6f-876d-03edcc516985 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Processing event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:09:40 np0005603621 podman[298293]: 2026-01-31 08:09:40.975300732 +0000 UTC m=+0.272107806 container create 781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:09:41 np0005603621 systemd[1]: Started libpod-conmon-781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039.scope.
Jan 31 03:09:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:41.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:41 np0005603621 podman[298293]: 2026-01-31 08:09:41.535494385 +0000 UTC m=+0.832301489 container init 781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 03:09:41 np0005603621 podman[298293]: 2026-01-31 08:09:41.544097626 +0000 UTC m=+0.840904700 container start 781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:41 np0005603621 strange_shirley[298403]: 167 167
Jan 31 03:09:41 np0005603621 systemd[1]: libpod-781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039.scope: Deactivated successfully.
Jan 31 03:09:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 88 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.7 MiB/s wr, 93 op/s
Jan 31 03:09:41 np0005603621 podman[298293]: 2026-01-31 08:09:41.637007871 +0000 UTC m=+0.933814935 container attach 781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:41 np0005603621 podman[298293]: 2026-01-31 08:09:41.641574674 +0000 UTC m=+0.938381778 container died 781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.771 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846981.770218, e528a53a-8ada-4966-912c-1f15ed61e649 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.773 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] VM Started (Lifecycle Event)#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.777 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.782 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.786 247403 INFO nova.virt.libvirt.driver [-] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance spawned successfully.#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.786 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.816 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.825 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.830 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.831 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.832 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.832 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.833 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.833 247403 DEBUG nova.virt.libvirt.driver [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.866 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.866 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846981.772499, e528a53a-8ada-4966-912c-1f15ed61e649 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.867 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.901 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.905 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769846981.7812088, e528a53a-8ada-4966-912c-1f15ed61e649 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.905 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.918 247403 INFO nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Took 11.88 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.918 247403 DEBUG nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.931 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.935 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.961 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:09:41 np0005603621 nova_compute[247399]: 2026-01-31 08:09:41.993 247403 INFO nova.compute.manager [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Took 13.14 seconds to build instance.#033[00m
Jan 31 03:09:42 np0005603621 nova_compute[247399]: 2026-01-31 08:09:42.010 247403 DEBUG oslo_concurrency.lockutils [None req-0b0d0d5c-8f57-4d8e-abe5-e5b77b7a93a4 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ed072000183e14fba2529caf4f18090d7e22a55d9229e429500c73c24e9f4c9b-merged.mount: Deactivated successfully.
Jan 31 03:09:42 np0005603621 podman[298293]: 2026-01-31 08:09:42.741052964 +0000 UTC m=+2.037860078 container remove 781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_shirley, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:09:42 np0005603621 systemd[1]: libpod-conmon-781d9f9acce04aa8916d02e55ced42243526590353dc7803fca685e813432039.scope: Deactivated successfully.
Jan 31 03:09:42 np0005603621 podman[298411]: 2026-01-31 08:09:42.792391879 +0000 UTC m=+1.563998771 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.107 247403 DEBUG nova.compute.manager [req-961493d5-c0e9-44d2-9331-adf0ef3b5dbd req-aae1bd1f-deaa-4544-8faf-6990aeb1208e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.108 247403 DEBUG oslo_concurrency.lockutils [req-961493d5-c0e9-44d2-9331-adf0ef3b5dbd req-aae1bd1f-deaa-4544-8faf-6990aeb1208e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.108 247403 DEBUG oslo_concurrency.lockutils [req-961493d5-c0e9-44d2-9331-adf0ef3b5dbd req-aae1bd1f-deaa-4544-8faf-6990aeb1208e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.108 247403 DEBUG oslo_concurrency.lockutils [req-961493d5-c0e9-44d2-9331-adf0ef3b5dbd req-aae1bd1f-deaa-4544-8faf-6990aeb1208e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.108 247403 DEBUG nova.compute.manager [req-961493d5-c0e9-44d2-9331-adf0ef3b5dbd req-aae1bd1f-deaa-4544-8faf-6990aeb1208e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] No waiting events found dispatching network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.109 247403 WARNING nova.compute.manager [req-961493d5-c0e9-44d2-9331-adf0ef3b5dbd req-aae1bd1f-deaa-4544-8faf-6990aeb1208e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received unexpected event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f for instance with vm_state active and task_state None.#033[00m
Jan 31 03:09:43 np0005603621 podman[298472]: 2026-01-31 08:09:43.125619888 +0000 UTC m=+0.246679685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:09:43 np0005603621 podman[298411]: 2026-01-31 08:09:43.291308954 +0000 UTC m=+2.062915846 container create 944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:09:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:43.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:43 np0005603621 nova_compute[247399]: 2026-01-31 08:09:43.448 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:43 np0005603621 systemd[1]: Started libpod-conmon-944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c.scope.
Jan 31 03:09:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae97e16a9d03a983a0ab6cff541208c8981d853d07a9ebbe50b297c2f05e8b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 105 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.5 MiB/s wr, 114 op/s
Jan 31 03:09:43 np0005603621 podman[298472]: 2026-01-31 08:09:43.645821833 +0000 UTC m=+0.766881590 container create aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:09:43 np0005603621 podman[298411]: 2026-01-31 08:09:43.711327425 +0000 UTC m=+2.482934317 container init 944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:09:43 np0005603621 podman[298411]: 2026-01-31 08:09:43.72005472 +0000 UTC m=+2.491661572 container start 944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:09:43 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [NOTICE]   (298494) : New worker (298496) forked
Jan 31 03:09:43 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [NOTICE]   (298494) : Loading success.
Jan 31 03:09:44 np0005603621 systemd[1]: Started libpod-conmon-aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc.scope.
Jan 31 03:09:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9b19cd6256e31ba5efe71fd4b6a79120feb243b17073b430c7dbb467f586f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9b19cd6256e31ba5efe71fd4b6a79120feb243b17073b430c7dbb467f586f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9b19cd6256e31ba5efe71fd4b6a79120feb243b17073b430c7dbb467f586f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0f9b19cd6256e31ba5efe71fd4b6a79120feb243b17073b430c7dbb467f586f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:09:44 np0005603621 podman[298472]: 2026-01-31 08:09:44.305113896 +0000 UTC m=+1.426173683 container init aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:09:44 np0005603621 podman[298472]: 2026-01-31 08:09:44.314901204 +0000 UTC m=+1.435961001 container start aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:09:44 np0005603621 podman[298472]: 2026-01-31 08:09:44.421906012 +0000 UTC m=+1.542965819 container attach aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]: {
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:        "osd_id": 0,
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:        "type": "bluestore"
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]:    }
Jan 31 03:09:45 np0005603621 friendly_engelbart[298507]: }
Jan 31 03:09:45 np0005603621 systemd[1]: libpod-aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc.scope: Deactivated successfully.
Jan 31 03:09:45 np0005603621 podman[298528]: 2026-01-31 08:09:45.233287743 +0000 UTC m=+0.032114813 container died aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:09:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:45.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 134 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.1 MiB/s wr, 127 op/s
Jan 31 03:09:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d0f9b19cd6256e31ba5efe71fd4b6a79120feb243b17073b430c7dbb467f586f-merged.mount: Deactivated successfully.
Jan 31 03:09:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:45.773 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:09:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:45.775 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:09:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:09:45.776 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:09:45 np0005603621 nova_compute[247399]: 2026-01-31 08:09:45.778 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:46 np0005603621 podman[298528]: 2026-01-31 08:09:46.356318002 +0000 UTC m=+1.155145042 container remove aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:09:46 np0005603621 systemd[1]: libpod-conmon-aabe0927d2979a52e42b42eb5537b0ae464e9395df0b944498e466445cffc3fc.scope: Deactivated successfully.
Jan 31 03:09:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:09:46 np0005603621 nova_compute[247399]: 2026-01-31 08:09:46.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:09:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:09:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:09:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e0623307-f50a-4082-a89f-d20d3c218922 does not exist
Jan 31 03:09:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 909d5866-ee6f-4a2d-b1d7-ca9d2cdf3c70 does not exist
Jan 31 03:09:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev feb522ec-9f41-48bf-9cd4-a7d54a7a9c18 does not exist
Jan 31 03:09:47 np0005603621 nova_compute[247399]: 2026-01-31 08:09:47.377 247403 DEBUG oslo_concurrency.lockutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:09:47 np0005603621 nova_compute[247399]: 2026-01-31 08:09:47.378 247403 DEBUG oslo_concurrency.lockutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquired lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:09:47 np0005603621 nova_compute[247399]: 2026-01-31 08:09:47.378 247403 DEBUG nova.network.neutron [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:09:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:09:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:09:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:47.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 134 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 145 op/s
Jan 31 03:09:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:09:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:09:48 np0005603621 nova_compute[247399]: 2026-01-31 08:09:48.490 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:09:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8578 writes, 37K keys, 8574 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8578 writes, 8574 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1780 writes, 7925 keys, 1780 commit groups, 1.0 writes per commit group, ingest: 11.56 MB, 0.02 MB/s#012Interval WAL: 1781 writes, 1781 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     30.2      1.59              0.11        22    0.072       0      0       0.0       0.0#012  L6      1/0    9.92 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     47.3     39.4      4.81              0.43        21    0.229    113K    12K       0.0       0.0#012 Sum      1/0    9.92 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     35.6     37.1      6.40              0.54        43    0.149    113K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.9     39.5     40.3      1.47              0.14        10    0.147     33K   3120       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     47.3     39.4      4.81              0.43        21    0.229    113K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     30.2      1.59              0.11        21    0.076       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.08 MB/s read, 6.4 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 25.88 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000431 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1499,25.00 MB,8.22249%) FilterBlock(44,320.11 KB,0.102831%) IndexBlock(44,582.23 KB,0.187036%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:09:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2967855771915131 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.374 247403 DEBUG nova.network.neutron [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updating instance_info_cache with network_info: [{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:09:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.396 247403 DEBUG oslo_concurrency.lockutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Releasing lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:09:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:49.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.501 247403 DEBUG nova.virt.libvirt.driver [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.501 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Creating file /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/d0f36505d6674723b376ae124b3c4bd9.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.502 247403 DEBUG oslo_concurrency.processutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/d0f36505d6674723b376ae124b3c4bd9.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 134 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.963 247403 DEBUG oslo_concurrency.processutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/d0f36505d6674723b376ae124b3c4bd9.tmp" returned: 1 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.964 247403 DEBUG oslo_concurrency.processutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649/d0f36505d6674723b376ae124b3c4bd9.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.965 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Creating directory /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 03:09:49 np0005603621 nova_compute[247399]: 2026-01-31 08:09:49.965 247403 DEBUG oslo_concurrency.processutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:09:50 np0005603621 nova_compute[247399]: 2026-01-31 08:09:50.211 247403 DEBUG oslo_concurrency.processutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/e528a53a-8ada-4966-912c-1f15ed61e649" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:09:50 np0005603621 nova_compute[247399]: 2026-01-31 08:09:50.217 247403 DEBUG nova.virt.libvirt.driver [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:09:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 134 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 31 03:09:51 np0005603621 nova_compute[247399]: 2026-01-31 08:09:51.676 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:53 np0005603621 nova_compute[247399]: 2026-01-31 08:09:53.492 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 134 MiB data, 707 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:09:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:55.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:09:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:55.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:09:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 163 MiB data, 730 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 91 op/s
Jan 31 03:09:56 np0005603621 nova_compute[247399]: 2026-01-31 08:09:56.679 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:57.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 200 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 103 op/s
Jan 31 03:09:58 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:58Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:4e:08 10.100.0.7
Jan 31 03:09:58 np0005603621 ovn_controller[149152]: 2026-01-31T08:09:58Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:4e:08 10.100.0.7
Jan 31 03:09:58 np0005603621 nova_compute[247399]: 2026-01-31 08:09:58.522 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:09:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:09:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:09:59.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:09:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:09:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:09:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:09:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 206 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 117 op/s
Jan 31 03:10:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 03:10:00 np0005603621 nova_compute[247399]: 2026-01-31 08:10:00.272 247403 DEBUG nova.virt.libvirt.driver [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:10:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 03:10:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:01.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 206 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.8 MiB/s wr, 117 op/s
Jan 31 03:10:01 np0005603621 nova_compute[247399]: 2026-01-31 08:10:01.682 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:03.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:03.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:03 np0005603621 nova_compute[247399]: 2026-01-31 08:10:03.524 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 213 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Jan 31 03:10:04 np0005603621 kernel: tap22e87c0f-c5 (unregistering): left promiscuous mode
Jan 31 03:10:04 np0005603621 NetworkManager[49013]: <info>  [1769847004.7556] device (tap22e87c0f-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:10:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:04Z|00233|binding|INFO|Releasing lport 22e87c0f-c5ce-4700-94a2-0d70f1b9655f from this chassis (sb_readonly=0)
Jan 31 03:10:04 np0005603621 nova_compute[247399]: 2026-01-31 08:10:04.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:04Z|00234|binding|INFO|Setting lport 22e87c0f-c5ce-4700-94a2-0d70f1b9655f down in Southbound
Jan 31 03:10:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:04Z|00235|binding|INFO|Removing iface tap22e87c0f-c5 ovn-installed in OVS
Jan 31 03:10:04 np0005603621 nova_compute[247399]: 2026-01-31 08:10:04.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:04.769 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:4e:08 10.100.0.7'], port_security=['fa:16:3e:1c:4e:08 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e528a53a-8ada-4966-912c-1f15ed61e649', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3469c253459e40e39dcf5bcb6a32008f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e42c06e8-2644-4a21-adfb-06ef74de77bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=298bbe2a-1faa-4c77-b3c3-4633e58f5921, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=22e87c0f-c5ce-4700-94a2-0d70f1b9655f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:04.770 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f in datapath c1c6810e-ec8f-43f3-a3c6-22606d9416b6 unbound from our chassis#033[00m
Jan 31 03:10:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:04.772 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1c6810e-ec8f-43f3-a3c6-22606d9416b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:10:04 np0005603621 nova_compute[247399]: 2026-01-31 08:10:04.772 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:04.773 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[51ad4197-1a27-4095-9f31-e91d106b4294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:04.774 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 namespace which is not needed anymore#033[00m
Jan 31 03:10:04 np0005603621 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Jan 31 03:10:04 np0005603621 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000004f.scope: Consumed 13.206s CPU time.
Jan 31 03:10:04 np0005603621 systemd-machined[212769]: Machine qemu-34-instance-0000004f terminated.
Jan 31 03:10:04 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [NOTICE]   (298494) : haproxy version is 2.8.14-c23fe91
Jan 31 03:10:04 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [NOTICE]   (298494) : path to executable is /usr/sbin/haproxy
Jan 31 03:10:04 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [WARNING]  (298494) : Exiting Master process...
Jan 31 03:10:04 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [ALERT]    (298494) : Current worker (298496) exited with code 143 (Terminated)
Jan 31 03:10:04 np0005603621 neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6[298490]: [WARNING]  (298494) : All workers exited. Exiting... (0)
Jan 31 03:10:04 np0005603621 systemd[1]: libpod-944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c.scope: Deactivated successfully.
Jan 31 03:10:04 np0005603621 podman[298680]: 2026-01-31 08:10:04.906159801 +0000 UTC m=+0.043384316 container died 944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c-userdata-shm.mount: Deactivated successfully.
Jan 31 03:10:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8ae97e16a9d03a983a0ab6cff541208c8981d853d07a9ebbe50b297c2f05e8b3-merged.mount: Deactivated successfully.
Jan 31 03:10:04 np0005603621 podman[298680]: 2026-01-31 08:10:04.953812871 +0000 UTC m=+0.091037386 container cleanup 944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:10:04 np0005603621 systemd[1]: libpod-conmon-944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c.scope: Deactivated successfully.
Jan 31 03:10:04 np0005603621 nova_compute[247399]: 2026-01-31 08:10:04.987 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:04 np0005603621 nova_compute[247399]: 2026-01-31 08:10:04.990 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 podman[298714]: 2026-01-31 08:10:05.016248666 +0000 UTC m=+0.047570608 container remove 944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.020 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b006d6b6-ad43-4e2b-b6de-dd478356ca4a]: (4, ('Sat Jan 31 08:10:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 (944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c)\n944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c\nSat Jan 31 08:10:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 (944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c)\n944e9bf8e122d479cd79458d25a9f5cdeb1a8ebbab9f3db395e0ab692c307f7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.023 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1e9a5f-01dd-422f-941b-35d2fbc0b484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.024 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1c6810e-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.026 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 kernel: tapc1c6810e-e0: left promiscuous mode
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.032 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.038 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[50a38a53-f440-4e12-887d-761e0bb7bb11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.055 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbd56cc-02e3-4a25-9719-3410bb2477ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.056 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e5df1e87-d3bd-4b76-8413-2a82b03cfa9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.069 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0929b9-ce0d-4869-a0b6-f414a1e655c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628606, 'reachable_time': 39764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298742, 'error': None, 'target': 'ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.073 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1c6810e-ec8f-43f3-a3c6-22606d9416b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:10:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:05.074 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f84a4f02-ef35-493c-b44c-d2ed7a5d646e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:05 np0005603621 systemd[1]: run-netns-ovnmeta\x2dc1c6810e\x2dec8f\x2d43f3\x2da3c6\x2d22606d9416b6.mount: Deactivated successfully.
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.147 247403 DEBUG nova.compute.manager [req-81adb845-4e39-4acc-93af-d59b7b43284f req-485bbb59-0ef3-4ab1-b588-711638f96d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-vif-unplugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.148 247403 DEBUG oslo_concurrency.lockutils [req-81adb845-4e39-4acc-93af-d59b7b43284f req-485bbb59-0ef3-4ab1-b588-711638f96d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.148 247403 DEBUG oslo_concurrency.lockutils [req-81adb845-4e39-4acc-93af-d59b7b43284f req-485bbb59-0ef3-4ab1-b588-711638f96d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.149 247403 DEBUG oslo_concurrency.lockutils [req-81adb845-4e39-4acc-93af-d59b7b43284f req-485bbb59-0ef3-4ab1-b588-711638f96d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.149 247403 DEBUG nova.compute.manager [req-81adb845-4e39-4acc-93af-d59b7b43284f req-485bbb59-0ef3-4ab1-b588-711638f96d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] No waiting events found dispatching network-vif-unplugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.149 247403 WARNING nova.compute.manager [req-81adb845-4e39-4acc-93af-d59b7b43284f req-485bbb59-0ef3-4ab1-b588-711638f96d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received unexpected event network-vif-unplugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.293 247403 INFO nova.virt.libvirt.driver [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance shutdown successfully after 15 seconds.#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.299 247403 INFO nova.virt.libvirt.driver [-] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Instance destroyed successfully.#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.301 247403 DEBUG nova.virt.libvirt.vif [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1368556693',display_name='tempest-DeleteServersTestJSON-server-1368556693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1368556693',id=79,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-n1ah7sd5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vid
eo_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServersTestJSON-808715310-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:09:46Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=e528a53a-8ada-4966-912c-1f15ed61e649,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-676112264-network", "vif_mac": "fa:16:3e:1c:4e:08"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.301 247403 DEBUG nova.network.os_vif_util [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-676112264-network", "vif_mac": "fa:16:3e:1c:4e:08"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.302 247403 DEBUG nova.network.os_vif_util [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.303 247403 DEBUG os_vif [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.305 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22e87c0f-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.308 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.310 247403 INFO os_vif [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5')#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.313 247403 DEBUG nova.virt.libvirt.driver [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.314 247403 DEBUG nova.virt.libvirt.driver [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:10:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:05.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:05.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:05 np0005603621 nova_compute[247399]: 2026-01-31 08:10:05.479 247403 DEBUG neutronclient.v2_0.client [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:10:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 214 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 207 op/s
Jan 31 03:10:06 np0005603621 podman[298744]: 2026-01-31 08:10:06.511031658 +0000 UTC m=+0.062719325 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.274 247403 DEBUG oslo_concurrency.lockutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.274 247403 DEBUG oslo_concurrency.lockutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.274 247403 DEBUG oslo_concurrency.lockutils [None req-024ec3a6-b778-4b7d-80fd-39f80b5c3bb6 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.279 247403 DEBUG nova.compute.manager [req-2cdb5eee-b294-4c78-a9e9-1817030350ba req-17858fe0-a46c-4afd-b52e-728e4af4acd1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.280 247403 DEBUG oslo_concurrency.lockutils [req-2cdb5eee-b294-4c78-a9e9-1817030350ba req-17858fe0-a46c-4afd-b52e-728e4af4acd1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.280 247403 DEBUG oslo_concurrency.lockutils [req-2cdb5eee-b294-4c78-a9e9-1817030350ba req-17858fe0-a46c-4afd-b52e-728e4af4acd1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.280 247403 DEBUG oslo_concurrency.lockutils [req-2cdb5eee-b294-4c78-a9e9-1817030350ba req-17858fe0-a46c-4afd-b52e-728e4af4acd1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.280 247403 DEBUG nova.compute.manager [req-2cdb5eee-b294-4c78-a9e9-1817030350ba req-17858fe0-a46c-4afd-b52e-728e4af4acd1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] No waiting events found dispatching network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:07 np0005603621 nova_compute[247399]: 2026-01-31 08:10:07.281 247403 WARNING nova.compute.manager [req-2cdb5eee-b294-4c78-a9e9-1817030350ba req-17858fe0-a46c-4afd-b52e-728e4af4acd1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received unexpected event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:10:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:07.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:07.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:07 np0005603621 podman[298764]: 2026-01-31 08:10:07.518471199 +0000 UTC m=+0.075767825 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:10:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 214 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Jan 31 03:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:08 np0005603621 nova_compute[247399]: 2026-01-31 08:10:08.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.162 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.163 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.229 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.370 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.371 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.378 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.378 247403 INFO nova.compute.claims [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:10:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:09.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:09.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.442 247403 DEBUG nova.compute.manager [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-changed-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.443 247403 DEBUG nova.compute.manager [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Refreshing instance network info cache due to event network-changed-22e87c0f-c5ce-4700-94a2-0d70f1b9655f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.443 247403 DEBUG oslo_concurrency.lockutils [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.444 247403 DEBUG oslo_concurrency.lockutils [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.444 247403 DEBUG nova.network.neutron [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Refreshing network info cache for port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:10:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 218 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 692 KiB/s wr, 191 op/s
Jan 31 03:10:09 np0005603621 nova_compute[247399]: 2026-01-31 08:10:09.730 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/631949995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.169 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.177 247403 DEBUG nova.compute.provider_tree [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.201 247403 DEBUG nova.scheduler.client.report [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.268 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.269 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.307 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.334 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.334 247403 DEBUG nova.network.neutron [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.369 247403 INFO nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.412 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.547 247403 DEBUG nova.policy [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c85c4974d1024573aedad360db16c274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a328413b1f5d4bdb90716c256341492f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.580 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.583 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.584 247403 INFO nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Creating image(s)#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.624 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.658 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.688 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.692 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.762 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.763 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.765 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.765 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.794 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:10 np0005603621 nova_compute[247399]: 2026-01-31 08:10:10.798 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b0764374-5236-4f05-95f2-67dbf334ca3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 31 03:10:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 31 03:10:11 np0005603621 nova_compute[247399]: 2026-01-31 08:10:11.277 247403 DEBUG nova.network.neutron [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updated VIF entry in instance network info cache for port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:10:11 np0005603621 nova_compute[247399]: 2026-01-31 08:10:11.277 247403 DEBUG nova.network.neutron [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updating instance_info_cache with network_info: [{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:11 np0005603621 nova_compute[247399]: 2026-01-31 08:10:11.335 247403 DEBUG oslo_concurrency.lockutils [req-4695ad8b-1c4b-42fe-8131-0aa8bee329e7 req-00752347-91cd-44a1-8951-a678f1c4f1c2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:11.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:11.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 218 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 692 KiB/s wr, 163 op/s
Jan 31 03:10:11 np0005603621 nova_compute[247399]: 2026-01-31 08:10:11.744 247403 DEBUG nova.network.neutron [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Successfully created port: 31ff06b2-fb86-4823-aadc-993bb07532e3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.612 247403 DEBUG nova.network.neutron [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Successfully updated port: 31ff06b2-fb86-4823-aadc-993bb07532e3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.630 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.631 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquired lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.631 247403 DEBUG nova.network.neutron [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.746 247403 DEBUG nova.compute.manager [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-changed-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.746 247403 DEBUG nova.compute.manager [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Refreshing instance network info cache due to event network-changed-31ff06b2-fb86-4823-aadc-993bb07532e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.747 247403 DEBUG oslo_concurrency.lockutils [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:12 np0005603621 nova_compute[247399]: 2026-01-31 08:10:12.830 247403 DEBUG nova.network.neutron [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.205 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b0764374-5236-4f05-95f2-67dbf334ca3a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.294 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] resizing rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:10:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:13.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:13.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.429 247403 DEBUG nova.objects.instance [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lazy-loading 'migration_context' on Instance uuid b0764374-5236-4f05-95f2-67dbf334ca3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.443 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.443 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Ensure instance console log exists: /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.443 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.444 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.444 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 242 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 165 op/s
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.705 247403 DEBUG nova.network.neutron [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updating instance_info_cache with network_info: [{"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.726 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Releasing lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.726 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Instance network_info: |[{"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.727 247403 DEBUG oslo_concurrency.lockutils [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.727 247403 DEBUG nova.network.neutron [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Refreshing network info cache for port 31ff06b2-fb86-4823-aadc-993bb07532e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.733 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Start _get_guest_xml network_info=[{"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.739 247403 WARNING nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.746 247403 DEBUG nova.virt.libvirt.host [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.747 247403 DEBUG nova.virt.libvirt.host [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.758 247403 DEBUG nova.virt.libvirt.host [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.759 247403 DEBUG nova.virt.libvirt.host [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.761 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.762 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.763 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.764 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.764 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.764 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.765 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.765 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.765 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.765 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.766 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.766 247403 DEBUG nova.virt.hardware [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:10:13 np0005603621 nova_compute[247399]: 2026-01-31 08:10:13.770 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941773553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.247 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.275 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.280 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1264328145' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1264328145' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:10:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229559504' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.737 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.740 247403 DEBUG nova.virt.libvirt.vif [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=82,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuwfv4KVCUaqYhLtpEW0AzWZwj8k1Y6gpEu3aWg+fnncJvXzkgmuBOtnZ6rrUXcWkdoogcyPBM2d9saPUxw7UEkRf1nTWKktAunyJRssYwxrxM6+fzliYqi/dvZo/Pz5A==',key_name='tempest-keypair-1486137479',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a328413b1f5d4bdb90716c256341492f',ramdisk_id='',reservation_id='r-qaan87is',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-915733051',owner_user_name='tempest-ServersV294TestFqdnHostnames-915733051-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:10:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c85c4974d1024573aedad360db16c274',uuid=b0764374-5236-4f05-95f2-67dbf334ca3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.741 247403 DEBUG nova.network.os_vif_util [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Converting VIF {"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.743 247403 DEBUG nova.network.os_vif_util [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.745 247403 DEBUG nova.objects.instance [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lazy-loading 'pci_devices' on Instance uuid b0764374-5236-4f05-95f2-67dbf334ca3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.766 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <uuid>b0764374-5236-4f05-95f2-67dbf334ca3a</uuid>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <name>instance-00000052</name>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:name>guest-instance-1</nova:name>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:10:13</nova:creationTime>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:user uuid="c85c4974d1024573aedad360db16c274">tempest-ServersV294TestFqdnHostnames-915733051-project-member</nova:user>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:project uuid="a328413b1f5d4bdb90716c256341492f">tempest-ServersV294TestFqdnHostnames-915733051</nova:project>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <nova:port uuid="31ff06b2-fb86-4823-aadc-993bb07532e3">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <entry name="serial">b0764374-5236-4f05-95f2-67dbf334ca3a</entry>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <entry name="uuid">b0764374-5236-4f05-95f2-67dbf334ca3a</entry>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b0764374-5236-4f05-95f2-67dbf334ca3a_disk">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b0764374-5236-4f05-95f2-67dbf334ca3a_disk.config">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:7e:ef:44"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <target dev="tap31ff06b2-fb"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/console.log" append="off"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:10:14 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:10:14 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:10:14 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:10:14 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.769 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Preparing to wait for external event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.769 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.770 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.770 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.771 247403 DEBUG nova.virt.libvirt.vif [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=82,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuwfv4KVCUaqYhLtpEW0AzWZwj8k1Y6gpEu3aWg+fnncJvXzkgmuBOtnZ6rrUXcWkdoogcyPBM2d9saPUxw7UEkRf1nTWKktAunyJRssYwxrxM6+fzliYqi/dvZo/Pz5A==',key_name='tempest-keypair-1486137479',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a328413b1f5d4bdb90716c256341492f',ramdisk_id='',reservation_id='r-qaan87is',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-915733051',owner_user_name='tempest-ServersV294TestFqdnHostnames-915733051-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:10:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c85c4974d1024573aedad360db16c274',uuid=b0764374-5236-4f05-95f2-67dbf334ca3a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.772 247403 DEBUG nova.network.os_vif_util [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Converting VIF {"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.773 247403 DEBUG nova.network.os_vif_util [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.774 247403 DEBUG os_vif [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.775 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.776 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.776 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.781 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31ff06b2-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.782 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31ff06b2-fb, col_values=(('external_ids', {'iface-id': '31ff06b2-fb86-4823-aadc-993bb07532e3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:ef:44', 'vm-uuid': 'b0764374-5236-4f05-95f2-67dbf334ca3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:14 np0005603621 NetworkManager[49013]: <info>  [1769847014.7860] manager: (tap31ff06b2-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.789 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.792 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.793 247403 INFO os_vif [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb')#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.860 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.861 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.861 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] No VIF found with MAC fa:16:3e:7e:ef:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.862 247403 INFO nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Using config drive#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.891 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.901 247403 DEBUG nova.compute.manager [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.902 247403 DEBUG oslo_concurrency.lockutils [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.902 247403 DEBUG oslo_concurrency.lockutils [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.902 247403 DEBUG oslo_concurrency.lockutils [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.903 247403 DEBUG nova.compute.manager [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] No waiting events found dispatching network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.903 247403 WARNING nova.compute.manager [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received unexpected event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.903 247403 DEBUG nova.compute.manager [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.903 247403 DEBUG oslo_concurrency.lockutils [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.903 247403 DEBUG oslo_concurrency.lockutils [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.904 247403 DEBUG oslo_concurrency.lockutils [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.904 247403 DEBUG nova.compute.manager [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] No waiting events found dispatching network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:14 np0005603621 nova_compute[247399]: 2026-01-31 08:10:14.904 247403 WARNING nova.compute.manager [req-1e7e3caf-1eea-467d-b4cc-36ad88f31d0f req-ccd3db2c-6818-4c36-9a06-c5dfc739dbb9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Received unexpected event network-vif-plugged-22e87c0f-c5ce-4700-94a2-0d70f1b9655f for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:10:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:15.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 281 MiB data, 853 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.9 MiB/s wr, 159 op/s
Jan 31 03:10:16 np0005603621 nova_compute[247399]: 2026-01-31 08:10:16.845 247403 INFO nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Creating config drive at /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/disk.config#033[00m
Jan 31 03:10:16 np0005603621 nova_compute[247399]: 2026-01-31 08:10:16.853 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxoq7wo2_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:16 np0005603621 nova_compute[247399]: 2026-01-31 08:10:16.984 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxoq7wo2_" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.026 247403 DEBUG nova.storage.rbd_utils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] rbd image b0764374-5236-4f05-95f2-67dbf334ca3a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.030 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/disk.config b0764374-5236-4f05-95f2-67dbf334ca3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.226 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:10:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:17.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:17.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 291 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 5.3 MiB/s wr, 170 op/s
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.771 247403 DEBUG oslo_concurrency.processutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/disk.config b0764374-5236-4f05-95f2-67dbf334ca3a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.772 247403 INFO nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Deleting local config drive /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a/disk.config because it was imported into RBD.#033[00m
Jan 31 03:10:17 np0005603621 kernel: tap31ff06b2-fb: entered promiscuous mode
Jan 31 03:10:17 np0005603621 NetworkManager[49013]: <info>  [1769847017.8443] manager: (tap31ff06b2-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Jan 31 03:10:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:17Z|00236|binding|INFO|Claiming lport 31ff06b2-fb86-4823-aadc-993bb07532e3 for this chassis.
Jan 31 03:10:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:17Z|00237|binding|INFO|31ff06b2-fb86-4823-aadc-993bb07532e3: Claiming fa:16:3e:7e:ef:44 10.100.0.5
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.847 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.850 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.872 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:ef:44 10.100.0.5'], port_security=['fa:16:3e:7e:ef:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b0764374-5236-4f05-95f2-67dbf334ca3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a328413b1f5d4bdb90716c256341492f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b2ea5e63-4091-4468-82e2-6e6bb302064e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02b05f34-fa14-4a03-b7cc-c551b074d28d, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=31ff06b2-fb86-4823-aadc-993bb07532e3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:17 np0005603621 systemd-machined[212769]: New machine qemu-35-instance-00000052.
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.873 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 31ff06b2-fb86-4823-aadc-993bb07532e3 in datapath 17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 bound to our chassis#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.875 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 17abb1a3-e12b-4eb4-9ed5-d259aff06ce5#033[00m
Jan 31 03:10:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:17Z|00238|binding|INFO|Setting lport 31ff06b2-fb86-4823-aadc-993bb07532e3 ovn-installed in OVS
Jan 31 03:10:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:17Z|00239|binding|INFO|Setting lport 31ff06b2-fb86-4823-aadc-993bb07532e3 up in Southbound
Jan 31 03:10:17 np0005603621 nova_compute[247399]: 2026-01-31 08:10:17.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.881 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4439ec-d7fd-498f-b667-5a4cb11a2816]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.882 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap17abb1a3-e1 in ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.884 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap17abb1a3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.884 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[51565c7d-3378-4214-a075-4c64df955db3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.885 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ee803299-767c-4925-af48-2d3fc5bdbb34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 systemd[1]: Started Virtual Machine qemu-35-instance-00000052.
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.893 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[00d86c87-e559-47a5-b2fa-a934993059ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 systemd-udevd[299123]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:10:17 np0005603621 NetworkManager[49013]: <info>  [1769847017.9065] device (tap31ff06b2-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:10:17 np0005603621 NetworkManager[49013]: <info>  [1769847017.9073] device (tap31ff06b2-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.919 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c1100a-2669-4b80-b251-e37e93a64e54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.950 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ab34b33c-ff9f-4033-ad3f-52615e9716a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 NetworkManager[49013]: <info>  [1769847017.9574] manager: (tap17abb1a3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.956 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0844f9e0-4fe7-459c-a2f8-3c3a80f1f4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.987 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[953999dc-1d5a-4636-af4d-899d32a711cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:17.990 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c94ea9c8-3ffc-49f5-bcc0-fd2adb401df0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 NetworkManager[49013]: <info>  [1769847018.0105] device (tap17abb1a3-e0): carrier: link connected
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.014 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d1eb61-910f-4bed-838c-79100be9cca0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.033 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3105bcf6-0b99-4cea-832b-40c90361a0d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17abb1a3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:61:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632352, 'reachable_time': 44170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299154, 'error': None, 'target': 'ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.046 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fa190b26-a0cf-4d3d-91cb-a3b4fef3abc7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe61:615b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632352, 'tstamp': 632352}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299155, 'error': None, 'target': 'ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.064 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8309b774-e202-49ac-a013-57795121068a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17abb1a3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:61:61:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632352, 'reachable_time': 44170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299156, 'error': None, 'target': 'ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.096 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[899a28fb-27ec-4caf-a513-2c4098e519f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.115 247403 DEBUG nova.network.neutron [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updated VIF entry in instance network info cache for port 31ff06b2-fb86-4823-aadc-993bb07532e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.116 247403 DEBUG nova.network.neutron [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updating instance_info_cache with network_info: [{"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.135 247403 DEBUG oslo_concurrency.lockutils [req-5f80a7de-c76a-41ee-b299-e4c2f326b481 req-a94b4423-373c-4c36-936f-a39d095eeaba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.160 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[96f21185-3820-4143-84b8-9c24becec8d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.163 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17abb1a3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.164 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.165 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17abb1a3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:18 np0005603621 NetworkManager[49013]: <info>  [1769847018.1685] manager: (tap17abb1a3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Jan 31 03:10:18 np0005603621 kernel: tap17abb1a3-e0: entered promiscuous mode
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.171 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17abb1a3-e0, col_values=(('external_ids', {'iface-id': 'd87b5523-2aff-4114-b005-d26dc8dfd909'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:18Z|00240|binding|INFO|Releasing lport d87b5523-2aff-4114-b005-d26dc8dfd909 from this chassis (sb_readonly=0)
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.173 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.174 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17abb1a3-e12b-4eb4-9ed5-d259aff06ce5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17abb1a3-e12b-4eb4-9ed5-d259aff06ce5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.175 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd13bb93-45bb-420b-bfaa-71627ba84f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.176 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/17abb1a3-e12b-4eb4-9ed5-d259aff06ce5.pid.haproxy
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 17abb1a3-e12b-4eb4-9ed5-d259aff06ce5
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:10:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:18.178 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'env', 'PROCESS_TAG=haproxy-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17abb1a3-e12b-4eb4-9ed5-d259aff06ce5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.178 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.234 247403 DEBUG nova.compute.manager [req-0c225548-3c73-46d6-8481-eb50dec18cea req-f4a7aa35-7ea6-4c08-aa0d-57ff76bdbaf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.235 247403 DEBUG oslo_concurrency.lockutils [req-0c225548-3c73-46d6-8481-eb50dec18cea req-f4a7aa35-7ea6-4c08-aa0d-57ff76bdbaf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.235 247403 DEBUG oslo_concurrency.lockutils [req-0c225548-3c73-46d6-8481-eb50dec18cea req-f4a7aa35-7ea6-4c08-aa0d-57ff76bdbaf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.236 247403 DEBUG oslo_concurrency.lockutils [req-0c225548-3c73-46d6-8481-eb50dec18cea req-f4a7aa35-7ea6-4c08-aa0d-57ff76bdbaf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.236 247403 DEBUG nova.compute.manager [req-0c225548-3c73-46d6-8481-eb50dec18cea req-f4a7aa35-7ea6-4c08-aa0d-57ff76bdbaf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Processing event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.506 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.507 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847018.5053358, b0764374-5236-4f05-95f2-67dbf334ca3a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.507 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.512 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.517 247403 INFO nova.virt.libvirt.driver [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Instance spawned successfully.#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.518 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.530 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.534 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.540 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.545 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.546 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.546 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.547 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.547 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.548 247403 DEBUG nova.virt.libvirt.driver [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.558 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.559 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847018.509241, b0764374-5236-4f05-95f2-67dbf334ca3a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.559 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.579 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.583 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847018.5119395, b0764374-5236-4f05-95f2-67dbf334ca3a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.583 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:10:18 np0005603621 podman[299227]: 2026-01-31 08:10:18.493346479 +0000 UTC m=+0.022711326 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.602 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.606 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.609 247403 INFO nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Took 8.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.609 247403 DEBUG nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:18 np0005603621 podman[299227]: 2026-01-31 08:10:18.618248991 +0000 UTC m=+0.147613848 container create 83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.632 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:10:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.676 247403 INFO nova.compute.manager [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Took 9.33 seconds to build instance.#033[00m
Jan 31 03:10:18 np0005603621 systemd[1]: Started libpod-conmon-83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4.scope.
Jan 31 03:10:18 np0005603621 nova_compute[247399]: 2026-01-31 08:10:18.691 247403 DEBUG oslo_concurrency.lockutils [None req-2286cb27-2583-4ccc-ad10-df9c472f2f18 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e1e324e5531a065bbf179e98d4b4f00b4c90f4889dac99cd0c2ee49f165984d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:18 np0005603621 podman[299227]: 2026-01-31 08:10:18.783438711 +0000 UTC m=+0.312803558 container init 83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:10:18 np0005603621 podman[299227]: 2026-01-31 08:10:18.787839319 +0000 UTC m=+0.317204156 container start 83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:18 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [NOTICE]   (299247) : New worker (299249) forked
Jan 31 03:10:18 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [NOTICE]   (299247) : Loading success.
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.199 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "e528a53a-8ada-4966-912c-1f15ed61e649" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.200 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.200 247403 DEBUG nova.compute.manager [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:10:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:19.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 326 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.7 MiB/s wr, 282 op/s
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.941 247403 DEBUG neutronclient.v2_0.client [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 22e87c0f-c5ce-4700-94a2-0d70f1b9655f for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.942 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.942 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquired lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.942 247403 DEBUG nova.network.neutron [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:10:19 np0005603621 nova_compute[247399]: 2026-01-31 08:10:19.943 247403 DEBUG nova.objects.instance [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'info_cache' on Instance uuid e528a53a-8ada-4966-912c-1f15ed61e649 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.004 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847005.0032187, e528a53a-8ada-4966-912c-1f15ed61e649 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.004 247403 INFO nova.compute.manager [-] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.024 247403 DEBUG nova.compute.manager [None req-d4df1db2-44a3-4cc5-bb32-7a67a9918fff - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.027 247403 DEBUG nova.compute.manager [None req-d4df1db2-44a3-4cc5-bb32-7a67a9918fff - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.046 247403 INFO nova.compute.manager [None req-d4df1db2-44a3-4cc5-bb32-7a67a9918fff - - - - - -] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.355 247403 DEBUG nova.compute.manager [req-df4fa87c-1116-4000-89b6-a583039c1f0f req-68e37a57-586c-4ce7-9dee-876a4af840ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.355 247403 DEBUG oslo_concurrency.lockutils [req-df4fa87c-1116-4000-89b6-a583039c1f0f req-68e37a57-586c-4ce7-9dee-876a4af840ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.356 247403 DEBUG oslo_concurrency.lockutils [req-df4fa87c-1116-4000-89b6-a583039c1f0f req-68e37a57-586c-4ce7-9dee-876a4af840ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.356 247403 DEBUG oslo_concurrency.lockutils [req-df4fa87c-1116-4000-89b6-a583039c1f0f req-68e37a57-586c-4ce7-9dee-876a4af840ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.357 247403 DEBUG nova.compute.manager [req-df4fa87c-1116-4000-89b6-a583039c1f0f req-68e37a57-586c-4ce7-9dee-876a4af840ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] No waiting events found dispatching network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.357 247403 WARNING nova.compute.manager [req-df4fa87c-1116-4000-89b6-a583039c1f0f req-68e37a57-586c-4ce7-9dee-876a4af840ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received unexpected event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:10:20 np0005603621 NetworkManager[49013]: <info>  [1769847020.9300] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Jan 31 03:10:20 np0005603621 NetworkManager[49013]: <info>  [1769847020.9312] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.929 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:20 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:20Z|00241|binding|INFO|Releasing lport d87b5523-2aff-4114-b005-d26dc8dfd909 from this chassis (sb_readonly=0)
Jan 31 03:10:20 np0005603621 nova_compute[247399]: 2026-01-31 08:10:20.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:21 np0005603621 nova_compute[247399]: 2026-01-31 08:10:21.201 247403 DEBUG nova.compute.manager [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-changed-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:21 np0005603621 nova_compute[247399]: 2026-01-31 08:10:21.203 247403 DEBUG nova.compute.manager [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Refreshing instance network info cache due to event network-changed-31ff06b2-fb86-4823-aadc-993bb07532e3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:10:21 np0005603621 nova_compute[247399]: 2026-01-31 08:10:21.203 247403 DEBUG oslo_concurrency.lockutils [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:21 np0005603621 nova_compute[247399]: 2026-01-31 08:10:21.205 247403 DEBUG oslo_concurrency.lockutils [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:21 np0005603621 nova_compute[247399]: 2026-01-31 08:10:21.205 247403 DEBUG nova.network.neutron [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Refreshing network info cache for port 31ff06b2-fb86-4823-aadc-993bb07532e3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:10:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:21.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:21.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 326 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.2 MiB/s wr, 261 op/s
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.188 247403 DEBUG nova.network.neutron [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] [instance: e528a53a-8ada-4966-912c-1f15ed61e649] Updating instance_info_cache with network_info: [{"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.333 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Releasing lock "refresh_cache-e528a53a-8ada-4966-912c-1f15ed61e649" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.334 247403 DEBUG nova.objects.instance [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lazy-loading 'migration_context' on Instance uuid e528a53a-8ada-4966-912c-1f15ed61e649 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.479 247403 DEBUG nova.storage.rbd_utils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] removing snapshot(nova-resize) on rbd image(e528a53a-8ada-4966-912c-1f15ed61e649_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:10:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 31 03:10:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 31 03:10:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.802 247403 DEBUG nova.virt.libvirt.vif [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:09:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1368556693',display_name='tempest-DeleteServersTestJSON-server-1368556693',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1368556693',id=79,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3469c253459e40e39dcf5bcb6a32008f',ramdisk_id='',reservation_id='r-n1ah7sd5',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-808715310',owner_user_name='tempest-DeleteServersTestJSON-808715310-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:19Z,user_data=None,user_id='16d731f5875748ca9b8036b2ba061042',uuid=e528a53a-8ada-4966-912c-1f15ed61e649,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.803 247403 DEBUG nova.network.os_vif_util [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converting VIF {"id": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "address": "fa:16:3e:1c:4e:08", "network": {"id": "c1c6810e-ec8f-43f3-a3c6-22606d9416b6", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-676112264-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3469c253459e40e39dcf5bcb6a32008f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22e87c0f-c5", "ovs_interfaceid": "22e87c0f-c5ce-4700-94a2-0d70f1b9655f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.804 247403 DEBUG nova.network.os_vif_util [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.804 247403 DEBUG os_vif [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.808 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22e87c0f-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.808 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.814 247403 INFO os_vif [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:4e:08,bridge_name='br-int',has_traffic_filtering=True,id=22e87c0f-c5ce-4700-94a2-0d70f1b9655f,network=Network(c1c6810e-ec8f-43f3-a3c6-22606d9416b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22e87c0f-c5')#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.815 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.815 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:22 np0005603621 nova_compute[247399]: 2026-01-31 08:10:22.943 247403 DEBUG oslo_concurrency.processutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.045 247403 DEBUG nova.network.neutron [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updated VIF entry in instance network info cache for port 31ff06b2-fb86-4823-aadc-993bb07532e3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.046 247403 DEBUG nova.network.neutron [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updating instance_info_cache with network_info: [{"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.074 247403 DEBUG oslo_concurrency.lockutils [req-31a6bc9a-3d99-4378-bcba-4eee65f02462 req-09e8478f-05d3-44f5-b180-64493d8fef68 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/710773089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.388 247403 DEBUG oslo_concurrency.processutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.394 247403 DEBUG nova.compute.provider_tree [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.426 247403 DEBUG nova.scheduler.client.report [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:10:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:23.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:23.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.476 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.570 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.642 247403 INFO nova.scheduler.client.report [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Deleted allocation for migration a83b42a4-5af9-4f21-8f46-bf5820c39bec#033[00m
Jan 31 03:10:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 326 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.9 MiB/s wr, 317 op/s
Jan 31 03:10:23 np0005603621 nova_compute[247399]: 2026-01-31 08:10:23.702 247403 DEBUG oslo_concurrency.lockutils [None req-c20ad082-5775-4f52-8ea8-ec777f79ef06 16d731f5875748ca9b8036b2ba061042 3469c253459e40e39dcf5bcb6a32008f - - default default] Lock "e528a53a-8ada-4966-912c-1f15ed61e649" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:24 np0005603621 nova_compute[247399]: 2026-01-31 08:10:24.788 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.225 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.226 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.226 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.440 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.441 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.441 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:10:25 np0005603621 nova_compute[247399]: 2026-01-31 08:10:25.441 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b0764374-5236-4f05-95f2-67dbf334ca3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:25.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 250 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.4 MiB/s wr, 288 op/s
Jan 31 03:10:26 np0005603621 nova_compute[247399]: 2026-01-31 08:10:26.870 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updating instance_info_cache with network_info: [{"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:26 np0005603621 nova_compute[247399]: 2026-01-31 08:10:26.887 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-b0764374-5236-4f05-95f2-67dbf334ca3a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:10:26 np0005603621 nova_compute[247399]: 2026-01-31 08:10:26.887 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:10:27 np0005603621 nova_compute[247399]: 2026-01-31 08:10:27.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:27.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:27.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 294 active+clean; 229 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.9 MiB/s wr, 281 op/s
Jan 31 03:10:28 np0005603621 nova_compute[247399]: 2026-01-31 08:10:28.572 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:29Z|00242|binding|INFO|Releasing lport d87b5523-2aff-4114-b005-d26dc8dfd909 from this chassis (sb_readonly=0)
Jan 31 03:10:29 np0005603621 nova_compute[247399]: 2026-01-31 08:10:29.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:29.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:29.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 167 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 39 KiB/s wr, 172 op/s
Jan 31 03:10:29 np0005603621 nova_compute[247399]: 2026-01-31 08:10:29.803 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:30 np0005603621 nova_compute[247399]: 2026-01-31 08:10:30.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:30.492 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:30.493 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:30.496 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:31.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 167 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 39 KiB/s wr, 172 op/s
Jan 31 03:10:32 np0005603621 nova_compute[247399]: 2026-01-31 08:10:32.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:32Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:ef:44 10.100.0.5
Jan 31 03:10:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:32Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:ef:44 10.100.0.5
Jan 31 03:10:33 np0005603621 nova_compute[247399]: 2026-01-31 08:10:33.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:33 np0005603621 nova_compute[247399]: 2026-01-31 08:10:33.575 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 31 03:10:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 31 03:10:33 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 31 03:10:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 226 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 537 KiB/s rd, 3.7 MiB/s wr, 173 op/s
Jan 31 03:10:33 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:33Z|00243|binding|INFO|Releasing lport d87b5523-2aff-4114-b005-d26dc8dfd909 from this chassis (sb_readonly=0)
Jan 31 03:10:33 np0005603621 nova_compute[247399]: 2026-01-31 08:10:33.852 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:34 np0005603621 nova_compute[247399]: 2026-01-31 08:10:34.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:34 np0005603621 nova_compute[247399]: 2026-01-31 08:10:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:34 np0005603621 nova_compute[247399]: 2026-01-31 08:10:34.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:10:34 np0005603621 nova_compute[247399]: 2026-01-31 08:10:34.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:34 np0005603621 nova_compute[247399]: 2026-01-31 08:10:34.807 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:35 np0005603621 nova_compute[247399]: 2026-01-31 08:10:35.257 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:35.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 246 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 4.7 MiB/s wr, 173 op/s
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.231 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.231 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.232 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.232 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021800586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:36 np0005603621 nova_compute[247399]: 2026-01-31 08:10:36.750 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:36 np0005603621 podman[299400]: 2026-01-31 08:10:36.895354905 +0000 UTC m=+0.080895817 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.114 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.115 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:37.287 159995 DEBUG eventlet.wsgi.server [-] (159995) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:37.292 159995 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: Accept: */*#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: Connection: close#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: Content-Type: text/plain#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: Host: 169.254.169.254#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: User-Agent: curl/7.84.0#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: X-Forwarded-For: 10.100.0.5#015
Jan 31 03:10:37 np0005603621 ovn_metadata_agent[159729]: X-Ovn-Network-Id: 17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.324 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.326 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4327MB free_disk=20.876514434814453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.326 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.327 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:37.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:37.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.491 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance b0764374-5236-4f05-95f2-67dbf334ca3a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.492 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.492 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:10:37 np0005603621 nova_compute[247399]: 2026-01-31 08:10:37.536 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 247 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 427 KiB/s rd, 4.7 MiB/s wr, 131 op/s
Jan 31 03:10:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:10:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2858967002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.059 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.067 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.123 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.351 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.352 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.353 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.353 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:10:38
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', 'default.rgw.meta', '.rgw.root', 'vms']
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:10:38 np0005603621 podman[299444]: 2026-01-31 08:10:38.53410552 +0000 UTC m=+0.089126528 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 31 03:10:38 np0005603621 nova_compute[247399]: 2026-01-31 08:10:38.577 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:10:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:10:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:39.369 159995 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 31 03:10:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:39.370 159995 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1673 time: 2.0790524#033[00m
Jan 31 03:10:39 np0005603621 haproxy-metadata-proxy-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299249]: 10.100.0.5:46116 [31/Jan/2026:08:10:37.285] listener listener/metadata 0/0/0/2085/2085 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 03:10:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:39.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 247 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.7 MiB/s wr, 160 op/s
Jan 31 03:10:39 np0005603621 nova_compute[247399]: 2026-01-31 08:10:39.810 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.029 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.050 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.051 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.052 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.053 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.053 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.056 247403 INFO nova.compute.manager [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Terminating instance#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.058 247403 DEBUG nova.compute.manager [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:10:40 np0005603621 kernel: tap31ff06b2-fb (unregistering): left promiscuous mode
Jan 31 03:10:40 np0005603621 NetworkManager[49013]: <info>  [1769847040.2038] device (tap31ff06b2-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.210 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:40Z|00244|binding|INFO|Releasing lport 31ff06b2-fb86-4823-aadc-993bb07532e3 from this chassis (sb_readonly=0)
Jan 31 03:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:40Z|00245|binding|INFO|Setting lport 31ff06b2-fb86-4823-aadc-993bb07532e3 down in Southbound
Jan 31 03:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:10:40Z|00246|binding|INFO|Removing iface tap31ff06b2-fb ovn-installed in OVS
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.214 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.222 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.230 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:ef:44 10.100.0.5'], port_security=['fa:16:3e:7e:ef:44 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b0764374-5236-4f05-95f2-67dbf334ca3a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a328413b1f5d4bdb90716c256341492f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b2ea5e63-4091-4468-82e2-6e6bb302064e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02b05f34-fa14-4a03-b7cc-c551b074d28d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=31ff06b2-fb86-4823-aadc-993bb07532e3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.232 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 31ff06b2-fb86-4823-aadc-993bb07532e3 in datapath 17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 unbound from our chassis#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.234 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.237 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[01e314de-19f1-4530-9b6f-4b065e14e790]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.238 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 namespace which is not needed anymore#033[00m
Jan 31 03:10:40 np0005603621 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000052.scope: Deactivated successfully.
Jan 31 03:10:40 np0005603621 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000052.scope: Consumed 14.230s CPU time.
Jan 31 03:10:40 np0005603621 systemd-machined[212769]: Machine qemu-35-instance-00000052 terminated.
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.301 247403 INFO nova.virt.libvirt.driver [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Instance destroyed successfully.#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.301 247403 DEBUG nova.objects.instance [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lazy-loading 'resources' on Instance uuid b0764374-5236-4f05-95f2-67dbf334ca3a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:10:40 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [NOTICE]   (299247) : haproxy version is 2.8.14-c23fe91
Jan 31 03:10:40 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [NOTICE]   (299247) : path to executable is /usr/sbin/haproxy
Jan 31 03:10:40 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [WARNING]  (299247) : Exiting Master process...
Jan 31 03:10:40 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [WARNING]  (299247) : Exiting Master process...
Jan 31 03:10:40 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [ALERT]    (299247) : Current worker (299249) exited with code 143 (Terminated)
Jan 31 03:10:40 np0005603621 neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5[299241]: [WARNING]  (299247) : All workers exited. Exiting... (0)
Jan 31 03:10:40 np0005603621 systemd[1]: libpod-83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4.scope: Deactivated successfully.
Jan 31 03:10:40 np0005603621 podman[299505]: 2026-01-31 08:10:40.387177449 +0000 UTC m=+0.055258731 container died 83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.396 247403 DEBUG nova.virt.libvirt.vif [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=82,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBuwfv4KVCUaqYhLtpEW0AzWZwj8k1Y6gpEu3aWg+fnncJvXzkgmuBOtnZ6rrUXcWkdoogcyPBM2d9saPUxw7UEkRf1nTWKktAunyJRssYwxrxM6+fzliYqi/dvZo/Pz5A==',key_name='tempest-keypair-1486137479',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:10:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a328413b1f5d4bdb90716c256341492f',ramdisk_id='',reservation_id='r-qaan87is',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-915733051',owner_user_name='tempest-ServersV294TestFqdnHostnames-915733051-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:10:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c85c4974d1024573aedad360db16c274',uuid=b0764374-5236-4f05-95f2-67dbf334ca3a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.397 247403 DEBUG nova.network.os_vif_util [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Converting VIF {"id": "31ff06b2-fb86-4823-aadc-993bb07532e3", "address": "fa:16:3e:7e:ef:44", "network": {"id": "17abb1a3-e12b-4eb4-9ed5-d259aff06ce5", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-893779567-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a328413b1f5d4bdb90716c256341492f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ff06b2-fb", "ovs_interfaceid": "31ff06b2-fb86-4823-aadc-993bb07532e3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.398 247403 DEBUG nova.network.os_vif_util [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.399 247403 DEBUG os_vif [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.402 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31ff06b2-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.404 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.406 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.412 247403 INFO os_vif [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:ef:44,bridge_name='br-int',has_traffic_filtering=True,id=31ff06b2-fb86-4823-aadc-993bb07532e3,network=Network(17abb1a3-e12b-4eb4-9ed5-d259aff06ce5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ff06b2-fb')#033[00m
Jan 31 03:10:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4-userdata-shm.mount: Deactivated successfully.
Jan 31 03:10:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9e1e324e5531a065bbf179e98d4b4f00b4c90f4889dac99cd0c2ee49f165984d-merged.mount: Deactivated successfully.
Jan 31 03:10:40 np0005603621 podman[299505]: 2026-01-31 08:10:40.613012917 +0000 UTC m=+0.281094189 container cleanup 83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:10:40 np0005603621 systemd[1]: libpod-conmon-83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4.scope: Deactivated successfully.
Jan 31 03:10:40 np0005603621 podman[299551]: 2026-01-31 08:10:40.815892344 +0000 UTC m=+0.180967188 container remove 83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.821 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cbf28e-04ea-4c16-b641-6ea3b89f4db1]: (4, ('Sat Jan 31 08:10:40 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 (83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4)\n83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4\nSat Jan 31 08:10:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 (83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4)\n83f0f93f7f035ca6ff6f965594c6014e4eef6737bd00d9ed66eadb52adb624c4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.825 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d3d627b-aeed-4d85-8413-019384cdd6bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.826 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17abb1a3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.828 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 kernel: tap17abb1a3-e0: left promiscuous mode
Jan 31 03:10:40 np0005603621 nova_compute[247399]: 2026-01-31 08:10:40.835 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.839 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[359b7faf-b7a3-47a7-bf3f-dd9d24508981]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.853 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bee2fc27-d99e-4194-8d72-73aea2ac7f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.855 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd90e0a9-f593-460b-bbe5-5b1688711fb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.869 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29fd3585-70f0-46b0-a215-0e3906c1a1fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632346, 'reachable_time': 30415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299566, 'error': None, 'target': 'ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:40 np0005603621 systemd[1]: run-netns-ovnmeta\x2d17abb1a3\x2de12b\x2d4eb4\x2d9ed5\x2dd259aff06ce5.mount: Deactivated successfully.
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.876 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17abb1a3-e12b-4eb4-9ed5-d259aff06ce5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:40.876 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3757e1-ce81-4ef7-9b39-725d3f2a51dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:10:41 np0005603621 nova_compute[247399]: 2026-01-31 08:10:41.379 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:41.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:41.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 247 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.7 MiB/s wr, 160 op/s
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.336 247403 INFO nova.virt.libvirt.driver [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Deleting instance files /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a_del#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.337 247403 INFO nova.virt.libvirt.driver [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Deletion of /var/lib/nova/instances/b0764374-5236-4f05-95f2-67dbf334ca3a_del complete#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.523 247403 INFO nova.compute.manager [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Took 2.46 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.524 247403 DEBUG oslo.service.loopingcall [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.524 247403 DEBUG nova.compute.manager [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.524 247403 DEBUG nova.network.neutron [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.846 247403 DEBUG nova.compute.manager [req-a1606084-fe20-417c-bc8a-b5ca10dba497 req-4a818fb1-92b7-42ef-b63b-647c133859d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-vif-unplugged-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.846 247403 DEBUG oslo_concurrency.lockutils [req-a1606084-fe20-417c-bc8a-b5ca10dba497 req-4a818fb1-92b7-42ef-b63b-647c133859d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.846 247403 DEBUG oslo_concurrency.lockutils [req-a1606084-fe20-417c-bc8a-b5ca10dba497 req-4a818fb1-92b7-42ef-b63b-647c133859d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.847 247403 DEBUG oslo_concurrency.lockutils [req-a1606084-fe20-417c-bc8a-b5ca10dba497 req-4a818fb1-92b7-42ef-b63b-647c133859d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.847 247403 DEBUG nova.compute.manager [req-a1606084-fe20-417c-bc8a-b5ca10dba497 req-4a818fb1-92b7-42ef-b63b-647c133859d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] No waiting events found dispatching network-vif-unplugged-31ff06b2-fb86-4823-aadc-993bb07532e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:42 np0005603621 nova_compute[247399]: 2026-01-31 08:10:42.847 247403 DEBUG nova.compute.manager [req-a1606084-fe20-417c-bc8a-b5ca10dba497 req-4a818fb1-92b7-42ef-b63b-647c133859d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-vif-unplugged-31ff06b2-fb86-4823-aadc-993bb07532e3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:10:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:43.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:43.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:43 np0005603621 nova_compute[247399]: 2026-01-31 08:10:43.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 181 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 145 op/s
Jan 31 03:10:44 np0005603621 nova_compute[247399]: 2026-01-31 08:10:44.985 247403 DEBUG nova.compute.manager [req-5d9c5c9e-04e7-4d65-b2b7-673e3967bc6f req-55dc83cc-6df3-4249-8a07-616863abb69e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:44 np0005603621 nova_compute[247399]: 2026-01-31 08:10:44.986 247403 DEBUG oslo_concurrency.lockutils [req-5d9c5c9e-04e7-4d65-b2b7-673e3967bc6f req-55dc83cc-6df3-4249-8a07-616863abb69e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:44 np0005603621 nova_compute[247399]: 2026-01-31 08:10:44.986 247403 DEBUG oslo_concurrency.lockutils [req-5d9c5c9e-04e7-4d65-b2b7-673e3967bc6f req-55dc83cc-6df3-4249-8a07-616863abb69e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:44 np0005603621 nova_compute[247399]: 2026-01-31 08:10:44.986 247403 DEBUG oslo_concurrency.lockutils [req-5d9c5c9e-04e7-4d65-b2b7-673e3967bc6f req-55dc83cc-6df3-4249-8a07-616863abb69e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:44 np0005603621 nova_compute[247399]: 2026-01-31 08:10:44.987 247403 DEBUG nova.compute.manager [req-5d9c5c9e-04e7-4d65-b2b7-673e3967bc6f req-55dc83cc-6df3-4249-8a07-616863abb69e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] No waiting events found dispatching network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:10:44 np0005603621 nova_compute[247399]: 2026-01-31 08:10:44.987 247403 WARNING nova.compute.manager [req-5d9c5c9e-04e7-4d65-b2b7-673e3967bc6f req-55dc83cc-6df3-4249-8a07-616863abb69e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received unexpected event network-vif-plugged-31ff06b2-fb86-4823-aadc-993bb07532e3 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.228 247403 DEBUG nova.network.neutron [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.322 247403 INFO nova.compute.manager [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Took 2.80 seconds to deallocate network for instance.#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.328 247403 DEBUG nova.compute.manager [req-104b17a5-bede-402c-906f-c2658739f976 req-dbb0ca8b-45cf-46c6-b9d2-f57d18d48def fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Received event network-vif-deleted-31ff06b2-fb86-4823-aadc-993bb07532e3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.329 247403 INFO nova.compute.manager [req-104b17a5-bede-402c-906f-c2658739f976 req-dbb0ca8b-45cf-46c6-b9d2-f57d18d48def fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Neutron deleted interface 31ff06b2-fb86-4823-aadc-993bb07532e3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.330 247403 DEBUG nova.network.neutron [req-104b17a5-bede-402c-906f-c2658739f976 req-dbb0ca8b-45cf-46c6-b9d2-f57d18d48def fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.391 247403 DEBUG nova.compute.manager [req-104b17a5-bede-402c-906f-c2658739f976 req-dbb0ca8b-45cf-46c6-b9d2-f57d18d48def fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Detach interface failed, port_id=31ff06b2-fb86-4823-aadc-993bb07532e3, reason: Instance b0764374-5236-4f05-95f2-67dbf334ca3a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.405 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.449 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.450 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:10:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:45.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:45.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.513 247403 DEBUG oslo_concurrency.processutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:10:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 167 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 920 KiB/s wr, 124 op/s
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.959 247403 DEBUG oslo_concurrency.processutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:10:45 np0005603621 nova_compute[247399]: 2026-01-31 08:10:45.965 247403 DEBUG nova.compute.provider_tree [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:10:46 np0005603621 nova_compute[247399]: 2026-01-31 08:10:46.007 247403 DEBUG nova.scheduler.client.report [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:10:46 np0005603621 nova_compute[247399]: 2026-01-31 08:10:46.064 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:46 np0005603621 nova_compute[247399]: 2026-01-31 08:10:46.148 247403 INFO nova.scheduler.client.report [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Deleted allocations for instance b0764374-5236-4f05-95f2-67dbf334ca3a#033[00m
Jan 31 03:10:46 np0005603621 nova_compute[247399]: 2026-01-31 08:10:46.269 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:46 np0005603621 nova_compute[247399]: 2026-01-31 08:10:46.367 247403 DEBUG oslo_concurrency.lockutils [None req-1c2c34e0-d046-457d-bdb1-aa271d6fc723 c85c4974d1024573aedad360db16c274 a328413b1f5d4bdb90716c256341492f - - default default] Lock "b0764374-5236-4f05-95f2-67dbf334ca3a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:10:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:47.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:47.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 167 MiB data, 774 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 102 op/s
Jan 31 03:10:48 np0005603621 nova_compute[247399]: 2026-01-31 08:10:48.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:48 np0005603621 nova_compute[247399]: 2026-01-31 08:10:48.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003168693583659036 of space, bias 1.0, pg target 0.9506080750977107 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:10:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:49.259 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:10:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:49.260 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:10:49 np0005603621 nova_compute[247399]: 2026-01-31 08:10:49.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:49.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:49.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 46efa9f3-fb83-4794-8348-68fb77d78341 does not exist
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 23ccfdc4-536b-4d7f-83c1-3a5148aff4c7 does not exist
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6b9f33e3-b766-4c25-a165-1968e4636756 does not exist
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:10:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 171 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 646 KiB/s wr, 112 op/s
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.219788153 +0000 UTC m=+0.046766474 container create 2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:50 np0005603621 systemd[1]: Started libpod-conmon-2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361.scope.
Jan 31 03:10:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.196273442 +0000 UTC m=+0.023251783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.308768413 +0000 UTC m=+0.135746754 container init 2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.316002461 +0000 UTC m=+0.142980782 container start 2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.321556525 +0000 UTC m=+0.148534846 container attach 2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:10:50 np0005603621 relaxed_williamson[299934]: 167 167
Jan 31 03:10:50 np0005603621 systemd[1]: libpod-2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361.scope: Deactivated successfully.
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.323173187 +0000 UTC m=+0.150151508 container died 2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:10:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-429c181ea679e5fbec4038f6eabe94e0ae9e80cf93bf9fed3aa357baa0aba954-merged.mount: Deactivated successfully.
Jan 31 03:10:50 np0005603621 podman[299917]: 2026-01-31 08:10:50.376297219 +0000 UTC m=+0.203275540 container remove 2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:10:50 np0005603621 systemd[1]: libpod-conmon-2ca069fadf23cc723a77fb5c28a7ff1d0d17f931e3eda0a02b14c56a40175361.scope: Deactivated successfully.
Jan 31 03:10:50 np0005603621 nova_compute[247399]: 2026-01-31 08:10:50.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:50 np0005603621 podman[299958]: 2026-01-31 08:10:50.566976181 +0000 UTC m=+0.088327502 container create 5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:10:50 np0005603621 podman[299958]: 2026-01-31 08:10:50.504728232 +0000 UTC m=+0.026079563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:10:50 np0005603621 systemd[1]: Started libpod-conmon-5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01.scope.
Jan 31 03:10:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d24b399c29b8b3f7dc1f52cc850489002b8293f123a00b454c809b4bf34c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d24b399c29b8b3f7dc1f52cc850489002b8293f123a00b454c809b4bf34c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d24b399c29b8b3f7dc1f52cc850489002b8293f123a00b454c809b4bf34c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d24b399c29b8b3f7dc1f52cc850489002b8293f123a00b454c809b4bf34c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce7d24b399c29b8b3f7dc1f52cc850489002b8293f123a00b454c809b4bf34c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:50 np0005603621 podman[299958]: 2026-01-31 08:10:50.723386254 +0000 UTC m=+0.244737595 container init 5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:10:50 np0005603621 podman[299958]: 2026-01-31 08:10:50.72993759 +0000 UTC m=+0.251288911 container start 5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:10:50 np0005603621 podman[299958]: 2026-01-31 08:10:50.733959727 +0000 UTC m=+0.255311048 container attach 5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:10:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:51.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:10:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:51.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:51 np0005603621 sleepy_khayyam[299975]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:10:51 np0005603621 sleepy_khayyam[299975]: --> relative data size: 1.0
Jan 31 03:10:51 np0005603621 sleepy_khayyam[299975]: --> All data devices are unavailable
Jan 31 03:10:51 np0005603621 systemd[1]: libpod-5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01.scope: Deactivated successfully.
Jan 31 03:10:51 np0005603621 podman[299958]: 2026-01-31 08:10:51.608449243 +0000 UTC m=+1.129800564 container died 5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:10:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 171 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 632 KiB/s wr, 72 op/s
Jan 31 03:10:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2ce7d24b399c29b8b3f7dc1f52cc850489002b8293f123a00b454c809b4bf34c-merged.mount: Deactivated successfully.
Jan 31 03:10:51 np0005603621 podman[299958]: 2026-01-31 08:10:51.919466563 +0000 UTC m=+1.440817884 container remove 5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:10:51 np0005603621 systemd[1]: libpod-conmon-5fd26f606b68dd99de07ae1f4a93cdac4baf681912c1a4ef4d520c314df3ee01.scope: Deactivated successfully.
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.516039602 +0000 UTC m=+0.035862380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.614587884 +0000 UTC m=+0.134410582 container create a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:10:52 np0005603621 systemd[1]: Started libpod-conmon-a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638.scope.
Jan 31 03:10:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.717605156 +0000 UTC m=+0.237427874 container init a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.725687811 +0000 UTC m=+0.245510519 container start a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.730936747 +0000 UTC m=+0.250759455 container attach a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:10:52 np0005603621 eager_zhukovsky[300162]: 167 167
Jan 31 03:10:52 np0005603621 systemd[1]: libpod-a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638.scope: Deactivated successfully.
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.733622241 +0000 UTC m=+0.253444949 container died a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:10:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f79945f560014699232be13a907da32a12bbd006d70f1ff2fdd39734b96937c9-merged.mount: Deactivated successfully.
Jan 31 03:10:52 np0005603621 podman[300145]: 2026-01-31 08:10:52.776320655 +0000 UTC m=+0.296143333 container remove a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_zhukovsky, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:10:52 np0005603621 systemd[1]: libpod-conmon-a22592b3b077313617a1142a7f9d3d90d96c8b7bc9e0eb78a5991ccd63600638.scope: Deactivated successfully.
Jan 31 03:10:52 np0005603621 podman[300186]: 2026-01-31 08:10:52.895521617 +0000 UTC m=+0.041232559 container create 7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:52 np0005603621 podman[300186]: 2026-01-31 08:10:52.877807389 +0000 UTC m=+0.023518381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:10:52 np0005603621 nova_compute[247399]: 2026-01-31 08:10:52.973 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:52 np0005603621 systemd[1]: Started libpod-conmon-7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631.scope.
Jan 31 03:10:52 np0005603621 nova_compute[247399]: 2026-01-31 08:10:52.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2b5229f9345e56a49b96050588b66cf9aa30dec8d3adb1b720ac900046dad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2b5229f9345e56a49b96050588b66cf9aa30dec8d3adb1b720ac900046dad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2b5229f9345e56a49b96050588b66cf9aa30dec8d3adb1b720ac900046dad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2b5229f9345e56a49b96050588b66cf9aa30dec8d3adb1b720ac900046dad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:53 np0005603621 podman[300186]: 2026-01-31 08:10:53.035868005 +0000 UTC m=+0.181578947 container init 7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:10:53 np0005603621 podman[300186]: 2026-01-31 08:10:53.044383383 +0000 UTC m=+0.190094325 container start 7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:10:53 np0005603621 podman[300186]: 2026-01-31 08:10:53.047304325 +0000 UTC m=+0.193015287 container attach 7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:10:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:53.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:53.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:53 np0005603621 nova_compute[247399]: 2026-01-31 08:10:53.583 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 198 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 123 op/s
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]: {
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:    "0": [
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:        {
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "devices": [
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "/dev/loop3"
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            ],
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "lv_name": "ceph_lv0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "lv_size": "7511998464",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "name": "ceph_lv0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "tags": {
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.cluster_name": "ceph",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.crush_device_class": "",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.encrypted": "0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.osd_id": "0",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.type": "block",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:                "ceph.vdo": "0"
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            },
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "type": "block",
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:            "vg_name": "ceph_vg0"
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:        }
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]:    ]
Jan 31 03:10:53 np0005603621 distracted_meninsky[300203]: }
Jan 31 03:10:53 np0005603621 systemd[1]: libpod-7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631.scope: Deactivated successfully.
Jan 31 03:10:53 np0005603621 conmon[300203]: conmon 7ef90e9cc01a1b659e23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631.scope/container/memory.events
Jan 31 03:10:53 np0005603621 podman[300186]: 2026-01-31 08:10:53.870811306 +0000 UTC m=+1.016522268 container died 7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:10:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ab2b5229f9345e56a49b96050588b66cf9aa30dec8d3adb1b720ac900046dad7-merged.mount: Deactivated successfully.
Jan 31 03:10:53 np0005603621 podman[300186]: 2026-01-31 08:10:53.920955434 +0000 UTC m=+1.066666376 container remove 7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:10:53 np0005603621 systemd[1]: libpod-conmon-7ef90e9cc01a1b659e233ae04e9be0bc7b44be41d07358d3f098a10677e3c631.scope: Deactivated successfully.
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.415257424 +0000 UTC m=+0.043142819 container create 21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:10:54 np0005603621 systemd[1]: Started libpod-conmon-21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea.scope.
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.394254203 +0000 UTC m=+0.022139648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:10:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.50917468 +0000 UTC m=+0.137060125 container init 21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.521449467 +0000 UTC m=+0.149334862 container start 21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.525203505 +0000 UTC m=+0.153088930 container attach 21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:10:54 np0005603621 condescending_northcutt[300384]: 167 167
Jan 31 03:10:54 np0005603621 systemd[1]: libpod-21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea.scope: Deactivated successfully.
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.52951229 +0000 UTC m=+0.157397725 container died 21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-534ea8d330a5899cbff3aaae053d3c89ba269f8947c333a1fd9451c49f891bea-merged.mount: Deactivated successfully.
Jan 31 03:10:54 np0005603621 podman[300368]: 2026-01-31 08:10:54.58063285 +0000 UTC m=+0.208518285 container remove 21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_northcutt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:10:54 np0005603621 systemd[1]: libpod-conmon-21508c26e6acc3d3b87a36e99d3ba30f0cd8d0653ac59f6bcc720276a915e7ea.scope: Deactivated successfully.
Jan 31 03:10:54 np0005603621 podman[300408]: 2026-01-31 08:10:54.725645224 +0000 UTC m=+0.042045284 container create 89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ellis, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:10:54 np0005603621 systemd[1]: Started libpod-conmon-89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f.scope.
Jan 31 03:10:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:10:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae291f2bc09023a61c110496d9279d700d90b5218a6aa2a73314e1d589a90a35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae291f2bc09023a61c110496d9279d700d90b5218a6aa2a73314e1d589a90a35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae291f2bc09023a61c110496d9279d700d90b5218a6aa2a73314e1d589a90a35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae291f2bc09023a61c110496d9279d700d90b5218a6aa2a73314e1d589a90a35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:10:54 np0005603621 podman[300408]: 2026-01-31 08:10:54.706848083 +0000 UTC m=+0.023247953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:10:54 np0005603621 podman[300408]: 2026-01-31 08:10:54.810854527 +0000 UTC m=+0.127254397 container init 89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:54 np0005603621 podman[300408]: 2026-01-31 08:10:54.8185923 +0000 UTC m=+0.134992150 container start 89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:10:54 np0005603621 podman[300408]: 2026-01-31 08:10:54.821878144 +0000 UTC m=+0.138277994 container attach 89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:10:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:10:55.263 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:10:55 np0005603621 nova_compute[247399]: 2026-01-31 08:10:55.299 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847040.2983606, b0764374-5236-4f05-95f2-67dbf334ca3a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:10:55 np0005603621 nova_compute[247399]: 2026-01-31 08:10:55.301 247403 INFO nova.compute.manager [-] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:10:55 np0005603621 nova_compute[247399]: 2026-01-31 08:10:55.334 247403 DEBUG nova.compute.manager [None req-6227751d-713c-4c86-a630-19dc1f27fd57 - - - - - -] [instance: b0764374-5236-4f05-95f2-67dbf334ca3a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:10:55 np0005603621 nova_compute[247399]: 2026-01-31 08:10:55.383 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:10:55 np0005603621 nova_compute[247399]: 2026-01-31 08:10:55.450 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:10:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:55.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:10:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:55.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:55 np0005603621 focused_ellis[300425]: {
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:        "osd_id": 0,
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:        "type": "bluestore"
Jan 31 03:10:55 np0005603621 focused_ellis[300425]:    }
Jan 31 03:10:55 np0005603621 focused_ellis[300425]: }
Jan 31 03:10:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Jan 31 03:10:55 np0005603621 systemd[1]: libpod-89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f.scope: Deactivated successfully.
Jan 31 03:10:55 np0005603621 podman[300408]: 2026-01-31 08:10:55.681051938 +0000 UTC m=+0.997451788 container died 89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:10:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ae291f2bc09023a61c110496d9279d700d90b5218a6aa2a73314e1d589a90a35-merged.mount: Deactivated successfully.
Jan 31 03:10:55 np0005603621 podman[300408]: 2026-01-31 08:10:55.960162414 +0000 UTC m=+1.276562294 container remove 89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:10:55 np0005603621 systemd[1]: libpod-conmon-89bc9c6bc1e235acc7a54c79a13431958b45873e745c430a418b54296a817e0f.scope: Deactivated successfully.
Jan 31 03:10:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:10:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:10:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cff9e91c-37e4-47de-a5a3-9ff4529a2624 does not exist
Jan 31 03:10:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 452beecb-f3aa-4c62-9efd-3422b4092d1f does not exist
Jan 31 03:10:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6bfb01f4-0e6f-4697-8408-f679c1cfa162 does not exist
Jan 31 03:10:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:10:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:57.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:57.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 31 03:10:58 np0005603621 nova_compute[247399]: 2026-01-31 08:10:58.585 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:10:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:10:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:10:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:10:59.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:10:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:10:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:10:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:10:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:10:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 157 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 3.2 MiB/s wr, 97 op/s
Jan 31 03:11:00 np0005603621 nova_compute[247399]: 2026-01-31 08:11:00.454 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:01.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:01.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 157 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.6 MiB/s wr, 87 op/s
Jan 31 03:11:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:03.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:03.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:03 np0005603621 nova_compute[247399]: 2026-01-31 08:11:03.587 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 201 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 335 KiB/s rd, 4.7 MiB/s wr, 116 op/s
Jan 31 03:11:05 np0005603621 nova_compute[247399]: 2026-01-31 08:11:05.459 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:05.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 213 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Jan 31 03:11:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:11:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:07.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:11:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:07.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:07 np0005603621 podman[300567]: 2026-01-31 08:11:07.504930241 +0000 UTC m=+0.057244743 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:11:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 213 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 179 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Jan 31 03:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:08 np0005603621 nova_compute[247399]: 2026-01-31 08:11:08.589 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:09.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:09.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:09 np0005603621 podman[300589]: 2026-01-31 08:11:09.536111087 +0000 UTC m=+0.096827629 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:11:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 213 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 148 op/s
Jan 31 03:11:10 np0005603621 nova_compute[247399]: 2026-01-31 08:11:10.462 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:11.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:11.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 213 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Jan 31 03:11:12 np0005603621 nova_compute[247399]: 2026-01-31 08:11:12.956 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:13.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:13.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:13 np0005603621 nova_compute[247399]: 2026-01-31 08:11:13.591 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 214 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.5 MiB/s wr, 176 op/s
Jan 31 03:11:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:11:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1751563398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:11:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:11:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1751563398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:11:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.153 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.153 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.185 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.307 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.308 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.316 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.317 247403 INFO nova.compute.claims [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.465 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.507 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:15.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:15.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 195 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 454 KiB/s wr, 181 op/s
Jan 31 03:11:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961603440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.971 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:15 np0005603621 nova_compute[247399]: 2026-01-31 08:11:15.979 247403 DEBUG nova.compute.provider_tree [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.017 247403 DEBUG nova.scheduler.client.report [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.098 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.099 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.168 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.169 247403 DEBUG nova.network.neutron [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.197 247403 INFO nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.237 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.365 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.366 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.367 247403 INFO nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Creating image(s)#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.395 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.426 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.453 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.458 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.541 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.543 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.544 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.544 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.573 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.577 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 aa734433-4d60-4a63-9587-234fea7bc0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.633 247403 DEBUG nova.policy [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '80d33e2f57b64bd78a04cd8875660772', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96cffd653fc04612bc1b3434529fb946', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:11:16 np0005603621 nova_compute[247399]: 2026-01-31 08:11:16.997 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 aa734433-4d60-4a63-9587-234fea7bc0d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.080 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] resizing rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.197 247403 DEBUG nova.objects.instance [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'migration_context' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.212 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.212 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Ensure instance console log exists: /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.213 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.213 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.213 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:17.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 195 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 116 KiB/s wr, 181 op/s
Jan 31 03:11:17 np0005603621 nova_compute[247399]: 2026-01-31 08:11:17.730 247403 DEBUG nova.network.neutron [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Successfully created port: 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:11:18 np0005603621 nova_compute[247399]: 2026-01-31 08:11:18.595 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.077 247403 DEBUG nova.network.neutron [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Successfully updated port: 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.142 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.143 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquired lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.143 247403 DEBUG nova.network.neutron [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.159 247403 DEBUG nova.compute.manager [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-changed-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.160 247403 DEBUG nova.compute.manager [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Refreshing instance network info cache due to event network-changed-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.160 247403 DEBUG oslo_concurrency.lockutils [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:19 np0005603621 nova_compute[247399]: 2026-01-31 08:11:19.320 247403 DEBUG nova.network.neutron [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:19.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 213 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.493 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.610 247403 DEBUG nova.network.neutron [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.741 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Releasing lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.741 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Instance network_info: |[{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.742 247403 DEBUG oslo_concurrency.lockutils [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.742 247403 DEBUG nova.network.neutron [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Refreshing network info cache for port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.745 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Start _get_guest_xml network_info=[{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.751 247403 WARNING nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.756 247403 DEBUG nova.virt.libvirt.host [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.756 247403 DEBUG nova.virt.libvirt.host [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.759 247403 DEBUG nova.virt.libvirt.host [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.759 247403 DEBUG nova.virt.libvirt.host [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.760 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.760 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.761 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.761 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.761 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.761 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.762 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.762 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.762 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.762 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.762 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.762 247403 DEBUG nova.virt.hardware [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:20 np0005603621 nova_compute[247399]: 2026-01-31 08:11:20.765 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698829873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.185 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.211 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.216 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:21.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997546608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.674 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.675 247403 DEBUG nova.virt.libvirt.vif [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.676 247403 DEBUG nova.network.os_vif_util [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.677 247403 DEBUG nova.network.os_vif_util [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.678 247403 DEBUG nova.objects.instance [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'pci_devices' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 213 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.714 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <uuid>aa734433-4d60-4a63-9587-234fea7bc0d1</uuid>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <name>instance-00000055</name>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:name>tempest-device-tagging-server-314100001</nova:name>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:11:20</nova:creationTime>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:user uuid="80d33e2f57b64bd78a04cd8875660772">tempest-TaggedAttachmentsTest-1830131156-project-member</nova:user>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:project uuid="96cffd653fc04612bc1b3434529fb946">tempest-TaggedAttachmentsTest-1830131156</nova:project>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <nova:port uuid="531d3df2-94ae-4b9b-902c-35f7c2b5f6f8">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <entry name="serial">aa734433-4d60-4a63-9587-234fea7bc0d1</entry>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <entry name="uuid">aa734433-4d60-4a63-9587-234fea7bc0d1</entry>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/aa734433-4d60-4a63-9587-234fea7bc0d1_disk">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1b:d8:88"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <target dev="tap531d3df2-94"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/console.log" append="off"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:11:21 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:11:21 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:11:21 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:11:21 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.715 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Preparing to wait for external event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.715 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.716 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.716 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.717 247403 DEBUG nova.virt.libvirt.vif [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.717 247403 DEBUG nova.network.os_vif_util [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.718 247403 DEBUG nova.network.os_vif_util [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.718 247403 DEBUG os_vif [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.719 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.720 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.723 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.723 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap531d3df2-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.724 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap531d3df2-94, col_values=(('external_ids', {'iface-id': '531d3df2-94ae-4b9b-902c-35f7c2b5f6f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:d8:88', 'vm-uuid': 'aa734433-4d60-4a63-9587-234fea7bc0d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603621 NetworkManager[49013]: <info>  [1769847081.7275] manager: (tap531d3df2-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.728 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.734 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.735 247403 INFO os_vif [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94')#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.900 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.901 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.902 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No VIF found with MAC fa:16:3e:1b:d8:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.903 247403 INFO nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Using config drive#033[00m
Jan 31 03:11:21 np0005603621 nova_compute[247399]: 2026-01-31 08:11:21.937 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.221 247403 DEBUG nova.network.neutron [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updated VIF entry in instance network info cache for port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.222 247403 DEBUG nova.network.neutron [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.262 247403 DEBUG oslo_concurrency.lockutils [req-56abfb83-8cb4-4aaf-9687-c8dac7fd49a1 req-d897b3fe-fae8-421c-b91c-d8f2adc325af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.335 247403 INFO nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Creating config drive at /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/disk.config#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.340 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpezf_dyzm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.479 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpezf_dyzm" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.509 247403 DEBUG nova.storage.rbd_utils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] rbd image aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.513 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/disk.config aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.854 247403 DEBUG oslo_concurrency.processutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/disk.config aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.855 247403 INFO nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Deleting local config drive /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/disk.config because it was imported into RBD.#033[00m
Jan 31 03:11:22 np0005603621 kernel: tap531d3df2-94: entered promiscuous mode
Jan 31 03:11:22 np0005603621 NetworkManager[49013]: <info>  [1769847082.8958] manager: (tap531d3df2-94): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:22Z|00247|binding|INFO|Claiming lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 for this chassis.
Jan 31 03:11:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:22Z|00248|binding|INFO|531d3df2-94ae-4b9b-902c-35f7c2b5f6f8: Claiming fa:16:3e:1b:d8:88 10.100.0.11
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.898 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.911 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:d8:88 10.100.0.11'], port_security=['fa:16:3e:1b:d8:88 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aa734433-4d60-4a63-9587-234fea7bc0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96cffd653fc04612bc1b3434529fb946', 'neutron:revision_number': '2', 'neutron:security_group_ids': '91c1f88f-ef9d-4d77-9620-d0ecd9807582', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ceb64776-1821-47e1-ab8e-32281ea4db98, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.912 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 in datapath cc06a3f7-7401-4f89-8964-7d6d4156be1a bound to our chassis#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.914 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cc06a3f7-7401-4f89-8964-7d6d4156be1a#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.923 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[20ceb7ba-ee5d-44a8-a8e8-a60a6942f2c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.925 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcc06a3f7-71 in ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:22 np0005603621 systemd-udevd[300998]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:22 np0005603621 systemd-machined[212769]: New machine qemu-36-instance-00000055.
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.928 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.928 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcc06a3f7-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.928 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f03ced8-3408-48b4-acb9-e7b2a0bd642a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.930 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4895ac14-1e73-4952-a371-f937a5596e6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:22Z|00249|binding|INFO|Setting lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 ovn-installed in OVS
Jan 31 03:11:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:22Z|00250|binding|INFO|Setting lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 up in Southbound
Jan 31 03:11:22 np0005603621 nova_compute[247399]: 2026-01-31 08:11:22.932 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:22 np0005603621 NetworkManager[49013]: <info>  [1769847082.9368] device (tap531d3df2-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:22 np0005603621 systemd[1]: Started Virtual Machine qemu-36-instance-00000055.
Jan 31 03:11:22 np0005603621 NetworkManager[49013]: <info>  [1769847082.9377] device (tap531d3df2-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.944 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[ff67c29c-6bd4-43f1-829a-709efc6b32a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.965 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f690b413-78ce-4e43-9a43-b262b0c0cce3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.994 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[12d5356a-e8bd-4242-bae2-0c77b83fd72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:22.999 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7073d283-8e3b-4745-b63d-4a52f36a4622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 NetworkManager[49013]: <info>  [1769847083.0007] manager: (tapcc06a3f7-70): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.021 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[41d3edc4-4aaf-422c-9aaa-6b45d1e7bc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.024 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[558b5a77-1bb7-42e7-bb13-4b69512ad0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 NetworkManager[49013]: <info>  [1769847083.0425] device (tapcc06a3f7-70): carrier: link connected
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.046 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c60d56-2371-475d-8cfc-60892ac76fdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.058 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a2f2f6-af9e-4deb-ae32-14d66ad520c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc06a3f7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:6d:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638855, 'reachable_time': 20371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301030, 'error': None, 'target': 'ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.069 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb6b86a-cacc-47e0-9b8c-e9e77be397ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:6de1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 638855, 'tstamp': 638855}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301031, 'error': None, 'target': 'ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.079 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1a238376-2576-4edc-9c12-3711059ca241]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcc06a3f7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:6d:e1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638855, 'reachable_time': 20371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301032, 'error': None, 'target': 'ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.098 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf1e092-8b45-42ba-8765-4456eb24410f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.131 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f24317d5-8d53-4bb9-840e-5d5b9bd652b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.133 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc06a3f7-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.133 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.134 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc06a3f7-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603621 NetworkManager[49013]: <info>  [1769847083.1368] manager: (tapcc06a3f7-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Jan 31 03:11:23 np0005603621 kernel: tapcc06a3f7-70: entered promiscuous mode
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.139 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.141 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcc06a3f7-70, col_values=(('external_ids', {'iface-id': '3e3a1fb5-1d5a-41d3-9b2a-9cc08273e527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.142 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:23Z|00251|binding|INFO|Releasing lport 3e3a1fb5-1d5a-41d3-9b2a-9cc08273e527 from this chassis (sb_readonly=0)
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.145 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cc06a3f7-7401-4f89-8964-7d6d4156be1a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cc06a3f7-7401-4f89-8964-7d6d4156be1a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.146 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[88116ff9-5322-4f0e-ade5-30c48a2bc2d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.147 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-cc06a3f7-7401-4f89-8964-7d6d4156be1a
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/cc06a3f7-7401-4f89-8964-7d6d4156be1a.pid.haproxy
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID cc06a3f7-7401-4f89-8964-7d6d4156be1a
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:23.148 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'env', 'PROCESS_TAG=haproxy-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cc06a3f7-7401-4f89-8964-7d6d4156be1a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:23 np0005603621 podman[301105]: 2026-01-31 08:11:23.460078126 +0000 UTC m=+0.040782555 container create a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.481 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847083.480464, aa734433-4d60-4a63-9587-234fea7bc0d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.481 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:23 np0005603621 systemd[1]: Started libpod-conmon-a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876.scope.
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.522 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:23.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:11:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.528 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847083.4815583, aa734433-4d60-4a63-9587-234fea7bc0d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.528 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:11:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c348ea3f68bb899a0b82d728b4ed00605382b3466fb94d5889983e638f059994/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:23 np0005603621 podman[301105]: 2026-01-31 08:11:23.437686631 +0000 UTC m=+0.018391060 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:23 np0005603621 podman[301105]: 2026-01-31 08:11:23.538089381 +0000 UTC m=+0.118793830 container init a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:11:23 np0005603621 podman[301105]: 2026-01-31 08:11:23.542361416 +0000 UTC m=+0.123065845 container start a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.553 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.556 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [NOTICE]   (301125) : New worker (301128) forked
Jan 31 03:11:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [NOTICE]   (301125) : Loading success.
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.581 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:23 np0005603621 nova_compute[247399]: 2026-01-31 08:11:23.596 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 217 MiB data, 791 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 31 03:11:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:25.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 233 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 509 KiB/s rd, 3.1 MiB/s wr, 90 op/s
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.224 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.225 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.225 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.244 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.244 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.771 247403 DEBUG nova.compute.manager [req-1e408ee9-4dad-4228-8455-7616d0d9ded3 req-e0b69913-741a-4565-8ca3-dff1b52f342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.772 247403 DEBUG oslo_concurrency.lockutils [req-1e408ee9-4dad-4228-8455-7616d0d9ded3 req-e0b69913-741a-4565-8ca3-dff1b52f342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.772 247403 DEBUG oslo_concurrency.lockutils [req-1e408ee9-4dad-4228-8455-7616d0d9ded3 req-e0b69913-741a-4565-8ca3-dff1b52f342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.772 247403 DEBUG oslo_concurrency.lockutils [req-1e408ee9-4dad-4228-8455-7616d0d9ded3 req-e0b69913-741a-4565-8ca3-dff1b52f342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.772 247403 DEBUG nova.compute.manager [req-1e408ee9-4dad-4228-8455-7616d0d9ded3 req-e0b69913-741a-4565-8ca3-dff1b52f342a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Processing event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.773 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.776 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847086.776092, aa734433-4d60-4a63-9587-234fea7bc0d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.776 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.778 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.780 247403 INFO nova.virt.libvirt.driver [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Instance spawned successfully.#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.780 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.816 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.819 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.831 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.832 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.832 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.832 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.833 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.833 247403 DEBUG nova.virt.libvirt.driver [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.846 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.902 247403 INFO nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Took 10.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.903 247403 DEBUG nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.972 247403 INFO nova.compute.manager [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Took 11.71 seconds to build instance.#033[00m
Jan 31 03:11:26 np0005603621 nova_compute[247399]: 2026-01-31 08:11:26.988 247403 DEBUG oslo_concurrency.lockutils [None req-38e58f0f-5cd7-49b2-932d-8f35f109b9c7 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 263 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 175 KiB/s rd, 4.5 MiB/s wr, 87 op/s
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.600 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.963 247403 DEBUG nova.compute.manager [req-3329c378-a6b2-45ca-bd5e-aa334d43560f req-c299fb53-672d-4ff3-8968-c1658173f67f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.964 247403 DEBUG oslo_concurrency.lockutils [req-3329c378-a6b2-45ca-bd5e-aa334d43560f req-c299fb53-672d-4ff3-8968-c1658173f67f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.964 247403 DEBUG oslo_concurrency.lockutils [req-3329c378-a6b2-45ca-bd5e-aa334d43560f req-c299fb53-672d-4ff3-8968-c1658173f67f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.964 247403 DEBUG oslo_concurrency.lockutils [req-3329c378-a6b2-45ca-bd5e-aa334d43560f req-c299fb53-672d-4ff3-8968-c1658173f67f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.965 247403 DEBUG nova.compute.manager [req-3329c378-a6b2-45ca-bd5e-aa334d43560f req-c299fb53-672d-4ff3-8968-c1658173f67f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:11:28 np0005603621 nova_compute[247399]: 2026-01-31 08:11:28.965 247403 WARNING nova.compute.manager [req-3329c378-a6b2-45ca-bd5e-aa334d43560f req-c299fb53-672d-4ff3-8968-c1658173f67f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received unexpected event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:11:29 np0005603621 nova_compute[247399]: 2026-01-31 08:11:29.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:29.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:11:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:29.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:11:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 288 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 182 op/s
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:30 np0005603621 NetworkManager[49013]: <info>  [1769847090.2191] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Jan 31 03:11:30 np0005603621 NetworkManager[49013]: <info>  [1769847090.2200] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.280 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:30Z|00252|binding|INFO|Releasing lport 3e3a1fb5-1d5a-41d3-9b2a-9cc08273e527 from this chassis (sb_readonly=0)
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:30.493 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:30.494 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:30.494 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.602 247403 DEBUG nova.compute.manager [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-changed-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.603 247403 DEBUG nova.compute.manager [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Refreshing instance network info cache due to event network-changed-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.603 247403 DEBUG oslo_concurrency.lockutils [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.603 247403 DEBUG oslo_concurrency.lockutils [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:30 np0005603621 nova_compute[247399]: 2026-01-31 08:11:30.603 247403 DEBUG nova.network.neutron [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Refreshing network info cache for port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:31.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 288 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 173 op/s
Jan 31 03:11:31 np0005603621 nova_compute[247399]: 2026-01-31 08:11:31.766 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:32 np0005603621 nova_compute[247399]: 2026-01-31 08:11:32.422 247403 DEBUG nova.network.neutron [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updated VIF entry in instance network info cache for port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:11:32 np0005603621 nova_compute[247399]: 2026-01-31 08:11:32.422 247403 DEBUG nova.network.neutron [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:32 np0005603621 nova_compute[247399]: 2026-01-31 08:11:32.462 247403 DEBUG oslo_concurrency.lockutils [req-b9036180-c71d-4a07-92d1-6e33477849bf req-6b9e7e92-0a1a-465c-a5dd-e213c4936332 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:33 np0005603621 nova_compute[247399]: 2026-01-31 08:11:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:33.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:33.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:33 np0005603621 nova_compute[247399]: 2026-01-31 08:11:33.601 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 292 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.0 MiB/s wr, 245 op/s
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.850873) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847093850922, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2168, "num_deletes": 253, "total_data_size": 3726077, "memory_usage": 3792368, "flush_reason": "Manual Compaction"}
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847093881021, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3647581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36922, "largest_seqno": 39088, "table_properties": {"data_size": 3637977, "index_size": 5970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20941, "raw_average_key_size": 20, "raw_value_size": 3618368, "raw_average_value_size": 3578, "num_data_blocks": 259, "num_entries": 1011, "num_filter_entries": 1011, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769846893, "oldest_key_time": 1769846893, "file_creation_time": 1769847093, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 30253 microseconds, and 7281 cpu microseconds.
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.881120) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3647581 bytes OK
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.881150) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.884323) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.884385) EVENT_LOG_v1 {"time_micros": 1769847093884369, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.884419) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3717200, prev total WAL file size 3717200, number of live WAL files 2.
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.885426) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3562KB)], [80(10161KB)]
Jan 31 03:11:33 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847093885671, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14053030, "oldest_snapshot_seqno": -1}
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6780 keys, 12003769 bytes, temperature: kUnknown
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847094030499, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 12003769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11956098, "index_size": 29639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 173620, "raw_average_key_size": 25, "raw_value_size": 11832579, "raw_average_value_size": 1745, "num_data_blocks": 1182, "num_entries": 6780, "num_filter_entries": 6780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847093, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.031151) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 12003769 bytes
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.035645) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.8 rd, 82.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7306, records dropped: 526 output_compression: NoCompression
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.035676) EVENT_LOG_v1 {"time_micros": 1769847094035659, "job": 46, "event": "compaction_finished", "compaction_time_micros": 145149, "compaction_time_cpu_micros": 28654, "output_level": 6, "num_output_files": 1, "total_output_size": 12003769, "num_input_records": 7306, "num_output_records": 6780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847094036343, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847094037604, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:33.885323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.037641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.037647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.037649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.037651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:11:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:11:34.037653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:11:34 np0005603621 nova_compute[247399]: 2026-01-31 08:11:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:34 np0005603621 nova_compute[247399]: 2026-01-31 08:11:34.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:11:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:11:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:11:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 306 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.1 MiB/s wr, 235 op/s
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.224 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.226 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.226 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.226 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133747526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.662 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.735 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.736 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.768 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.871 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.872 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4310MB free_disk=20.859493255615234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.872 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:36 np0005603621 nova_compute[247399]: 2026-01-31 08:11:36.873 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.020 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance aa734433-4d60-4a63-9587-234fea7bc0d1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.021 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.021 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.098 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.167 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.167 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.180 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.200 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.236 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:37.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:37.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 306 MiB data, 838 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.2 MiB/s wr, 237 op/s
Jan 31 03:11:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091759747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.746 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.751 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.790 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.844 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:11:37 np0005603621 nova_compute[247399]: 2026-01-31 08:11:37.845 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:11:38 np0005603621 podman[301190]: 2026-01-31 08:11:38.510821882 +0000 UTC m=+0.055449966 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:11:38
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:11:38 np0005603621 nova_compute[247399]: 2026-01-31 08:11:38.603 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:11:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:11:38 np0005603621 nova_compute[247399]: 2026-01-31 08:11:38.845 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 317 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 295 op/s
Jan 31 03:11:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:40Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:d8:88 10.100.0.11
Jan 31 03:11:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:40Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:d8:88 10.100.0.11
Jan 31 03:11:40 np0005603621 podman[301210]: 2026-01-31 08:11:40.533089778 +0000 UTC m=+0.077773349 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:11:41 np0005603621 nova_compute[247399]: 2026-01-31 08:11:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:11:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:41.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 317 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 31 03:11:41 np0005603621 nova_compute[247399]: 2026-01-31 08:11:41.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:43.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:43 np0005603621 nova_compute[247399]: 2026-01-31 08:11:43.606 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 317 MiB data, 852 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 5.3 MiB/s wr, 294 op/s
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.537 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.537 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:45.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.562 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.635 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.636 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.644 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.644 247403 INFO nova.compute.claims [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:11:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 303 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.7 MiB/s wr, 252 op/s
Jan 31 03:11:45 np0005603621 nova_compute[247399]: 2026-01-31 08:11:45.755 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:11:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218372919' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.200 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.204 247403 DEBUG nova.compute.provider_tree [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.223 247403 DEBUG nova.scheduler.client.report [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.250 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.251 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.313 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.313 247403 DEBUG nova.network.neutron [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.335 247403 INFO nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.355 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.445 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.446 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.446 247403 INFO nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Creating image(s)#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.473 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.502 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.531 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.536 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.596 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.597 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.598 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.598 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.625 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.629 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.651 247403 DEBUG nova.policy [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '111fdaf79c084a91902fe37a7a502020', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '58e900992be7400fb940ca20f13e12d1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.772 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:46 np0005603621 nova_compute[247399]: 2026-01-31 08:11:46.954 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.040 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] resizing rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.176 247403 DEBUG nova.objects.instance [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.198 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.199 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Ensure instance console log exists: /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.199 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.200 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.200 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:47.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:47.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:47 np0005603621 nova_compute[247399]: 2026-01-31 08:11:47.636 247403 DEBUG nova.network.neutron [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Successfully created port: 59a695f7-8b56-4370-b276-673072319aa3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:11:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 301 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.7 MiB/s wr, 302 op/s
Jan 31 03:11:48 np0005603621 nova_compute[247399]: 2026-01-31 08:11:48.609 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:48 np0005603621 nova_compute[247399]: 2026-01-31 08:11:48.685 247403 DEBUG nova.network.neutron [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Successfully updated port: 59a695f7-8b56-4370-b276-673072319aa3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:11:48 np0005603621 nova_compute[247399]: 2026-01-31 08:11:48.700 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "refresh_cache-7d3c5986-9e8f-45d2-96c0-7e45646d8d52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:48 np0005603621 nova_compute[247399]: 2026-01-31 08:11:48.701 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquired lock "refresh_cache-7d3c5986-9e8f-45d2-96c0-7e45646d8d52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:48 np0005603621 nova_compute[247399]: 2026-01-31 08:11:48.701 247403 DEBUG nova.network.neutron [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:11:48 np0005603621 nova_compute[247399]: 2026-01-31 08:11:48.899 247403 DEBUG nova.network.neutron [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006606746114833427 of space, bias 1.0, pg target 1.982023834450028 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027172174530057695 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:11:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:11:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:49.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:11:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:49.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 342 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.0 MiB/s wr, 395 op/s
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.120 247403 DEBUG nova.compute.manager [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-changed-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.121 247403 DEBUG nova.compute.manager [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Refreshing instance network info cache due to event network-changed-59a695f7-8b56-4370-b276-673072319aa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.121 247403 DEBUG oslo_concurrency.lockutils [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-7d3c5986-9e8f-45d2-96c0-7e45646d8d52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.130 247403 DEBUG nova.network.neutron [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Updating instance_info_cache with network_info: [{"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.151 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Releasing lock "refresh_cache-7d3c5986-9e8f-45d2-96c0-7e45646d8d52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.152 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance network_info: |[{"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.152 247403 DEBUG oslo_concurrency.lockutils [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-7d3c5986-9e8f-45d2-96c0-7e45646d8d52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.152 247403 DEBUG nova.network.neutron [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Refreshing network info cache for port 59a695f7-8b56-4370-b276-673072319aa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.155 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Start _get_guest_xml network_info=[{"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.159 247403 WARNING nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.165 247403 DEBUG nova.virt.libvirt.host [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.165 247403 DEBUG nova.virt.libvirt.host [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.168 247403 DEBUG nova.virt.libvirt.host [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.169 247403 DEBUG nova.virt.libvirt.host [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.170 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.170 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.171 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.171 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.171 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.171 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.172 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.172 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.172 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.172 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.172 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.173 247403 DEBUG nova.virt.hardware [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.175 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174508176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.585 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.613 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:50 np0005603621 nova_compute[247399]: 2026-01-31 08:11:50.617 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3573457633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.051 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.053 247403 DEBUG nova.virt.libvirt.vif [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1076782147',display_name='tempest-ServerDiskConfigTestJSON-server-1076782147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1076782147',id=88,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-ik2aud0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDi
skConfigTestJSON-855158150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:46Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=7d3c5986-9e8f-45d2-96c0-7e45646d8d52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.053 247403 DEBUG nova.network.os_vif_util [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.054 247403 DEBUG nova.network.os_vif_util [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.055 247403 DEBUG nova.objects.instance [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.074 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <uuid>7d3c5986-9e8f-45d2-96c0-7e45646d8d52</uuid>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <name>instance-00000058</name>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1076782147</nova:name>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:11:50</nova:creationTime>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:user uuid="111fdaf79c084a91902fe37a7a502020">tempest-ServerDiskConfigTestJSON-855158150-project-member</nova:user>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:project uuid="58e900992be7400fb940ca20f13e12d1">tempest-ServerDiskConfigTestJSON-855158150</nova:project>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <nova:port uuid="59a695f7-8b56-4370-b276-673072319aa3">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <entry name="serial">7d3c5986-9e8f-45d2-96c0-7e45646d8d52</entry>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <entry name="uuid">7d3c5986-9e8f-45d2-96c0-7e45646d8d52</entry>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:e9:d6:fa"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <target dev="tap59a695f7-8b"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/console.log" append="off"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:11:51 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:11:51 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:11:51 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:11:51 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.075 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Preparing to wait for external event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.075 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.076 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.076 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.077 247403 DEBUG nova.virt.libvirt.vif [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:11:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1076782147',display_name='tempest-ServerDiskConfigTestJSON-server-1076782147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1076782147',id=88,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-ik2aud0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:46Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=7d3c5986-9e8f-45d2-96c0-7e45646d8d52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.077 247403 DEBUG nova.network.os_vif_util [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.078 247403 DEBUG nova.network.os_vif_util [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.078 247403 DEBUG os_vif [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.079 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.080 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.083 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.084 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59a695f7-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.084 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59a695f7-8b, col_values=(('external_ids', {'iface-id': '59a695f7-8b56-4370-b276-673072319aa3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:d6:fa', 'vm-uuid': '7d3c5986-9e8f-45d2-96c0-7e45646d8d52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.085 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603621 NetworkManager[49013]: <info>  [1769847111.0873] manager: (tap59a695f7-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.088 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.092 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.093 247403 INFO os_vif [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b')#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.144 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.145 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.145 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No VIF found with MAC fa:16:3e:e9:d6:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.146 247403 INFO nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Using config drive#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.172 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:51.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 342 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 306 op/s
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.883 247403 INFO nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Creating config drive at /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.887 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgab2udwu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.907 247403 DEBUG nova.network.neutron [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Updated VIF entry in instance network info cache for port 59a695f7-8b56-4370-b276-673072319aa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.907 247403 DEBUG nova.network.neutron [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Updating instance_info_cache with network_info: [{"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:11:51 np0005603621 nova_compute[247399]: 2026-01-31 08:11:51.927 247403 DEBUG oslo_concurrency.lockutils [req-438f72e4-9aa9-4abf-85de-38076f37f081 req-8daefa0a-14e9-4315-9e0b-4a93baaf4769 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-7d3c5986-9e8f-45d2-96c0-7e45646d8d52" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.011 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgab2udwu" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.040 247403 DEBUG nova.storage.rbd_utils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.044 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.165 247403 DEBUG oslo_concurrency.processutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.166 247403 INFO nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deleting local config drive /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config because it was imported into RBD.#033[00m
Jan 31 03:11:52 np0005603621 kernel: tap59a695f7-8b: entered promiscuous mode
Jan 31 03:11:52 np0005603621 NetworkManager[49013]: <info>  [1769847112.2112] manager: (tap59a695f7-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:52Z|00253|binding|INFO|Claiming lport 59a695f7-8b56-4370-b276-673072319aa3 for this chassis.
Jan 31 03:11:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:52Z|00254|binding|INFO|59a695f7-8b56-4370-b276-673072319aa3: Claiming fa:16:3e:e9:d6:fa 10.100.0.9
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.218 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:d6:fa 10.100.0.9'], port_security=['fa:16:3e:e9:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7d3c5986-9e8f-45d2-96c0-7e45646d8d52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=59a695f7-8b56-4370-b276-673072319aa3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:11:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:52Z|00255|binding|INFO|Setting lport 59a695f7-8b56-4370-b276-673072319aa3 ovn-installed in OVS
Jan 31 03:11:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:52Z|00256|binding|INFO|Setting lport 59a695f7-8b56-4370-b276-673072319aa3 up in Southbound
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.220 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 59a695f7-8b56-4370-b276-673072319aa3 in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 bound to our chassis#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.223 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f218695f-c744-4bd8-b2d8-122a920c7ca0#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.231 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcc99a8-d168-4d9c-8a6a-1f3136c24984]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.232 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf218695f-c1 in ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.235 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf218695f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.235 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[95149f43-610c-4792-b612-d9ae93f43280]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 systemd-udevd[301618]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.236 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc71ab36-282f-4810-97f5-4d6b3bcd4789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 systemd-machined[212769]: New machine qemu-37-instance-00000058.
Jan 31 03:11:52 np0005603621 NetworkManager[49013]: <info>  [1769847112.2480] device (tap59a695f7-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:11:52 np0005603621 NetworkManager[49013]: <info>  [1769847112.2488] device (tap59a695f7-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.250 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[dff264b0-22f3-4462-9067-f1e53cbcba01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 systemd[1]: Started Virtual Machine qemu-37-instance-00000058.
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.262 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a94e5adb-ef85-4fc3-97d4-8ef824789333]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.286 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc804fa-ca40-45c4-965b-885ff234248f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 systemd-udevd[301622]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:11:52 np0005603621 NetworkManager[49013]: <info>  [1769847112.2939] manager: (tapf218695f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.292 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e377e23a-8f8d-47eb-887d-cd380ae3e98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.319 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfca56d-4585-4453-b6ce-dcd3051f7284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.322 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[208bbdf5-1397-4238-96f0-147e878d9f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 NetworkManager[49013]: <info>  [1769847112.3440] device (tapf218695f-c0): carrier: link connected
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.348 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[eab9c29b-659e-42a9-92cf-afdea9edad0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.362 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e488d1-ff5f-4215-9ba7-4afa1040df4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641785, 'reachable_time': 26196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301651, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.374 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[44cbd750-7dfc-4999-ab68-5174457d6d52]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:830'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641785, 'tstamp': 641785}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301652, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.389 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e422324-443e-460f-a6d2-4a7b701e4206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641785, 'reachable_time': 26196, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301653, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.410 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[18c0e1d6-eb97-45e9-b839-f97407fd3a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.454 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f44c75b9-cf87-4bd6-b1b2-414575ad85ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.457 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.457 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.457 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf218695f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:52 np0005603621 NetworkManager[49013]: <info>  [1769847112.4599] manager: (tapf218695f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.459 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603621 kernel: tapf218695f-c0: entered promiscuous mode
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.463 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf218695f-c0, col_values=(('external_ids', {'iface-id': 'd3a551a2-38e3-48d3-bdee-f2493a79eca0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:11:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:11:52Z|00257|binding|INFO|Releasing lport d3a551a2-38e3-48d3-bdee-f2493a79eca0 from this chassis (sb_readonly=0)
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.466 247403 DEBUG nova.compute.manager [req-0cbbab2c-a19c-4345-b7a0-7c0c3f659a80 req-ca802208-b0d0-4c6b-a888-f2c30fdfdf6d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.466 247403 DEBUG oslo_concurrency.lockutils [req-0cbbab2c-a19c-4345-b7a0-7c0c3f659a80 req-ca802208-b0d0-4c6b-a888-f2c30fdfdf6d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.466 247403 DEBUG oslo_concurrency.lockutils [req-0cbbab2c-a19c-4345-b7a0-7c0c3f659a80 req-ca802208-b0d0-4c6b-a888-f2c30fdfdf6d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.467 247403 DEBUG oslo_concurrency.lockutils [req-0cbbab2c-a19c-4345-b7a0-7c0c3f659a80 req-ca802208-b0d0-4c6b-a888-f2c30fdfdf6d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.467 247403 DEBUG nova.compute.manager [req-0cbbab2c-a19c-4345-b7a0-7c0c3f659a80 req-ca802208-b0d0-4c6b-a888-f2c30fdfdf6d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Processing event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.467 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.468 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.469 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[efe7e398-9026-4ff1-a3e3-9d33e7b28334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.470 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.471 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'env', 'PROCESS_TAG=haproxy-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f218695f-c744-4bd8-b2d8-122a920c7ca0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.472 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.611 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.612 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847112.6107907, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.612 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Started (Lifecycle Event)#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.617 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.621 247403 INFO nova.virt.libvirt.driver [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance spawned successfully.#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.622 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.633 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.637 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.657 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.658 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847112.6109624, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.658 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.663 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.664 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.664 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.664 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.665 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.665 247403 DEBUG nova.virt.libvirt.driver [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.693 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.697 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847112.6162448, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.698 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Resumed (Lifecycle Event)
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.738 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.745 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.749 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.790 247403 INFO nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Took 6.35 seconds to spawn the instance on the hypervisor.
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.791 247403 DEBUG nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.808 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:11:52 np0005603621 podman[301727]: 2026-01-31 08:11:52.845686676 +0000 UTC m=+0.072732731 container create 938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:11:52 np0005603621 systemd[1]: Started libpod-conmon-938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0.scope.
Jan 31 03:11:52 np0005603621 podman[301727]: 2026-01-31 08:11:52.794868476 +0000 UTC m=+0.021914551 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:11:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.909 247403 INFO nova.compute.manager [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Took 7.30 seconds to build instance.
Jan 31 03:11:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de97dddf4b917684ddeeb3995e35f8e0be7f9255703a33dcfb30a753829fe38a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:52 np0005603621 podman[301727]: 2026-01-31 08:11:52.932037103 +0000 UTC m=+0.159083178 container init 938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:11:52 np0005603621 nova_compute[247399]: 2026-01-31 08:11:52.933 247403 DEBUG oslo_concurrency.lockutils [None req-a9fc73a5-289a-4a52-b7e7-a5d2dd4cb2b1 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:52 np0005603621 podman[301727]: 2026-01-31 08:11:52.937726373 +0000 UTC m=+0.164772428 container start 938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:11:52 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [NOTICE]   (301746) : New worker (301748) forked
Jan 31 03:11:52 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [NOTICE]   (301746) : Loading success.
Jan 31 03:11:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:52.996 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:11:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:53.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:53 np0005603621 nova_compute[247399]: 2026-01-31 08:11:53.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 326 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 5.4 MiB/s wr, 338 op/s
Jan 31 03:11:54 np0005603621 nova_compute[247399]: 2026-01-31 08:11:54.548 247403 DEBUG nova.compute.manager [req-674bffaa-17d7-4b06-8293-70d08ca35849 req-02197d8f-7d1e-4710-8b51-5c8d2b98a509 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:11:54 np0005603621 nova_compute[247399]: 2026-01-31 08:11:54.549 247403 DEBUG oslo_concurrency.lockutils [req-674bffaa-17d7-4b06-8293-70d08ca35849 req-02197d8f-7d1e-4710-8b51-5c8d2b98a509 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:11:54 np0005603621 nova_compute[247399]: 2026-01-31 08:11:54.549 247403 DEBUG oslo_concurrency.lockutils [req-674bffaa-17d7-4b06-8293-70d08ca35849 req-02197d8f-7d1e-4710-8b51-5c8d2b98a509 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:11:54 np0005603621 nova_compute[247399]: 2026-01-31 08:11:54.549 247403 DEBUG oslo_concurrency.lockutils [req-674bffaa-17d7-4b06-8293-70d08ca35849 req-02197d8f-7d1e-4710-8b51-5c8d2b98a509 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:11:54 np0005603621 nova_compute[247399]: 2026-01-31 08:11:54.550 247403 DEBUG nova.compute.manager [req-674bffaa-17d7-4b06-8293-70d08ca35849 req-02197d8f-7d1e-4710-8b51-5c8d2b98a509 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] No waiting events found dispatching network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:11:54 np0005603621 nova_compute[247399]: 2026-01-31 08:11:54.550 247403 WARNING nova.compute.manager [req-674bffaa-17d7-4b06-8293-70d08ca35849 req-02197d8f-7d1e-4710-8b51-5c8d2b98a509 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received unexpected event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 for instance with vm_state active and task_state None.
Jan 31 03:11:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:55.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 326 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 263 op/s
Jan 31 03:11:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:11:55.998 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:11:56 np0005603621 nova_compute[247399]: 2026-01-31 08:11:56.088 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.097 247403 INFO nova.compute.manager [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Rebuilding instance
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:11:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1bee6131-f271-4f0c-9c19-a7fe8a1103da does not exist
Jan 31 03:11:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6896631b-b1ab-4da6-a6a1-c2ff00e3028f does not exist
Jan 31 03:11:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 530f2de9-aaed-4e4a-bb4d-9a6097665c2b does not exist
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.375 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:11:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.437 247403 DEBUG nova.compute.manager [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.495 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.522 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.547 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'resources' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.560 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:11:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:57.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:11:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.571 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 03:11:57 np0005603621 nova_compute[247399]: 2026-01-31 08:11:57.576 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 03:11:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 326 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 243 op/s
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.839386343 +0000 UTC m=+0.037490361 container create ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:11:57 np0005603621 systemd[1]: Started libpod-conmon-ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f.scope.
Jan 31 03:11:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.821882342 +0000 UTC m=+0.019986380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.921212099 +0000 UTC m=+0.119316137 container init ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_murdock, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.928005082 +0000 UTC m=+0.126109100 container start ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.932130433 +0000 UTC m=+0.130234631 container attach ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:11:57 np0005603621 nice_murdock[302048]: 167 167
Jan 31 03:11:57 np0005603621 systemd[1]: libpod-ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f.scope: Deactivated successfully.
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.936268133 +0000 UTC m=+0.134372151 container died ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_murdock, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:11:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-858011eedafba2e5eb6ca3152d59d32dd63766991914016992974d4d7ab123db-merged.mount: Deactivated successfully.
Jan 31 03:11:57 np0005603621 podman[302032]: 2026-01-31 08:11:57.979962209 +0000 UTC m=+0.178066227 container remove ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:11:57 np0005603621 systemd[1]: libpod-conmon-ece57c4af34e0b55e0426ae4e0b8b3de6ca07910165bb1aad613cfaa1af89b8f.scope: Deactivated successfully.
Jan 31 03:11:58 np0005603621 podman[302073]: 2026-01-31 08:11:58.117941331 +0000 UTC m=+0.037047886 container create 21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_napier, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:11:58 np0005603621 systemd[1]: Started libpod-conmon-21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be.scope.
Jan 31 03:11:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:11:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3cf7a619408024f60a3d2eb9adf47b1aae409e827e249c2f0b8ad4f3a77247/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3cf7a619408024f60a3d2eb9adf47b1aae409e827e249c2f0b8ad4f3a77247/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3cf7a619408024f60a3d2eb9adf47b1aae409e827e249c2f0b8ad4f3a77247/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3cf7a619408024f60a3d2eb9adf47b1aae409e827e249c2f0b8ad4f3a77247/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b3cf7a619408024f60a3d2eb9adf47b1aae409e827e249c2f0b8ad4f3a77247/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:11:58 np0005603621 podman[302073]: 2026-01-31 08:11:58.101562466 +0000 UTC m=+0.020669041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:11:58 np0005603621 podman[302073]: 2026-01-31 08:11:58.208037607 +0000 UTC m=+0.127144182 container init 21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:11:58 np0005603621 podman[302073]: 2026-01-31 08:11:58.214687397 +0000 UTC m=+0.133793952 container start 21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_napier, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:11:58 np0005603621 podman[302073]: 2026-01-31 08:11:58.218340642 +0000 UTC m=+0.137447217 container attach 21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:11:58 np0005603621 nova_compute[247399]: 2026-01-31 08:11:58.615 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:11:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:11:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:11:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2571178593' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:11:58 np0005603621 bold_napier[302089]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:11:58 np0005603621 bold_napier[302089]: --> relative data size: 1.0
Jan 31 03:11:58 np0005603621 bold_napier[302089]: --> All data devices are unavailable
Jan 31 03:11:59 np0005603621 systemd[1]: libpod-21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be.scope: Deactivated successfully.
Jan 31 03:11:59 np0005603621 podman[302073]: 2026-01-31 08:11:59.007672068 +0000 UTC m=+0.926778623 container died 21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_napier, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:11:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2b3cf7a619408024f60a3d2eb9adf47b1aae409e827e249c2f0b8ad4f3a77247-merged.mount: Deactivated successfully.
Jan 31 03:11:59 np0005603621 podman[302073]: 2026-01-31 08:11:59.060256323 +0000 UTC m=+0.979362878 container remove 21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:11:59 np0005603621 systemd[1]: libpod-conmon-21ce1ea511752c0cd240e13322c0af4cf6fb4b6ecd4dfba11e6f8adadd4146be.scope: Deactivated successfully.
Jan 31 03:11:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:11:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:11:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:11:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:11:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:11:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.63613126 +0000 UTC m=+0.047478906 container create 1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:11:59 np0005603621 systemd[1]: Started libpod-conmon-1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49.scope.
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.613721884 +0000 UTC m=+0.025069550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:11:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:11:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 326 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 200 op/s
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.731837403 +0000 UTC m=+0.143185059 container init 1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_blackwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.741682222 +0000 UTC m=+0.153029868 container start 1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_blackwell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.745584005 +0000 UTC m=+0.156931711 container attach 1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:11:59 np0005603621 intelligent_blackwell[302275]: 167 167
Jan 31 03:11:59 np0005603621 systemd[1]: libpod-1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49.scope: Deactivated successfully.
Jan 31 03:11:59 np0005603621 conmon[302275]: conmon 1cfe4844fb4b6295f03a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49.scope/container/memory.events
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.748020582 +0000 UTC m=+0.159368268 container died 1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_blackwell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:11:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7d5ee725365d2b575b705445cb07901c138ee30d05e24ff45c4cdea8319e7c24-merged.mount: Deactivated successfully.
Jan 31 03:11:59 np0005603621 podman[302258]: 2026-01-31 08:11:59.789094555 +0000 UTC m=+0.200442231 container remove 1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:11:59 np0005603621 systemd[1]: libpod-conmon-1cfe4844fb4b6295f03ab29b17d207c2bcb7f987c81e3f3ff3c66ace86f49b49.scope: Deactivated successfully.
Jan 31 03:11:59 np0005603621 podman[302300]: 2026-01-31 08:11:59.965799407 +0000 UTC m=+0.049607613 container create 112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:12:00 np0005603621 systemd[1]: Started libpod-conmon-112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f.scope.
Jan 31 03:12:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:12:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a01e59269b80b646f73198538f4826ab611fd02e67fb1c81e35466a30a1b39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:00 np0005603621 podman[302300]: 2026-01-31 08:11:59.938370794 +0000 UTC m=+0.022179040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:12:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a01e59269b80b646f73198538f4826ab611fd02e67fb1c81e35466a30a1b39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a01e59269b80b646f73198538f4826ab611fd02e67fb1c81e35466a30a1b39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a01e59269b80b646f73198538f4826ab611fd02e67fb1c81e35466a30a1b39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:00 np0005603621 podman[302300]: 2026-01-31 08:12:00.058325259 +0000 UTC m=+0.142133505 container init 112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:12:00 np0005603621 podman[302300]: 2026-01-31 08:12:00.066211138 +0000 UTC m=+0.150019344 container start 112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_grothendieck, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:12:00 np0005603621 podman[302300]: 2026-01-31 08:12:00.070016137 +0000 UTC m=+0.153824383 container attach 112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_grothendieck, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]: {
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:    "0": [
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:        {
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "devices": [
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "/dev/loop3"
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            ],
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "lv_name": "ceph_lv0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "lv_size": "7511998464",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "name": "ceph_lv0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "tags": {
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.cluster_name": "ceph",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.crush_device_class": "",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.encrypted": "0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.osd_id": "0",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.type": "block",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:                "ceph.vdo": "0"
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            },
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "type": "block",
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:            "vg_name": "ceph_vg0"
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:        }
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]:    ]
Jan 31 03:12:00 np0005603621 distracted_grothendieck[302316]: }
Jan 31 03:12:00 np0005603621 systemd[1]: libpod-112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f.scope: Deactivated successfully.
Jan 31 03:12:00 np0005603621 podman[302300]: 2026-01-31 08:12:00.852838379 +0000 UTC m=+0.936646615 container died 112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_grothendieck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:12:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a0a01e59269b80b646f73198538f4826ab611fd02e67fb1c81e35466a30a1b39-merged.mount: Deactivated successfully.
Jan 31 03:12:00 np0005603621 podman[302300]: 2026-01-31 08:12:00.906322252 +0000 UTC m=+0.990130468 container remove 112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_grothendieck, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:12:00 np0005603621 systemd[1]: libpod-conmon-112fc20011a0ed5d99e5b33e572184a85d25bec33174e019aeca2f338ece8e5f.scope: Deactivated successfully.
Jan 31 03:12:01 np0005603621 nova_compute[247399]: 2026-01-31 08:12:01.093 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:12:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 26K writes, 102K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 26K writes, 8597 syncs, 3.08 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7619 writes, 28K keys, 7619 commit groups, 1.0 writes per commit group, ingest: 27.79 MB, 0.05 MB/s#012Interval WAL: 7619 writes, 2923 syncs, 2.61 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.524611755 +0000 UTC m=+0.037066478 container create e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:12:01 np0005603621 systemd[1]: Started libpod-conmon-e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4.scope.
Jan 31 03:12:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:01.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:01.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.510087567 +0000 UTC m=+0.022542310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.606876373 +0000 UTC m=+0.119331096 container init e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.616178147 +0000 UTC m=+0.128632880 container start e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.620402299 +0000 UTC m=+0.132857022 container attach e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:12:01 np0005603621 cranky_pare[302495]: 167 167
Jan 31 03:12:01 np0005603621 systemd[1]: libpod-e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4.scope: Deactivated successfully.
Jan 31 03:12:01 np0005603621 conmon[302495]: conmon e8aac5d54c5034202542 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4.scope/container/memory.events
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.623984932 +0000 UTC m=+0.136439655 container died e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:12:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ede56424ec1b12a59e2de12a527549ebd4c899ca54f193d0b71154f62e186316-merged.mount: Deactivated successfully.
Jan 31 03:12:01 np0005603621 podman[302478]: 2026-01-31 08:12:01.656018481 +0000 UTC m=+0.168473204 container remove e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:12:01 np0005603621 systemd[1]: libpod-conmon-e8aac5d54c50342025429c1ea4d351e4b04f65015d2d78c30ef1f1c8a05480b4.scope: Deactivated successfully.
Jan 31 03:12:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 326 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 93 KiB/s wr, 99 op/s
Jan 31 03:12:01 np0005603621 podman[302519]: 2026-01-31 08:12:01.823363188 +0000 UTC m=+0.045092390 container create 79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:12:01 np0005603621 systemd[1]: Started libpod-conmon-79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed.scope.
Jan 31 03:12:01 np0005603621 podman[302519]: 2026-01-31 08:12:01.804615188 +0000 UTC m=+0.026344420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:12:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:12:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17faa2df9808dd95b97dd8c0a91c0f97e4ec590ff1f08287c2355a11da7bf9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17faa2df9808dd95b97dd8c0a91c0f97e4ec590ff1f08287c2355a11da7bf9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17faa2df9808dd95b97dd8c0a91c0f97e4ec590ff1f08287c2355a11da7bf9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a17faa2df9808dd95b97dd8c0a91c0f97e4ec590ff1f08287c2355a11da7bf9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:01 np0005603621 podman[302519]: 2026-01-31 08:12:01.947731242 +0000 UTC m=+0.169460494 container init 79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_colden, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:12:01 np0005603621 podman[302519]: 2026-01-31 08:12:01.959376079 +0000 UTC m=+0.181105281 container start 79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 31 03:12:01 np0005603621 podman[302519]: 2026-01-31 08:12:01.962552289 +0000 UTC m=+0.184281511 container attach 79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:12:02 np0005603621 pensive_colden[302549]: {
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:        "osd_id": 0,
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:        "type": "bluestore"
Jan 31 03:12:02 np0005603621 pensive_colden[302549]:    }
Jan 31 03:12:02 np0005603621 pensive_colden[302549]: }
Jan 31 03:12:02 np0005603621 systemd[1]: libpod-79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed.scope: Deactivated successfully.
Jan 31 03:12:02 np0005603621 podman[302519]: 2026-01-31 08:12:02.821247238 +0000 UTC m=+1.042976460 container died 79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:12:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a17faa2df9808dd95b97dd8c0a91c0f97e4ec590ff1f08287c2355a11da7bf9c-merged.mount: Deactivated successfully.
Jan 31 03:12:02 np0005603621 podman[302519]: 2026-01-31 08:12:02.876613532 +0000 UTC m=+1.098342734 container remove 79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_colden, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:12:02 np0005603621 systemd[1]: libpod-conmon-79f99ef1d10adf9672531f3fe373f59269d0555fdb0c5912dcd47061869155ed.scope: Deactivated successfully.
Jan 31 03:12:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:12:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:12:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:12:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:12:02 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f9b2b30e-4681-4b25-9cfb-dff862b7f205 does not exist
Jan 31 03:12:02 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b9faf264-63f0-4352-8f6d-9e2ccc533897 does not exist
Jan 31 03:12:02 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ff41a39a-49c5-4d14-ac2f-aae90910b2e1 does not exist
Jan 31 03:12:03 np0005603621 nova_compute[247399]: 2026-01-31 08:12:03.055 247403 DEBUG oslo_concurrency.lockutils [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "interface-aa734433-4d60-4a63-9587-234fea7bc0d1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:03 np0005603621 nova_compute[247399]: 2026-01-31 08:12:03.056 247403 DEBUG oslo_concurrency.lockutils [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "interface-aa734433-4d60-4a63-9587-234fea7bc0d1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:03 np0005603621 nova_compute[247399]: 2026-01-31 08:12:03.057 247403 DEBUG nova.objects.instance [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'flavor' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:03.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:12:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:03.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:12:03 np0005603621 nova_compute[247399]: 2026-01-31 08:12:03.616 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 353 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 116 op/s
Jan 31 03:12:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:12:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:12:03 np0005603621 nova_compute[247399]: 2026-01-31 08:12:03.970 247403 DEBUG nova.objects.instance [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'pci_requests' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:03 np0005603621 nova_compute[247399]: 2026-01-31 08:12:03.986 247403 DEBUG nova.network.neutron [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:12:04 np0005603621 nova_compute[247399]: 2026-01-31 08:12:04.221 247403 DEBUG nova.policy [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '80d33e2f57b64bd78a04cd8875660772', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96cffd653fc04612bc1b3434529fb946', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:12:04 np0005603621 nova_compute[247399]: 2026-01-31 08:12:04.813 247403 DEBUG nova.network.neutron [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Successfully created port: 31b86ebd-10fb-49a2-8013-feec6913e5a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:12:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:05Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:d6:fa 10.100.0.9
Jan 31 03:12:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:05Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:d6:fa 10.100.0.9
Jan 31 03:12:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:05.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 378 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.865 247403 DEBUG nova.network.neutron [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Successfully updated port: 31b86ebd-10fb-49a2-8013-feec6913e5a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.898 247403 DEBUG oslo_concurrency.lockutils [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.899 247403 DEBUG oslo_concurrency.lockutils [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquired lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.899 247403 DEBUG nova.network.neutron [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.972 247403 DEBUG nova.compute.manager [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-changed-31b86ebd-10fb-49a2-8013-feec6913e5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.972 247403 DEBUG nova.compute.manager [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Refreshing instance network info cache due to event network-changed-31b86ebd-10fb-49a2-8013-feec6913e5a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:12:05 np0005603621 nova_compute[247399]: 2026-01-31 08:12:05.973 247403 DEBUG oslo_concurrency.lockutils [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:06 np0005603621 nova_compute[247399]: 2026-01-31 08:12:06.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:12:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:12:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:07.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:12:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:07.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:07 np0005603621 nova_compute[247399]: 2026-01-31 08:12:07.639 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:12:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 377 MiB data, 908 MiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 3.2 MiB/s wr, 86 op/s
Jan 31 03:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:08 np0005603621 nova_compute[247399]: 2026-01-31 08:12:08.621 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.167 247403 DEBUG nova.network.neutron [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.334 247403 DEBUG oslo_concurrency.lockutils [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Releasing lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.336 247403 DEBUG oslo_concurrency.lockutils [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.337 247403 DEBUG nova.network.neutron [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Refreshing network info cache for port 31b86ebd-10fb-49a2-8013-feec6913e5a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.340 247403 DEBUG nova.virt.libvirt.vif [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.340 247403 DEBUG nova.network.os_vif_util [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.341 247403 DEBUG nova.network.os_vif_util [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.341 247403 DEBUG os_vif [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.342 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.343 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.346 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.346 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31b86ebd-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.347 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31b86ebd-10, col_values=(('external_ids', {'iface-id': '31b86ebd-10fb-49a2-8013-feec6913e5a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:77:bd', 'vm-uuid': 'aa734433-4d60-4a63-9587-234fea7bc0d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.348 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.3501] manager: (tap31b86ebd-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.357 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.359 247403 INFO os_vif [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10')#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.360 247403 DEBUG nova.virt.libvirt.vif [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',im
age_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.360 247403 DEBUG nova.network.os_vif_util [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.361 247403 DEBUG nova.network.os_vif_util [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.364 247403 DEBUG nova.virt.libvirt.guest [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] attach device xml: <interface type="ethernet">
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:b3:77:bd"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <target dev="tap31b86ebd-10"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:12:09 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:12:09 np0005603621 kernel: tap31b86ebd-10: entered promiscuous mode
Jan 31 03:12:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:09Z|00258|binding|INFO|Claiming lport 31b86ebd-10fb-49a2-8013-feec6913e5a2 for this chassis.
Jan 31 03:12:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:09Z|00259|binding|INFO|31b86ebd-10fb-49a2-8013-feec6913e5a2: Claiming fa:16:3e:b3:77:bd 10.10.10.122
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.3779] manager: (tap31b86ebd-10): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:09Z|00260|binding|INFO|Setting lport 31b86ebd-10fb-49a2-8013-feec6913e5a2 ovn-installed in OVS
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 systemd-udevd[302685]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.4288] device (tap31b86ebd-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.4299] device (tap31b86ebd-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:12:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:09Z|00261|binding|INFO|Setting lport 31b86ebd-10fb-49a2-8013-feec6913e5a2 up in Southbound
Jan 31 03:12:09 np0005603621 podman[302678]: 2026-01-31 08:12:09.460431052 +0000 UTC m=+0.056055106 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.460 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:77:bd 10.10.10.122'], port_security=['fa:16:3e:b3:77:bd 10.10.10.122'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.122/24', 'neutron:device_id': 'aa734433-4d60-4a63-9587-234fea7bc0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96cffd653fc04612bc1b3434529fb946', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4963ae94-2a25-4edf-a864-abd23e9e6bd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6d3c7304-ad38-454d-8885-d943e3534e29, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=31b86ebd-10fb-49a2-8013-feec6913e5a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.461 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 31b86ebd-10fb-49a2-8013-feec6913e5a2 in datapath b8b9c1e2-08e9-43e7-96b0-463d95892627 bound to our chassis#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.463 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b8b9c1e2-08e9-43e7-96b0-463d95892627#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.472 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f54d6fb8-673a-4b14-b2de-f79c5d775888]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.472 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb8b9c1e2-01 in ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.474 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb8b9c1e2-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.474 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[749cd0c0-8986-469f-a3ba-390540e16597]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.475 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[900e247e-1d70-41f3-bf06-ed8dd80ff718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.482 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fca9b82c-461d-4ce9-bc5e-01a14b66895d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.494 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[85737c62-53fa-433e-9367-0e957685398b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.517 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[59a40f44-0b1e-41c6-b448-94fd6377c0a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 systemd-udevd[302691]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.523 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d105fb1-63b0-4b25-b771-99dc0b09d34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.5242] manager: (tapb8b9c1e2-00): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.549 247403 DEBUG nova.virt.libvirt.driver [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.550 247403 DEBUG nova.virt.libvirt.driver [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.549 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6406ee92-0d87-468b-b78c-8ff372df734c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.550 247403 DEBUG nova.virt.libvirt.driver [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No VIF found with MAC fa:16:3e:1b:d8:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.552 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3f42a3-f03c-44c2-97b5-6fe69a43a4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.5689] device (tapb8b9c1e2-00): carrier: link connected
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.571 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[51bf3326-59e3-40ca-b388-877d06996e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.582 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[572a3290-f899-4332-834f-f9c0745e2854]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8b9c1e2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:64:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643508, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302727, 'error': None, 'target': 'ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:09.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.584 247403 DEBUG nova.virt.libvirt.guest [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:name>tempest-device-tagging-server-314100001</nova:name>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:12:09</nova:creationTime>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:user uuid="80d33e2f57b64bd78a04cd8875660772">tempest-TaggedAttachmentsTest-1830131156-project-member</nova:user>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:project uuid="96cffd653fc04612bc1b3434529fb946">tempest-TaggedAttachmentsTest-1830131156</nova:project>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:port uuid="531d3df2-94ae-4b9b-902c-35f7c2b5f6f8">
Jan 31 03:12:09 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    <nova:port uuid="31b86ebd-10fb-49a2-8013-feec6913e5a2">
Jan 31 03:12:09 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.10.10.122" ipVersion="4"/>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:09 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:12:09 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:12:09 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:12:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.593 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e7fb32d5-a183-45ac-a49a-60eb94a99ff2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:64cd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643508, 'tstamp': 643508}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302728, 'error': None, 'target': 'ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.609 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[64de8a94-5f59-41fc-a05c-afe94f2a5f32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb8b9c1e2-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:64:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643508, 'reachable_time': 43689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302729, 'error': None, 'target': 'ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.620 247403 DEBUG oslo_concurrency.lockutils [None req-25a6c7ac-c815-4fb0-8699-03213b03584a 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "interface-aa734433-4d60-4a63-9587-234fea7bc0d1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.630 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1a21191a-e781-499c-ab0e-a6dc05e69176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.670 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0550ce-67a5-4976-805b-45dfb474dedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.671 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8b9c1e2-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.671 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.671 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb8b9c1e2-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:09 np0005603621 NetworkManager[49013]: <info>  [1769847129.6743] manager: (tapb8b9c1e2-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Jan 31 03:12:09 np0005603621 kernel: tapb8b9c1e2-00: entered promiscuous mode
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.676 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb8b9c1e2-00, col_values=(('external_ids', {'iface-id': '5d07fb9d-7155-4702-8855-81b7e38e9c70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:09Z|00262|binding|INFO|Releasing lport 5d07fb9d-7155-4702-8855-81b7e38e9c70 from this chassis (sb_readonly=0)
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.677 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 nova_compute[247399]: 2026-01-31 08:12:09.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.686 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b8b9c1e2-08e9-43e7-96b0-463d95892627.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b8b9c1e2-08e9-43e7-96b0-463d95892627.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.687 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d70d8ef-f7af-425d-aafa-a01a1423c8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.688 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-b8b9c1e2-08e9-43e7-96b0-463d95892627
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/b8b9c1e2-08e9-43e7-96b0-463d95892627.pid.haproxy
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID b8b9c1e2-08e9-43e7-96b0-463d95892627
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:12:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:09.688 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'env', 'PROCESS_TAG=haproxy-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b8b9c1e2-08e9-43e7-96b0-463d95892627.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:12:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 350 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 760 KiB/s rd, 4.9 MiB/s wr, 144 op/s
Jan 31 03:12:10 np0005603621 podman[302761]: 2026-01-31 08:12:10.006231972 +0000 UTC m=+0.045620157 container create 7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 03:12:10 np0005603621 kernel: tap59a695f7-8b (unregistering): left promiscuous mode
Jan 31 03:12:10 np0005603621 NetworkManager[49013]: <info>  [1769847130.0346] device (tap59a695f7-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:10Z|00263|binding|INFO|Releasing lport 59a695f7-8b56-4370-b276-673072319aa3 from this chassis (sb_readonly=0)
Jan 31 03:12:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:10Z|00264|binding|INFO|Setting lport 59a695f7-8b56-4370-b276-673072319aa3 down in Southbound
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.042 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:10Z|00265|binding|INFO|Removing iface tap59a695f7-8b ovn-installed in OVS
Jan 31 03:12:10 np0005603621 systemd[1]: Started libpod-conmon-7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb.scope.
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.051 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.055 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:d6:fa 10.100.0.9'], port_security=['fa:16:3e:e9:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7d3c5986-9e8f-45d2-96c0-7e45646d8d52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=59a695f7-8b56-4370-b276-673072319aa3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:10 np0005603621 podman[302761]: 2026-01-31 08:12:09.980869244 +0000 UTC m=+0.020257449 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:12:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:12:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc38d58f6c2ddca0f901d1ef926e7fbc24f8399ee3c8e44f3e8e1c1e4790aeb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:10 np0005603621 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 31 03:12:10 np0005603621 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000058.scope: Consumed 13.203s CPU time.
Jan 31 03:12:10 np0005603621 podman[302761]: 2026-01-31 08:12:10.095300166 +0000 UTC m=+0.134688401 container init 7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:12:10 np0005603621 systemd-machined[212769]: Machine qemu-37-instance-00000058 terminated.
Jan 31 03:12:10 np0005603621 podman[302761]: 2026-01-31 08:12:10.100500989 +0000 UTC m=+0.139889184 container start 7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.103 247403 DEBUG nova.compute.manager [req-62aff90b-e067-49c7-bf62-d1ae35580b8f req-29c96745-8280-4333-be63-ea2135b8029f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.104 247403 DEBUG oslo_concurrency.lockutils [req-62aff90b-e067-49c7-bf62-d1ae35580b8f req-29c96745-8280-4333-be63-ea2135b8029f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.105 247403 DEBUG oslo_concurrency.lockutils [req-62aff90b-e067-49c7-bf62-d1ae35580b8f req-29c96745-8280-4333-be63-ea2135b8029f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.105 247403 DEBUG oslo_concurrency.lockutils [req-62aff90b-e067-49c7-bf62-d1ae35580b8f req-29c96745-8280-4333-be63-ea2135b8029f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.105 247403 DEBUG nova.compute.manager [req-62aff90b-e067-49c7-bf62-d1ae35580b8f req-29c96745-8280-4333-be63-ea2135b8029f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.105 247403 WARNING nova.compute.manager [req-62aff90b-e067-49c7-bf62-d1ae35580b8f req-29c96745-8280-4333-be63-ea2135b8029f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received unexpected event network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [NOTICE]   (302784) : New worker (302786) forked
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [NOTICE]   (302784) : Loading success.
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.186 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 59a695f7-8b56-4370-b276-673072319aa3 in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 unbound from our chassis#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.188 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f218695f-c744-4bd8-b2d8-122a920c7ca0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.189 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5840b78c-0c70-41f4-a4c1-18877af4e84a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.190 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace which is not needed anymore#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.282 247403 DEBUG oslo_concurrency.lockutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.284 247403 DEBUG oslo_concurrency.lockutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [NOTICE]   (301746) : haproxy version is 2.8.14-c23fe91
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [NOTICE]   (301746) : path to executable is /usr/sbin/haproxy
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [WARNING]  (301746) : Exiting Master process...
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [ALERT]    (301746) : Current worker (301748) exited with code 143 (Terminated)
Jan 31 03:12:10 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[301742]: [WARNING]  (301746) : All workers exited. Exiting... (0)
Jan 31 03:12:10 np0005603621 systemd[1]: libpod-938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0.scope: Deactivated successfully.
Jan 31 03:12:10 np0005603621 podman[302813]: 2026-01-31 08:12:10.309525479 +0000 UTC m=+0.048374674 container died 938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:12:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0-userdata-shm.mount: Deactivated successfully.
Jan 31 03:12:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-de97dddf4b917684ddeeb3995e35f8e0be7f9255703a33dcfb30a753829fe38a-merged.mount: Deactivated successfully.
Jan 31 03:12:10 np0005603621 podman[302813]: 2026-01-31 08:12:10.345849823 +0000 UTC m=+0.084699018 container cleanup 938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.348 247403 DEBUG nova.objects.instance [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'flavor' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:10 np0005603621 systemd[1]: libpod-conmon-938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0.scope: Deactivated successfully.
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.387 247403 DEBUG oslo_concurrency.lockutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:10 np0005603621 podman[302852]: 2026-01-31 08:12:10.405804699 +0000 UTC m=+0.036553301 container remove 938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.411 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4322e0-60a7-4bfb-8eb9-ec9a621cac78]: (4, ('Sat Jan 31 08:12:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0)\n938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0\nSat Jan 31 08:12:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0)\n938c2302f1b607e742e9a8bff56127846e6c1e23749bbbf7de9977da3b8f05a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.412 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9fe690-6388-40a9-8456-95aa38641bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.413 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.459 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603621 kernel: tapf218695f-c0: left promiscuous mode
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.469 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.472 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fd33d3-e97f-4220-b0df-ed11feb9ff0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.490 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8751dd-8694-4b69-aff5-fd430636650a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.491 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8daf0eb3-028d-410e-9a2e-738adacd083d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.504 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e182f3-e256-48a7-b41d-47464cbe3f29]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641779, 'reachable_time': 22603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302870, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 systemd[1]: run-netns-ovnmeta\x2df218695f\x2dc744\x2d4bd8\x2db2d8\x2d122a920c7ca0.mount: Deactivated successfully.
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.509 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:12:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:10.509 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[0614abaf-14ff-4b56-8e1a-fc9e71a5aa3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.613 247403 DEBUG oslo_concurrency.lockutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.614 247403 DEBUG oslo_concurrency.lockutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.614 247403 INFO nova.compute.manager [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Attaching volume 551f775f-0622-435f-b9d7-dcb85f156132 to /dev/vdb#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.653 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.659 247403 INFO nova.virt.libvirt.driver [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance destroyed successfully.#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.665 247403 INFO nova.virt.libvirt.driver [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance destroyed successfully.#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.665 247403 DEBUG nova.virt.libvirt.vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1076782147',display_name='tempest-ServerDiskConfigTestJSON-server-1076782147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1076782147',id=88,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-ik2aud0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-
member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:11:56Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=7d3c5986-9e8f-45d2-96c0-7e45646d8d52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.666 247403 DEBUG nova.network.os_vif_util [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.667 247403 DEBUG nova.network.os_vif_util [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.667 247403 DEBUG os_vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.669 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.669 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59a695f7-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.672 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.675 247403 INFO os_vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b')#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.822 247403 DEBUG os_brick.utils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.824 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.834 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.835 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[821f47da-2db6-4f59-9863-a68b87beceac]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.836 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.841 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.841 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[069ef686-3177-40bf-830a-e9c7aa22f6e7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.842 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.846 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.846 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[48c7e35d-6595-447b-96dd-c9a5c4947887]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.847 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8b9232-f1e1-4b32-9be9-2f5a9ae1ed1e]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.848 247403 DEBUG oslo_concurrency.processutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:10Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:77:bd 10.10.10.122
Jan 31 03:12:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:10Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:77:bd 10.10.10.122
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.874 247403 DEBUG oslo_concurrency.processutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.877 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.878 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.878 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.879 247403 DEBUG os_brick.utils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:12:10 np0005603621 nova_compute[247399]: 2026-01-31 08:12:10.879 247403 DEBUG nova.virt.block_device [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating existing volume attachment record: d99a2a8d-ec49-470e-8e1e-15a8ba4bf847 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.128 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deleting instance files /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_del#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.129 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deletion of /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_del complete#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.358 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.359 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Creating image(s)#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.388 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.427 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.465 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.475 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:11 np0005603621 podman[302931]: 2026-01-31 08:12:11.539676561 +0000 UTC m=+0.090315194 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.546 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.548 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "365f9823d2619ef09948bdeed685488da63755b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.548 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.549 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.580 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.585 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:11.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:11.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.677 247403 DEBUG nova.network.neutron [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updated VIF entry in instance network info cache for port 31b86ebd-10fb-49a2-8013-feec6913e5a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.677 247403 DEBUG nova.network.neutron [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.696 247403 DEBUG oslo_concurrency.lockutils [req-c3255e9d-9934-480c-9fa2-7baeba088d1a req-4bca4bc5-390c-4fb5-b65f-a7dac65372f6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 350 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 4.9 MiB/s wr, 129 op/s
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.857 247403 DEBUG nova.objects.instance [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'flavor' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.882 247403 DEBUG nova.virt.libvirt.driver [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Attempting to attach volume 551f775f-0622-435f-b9d7-dcb85f156132 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.886 247403 DEBUG nova.virt.libvirt.guest [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-551f775f-0622-435f-b9d7-dcb85f156132">
Jan 31 03:12:11 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:12:11 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:12:11 np0005603621 nova_compute[247399]:  <serial>551f775f-0622-435f-b9d7-dcb85f156132</serial>
Jan 31 03:12:11 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:12:11 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.910 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:11 np0005603621 nova_compute[247399]: 2026-01-31 08:12:11.993 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] resizing rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.056 247403 DEBUG nova.virt.libvirt.driver [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.056 247403 DEBUG nova.virt.libvirt.driver [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.057 247403 DEBUG nova.virt.libvirt.driver [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] No VIF found with MAC fa:16:3e:1b:d8:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.113 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.114 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Ensure instance console log exists: /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.115 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.115 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.115 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.118 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Start _get_guest_xml network_info=[{"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.123 247403 WARNING nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.129 247403 DEBUG nova.virt.libvirt.host [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.130 247403 DEBUG nova.virt.libvirt.host [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.133 247403 DEBUG nova.virt.libvirt.host [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.134 247403 DEBUG nova.virt.libvirt.host [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.135 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.135 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.136 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.136 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.136 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.137 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.137 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.137 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.137 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.138 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.138 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.138 247403 DEBUG nova.virt.hardware [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.138 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.159 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.227 247403 DEBUG nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.227 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.227 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.228 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.228 247403 DEBUG nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.228 247403 WARNING nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received unexpected event network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.228 247403 DEBUG nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-unplugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.228 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.229 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.229 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.229 247403 DEBUG nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] No waiting events found dispatching network-vif-unplugged-59a695f7-8b56-4370-b276-673072319aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.229 247403 WARNING nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received unexpected event network-vif-unplugged-59a695f7-8b56-4370-b276-673072319aa3 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.229 247403 DEBUG nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.230 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.230 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.230 247403 DEBUG oslo_concurrency.lockutils [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.230 247403 DEBUG nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] No waiting events found dispatching network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.230 247403 WARNING nova.compute.manager [req-ac9f7953-b896-4349-9697-2090af8ce90c req-e8d54f84-edf4-4cfb-9b3d-9aa036b83032 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received unexpected event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.392 247403 DEBUG oslo_concurrency.lockutils [None req-fecb19b6-419b-4cb3-898c-88b983df4c54 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/655075853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.597 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.628 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:12 np0005603621 nova_compute[247399]: 2026-01-31 08:12:12.633 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:12:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1846562966' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.083 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.085 247403 DEBUG nova.virt.libvirt.vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:11:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1076782147',display_name='tempest-ServerDiskConfigTestJSON-server-1076782147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1076782147',id=88,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-ik2aud0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:11Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=7d3c5986-9e8f-45d2-96c0-7e45646d8d52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.086 247403 DEBUG nova.network.os_vif_util [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.088 247403 DEBUG nova.network.os_vif_util [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.092 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <uuid>7d3c5986-9e8f-45d2-96c0-7e45646d8d52</uuid>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <name>instance-00000058</name>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1076782147</nova:name>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:12:12</nova:creationTime>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:user uuid="111fdaf79c084a91902fe37a7a502020">tempest-ServerDiskConfigTestJSON-855158150-project-member</nova:user>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:project uuid="58e900992be7400fb940ca20f13e12d1">tempest-ServerDiskConfigTestJSON-855158150</nova:project>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="0864ca59-9877-4e6d-adfc-f0a3204ed8f8"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <nova:port uuid="59a695f7-8b56-4370-b276-673072319aa3">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <entry name="serial">7d3c5986-9e8f-45d2-96c0-7e45646d8d52</entry>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <entry name="uuid">7d3c5986-9e8f-45d2-96c0-7e45646d8d52</entry>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:e9:d6:fa"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <target dev="tap59a695f7-8b"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/console.log" append="off"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:12:13 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:12:13 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:12:13 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:12:13 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.093 247403 DEBUG nova.compute.manager [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Preparing to wait for external event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.093 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.094 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.094 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.095 247403 DEBUG nova.virt.libvirt.vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:11:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1076782147',display_name='tempest-ServerDiskConfigTestJSON-server-1076782147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1076782147',id=88,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-ik2aud0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempe
st-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:11Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=7d3c5986-9e8f-45d2-96c0-7e45646d8d52,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.095 247403 DEBUG nova.network.os_vif_util [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.096 247403 DEBUG nova.network.os_vif_util [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.097 247403 DEBUG os_vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.098 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.099 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.099 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.102 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59a695f7-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.103 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59a695f7-8b, col_values=(('external_ids', {'iface-id': '59a695f7-8b56-4370-b276-673072319aa3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:d6:fa', 'vm-uuid': '7d3c5986-9e8f-45d2-96c0-7e45646d8d52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.105 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:13 np0005603621 NetworkManager[49013]: <info>  [1769847133.1065] manager: (tap59a695f7-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.111 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.112 247403 INFO os_vif [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b')#033[00m
Jan 31 03:12:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:13.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:13.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.623 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.659 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.659 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.659 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No VIF found with MAC fa:16:3e:e9:d6:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.660 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Using config drive#033[00m
Jan 31 03:12:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.691 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.712 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 358 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 419 KiB/s rd, 7.1 MiB/s wr, 209 op/s
Jan 31 03:12:13 np0005603621 nova_compute[247399]: 2026-01-31 08:12:13.748 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'keypairs' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:14.027 159995 DEBUG eventlet.wsgi.server [-] (159995) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:14.028 159995 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: Accept: */*#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: Connection: close#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: Content-Type: text/plain#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: Host: 169.254.169.254#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: User-Agent: curl/7.84.0#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: X-Forwarded-For: 10.100.0.11#015
Jan 31 03:12:14 np0005603621 ovn_metadata_agent[159729]: X-Ovn-Network-Id: cc06a3f7-7401-4f89-8964-7d6d4156be1a __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 31 03:12:14 np0005603621 nova_compute[247399]: 2026-01-31 08:12:14.898 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Creating config drive at /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config#033[00m
Jan 31 03:12:14 np0005603621 nova_compute[247399]: 2026-01-31 08:12:14.904 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8ykj2fe2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.002 159995 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.002 159995 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1914 time: 0.9739423#033[00m
Jan 31 03:12:15 np0005603621 haproxy-metadata-proxy-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301128]: 10.100.0.11:33110 [31/Jan/2026:08:12:14.026] listener listener/metadata 0/0/0/976/976 200 1898 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.044 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8ykj2fe2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.087 247403 DEBUG nova.storage.rbd_utils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] rbd image 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.093 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.288 247403 DEBUG oslo_concurrency.processutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config 7d3c5986-9e8f-45d2-96c0-7e45646d8d52_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.290 247403 INFO nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deleting local config drive /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52/disk.config because it was imported into RBD.#033[00m
Jan 31 03:12:15 np0005603621 NetworkManager[49013]: <info>  [1769847135.3305] manager: (tap59a695f7-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Jan 31 03:12:15 np0005603621 kernel: tap59a695f7-8b: entered promiscuous mode
Jan 31 03:12:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:15Z|00266|binding|INFO|Claiming lport 59a695f7-8b56-4370-b276-673072319aa3 for this chassis.
Jan 31 03:12:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:15Z|00267|binding|INFO|59a695f7-8b56-4370-b276-673072319aa3: Claiming fa:16:3e:e9:d6:fa 10.100.0.9
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.335 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.344 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:d6:fa 10.100.0.9'], port_security=['fa:16:3e:e9:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7d3c5986-9e8f-45d2-96c0-7e45646d8d52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=59a695f7-8b56-4370-b276-673072319aa3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:15Z|00268|binding|INFO|Setting lport 59a695f7-8b56-4370-b276-673072319aa3 ovn-installed in OVS
Jan 31 03:12:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:15Z|00269|binding|INFO|Setting lport 59a695f7-8b56-4370-b276-673072319aa3 up in Southbound
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.346 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 59a695f7-8b56-4370-b276-673072319aa3 in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 bound to our chassis#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.347 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f218695f-c744-4bd8-b2d8-122a920c7ca0#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:15 np0005603621 systemd-udevd[303248]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.361 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9f15bec9-2d72-4b59-b138-2e14402b0f22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.362 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf218695f-c1 in ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.364 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf218695f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.364 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[233250cf-f2db-41df-82ec-83ac70797840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 systemd-machined[212769]: New machine qemu-38-instance-00000058.
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.369 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eadd1f72-f453-4d09-8ec0-c6b8575a8704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 NetworkManager[49013]: <info>  [1769847135.3730] device (tap59a695f7-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:12:15 np0005603621 NetworkManager[49013]: <info>  [1769847135.3748] device (tap59a695f7-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.381 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccf6c4f-c8fe-4e12-914e-f87a7aea12d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 systemd[1]: Started Virtual Machine qemu-38-instance-00000058.
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.406 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[52d55973-75e7-4a91-ac35-ce846395414a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.440 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2b004b35-2140-4234-a003-8604081a89d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 systemd-udevd[303252]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:12:15 np0005603621 NetworkManager[49013]: <info>  [1769847135.4479] manager: (tapf218695f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.447 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[19efccd3-9f2c-4ad5-9d0c-cdf4a664db10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.489 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2e18617c-7e6f-4dc9-a54e-c8b6f5db160b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.493 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[137d2f61-9d27-4fbc-95a2-3195c553c430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.515 247403 DEBUG oslo_concurrency.lockutils [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.516 247403 DEBUG oslo_concurrency.lockutils [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:15 np0005603621 NetworkManager[49013]: <info>  [1769847135.5192] device (tapf218695f-c0): carrier: link connected
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.525 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2c620812-d02f-4ab8-9e13-c2fc9a695875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.534 247403 INFO nova.compute.manager [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Detaching volume 551f775f-0622-435f-b9d7-dcb85f156132#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.555 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[96aa8830-b4a0-406a-bc12-32f846f43dcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644103, 'reachable_time': 21130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303281, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.580 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9c5b0c2b-e8b7-40dc-ba05-2bd384e1683d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:830'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 644103, 'tstamp': 644103}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303283, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:15.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:15.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.595 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a4907fa6-9ee0-47c3-a05e-aa943fba5a88]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644103, 'reachable_time': 21130, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303284, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.631 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[33944a17-ea84-4596-9a25-c569c927fff5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.681 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d6281e48-c399-41b6-a459-06e94d70f66e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.683 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.684 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.684 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf218695f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:15 np0005603621 kernel: tapf218695f-c0: entered promiscuous mode
Jan 31 03:12:15 np0005603621 NetworkManager[49013]: <info>  [1769847135.6873] manager: (tapf218695f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.690 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf218695f-c0, col_values=(('external_ids', {'iface-id': 'd3a551a2-38e3-48d3-bdee-f2493a79eca0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:15Z|00270|binding|INFO|Releasing lport d3a551a2-38e3-48d3-bdee-f2493a79eca0 from this chassis (sb_readonly=0)
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.693 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.694 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad2ce4d-753a-46f5-ab69-de1f645c3abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.696 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:12:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:15.697 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'env', 'PROCESS_TAG=haproxy-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f218695f-c744-4bd8-b2d8-122a920c7ca0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:12:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 339 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.4 MiB/s wr, 247 op/s
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.741 247403 INFO nova.virt.block_device [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Attempting to driver detach volume 551f775f-0622-435f-b9d7-dcb85f156132 from mountpoint /dev/vdb#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.745 247403 DEBUG nova.compute.manager [req-3b875921-d4a4-4533-8756-96d07fe6fec0 req-01b72a0f-60df-4eba-bab0-16798a1c4d4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.745 247403 DEBUG oslo_concurrency.lockutils [req-3b875921-d4a4-4533-8756-96d07fe6fec0 req-01b72a0f-60df-4eba-bab0-16798a1c4d4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.745 247403 DEBUG oslo_concurrency.lockutils [req-3b875921-d4a4-4533-8756-96d07fe6fec0 req-01b72a0f-60df-4eba-bab0-16798a1c4d4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.746 247403 DEBUG oslo_concurrency.lockutils [req-3b875921-d4a4-4533-8756-96d07fe6fec0 req-01b72a0f-60df-4eba-bab0-16798a1c4d4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.746 247403 DEBUG nova.compute.manager [req-3b875921-d4a4-4533-8756-96d07fe6fec0 req-01b72a0f-60df-4eba-bab0-16798a1c4d4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Processing event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.756 247403 DEBUG nova.virt.libvirt.driver [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Attempting to detach device vdb from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.757 247403 DEBUG nova.virt.libvirt.guest [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-551f775f-0622-435f-b9d7-dcb85f156132">
Jan 31 03:12:15 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <serial>551f775f-0622-435f-b9d7-dcb85f156132</serial>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:12:15 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.766 247403 INFO nova.virt.libvirt.driver [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully detached device vdb from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the persistent domain config.#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.766 247403 DEBUG nova.virt.libvirt.driver [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.767 247403 DEBUG nova.virt.libvirt.guest [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-551f775f-0622-435f-b9d7-dcb85f156132">
Jan 31 03:12:15 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <serial>551f775f-0622-435f-b9d7-dcb85f156132</serial>
Jan 31 03:12:15 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:12:15 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:12:15 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.821 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769847135.821459, aa734433-4d60-4a63-9587-234fea7bc0d1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.830 247403 DEBUG nova.virt.libvirt.driver [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance aa734433-4d60-4a63-9587-234fea7bc0d1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.833 247403 INFO nova.virt.libvirt.driver [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully detached device vdb from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the live domain config.#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.991 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.992 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847135.9910927, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.992 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Started (Lifecycle Event)#033[00m
Jan 31 03:12:15 np0005603621 nova_compute[247399]: 2026-01-31 08:12:15.995 247403 DEBUG nova.compute.manager [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.000 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.005 247403 DEBUG nova.objects.instance [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'flavor' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.009 247403 INFO nova.virt.libvirt.driver [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance spawned successfully.#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.009 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.013 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.017 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.035 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.035 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.036 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.036 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.037 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.037 247403 DEBUG nova.virt.libvirt.driver [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.042 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.042 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847135.9938645, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.042 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:12:16 np0005603621 podman[303360]: 2026-01-31 08:12:16.077781378 +0000 UTC m=+0.051825392 container create 53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.102 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.108 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847136.0001168, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.109 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:12:16 np0005603621 systemd[1]: Started libpod-conmon-53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b.scope.
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.142 247403 DEBUG nova.compute.manager [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:16 np0005603621 podman[303360]: 2026-01-31 08:12:16.048947881 +0000 UTC m=+0.022991915 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.145 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.146 247403 DEBUG oslo_concurrency.lockutils [None req-400751b4-09cf-4f7e-968c-48817e8d478c 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.156 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:12:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:12:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef484a4788feffa0fab7166d9403cd3ba2b52ac1b826239d59e6f80279391b58/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:12:16 np0005603621 podman[303360]: 2026-01-31 08:12:16.192236681 +0000 UTC m=+0.166280715 container init 53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.193 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:12:16 np0005603621 podman[303360]: 2026-01-31 08:12:16.197605 +0000 UTC m=+0.171649014 container start 53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.214 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.215 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.216 247403 DEBUG nova.objects.instance [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:12:16 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [NOTICE]   (303379) : New worker (303381) forked
Jan 31 03:12:16 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [NOTICE]   (303379) : Loading success.
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.283 247403 DEBUG oslo_concurrency.lockutils [None req-6db66325-44b9-490d-848a-e7a40767ad83 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.844 247403 DEBUG oslo_concurrency.lockutils [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "interface-aa734433-4d60-4a63-9587-234fea7bc0d1-31b86ebd-10fb-49a2-8013-feec6913e5a2" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.845 247403 DEBUG oslo_concurrency.lockutils [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "interface-aa734433-4d60-4a63-9587-234fea7bc0d1-31b86ebd-10fb-49a2-8013-feec6913e5a2" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.868 247403 DEBUG nova.objects.instance [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'flavor' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.896 247403 DEBUG nova.virt.libvirt.vif [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.896 247403 DEBUG nova.network.os_vif_util [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.898 247403 DEBUG nova.network.os_vif_util [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.901 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b3:77:bd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31b86ebd-10"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.905 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b3:77:bd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31b86ebd-10"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.908 247403 DEBUG nova.virt.libvirt.driver [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Attempting to detach device tap31b86ebd-10 from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.908 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:b3:77:bd"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <target dev="tap31b86ebd-10"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.915 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b3:77:bd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31b86ebd-10"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.918 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b3:77:bd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31b86ebd-10"/></interface>not found in domain: <domain type='kvm' id='36'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <name>instance-00000055</name>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <uuid>aa734433-4d60-4a63-9587-234fea7bc0d1</uuid>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:name>tempest-device-tagging-server-314100001</nova:name>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:12:09</nova:creationTime>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:user uuid="80d33e2f57b64bd78a04cd8875660772">tempest-TaggedAttachmentsTest-1830131156-project-member</nova:user>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:project uuid="96cffd653fc04612bc1b3434529fb946">tempest-TaggedAttachmentsTest-1830131156</nova:project>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:port uuid="531d3df2-94ae-4b9b-902c-35f7c2b5f6f8">
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <nova:port uuid="31b86ebd-10fb-49a2-8013-feec6913e5a2">
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.10.10.122" ipVersion="4"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <resource>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </resource>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <entry name='serial'>aa734433-4d60-4a63-9587-234fea7bc0d1</entry>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <entry name='uuid'>aa734433-4d60-4a63-9587-234fea7bc0d1</entry>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/aa734433-4d60-4a63-9587-234fea7bc0d1_disk' index='2'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config' index='1'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:1b:d8:88'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target dev='tap531d3df2-94'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:b3:77:bd'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target dev='tap31b86ebd-10'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='net1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/console.log' append='off'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      </target>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/console.log' append='off'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </console>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c651,c693</label>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c651,c693</imagelabel>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.934 247403 INFO nova.virt.libvirt.driver [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully detached device tap31b86ebd-10 from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the persistent domain config.
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.935 247403 DEBUG nova.virt.libvirt.driver [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] (1/8): Attempting to detach device tap31b86ebd-10 with device alias net1 from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 03:12:16 np0005603621 nova_compute[247399]: 2026-01-31 08:12:16.935 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:b3:77:bd"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]:  <target dev="tap31b86ebd-10"/>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:12:16 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:12:17 np0005603621 kernel: tap31b86ebd-10 (unregistering): left promiscuous mode
Jan 31 03:12:17 np0005603621 NetworkManager[49013]: <info>  [1769847137.0468] device (tap31b86ebd-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.054 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:17Z|00271|binding|INFO|Releasing lport 31b86ebd-10fb-49a2-8013-feec6913e5a2 from this chassis (sb_readonly=0)
Jan 31 03:12:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:17Z|00272|binding|INFO|Setting lport 31b86ebd-10fb-49a2-8013-feec6913e5a2 down in Southbound
Jan 31 03:12:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:17Z|00273|binding|INFO|Removing iface tap31b86ebd-10 ovn-installed in OVS
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.059 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:77:bd 10.10.10.122'], port_security=['fa:16:3e:b3:77:bd 10.10.10.122'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.122/24', 'neutron:device_id': 'aa734433-4d60-4a63-9587-234fea7bc0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96cffd653fc04612bc1b3434529fb946', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4963ae94-2a25-4edf-a864-abd23e9e6bd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6d3c7304-ad38-454d-8885-d943e3534e29, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=31b86ebd-10fb-49a2-8013-feec6913e5a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.061 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 31b86ebd-10fb-49a2-8013-feec6913e5a2 in datapath b8b9c1e2-08e9-43e7-96b0-463d95892627 unbound from our chassis#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.065 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b8b9c1e2-08e9-43e7-96b0-463d95892627, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.066 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.066 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[486fc2a3-409c-4a10-b50e-72f34d3ce024]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.067 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627 namespace which is not needed anymore#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.069 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769847137.0576165, aa734433-4d60-4a63-9587-234fea7bc0d1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.070 247403 DEBUG nova.virt.libvirt.driver [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Start waiting for the detach event from libvirt for device tap31b86ebd-10 with device alias net1 for instance aa734433-4d60-4a63-9587-234fea7bc0d1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.070 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b3:77:bd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31b86ebd-10"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.075 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b3:77:bd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31b86ebd-10"/></interface>not found in domain: <domain type='kvm' id='36'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <name>instance-00000055</name>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <uuid>aa734433-4d60-4a63-9587-234fea7bc0d1</uuid>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:name>tempest-device-tagging-server-314100001</nova:name>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:12:09</nova:creationTime>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:user uuid="80d33e2f57b64bd78a04cd8875660772">tempest-TaggedAttachmentsTest-1830131156-project-member</nova:user>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:project uuid="96cffd653fc04612bc1b3434529fb946">tempest-TaggedAttachmentsTest-1830131156</nova:project>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:port uuid="531d3df2-94ae-4b9b-902c-35f7c2b5f6f8">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:port uuid="31b86ebd-10fb-49a2-8013-feec6913e5a2">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.10.10.122" ipVersion="4"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:12:17 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <resource>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </resource>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <entry name='serial'>aa734433-4d60-4a63-9587-234fea7bc0d1</entry>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <entry name='uuid'>aa734433-4d60-4a63-9587-234fea7bc0d1</entry>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/aa734433-4d60-4a63-9587-234fea7bc0d1_disk' index='2'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/aa734433-4d60-4a63-9587-234fea7bc0d1_disk.config' index='1'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:1b:d8:88'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target dev='tap531d3df2-94'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/console.log' append='off'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      </target>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1/console.log' append='off'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </console>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c651,c693</label>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c651,c693</imagelabel>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:12:17 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:12:17 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.075 247403 INFO nova.virt.libvirt.driver [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully detached device tap31b86ebd-10 from instance aa734433-4d60-4a63-9587-234fea7bc0d1 from the live domain config.#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.076 247403 DEBUG nova.virt.libvirt.vif [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.076 247403 DEBUG nova.network.os_vif_util [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "address": "fa:16:3e:b3:77:bd", "network": {"id": "b8b9c1e2-08e9-43e7-96b0-463d95892627", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-1977481331", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.122", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b86ebd-10", "ovs_interfaceid": "31b86ebd-10fb-49a2-8013-feec6913e5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.077 247403 DEBUG nova.network.os_vif_util [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.077 247403 DEBUG os_vif [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.079 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31b86ebd-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.083 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.085 247403 INFO os_vif [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:77:bd,bridge_name='br-int',has_traffic_filtering=True,id=31b86ebd-10fb-49a2-8013-feec6913e5a2,network=Network(b8b9c1e2-08e9-43e7-96b0-463d95892627),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31b86ebd-10')#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.086 247403 DEBUG nova.virt.libvirt.guest [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:name>tempest-device-tagging-server-314100001</nova:name>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:12:17</nova:creationTime>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:user uuid="80d33e2f57b64bd78a04cd8875660772">tempest-TaggedAttachmentsTest-1830131156-project-member</nova:user>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:project uuid="96cffd653fc04612bc1b3434529fb946">tempest-TaggedAttachmentsTest-1830131156</nova:project>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    <nova:port uuid="531d3df2-94ae-4b9b-902c-35f7c2b5f6f8">
Jan 31 03:12:17 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:12:17 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:12:17 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:12:17 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:12:17 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [NOTICE]   (302784) : haproxy version is 2.8.14-c23fe91
Jan 31 03:12:17 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [NOTICE]   (302784) : path to executable is /usr/sbin/haproxy
Jan 31 03:12:17 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [WARNING]  (302784) : Exiting Master process...
Jan 31 03:12:17 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [WARNING]  (302784) : Exiting Master process...
Jan 31 03:12:17 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [ALERT]    (302784) : Current worker (302786) exited with code 143 (Terminated)
Jan 31 03:12:17 np0005603621 neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627[302780]: [WARNING]  (302784) : All workers exited. Exiting... (0)
Jan 31 03:12:17 np0005603621 systemd[1]: libpod-7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb.scope: Deactivated successfully.
Jan 31 03:12:17 np0005603621 podman[303411]: 2026-01-31 08:12:17.203270306 +0000 UTC m=+0.048137967 container died 7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:12:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb-userdata-shm.mount: Deactivated successfully.
Jan 31 03:12:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cc38d58f6c2ddca0f901d1ef926e7fbc24f8399ee3c8e44f3e8e1c1e4790aeb7-merged.mount: Deactivated successfully.
Jan 31 03:12:17 np0005603621 podman[303411]: 2026-01-31 08:12:17.238218146 +0000 UTC m=+0.083085807 container cleanup 7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:12:17 np0005603621 systemd[1]: libpod-conmon-7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb.scope: Deactivated successfully.
Jan 31 03:12:17 np0005603621 podman[303438]: 2026-01-31 08:12:17.306429012 +0000 UTC m=+0.043368625 container remove 7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.310 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[802457e9-4da6-4a2e-9fb8-92ae5c9d4553]: (4, ('Sat Jan 31 08:12:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627 (7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb)\n7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb\nSat Jan 31 08:12:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627 (7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb)\n7d22df7c6d1ab36c178e21bfb1e5aa0feaecba027b9c4c8a5fd4bc392fa4a8cb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.312 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5be78d59-3204-4a59-a694-fe9109f78689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.313 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb8b9c1e2-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:17 np0005603621 kernel: tapb8b9c1e2-00: left promiscuous mode
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.319 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.319 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c43caaef-c1dc-4e60-871a-e83c1776fd43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.333 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad19afc-fea4-49b0-adf7-30bd46dca8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.335 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2839622e-32f2-4ebd-8451-d69a281b79c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.353 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0456a7bb-4c98-45c5-a939-52c2a15efe0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643502, 'reachable_time': 35439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303452, 'error': None, 'target': 'ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 systemd[1]: run-netns-ovnmeta\x2db8b9c1e2\x2d08e9\x2d43e7\x2d96b0\x2d463d95892627.mount: Deactivated successfully.
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.356 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b8b9c1e2-08e9-43e7-96b0-463d95892627 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:12:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:17.357 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[be1a8056-e9d4-491c-ba82-0f3638acddcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:17.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.671 247403 DEBUG nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.672 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.672 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.672 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.672 247403 DEBUG nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] No waiting events found dispatching network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.673 247403 WARNING nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received unexpected event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.673 247403 DEBUG nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-unplugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.673 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.673 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.673 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.673 247403 DEBUG nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-unplugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.674 247403 WARNING nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received unexpected event network-vif-unplugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.674 247403 DEBUG nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.674 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.674 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.674 247403 DEBUG oslo_concurrency.lockutils [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.674 247403 DEBUG nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.675 247403 WARNING nova.compute.manager [req-440a8ab5-4c2a-40ee-bf45-12094ba617e2 req-6339d643-ff46-44a6-9a21-7b2688587e1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received unexpected event network-vif-plugged-31b86ebd-10fb-49a2-8013-feec6913e5a2 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:12:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 339 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.0 MiB/s wr, 265 op/s
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.748 247403 DEBUG oslo_concurrency.lockutils [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.748 247403 DEBUG oslo_concurrency.lockutils [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquired lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:12:17 np0005603621 nova_compute[247399]: 2026-01-31 08:12:17.748 247403 DEBUG nova.network.neutron [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.625 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.932 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.933 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.933 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.933 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.934 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.935 247403 INFO nova.compute.manager [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Terminating instance#033[00m
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.936 247403 DEBUG nova.compute.manager [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:12:18 np0005603621 kernel: tap59a695f7-8b (unregistering): left promiscuous mode
Jan 31 03:12:18 np0005603621 NetworkManager[49013]: <info>  [1769847138.9814] device (tap59a695f7-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.991 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:18Z|00274|binding|INFO|Releasing lport 59a695f7-8b56-4370-b276-673072319aa3 from this chassis (sb_readonly=0)
Jan 31 03:12:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:18Z|00275|binding|INFO|Setting lport 59a695f7-8b56-4370-b276-673072319aa3 down in Southbound
Jan 31 03:12:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:18Z|00276|binding|INFO|Removing iface tap59a695f7-8b ovn-installed in OVS
Jan 31 03:12:18 np0005603621 nova_compute[247399]: 2026-01-31 08:12:18.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.008 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.021 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:d6:fa 10.100.0.9'], port_security=['fa:16:3e:e9:d6:fa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7d3c5986-9e8f-45d2-96c0-7e45646d8d52', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=59a695f7-8b56-4370-b276-673072319aa3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.022 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 59a695f7-8b56-4370-b276-673072319aa3 in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 unbound from our chassis#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.024 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f218695f-c744-4bd8-b2d8-122a920c7ca0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.025 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ecaee761-e0a8-4eda-9f8d-427a93a69909]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.026 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace which is not needed anymore#033[00m
Jan 31 03:12:19 np0005603621 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 31 03:12:19 np0005603621 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000058.scope: Consumed 3.609s CPU time.
Jan 31 03:12:19 np0005603621 systemd-machined[212769]: Machine qemu-38-instance-00000058 terminated.
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.166 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [NOTICE]   (303379) : haproxy version is 2.8.14-c23fe91
Jan 31 03:12:19 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [NOTICE]   (303379) : path to executable is /usr/sbin/haproxy
Jan 31 03:12:19 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [WARNING]  (303379) : Exiting Master process...
Jan 31 03:12:19 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [ALERT]    (303379) : Current worker (303381) exited with code 143 (Terminated)
Jan 31 03:12:19 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[303375]: [WARNING]  (303379) : All workers exited. Exiting... (0)
Jan 31 03:12:19 np0005603621 systemd[1]: libpod-53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b.scope: Deactivated successfully.
Jan 31 03:12:19 np0005603621 podman[303478]: 2026-01-31 08:12:19.182687382 +0000 UTC m=+0.056009484 container died 53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.184 247403 INFO nova.virt.libvirt.driver [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Instance destroyed successfully.#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.185 247403 DEBUG nova.objects.instance [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'resources' on Instance uuid 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b-userdata-shm.mount: Deactivated successfully.
Jan 31 03:12:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ef484a4788feffa0fab7166d9403cd3ba2b52ac1b826239d59e6f80279391b58-merged.mount: Deactivated successfully.
Jan 31 03:12:19 np0005603621 podman[303478]: 2026-01-31 08:12:19.216651341 +0000 UTC m=+0.089973423 container cleanup 53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:12:19 np0005603621 systemd[1]: libpod-conmon-53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b.scope: Deactivated successfully.
Jan 31 03:12:19 np0005603621 podman[303515]: 2026-01-31 08:12:19.281283556 +0000 UTC m=+0.044475391 container remove 53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.283 247403 DEBUG nova.virt.libvirt.vif [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:11:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1076782147',display_name='tempest-ServerDiskConfigTestJSON-server-1076782147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1076782147',id=88,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-ik2aud0e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:12:16Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=7d3c5986-9e8f-45d2-96c0-7e45646d8d52,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.283 247403 DEBUG nova.network.os_vif_util [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "59a695f7-8b56-4370-b276-673072319aa3", "address": "fa:16:3e:e9:d6:fa", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59a695f7-8b", "ovs_interfaceid": "59a695f7-8b56-4370-b276-673072319aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.284 247403 DEBUG nova.network.os_vif_util [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.284 247403 DEBUG os_vif [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.286 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.286 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59a695f7-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.287 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc183d6-0773-42e6-b49e-e60a4305799c]: (4, ('Sat Jan 31 08:12:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b)\n53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b\nSat Jan 31 08:12:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b)\n53a4be02bdbcb6620aae19329172cd75320b13f95da3a789b5b384747ee3686b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.288 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.290 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.289 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[04e2b6c4-d730-4e4b-9706-357a1bdd29ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.292 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:19 np0005603621 kernel: tapf218695f-c0: left promiscuous mode
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.294 247403 INFO nova.network.neutron [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Port 31b86ebd-10fb-49a2-8013-feec6913e5a2 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.295 247403 DEBUG nova.network.neutron [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [{"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.297 247403 INFO os_vif [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:d6:fa,bridge_name='br-int',has_traffic_filtering=True,id=59a695f7-8b56-4370-b276-673072319aa3,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59a695f7-8b')#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.299 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c728a98c-8eee-4ea4-98ae-ab4f2fc9cad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.314 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[613a311d-cc0c-45bc-910f-90c8946ae78f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.316 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d16862f-d6e7-4454-9298-35d1185395c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.333 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[56621577-f857-4f01-bc97-23843029e9d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644094, 'reachable_time': 33705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303545, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.335 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:12:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:19.335 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[873ec9df-3df6-4ee5-8ea2-1e1622c1c728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:19 np0005603621 systemd[1]: run-netns-ovnmeta\x2df218695f\x2dc744\x2d4bd8\x2db2d8\x2d122a920c7ca0.mount: Deactivated successfully.
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.582 247403 DEBUG oslo_concurrency.lockutils [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Releasing lock "refresh_cache-aa734433-4d60-4a63-9587-234fea7bc0d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:12:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:19.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.703 247403 DEBUG oslo_concurrency.lockutils [None req-f7f6a8c4-7375-4c5a-a3d3-b52c69021d06 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "interface-aa734433-4d60-4a63-9587-234fea7bc0d1-31b86ebd-10fb-49a2-8013-feec6913e5a2" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.716 247403 INFO nova.virt.libvirt.driver [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deleting instance files /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_del#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.718 247403 INFO nova.virt.libvirt.driver [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deletion of /var/lib/nova/instances/7d3c5986-9e8f-45d2-96c0-7e45646d8d52_del complete#033[00m
Jan 31 03:12:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 290 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.3 MiB/s wr, 401 op/s
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.799 247403 DEBUG nova.compute.manager [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-deleted-31b86ebd-10fb-49a2-8013-feec6913e5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.799 247403 DEBUG nova.compute.manager [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-unplugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.799 247403 DEBUG oslo_concurrency.lockutils [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.800 247403 DEBUG oslo_concurrency.lockutils [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.800 247403 DEBUG oslo_concurrency.lockutils [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.800 247403 DEBUG nova.compute.manager [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] No waiting events found dispatching network-vif-unplugged-59a695f7-8b56-4370-b276-673072319aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.800 247403 DEBUG nova.compute.manager [req-adbf506f-7435-4759-b88f-f855cff36925 req-3557c684-947c-48bd-998f-055a83f2a912 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-unplugged-59a695f7-8b56-4370-b276-673072319aa3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.868 247403 INFO nova.compute.manager [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.868 247403 DEBUG oslo.service.loopingcall [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.869 247403 DEBUG nova.compute.manager [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:12:19 np0005603621 nova_compute[247399]: 2026-01-31 08:12:19.869 247403 DEBUG nova.network.neutron [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:12:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:21.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 290 MiB data, 859 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 334 op/s
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.832 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.834 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.834 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.835 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.835 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.837 247403 INFO nova.compute.manager [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Terminating instance#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.839 247403 DEBUG nova.compute.manager [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.897 247403 DEBUG nova.compute.manager [req-1da4abeb-74ba-4dab-b9d9-bfab5ae138fd req-3a5bbe2f-faa1-4da2-abbf-89559b638ecc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.898 247403 DEBUG oslo_concurrency.lockutils [req-1da4abeb-74ba-4dab-b9d9-bfab5ae138fd req-3a5bbe2f-faa1-4da2-abbf-89559b638ecc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.898 247403 DEBUG oslo_concurrency.lockutils [req-1da4abeb-74ba-4dab-b9d9-bfab5ae138fd req-3a5bbe2f-faa1-4da2-abbf-89559b638ecc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:22 np0005603621 kernel: tap531d3df2-94 (unregistering): left promiscuous mode
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.899 247403 DEBUG oslo_concurrency.lockutils [req-1da4abeb-74ba-4dab-b9d9-bfab5ae138fd req-3a5bbe2f-faa1-4da2-abbf-89559b638ecc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.900 247403 DEBUG nova.compute.manager [req-1da4abeb-74ba-4dab-b9d9-bfab5ae138fd req-3a5bbe2f-faa1-4da2-abbf-89559b638ecc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] No waiting events found dispatching network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.900 247403 WARNING nova.compute.manager [req-1da4abeb-74ba-4dab-b9d9-bfab5ae138fd req-3a5bbe2f-faa1-4da2-abbf-89559b638ecc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received unexpected event network-vif-plugged-59a695f7-8b56-4370-b276-673072319aa3 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:12:22 np0005603621 NetworkManager[49013]: <info>  [1769847142.9043] device (tap531d3df2-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:22Z|00277|binding|INFO|Releasing lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 from this chassis (sb_readonly=0)
Jan 31 03:12:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:22Z|00278|binding|INFO|Setting lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 down in Southbound
Jan 31 03:12:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:22Z|00279|binding|INFO|Removing iface tap531d3df2-94 ovn-installed in OVS
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.937 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.940 247403 DEBUG nova.network.neutron [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.947 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:22 np0005603621 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 31 03:12:22 np0005603621 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000055.scope: Consumed 14.772s CPU time.
Jan 31 03:12:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:22.945 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:d8:88 10.100.0.11'], port_security=['fa:16:3e:1b:d8:88 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aa734433-4d60-4a63-9587-234fea7bc0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96cffd653fc04612bc1b3434529fb946', 'neutron:revision_number': '4', 'neutron:security_group_ids': '91c1f88f-ef9d-4d77-9620-d0ecd9807582', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ceb64776-1821-47e1-ab8e-32281ea4db98, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:22.947 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 in datapath cc06a3f7-7401-4f89-8964-7d6d4156be1a unbound from our chassis#033[00m
Jan 31 03:12:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:22.950 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc06a3f7-7401-4f89-8964-7d6d4156be1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:22.951 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[91071735-745f-4b66-a981-d5d16f9bedfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:22.952 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a namespace which is not needed anymore#033[00m
Jan 31 03:12:22 np0005603621 systemd-machined[212769]: Machine qemu-36-instance-00000055 terminated.
Jan 31 03:12:22 np0005603621 nova_compute[247399]: 2026-01-31 08:12:22.978 247403 INFO nova.compute.manager [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Took 3.11 seconds to deallocate network for instance.#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.042 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.043 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:23 np0005603621 kernel: tap531d3df2-94: entered promiscuous mode
Jan 31 03:12:23 np0005603621 kernel: tap531d3df2-94 (unregistering): left promiscuous mode
Jan 31 03:12:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:23Z|00280|binding|INFO|Claiming lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 for this chassis.
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.068 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:23Z|00281|binding|INFO|531d3df2-94ae-4b9b-902c-35f7c2b5f6f8: Claiming fa:16:3e:1b:d8:88 10.100.0.11
Jan 31 03:12:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [NOTICE]   (301125) : haproxy version is 2.8.14-c23fe91
Jan 31 03:12:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [NOTICE]   (301125) : path to executable is /usr/sbin/haproxy
Jan 31 03:12:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [WARNING]  (301125) : Exiting Master process...
Jan 31 03:12:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [WARNING]  (301125) : Exiting Master process...
Jan 31 03:12:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [ALERT]    (301125) : Current worker (301128) exited with code 143 (Terminated)
Jan 31 03:12:23 np0005603621 neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a[301121]: [WARNING]  (301125) : All workers exited. Exiting... (0)
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.076 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:d8:88 10.100.0.11'], port_security=['fa:16:3e:1b:d8:88 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aa734433-4d60-4a63-9587-234fea7bc0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96cffd653fc04612bc1b3434529fb946', 'neutron:revision_number': '4', 'neutron:security_group_ids': '91c1f88f-ef9d-4d77-9620-d0ecd9807582', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ceb64776-1821-47e1-ab8e-32281ea4db98, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:23 np0005603621 systemd[1]: libpod-a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876.scope: Deactivated successfully.
Jan 31 03:12:23 np0005603621 conmon[301121]: conmon a25714cf3876dde25f64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876.scope/container/memory.events
Jan 31 03:12:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:12:23Z|00282|binding|INFO|Releasing lport 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 from this chassis (sb_readonly=0)
Jan 31 03:12:23 np0005603621 podman[303627]: 2026-01-31 08:12:23.082638983 +0000 UTC m=+0.047111404 container died a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.082 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.087 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:d8:88 10.100.0.11'], port_security=['fa:16:3e:1b:d8:88 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aa734433-4d60-4a63-9587-234fea7bc0d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96cffd653fc04612bc1b3434529fb946', 'neutron:revision_number': '4', 'neutron:security_group_ids': '91c1f88f-ef9d-4d77-9620-d0ecd9807582', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ceb64776-1821-47e1-ab8e-32281ea4db98, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.089 247403 INFO nova.virt.libvirt.driver [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Instance destroyed successfully.#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.089 247403 DEBUG nova.objects.instance [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lazy-loading 'resources' on Instance uuid aa734433-4d60-4a63-9587-234fea7bc0d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.108 247403 DEBUG nova.virt.libvirt.vif [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-314100001',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-314100001',id=85,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/jmz0wtSTiY0UVxAbcSSoCSc73heIvxhziA6kVLB9xgrYP6zu4uucn7uMuK6w20bBg6aHmMVIKLKKJj7mSiMVfJ3qkQer683l+36ud331vsPhkewLKmn0FDlUfns4ZWA==',key_name='tempest-keypair-618240313',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96cffd653fc04612bc1b3434529fb946',ramdisk_id='',reservation_id='r-2v72giv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_b
us='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-1830131156',owner_user_name='tempest-TaggedAttachmentsTest-1830131156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:11:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80d33e2f57b64bd78a04cd8875660772',uuid=aa734433-4d60-4a63-9587-234fea7bc0d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.109 247403 DEBUG nova.network.os_vif_util [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converting VIF {"id": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "address": "fa:16:3e:1b:d8:88", "network": {"id": "cc06a3f7-7401-4f89-8964-7d6d4156be1a", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-801024139-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96cffd653fc04612bc1b3434529fb946", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap531d3df2-94", "ovs_interfaceid": "531d3df2-94ae-4b9b-902c-35f7c2b5f6f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.110 247403 DEBUG nova.network.os_vif_util [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.112 247403 DEBUG os_vif [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.113 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.114 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap531d3df2-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876-userdata-shm.mount: Deactivated successfully.
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.120 247403 DEBUG oslo_concurrency.processutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c348ea3f68bb899a0b82d728b4ed00605382b3466fb94d5889983e638f059994-merged.mount: Deactivated successfully.
Jan 31 03:12:23 np0005603621 podman[303627]: 2026-01-31 08:12:23.126877085 +0000 UTC m=+0.091349506 container cleanup a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:12:23 np0005603621 systemd[1]: libpod-conmon-a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876.scope: Deactivated successfully.
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.147 247403 INFO os_vif [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:d8:88,bridge_name='br-int',has_traffic_filtering=True,id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8,network=Network(cc06a3f7-7401-4f89-8964-7d6d4156be1a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap531d3df2-94')#033[00m
Jan 31 03:12:23 np0005603621 podman[303667]: 2026-01-31 08:12:23.190113156 +0000 UTC m=+0.044912745 container remove a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.197 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0e0340-f05d-474e-ae69-8050019ce24e]: (4, ('Sat Jan 31 08:12:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a (a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876)\na25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876\nSat Jan 31 08:12:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a (a25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876)\na25714cf3876dde25f64ff0ffd455ce6f76da662b66773d2d2bdd9bf383bf876\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.199 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9c3b7f9b-5ad4-4cd2-b647-e654ac535c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.200 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc06a3f7-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:12:23 np0005603621 kernel: tapcc06a3f7-70: left promiscuous mode
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.204 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.214 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ba26024b-63bd-46ee-8f22-890c757fbb98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.236 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[76f63829-733a-4b61-9dd7-411eb7ed231a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.238 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6a386f-88cc-494f-9612-73b3396d1ce9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.257 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[adc0ed8a-498e-4b4f-82f1-28c0963e98ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 638850, 'reachable_time': 38312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303698, 'error': None, 'target': 'ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 systemd[1]: run-netns-ovnmeta\x2dcc06a3f7\x2d7401\x2d4f89\x2d8964\x2d7d6d4156be1a.mount: Deactivated successfully.
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.260 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cc06a3f7-7401-4f89-8964-7d6d4156be1a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.260 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4ebf25-f882-4bc9-a2e9-c668c158324a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.262 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 in datapath cc06a3f7-7401-4f89-8964-7d6d4156be1a unbound from our chassis#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.264 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc06a3f7-7401-4f89-8964-7d6d4156be1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.265 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[08040efd-e380-4722-9d05-6f9fc2312eef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.267 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 in datapath cc06a3f7-7401-4f89-8964-7d6d4156be1a unbound from our chassis#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.269 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cc06a3f7-7401-4f89-8964-7d6d4156be1a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:12:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:23.270 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[03dc56cc-f695-4bc2-89f9-d39b48b3b369]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:12:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649475861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.611 247403 DEBUG oslo_concurrency.processutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.619 247403 DEBUG nova.compute.provider_tree [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.627 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.637 247403 DEBUG nova.scheduler.client.report [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.654 247403 INFO nova.virt.libvirt.driver [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Deleting instance files /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1_del#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.654 247403 INFO nova.virt.libvirt.driver [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Deletion of /var/lib/nova/instances/aa734433-4d60-4a63-9587-234fea7bc0d1_del complete#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.660 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.682 247403 DEBUG nova.compute.manager [req-128d72c3-13a2-4be8-83d8-0b2a705c7a5a req-11b2b071-07f5-4250-ba8e-8331eca54b0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-unplugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.683 247403 DEBUG oslo_concurrency.lockutils [req-128d72c3-13a2-4be8-83d8-0b2a705c7a5a req-11b2b071-07f5-4250-ba8e-8331eca54b0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.683 247403 DEBUG oslo_concurrency.lockutils [req-128d72c3-13a2-4be8-83d8-0b2a705c7a5a req-11b2b071-07f5-4250-ba8e-8331eca54b0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.683 247403 DEBUG oslo_concurrency.lockutils [req-128d72c3-13a2-4be8-83d8-0b2a705c7a5a req-11b2b071-07f5-4250-ba8e-8331eca54b0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.683 247403 DEBUG nova.compute.manager [req-128d72c3-13a2-4be8-83d8-0b2a705c7a5a req-11b2b071-07f5-4250-ba8e-8331eca54b0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-unplugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.683 247403 DEBUG nova.compute.manager [req-128d72c3-13a2-4be8-83d8-0b2a705c7a5a req-11b2b071-07f5-4250-ba8e-8331eca54b0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-unplugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.705 247403 INFO nova.scheduler.client.report [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Deleted allocations for instance 7d3c5986-9e8f-45d2-96c0-7e45646d8d52#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.722 247403 INFO nova.compute.manager [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.723 247403 DEBUG oslo.service.loopingcall [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.723 247403 DEBUG nova.compute.manager [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.723 247403 DEBUG nova.network.neutron [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:12:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 225 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.6 MiB/s wr, 374 op/s
Jan 31 03:12:23 np0005603621 nova_compute[247399]: 2026-01-31 08:12:23.774 247403 DEBUG oslo_concurrency.lockutils [None req-de21cca5-4c38-4f49-9c7d-111a3bc50271 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "7d3c5986-9e8f-45d2-96c0-7e45646d8d52" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:24 np0005603621 nova_compute[247399]: 2026-01-31 08:12:24.960 247403 DEBUG nova.network.neutron [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.241 247403 DEBUG nova.compute.manager [req-70f29fec-0a87-415d-a12d-b096443c2f5d req-e38f8142-1b3b-4a45-b4b2-6b26aef08043 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Received event network-vif-deleted-59a695f7-8b56-4370-b276-673072319aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.241 247403 DEBUG nova.compute.manager [req-70f29fec-0a87-415d-a12d-b096443c2f5d req-e38f8142-1b3b-4a45-b4b2-6b26aef08043 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-deleted-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.242 247403 INFO nova.compute.manager [req-70f29fec-0a87-415d-a12d-b096443c2f5d req-e38f8142-1b3b-4a45-b4b2-6b26aef08043 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Neutron deleted interface 531d3df2-94ae-4b9b-902c-35f7c2b5f6f8; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.242 247403 DEBUG nova.network.neutron [req-70f29fec-0a87-415d-a12d-b096443c2f5d req-e38f8142-1b3b-4a45-b4b2-6b26aef08043 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.271 247403 INFO nova.compute.manager [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Took 1.55 seconds to deallocate network for instance.#033[00m
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.358 247403 DEBUG nova.compute.manager [req-70f29fec-0a87-415d-a12d-b096443c2f5d req-e38f8142-1b3b-4a45-b4b2-6b26aef08043 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Detach interface failed, port_id=531d3df2-94ae-4b9b-902c-35f7c2b5f6f8, reason: Instance aa734433-4d60-4a63-9587-234fea7bc0d1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:12:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:25.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.643 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.643 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.716 247403 DEBUG oslo_concurrency.processutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:12:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 180 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 400 KiB/s wr, 309 op/s
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.846 247403 DEBUG nova.compute.manager [req-b439f319-1eb9-4b82-9e6a-461761bdd87e req-8414a91a-bcd4-4d52-9095-09bfc241c18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.847 247403 DEBUG oslo_concurrency.lockutils [req-b439f319-1eb9-4b82-9e6a-461761bdd87e req-8414a91a-bcd4-4d52-9095-09bfc241c18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.847 247403 DEBUG oslo_concurrency.lockutils [req-b439f319-1eb9-4b82-9e6a-461761bdd87e req-8414a91a-bcd4-4d52-9095-09bfc241c18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.847 247403 DEBUG oslo_concurrency.lockutils [req-b439f319-1eb9-4b82-9e6a-461761bdd87e req-8414a91a-bcd4-4d52-9095-09bfc241c18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.848 247403 DEBUG nova.compute.manager [req-b439f319-1eb9-4b82-9e6a-461761bdd87e req-8414a91a-bcd4-4d52-9095-09bfc241c18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] No waiting events found dispatching network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:12:25 np0005603621 nova_compute[247399]: 2026-01-31 08:12:25.848 247403 WARNING nova.compute.manager [req-b439f319-1eb9-4b82-9e6a-461761bdd87e req-8414a91a-bcd4-4d52-9095-09bfc241c18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Received unexpected event network-vif-plugged-531d3df2-94ae-4b9b-902c-35f7c2b5f6f8 for instance with vm_state deleted and task_state None.
Jan 31 03:12:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194429971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:26 np0005603621 nova_compute[247399]: 2026-01-31 08:12:26.282 247403 DEBUG oslo_concurrency.processutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:12:26 np0005603621 nova_compute[247399]: 2026-01-31 08:12:26.291 247403 DEBUG nova.compute.provider_tree [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:12:26 np0005603621 nova_compute[247399]: 2026-01-31 08:12:26.356 247403 DEBUG nova.scheduler.client.report [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:12:26 np0005603621 nova_compute[247399]: 2026-01-31 08:12:26.498 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:26 np0005603621 nova_compute[247399]: 2026-01-31 08:12:26.604 247403 INFO nova.scheduler.client.report [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Deleted allocations for instance aa734433-4d60-4a63-9587-234fea7bc0d1
Jan 31 03:12:26 np0005603621 nova_compute[247399]: 2026-01-31 08:12:26.715 247403 DEBUG oslo_concurrency.lockutils [None req-603ab297-6c9f-4ada-b3fe-d0f1ce3c1566 80d33e2f57b64bd78a04cd8875660772 96cffd653fc04612bc1b3434529fb946 - - default default] Lock "aa734433-4d60-4a63-9587-234fea7bc0d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:27.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 163 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 68 KiB/s wr, 269 op/s
Jan 31 03:12:28 np0005603621 nova_compute[247399]: 2026-01-31 08:12:28.118 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:28 np0005603621 nova_compute[247399]: 2026-01-31 08:12:28.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:28 np0005603621 nova_compute[247399]: 2026-01-31 08:12:28.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:12:28 np0005603621 nova_compute[247399]: 2026-01-31 08:12:28.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:12:28 np0005603621 nova_compute[247399]: 2026-01-31 08:12:28.227 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 03:12:28 np0005603621 nova_compute[247399]: 2026-01-31 08:12:28.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.730831) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847148730889, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 250, "total_data_size": 976729, "memory_usage": 991160, "flush_reason": "Manual Compaction"}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847148738551, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 643124, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39090, "largest_seqno": 39880, "table_properties": {"data_size": 639804, "index_size": 1100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9368, "raw_average_key_size": 20, "raw_value_size": 632583, "raw_average_value_size": 1405, "num_data_blocks": 48, "num_entries": 450, "num_filter_entries": 450, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847094, "oldest_key_time": 1769847094, "file_creation_time": 1769847148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7773 microseconds, and 4241 cpu microseconds.
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.738606) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 643124 bytes OK
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.738627) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.741302) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.741362) EVENT_LOG_v1 {"time_micros": 1769847148741312, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.741384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 972819, prev total WAL file size 972819, number of live WAL files 2.
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.741848) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(628KB)], [83(11MB)]
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847148741878, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12646893, "oldest_snapshot_seqno": -1}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6738 keys, 9122876 bytes, temperature: kUnknown
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847148838459, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9122876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9079661, "index_size": 25247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 172980, "raw_average_key_size": 25, "raw_value_size": 8961132, "raw_average_value_size": 1329, "num_data_blocks": 999, "num_entries": 6738, "num_filter_entries": 6738, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.838967) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9122876 bytes
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.841213) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.8 rd, 94.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.4 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(33.9) write-amplify(14.2) OK, records in: 7230, records dropped: 492 output_compression: NoCompression
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.841235) EVENT_LOG_v1 {"time_micros": 1769847148841225, "job": 48, "event": "compaction_finished", "compaction_time_micros": 96673, "compaction_time_cpu_micros": 19083, "output_level": 6, "num_output_files": 1, "total_output_size": 9122876, "num_input_records": 7230, "num_output_records": 6738, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847148841566, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847148842395, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.741777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.842449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.842456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.842457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.842458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:12:28 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:12:28.842460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:12:29 np0005603621 nova_compute[247399]: 2026-01-31 08:12:29.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:29.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 255 op/s
Jan 31 03:12:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:30.494 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:30.495 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:30.495 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:30 np0005603621 nova_compute[247399]: 2026-01-31 08:12:30.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:30 np0005603621 nova_compute[247399]: 2026-01-31 08:12:30.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 72 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 31 03:12:32 np0005603621 nova_compute[247399]: 2026-01-31 08:12:32.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:33 np0005603621 nova_compute[247399]: 2026-01-31 08:12:33.122 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:33 np0005603621 nova_compute[247399]: 2026-01-31 08:12:33.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:33.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:12:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:12:33 np0005603621 nova_compute[247399]: 2026-01-31 08:12:33.632 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 78 KiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 31 03:12:34 np0005603621 nova_compute[247399]: 2026-01-31 08:12:34.180 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847139.1800327, 7d3c5986-9e8f-45d2-96c0-7e45646d8d52 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:12:34 np0005603621 nova_compute[247399]: 2026-01-31 08:12:34.181 247403 INFO nova.compute.manager [-] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] VM Stopped (Lifecycle Event)
Jan 31 03:12:34 np0005603621 nova_compute[247399]: 2026-01-31 08:12:34.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:34 np0005603621 nova_compute[247399]: 2026-01-31 08:12:34.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:12:34 np0005603621 nova_compute[247399]: 2026-01-31 08:12:34.298 247403 DEBUG nova.compute.manager [None req-d4f30bba-77bf-42d3-b389-a4a82ba32fd3 - - - - - -] [instance: 7d3c5986-9e8f-45d2-96c0-7e45646d8d52] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:12:35 np0005603621 nova_compute[247399]: 2026-01-31 08:12:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:35.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.410 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.411 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.411 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.411 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.412 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:12:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2201175102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:36 np0005603621 nova_compute[247399]: 2026-01-31 08:12:36.827 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.063 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.065 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4419MB free_disk=20.92190170288086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.066 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.066 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.159 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.160 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.195 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2107650455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:12:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:37.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:12:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.625 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.631 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.679 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:12:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 182 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.910 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:12:37 np0005603621 nova_compute[247399]: 2026-01-31 08:12:37.910 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:38 np0005603621 nova_compute[247399]: 2026-01-31 08:12:38.086 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847143.0852401, aa734433-4d60-4a63-9587-234fea7bc0d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:12:38 np0005603621 nova_compute[247399]: 2026-01-31 08:12:38.086 247403 INFO nova.compute.manager [-] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:12:38 np0005603621 nova_compute[247399]: 2026-01-31 08:12:38.183 247403 DEBUG nova.compute.manager [None req-c5b3bea0-8378-4709-9f9e-1d5eb0909d70 - - - - - -] [instance: aa734433-4d60-4a63-9587-234fea7bc0d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:12:38 np0005603621 nova_compute[247399]: 2026-01-31 08:12:38.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:12:38
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta']
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:12:38 np0005603621 nova_compute[247399]: 2026-01-31 08:12:38.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:12:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:12:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:39.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 213 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 149 op/s
Jan 31 03:12:39 np0005603621 nova_compute[247399]: 2026-01-31 08:12:39.911 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:39 np0005603621 nova_compute[247399]: 2026-01-31 08:12:39.912 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:40 np0005603621 podman[303797]: 2026-01-31 08:12:40.512794775 +0000 UTC m=+0.059720301 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:12:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:41.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:41.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 213 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 31 03:12:42 np0005603621 podman[303842]: 2026-01-31 08:12:42.313440875 +0000 UTC m=+0.087083372 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 03:12:43 np0005603621 nova_compute[247399]: 2026-01-31 08:12:43.186 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:43 np0005603621 nova_compute[247399]: 2026-01-31 08:12:43.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:12:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:43.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:43 np0005603621 nova_compute[247399]: 2026-01-31 08:12:43.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 213 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 31 03:12:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:45.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:45.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 213 MiB data, 799 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 31 03:12:46 np0005603621 nova_compute[247399]: 2026-01-31 08:12:46.875 247403 DEBUG nova.compute.manager [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.066 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.067 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.222 247403 DEBUG nova.objects.instance [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.246 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.246 247403 INFO nova.compute.claims [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.247 247403 DEBUG nova.objects.instance [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'resources' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.298 247403 DEBUG nova.objects.instance [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.434 247403 INFO nova.compute.resource_tracker [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updating resource usage from migration 7ca3af68-bbd7-45fb-8062-e32509f344f5#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.435 247403 DEBUG nova.compute.resource_tracker [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Starting to track incoming migration 7ca3af68-bbd7-45fb-8062-e32509f344f5 with flavor f75c4aee-d826-4343-a7e3-f06a4b21de52 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.528 247403 DEBUG oslo_concurrency.processutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:12:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:47.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 222 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 128 op/s
Jan 31 03:12:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:12:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4134016019' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.965 247403 DEBUG oslo_concurrency.processutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:12:47 np0005603621 nova_compute[247399]: 2026-01-31 08:12:47.973 247403 DEBUG nova.compute.provider_tree [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:12:48 np0005603621 nova_compute[247399]: 2026-01-31 08:12:48.025 247403 DEBUG nova.scheduler.client.report [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:12:48 np0005603621 nova_compute[247399]: 2026-01-31 08:12:48.146 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:12:48 np0005603621 nova_compute[247399]: 2026-01-31 08:12:48.147 247403 INFO nova.compute.manager [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Migrating#033[00m
Jan 31 03:12:48 np0005603621 nova_compute[247399]: 2026-01-31 08:12:48.195 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:12:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:48 np0005603621 nova_compute[247399]: 2026-01-31 08:12:48.701 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004441696328866555 of space, bias 1.0, pg target 1.3325088986599665 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:12:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:49.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:49.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 244 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 217 op/s
Jan 31 03:12:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:51.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:51.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 244 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 31 03:12:53 np0005603621 nova_compute[247399]: 2026-01-31 08:12:53.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:53.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:53.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:53 np0005603621 nova_compute[247399]: 2026-01-31 08:12:53.702 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 03:12:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:54.699 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:12:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:54.701 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:12:54 np0005603621 nova_compute[247399]: 2026-01-31 08:12:54.702 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:55 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:12:55 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:12:55 np0005603621 systemd-logind[818]: New session 58 of user nova.
Jan 31 03:12:55 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:12:55 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:12:55 np0005603621 systemd[303923]: Queued start job for default target Main User Target.
Jan 31 03:12:55 np0005603621 systemd[303923]: Created slice User Application Slice.
Jan 31 03:12:55 np0005603621 systemd[303923]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:12:55 np0005603621 systemd[303923]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:12:55 np0005603621 systemd[303923]: Reached target Paths.
Jan 31 03:12:55 np0005603621 systemd[303923]: Reached target Timers.
Jan 31 03:12:55 np0005603621 systemd[303923]: Starting D-Bus User Message Bus Socket...
Jan 31 03:12:55 np0005603621 systemd[303923]: Starting Create User's Volatile Files and Directories...
Jan 31 03:12:55 np0005603621 systemd[303923]: Finished Create User's Volatile Files and Directories.
Jan 31 03:12:55 np0005603621 systemd[303923]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:12:55 np0005603621 systemd[303923]: Reached target Sockets.
Jan 31 03:12:55 np0005603621 systemd[303923]: Reached target Basic System.
Jan 31 03:12:55 np0005603621 systemd[303923]: Reached target Main User Target.
Jan 31 03:12:55 np0005603621 systemd[303923]: Startup finished in 132ms.
Jan 31 03:12:55 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:12:55 np0005603621 systemd[1]: Started Session 58 of User nova.
Jan 31 03:12:55 np0005603621 systemd[1]: session-58.scope: Deactivated successfully.
Jan 31 03:12:55 np0005603621 systemd-logind[818]: Session 58 logged out. Waiting for processes to exit.
Jan 31 03:12:55 np0005603621 systemd-logind[818]: Removed session 58.
Jan 31 03:12:55 np0005603621 systemd-logind[818]: New session 60 of user nova.
Jan 31 03:12:55 np0005603621 systemd[1]: Started Session 60 of User nova.
Jan 31 03:12:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:55.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:55 np0005603621 systemd[1]: session-60.scope: Deactivated successfully.
Jan 31 03:12:55 np0005603621 systemd-logind[818]: Session 60 logged out. Waiting for processes to exit.
Jan 31 03:12:55 np0005603621 systemd-logind[818]: Removed session 60.
Jan 31 03:12:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 31 03:12:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:57.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:57.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 31 03:12:58 np0005603621 nova_compute[247399]: 2026-01-31 08:12:58.204 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:12:58 np0005603621 nova_compute[247399]: 2026-01-31 08:12:58.754 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:12:59 np0005603621 nova_compute[247399]: 2026-01-31 08:12:59.525 247403 DEBUG nova.compute.manager [req-ef06d781-b2f9-4936-8313-0717905761a6 req-828772f8-9f0f-4bc0-b426-0fc3580fda22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:12:59 np0005603621 nova_compute[247399]: 2026-01-31 08:12:59.525 247403 DEBUG oslo_concurrency.lockutils [req-ef06d781-b2f9-4936-8313-0717905761a6 req-828772f8-9f0f-4bc0-b426-0fc3580fda22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:12:59 np0005603621 nova_compute[247399]: 2026-01-31 08:12:59.526 247403 DEBUG oslo_concurrency.lockutils [req-ef06d781-b2f9-4936-8313-0717905761a6 req-828772f8-9f0f-4bc0-b426-0fc3580fda22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:12:59 np0005603621 nova_compute[247399]: 2026-01-31 08:12:59.526 247403 DEBUG oslo_concurrency.lockutils [req-ef06d781-b2f9-4936-8313-0717905761a6 req-828772f8-9f0f-4bc0-b426-0fc3580fda22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:12:59 np0005603621 nova_compute[247399]: 2026-01-31 08:12:59.526 247403 DEBUG nova.compute.manager [req-ef06d781-b2f9-4936-8313-0717905761a6 req-828772f8-9f0f-4bc0-b426-0fc3580fda22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:12:59 np0005603621 nova_compute[247399]: 2026-01-31 08:12:59.527 247403 WARNING nova.compute.manager [req-ef06d781-b2f9-4936-8313-0717905761a6 req-828772f8-9f0f-4bc0-b426-0fc3580fda22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_migrating.
Jan 31 03:12:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:12:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:12:59.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:12:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:12:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:12:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:12:59.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:12:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:12:59.705 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:12:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 267 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Jan 31 03:13:00 np0005603621 nova_compute[247399]: 2026-01-31 08:13:00.644 247403 INFO nova.network.neutron [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updating port 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 31 03:13:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:13:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:01.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:13:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:01.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:01 np0005603621 nova_compute[247399]: 2026-01-31 08:13:01.711 247403 DEBUG nova.compute.manager [req-9ecb4aa9-a718-46a5-8bff-a571aec9b67d req-e6e7529a-e1d4-4e98-b294-b9ac3520db18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:01 np0005603621 nova_compute[247399]: 2026-01-31 08:13:01.711 247403 DEBUG oslo_concurrency.lockutils [req-9ecb4aa9-a718-46a5-8bff-a571aec9b67d req-e6e7529a-e1d4-4e98-b294-b9ac3520db18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:01 np0005603621 nova_compute[247399]: 2026-01-31 08:13:01.711 247403 DEBUG oslo_concurrency.lockutils [req-9ecb4aa9-a718-46a5-8bff-a571aec9b67d req-e6e7529a-e1d4-4e98-b294-b9ac3520db18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:01 np0005603621 nova_compute[247399]: 2026-01-31 08:13:01.712 247403 DEBUG oslo_concurrency.lockutils [req-9ecb4aa9-a718-46a5-8bff-a571aec9b67d req-e6e7529a-e1d4-4e98-b294-b9ac3520db18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:01 np0005603621 nova_compute[247399]: 2026-01-31 08:13:01.712 247403 DEBUG nova.compute.manager [req-9ecb4aa9-a718-46a5-8bff-a571aec9b67d req-e6e7529a-e1d4-4e98-b294-b9ac3520db18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:01 np0005603621 nova_compute[247399]: 2026-01-31 08:13:01.712 247403 WARNING nova.compute.manager [req-9ecb4aa9-a718-46a5-8bff-a571aec9b67d req-e6e7529a-e1d4-4e98-b294-b9ac3520db18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_migrated.
Jan 31 03:13:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 267 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 1.3 MiB/s wr, 33 op/s
Jan 31 03:13:03 np0005603621 nova_compute[247399]: 2026-01-31 08:13:03.209 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:03.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 275 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 31 03:13:03 np0005603621 nova_compute[247399]: 2026-01-31 08:13:03.756 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.232 247403 DEBUG nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.233 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.233 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.233 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.233 247403 DEBUG nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.233 247403 WARNING nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_migrated.
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.233 247403 DEBUG nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.234 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.234 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.234 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.234 247403 DEBUG nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.234 247403 WARNING nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_migrated.
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.234 247403 DEBUG nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.235 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.235 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.235 247403 DEBUG oslo_concurrency.lockutils [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.235 247403 DEBUG nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.235 247403 WARNING nova.compute.manager [req-0c7f2b1d-5155-4246-9991-b65ec2a4ce38 req-2ca438ba-2436-4b04-b7e6-cf5e267a652d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:13:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 561e50ec-7225-4bfd-b2ad-782761397b4d does not exist
Jan 31 03:13:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4c90cf7f-a27b-4bcc-af37-d5eb4e4f4f7e does not exist
Jan 31 03:13:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8900680e-a5dc-432e-a8fe-a8211a7a1146 does not exist
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:13:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.401 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.401 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquired lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:13:04 np0005603621 nova_compute[247399]: 2026-01-31 08:13:04.401 247403 DEBUG nova.network.neutron [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.734427747 +0000 UTC m=+0.033456004 container create a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:04 np0005603621 systemd[1]: Started libpod-conmon-a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab.scope.
Jan 31 03:13:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.808154768 +0000 UTC m=+0.107183045 container init a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.813279359 +0000 UTC m=+0.112307616 container start a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.718453275 +0000 UTC m=+0.017481552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:13:04 np0005603621 crazy_banzai[304289]: 167 167
Jan 31 03:13:04 np0005603621 systemd[1]: libpod-a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab.scope: Deactivated successfully.
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.819709722 +0000 UTC m=+0.118737979 container attach a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.820018752 +0000 UTC m=+0.119047009 container died a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:13:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6239dfb8eb5edc056d8f204ad77eecd195ec00067cf252966c5303583375bd94-merged.mount: Deactivated successfully.
Jan 31 03:13:04 np0005603621 podman[304273]: 2026-01-31 08:13:04.861733935 +0000 UTC m=+0.160762192 container remove a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:13:04 np0005603621 systemd[1]: libpod-conmon-a382ed70f547921e3b32fdfa44dacdc76f97d368fd68154e8118aad85d1a2cab.scope: Deactivated successfully.
Jan 31 03:13:04 np0005603621 podman[304313]: 2026-01-31 08:13:04.97913296 +0000 UTC m=+0.040833037 container create c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:13:05 np0005603621 systemd[1]: Started libpod-conmon-c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec.scope.
Jan 31 03:13:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:13:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:13:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:13:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c3cf5df813b83b432c729b85c171eae945a330a43e87051a2c59136baf6883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c3cf5df813b83b432c729b85c171eae945a330a43e87051a2c59136baf6883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c3cf5df813b83b432c729b85c171eae945a330a43e87051a2c59136baf6883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c3cf5df813b83b432c729b85c171eae945a330a43e87051a2c59136baf6883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:05 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c3cf5df813b83b432c729b85c171eae945a330a43e87051a2c59136baf6883/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:05 np0005603621 podman[304313]: 2026-01-31 08:13:04.963186879 +0000 UTC m=+0.024886976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:13:05 np0005603621 podman[304313]: 2026-01-31 08:13:05.06236249 +0000 UTC m=+0.124062597 container init c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:13:05 np0005603621 podman[304313]: 2026-01-31 08:13:05.068208844 +0000 UTC m=+0.129908921 container start c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 31 03:13:05 np0005603621 podman[304313]: 2026-01-31 08:13:05.074801452 +0000 UTC m=+0.136501549 container attach c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:13:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:05.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Jan 31 03:13:05 np0005603621 adoring_williamson[304329]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:13:05 np0005603621 adoring_williamson[304329]: --> relative data size: 1.0
Jan 31 03:13:05 np0005603621 adoring_williamson[304329]: --> All data devices are unavailable
Jan 31 03:13:05 np0005603621 systemd[1]: libpod-c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec.scope: Deactivated successfully.
Jan 31 03:13:05 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:13:05 np0005603621 systemd[303923]: Activating special unit Exit the Session...
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped target Main User Target.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped target Basic System.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped target Paths.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped target Sockets.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped target Timers.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:13:05 np0005603621 systemd[303923]: Closed D-Bus User Message Bus Socket.
Jan 31 03:13:05 np0005603621 systemd[303923]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:13:05 np0005603621 systemd[303923]: Removed slice User Application Slice.
Jan 31 03:13:05 np0005603621 systemd[303923]: Reached target Shutdown.
Jan 31 03:13:05 np0005603621 systemd[303923]: Finished Exit the Session.
Jan 31 03:13:05 np0005603621 systemd[303923]: Reached target Exit the Session.
Jan 31 03:13:05 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:13:05 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:13:05 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:13:05 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:13:05 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:13:05 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:13:05 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:13:05 np0005603621 podman[304345]: 2026-01-31 08:13:05.930149405 +0000 UTC m=+0.035024403 container died c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-25c3cf5df813b83b432c729b85c171eae945a330a43e87051a2c59136baf6883-merged.mount: Deactivated successfully.
Jan 31 03:13:05 np0005603621 podman[304345]: 2026-01-31 08:13:05.980864782 +0000 UTC m=+0.085739770 container remove c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_williamson, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:13:05 np0005603621 systemd[1]: libpod-conmon-c94328c451a877296b45ab2aff388b15638a10ecb8c9e023195405d341dce3ec.scope: Deactivated successfully.
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.469173453 +0000 UTC m=+0.036948614 container create b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_khayyam, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:13:06 np0005603621 systemd[1]: Started libpod-conmon-b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112.scope.
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.452516059 +0000 UTC m=+0.020291240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:13:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.558 247403 DEBUG nova.compute.manager [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.561 247403 DEBUG oslo_concurrency.lockutils [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.562 247403 DEBUG oslo_concurrency.lockutils [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.562 247403 DEBUG oslo_concurrency.lockutils [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.562 247403 DEBUG nova.compute.manager [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.563 247403 WARNING nova.compute.manager [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.563 247403 DEBUG nova.compute.manager [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-changed-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.563 247403 DEBUG nova.compute.manager [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Refreshing instance network info cache due to event network-changed-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:13:06 np0005603621 nova_compute[247399]: 2026-01-31 08:13:06.563 247403 DEBUG oslo_concurrency.lockutils [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.610652656 +0000 UTC m=+0.178427857 container init b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.621552669 +0000 UTC m=+0.189327840 container start b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:13:06 np0005603621 nostalgic_khayyam[304517]: 167 167
Jan 31 03:13:06 np0005603621 systemd[1]: libpod-b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112.scope: Deactivated successfully.
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.66191715 +0000 UTC m=+0.229692311 container attach b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.66253955 +0000 UTC m=+0.230314711 container died b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:13:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c99f4d5019430e45203de3a59a2b5f89dc037e7994ad921724045cb55d826361-merged.mount: Deactivated successfully.
Jan 31 03:13:06 np0005603621 podman[304501]: 2026-01-31 08:13:06.773686578 +0000 UTC m=+0.341461729 container remove b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_khayyam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:13:06 np0005603621 systemd[1]: libpod-conmon-b05cde66aed30b8c348845995425ae0eaf71f7a2e64e86c838ef5310e6acf112.scope: Deactivated successfully.
Jan 31 03:13:06 np0005603621 podman[304541]: 2026-01-31 08:13:06.929152811 +0000 UTC m=+0.059508543 container create 6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:13:06 np0005603621 systemd[1]: Started libpod-conmon-6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09.scope.
Jan 31 03:13:06 np0005603621 podman[304541]: 2026-01-31 08:13:06.903613207 +0000 UTC m=+0.033969029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:13:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff6461da6eff5279e244a2f07a8197178601ff92cebf0c96113f23941d1b6bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff6461da6eff5279e244a2f07a8197178601ff92cebf0c96113f23941d1b6bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff6461da6eff5279e244a2f07a8197178601ff92cebf0c96113f23941d1b6bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff6461da6eff5279e244a2f07a8197178601ff92cebf0c96113f23941d1b6bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:07 np0005603621 podman[304541]: 2026-01-31 08:13:07.016641055 +0000 UTC m=+0.146996817 container init 6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:13:07 np0005603621 podman[304541]: 2026-01-31 08:13:07.025117152 +0000 UTC m=+0.155472884 container start 6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:07 np0005603621 podman[304541]: 2026-01-31 08:13:07.028942663 +0000 UTC m=+0.159298405 container attach 6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:13:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:13:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:07.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:13:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:07 np0005603621 nova_compute[247399]: 2026-01-31 08:13:07.727 247403 DEBUG nova.network.neutron [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updating instance_info_cache with network_info: [{"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 31 03:13:07 np0005603621 reverent_easley[304558]: {
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:    "0": [
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:        {
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "devices": [
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "/dev/loop3"
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            ],
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "lv_name": "ceph_lv0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "lv_size": "7511998464",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "name": "ceph_lv0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "tags": {
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.cluster_name": "ceph",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.crush_device_class": "",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.encrypted": "0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.osd_id": "0",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.type": "block",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:                "ceph.vdo": "0"
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            },
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "type": "block",
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:            "vg_name": "ceph_vg0"
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:        }
Jan 31 03:13:07 np0005603621 reverent_easley[304558]:    ]
Jan 31 03:13:07 np0005603621 reverent_easley[304558]: }
Jan 31 03:13:07 np0005603621 systemd[1]: libpod-6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09.scope: Deactivated successfully.
Jan 31 03:13:07 np0005603621 podman[304568]: 2026-01-31 08:13:07.871852086 +0000 UTC m=+0.031510653 container died 6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:13:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bff6461da6eff5279e244a2f07a8197178601ff92cebf0c96113f23941d1b6bc-merged.mount: Deactivated successfully.
Jan 31 03:13:07 np0005603621 podman[304568]: 2026-01-31 08:13:07.92408753 +0000 UTC m=+0.083746087 container remove 6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 03:13:07 np0005603621 systemd[1]: libpod-conmon-6d952fdf02678b081f54aa26dd222020805fca76e6edc29a76e9f71abdb9df09.scope: Deactivated successfully.
Jan 31 03:13:08 np0005603621 nova_compute[247399]: 2026-01-31 08:13:08.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.497784988 +0000 UTC m=+0.039974439 container create 0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 03:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:08 np0005603621 systemd[1]: Started libpod-conmon-0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3.scope.
Jan 31 03:13:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.572718467 +0000 UTC m=+0.114907958 container init 0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.477351435 +0000 UTC m=+0.019540886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.579970116 +0000 UTC m=+0.122159557 container start 0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:08 np0005603621 quizzical_lamport[304738]: 167 167
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.585088537 +0000 UTC m=+0.127278058 container attach 0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:13:08 np0005603621 systemd[1]: libpod-0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3.scope: Deactivated successfully.
Jan 31 03:13:08 np0005603621 conmon[304738]: conmon 0f66d38b24c8f5b5f7d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3.scope/container/memory.events
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.587089209 +0000 UTC m=+0.129278640 container died 0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:13:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ffc86d2c597166f1541620a15dff400fb8316c8a88abfc09b7676f46d2bb1a2b-merged.mount: Deactivated successfully.
Jan 31 03:13:08 np0005603621 podman[304722]: 2026-01-31 08:13:08.627087879 +0000 UTC m=+0.169277310 container remove 0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:13:08 np0005603621 nova_compute[247399]: 2026-01-31 08:13:08.627 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Releasing lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:08 np0005603621 nova_compute[247399]: 2026-01-31 08:13:08.633 247403 DEBUG oslo_concurrency.lockutils [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:13:08 np0005603621 nova_compute[247399]: 2026-01-31 08:13:08.633 247403 DEBUG nova.network.neutron [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Refreshing network info cache for port 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:13:08 np0005603621 systemd[1]: libpod-conmon-0f66d38b24c8f5b5f7d5122f3d94e22bef4781a8b03bf7c86265f1e1400456e3.scope: Deactivated successfully.
Jan 31 03:13:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:08 np0005603621 nova_compute[247399]: 2026-01-31 08:13:08.758 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:08 np0005603621 podman[304762]: 2026-01-31 08:13:08.771527835 +0000 UTC m=+0.055080495 container create 8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shannon, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:13:08 np0005603621 systemd[1]: Started libpod-conmon-8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0.scope.
Jan 31 03:13:08 np0005603621 podman[304762]: 2026-01-31 08:13:08.742365217 +0000 UTC m=+0.025917897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:13:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2837f7ad4f74f5b3da4c1f17d50f4e9229f04473ff47bbac402c355f8a7cf2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2837f7ad4f74f5b3da4c1f17d50f4e9229f04473ff47bbac402c355f8a7cf2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2837f7ad4f74f5b3da4c1f17d50f4e9229f04473ff47bbac402c355f8a7cf2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2837f7ad4f74f5b3da4c1f17d50f4e9229f04473ff47bbac402c355f8a7cf2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:08 np0005603621 podman[304762]: 2026-01-31 08:13:08.866971219 +0000 UTC m=+0.150523889 container init 8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:13:08 np0005603621 podman[304762]: 2026-01-31 08:13:08.872824844 +0000 UTC m=+0.156377494 container start 8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:13:08 np0005603621 podman[304762]: 2026-01-31 08:13:08.876336304 +0000 UTC m=+0.159888954 container attach 8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shannon, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]: {
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:        "osd_id": 0,
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:        "type": "bluestore"
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]:    }
Jan 31 03:13:09 np0005603621 peaceful_shannon[304778]: }
Jan 31 03:13:09 np0005603621 systemd[1]: libpod-8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0.scope: Deactivated successfully.
Jan 31 03:13:09 np0005603621 podman[304762]: 2026-01-31 08:13:09.633931081 +0000 UTC m=+0.917483731 container died 8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:13:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:09.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:09.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8c2837f7ad4f74f5b3da4c1f17d50f4e9229f04473ff47bbac402c355f8a7cf2-merged.mount: Deactivated successfully.
Jan 31 03:13:09 np0005603621 podman[304762]: 2026-01-31 08:13:09.691419371 +0000 UTC m=+0.974972031 container remove 8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:13:09 np0005603621 systemd[1]: libpod-conmon-8072f6fc975d48ee50e26d298c85112cb9ef0da22e1d3a93d3324bbb269b69f0.scope: Deactivated successfully.
Jan 31 03:13:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:13:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:13:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:13:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 03:13:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:13:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b93b350f-5a42-47e1-91bd-3e36707a41f0 does not exist
Jan 31 03:13:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bb254533-07c5-49c5-b46b-4087e2001889 does not exist
Jan 31 03:13:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b8e759ea-a5a4-4eca-90e4-eb1d3facd480 does not exist
Jan 31 03:13:10 np0005603621 nova_compute[247399]: 2026-01-31 08:13:10.710 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:13:10 np0005603621 nova_compute[247399]: 2026-01-31 08:13:10.711 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:13:10 np0005603621 nova_compute[247399]: 2026-01-31 08:13:10.711 247403 INFO nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Creating image(s)#033[00m
Jan 31 03:13:10 np0005603621 nova_compute[247399]: 2026-01-31 08:13:10.745 247403 DEBUG nova.storage.rbd_utils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] creating snapshot(nova-resize) on rbd image(9e88446e-2147-4f66-9f77-23949a27f7e6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:13:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:13:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:13:11 np0005603621 podman[304900]: 2026-01-31 08:13:11.492950095 +0000 UTC m=+0.047780049 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 03:13:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 311 KiB/s rd, 915 KiB/s wr, 42 op/s
Jan 31 03:13:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 31 03:13:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 31 03:13:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 31 03:13:11 np0005603621 nova_compute[247399]: 2026-01-31 08:13:11.837 247403 DEBUG nova.objects.instance [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.007 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.008 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Ensure instance console log exists: /var/lib/nova/instances/9e88446e-2147-4f66-9f77-23949a27f7e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.009 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.010 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.010 247403 DEBUG oslo_concurrency.lockutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.014 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Start _get_guest_xml network_info=[{"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:39:73:b2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.020 247403 WARNING nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.028 247403 DEBUG nova.virt.libvirt.host [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.030 247403 DEBUG nova.virt.libvirt.host [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.035 247403 DEBUG nova.virt.libvirt.host [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.036 247403 DEBUG nova.virt.libvirt.host [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.037 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.038 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f75c4aee-d826-4343-a7e3-f06a4b21de52',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.038 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.039 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.039 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.040 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.040 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.040 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.041 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.041 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.041 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.042 247403 DEBUG nova.virt.hardware [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.042 247403 DEBUG nova.objects.instance [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.124 247403 DEBUG oslo_concurrency.processutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:12 np0005603621 podman[304977]: 2026-01-31 08:13:12.547834856 +0000 UTC m=+0.107782868 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:13:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684770731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.597 247403 DEBUG oslo_concurrency.processutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.646 247403 DEBUG oslo_concurrency.processutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.909 247403 DEBUG nova.network.neutron [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updated VIF entry in instance network info cache for port 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.911 247403 DEBUG nova.network.neutron [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updating instance_info_cache with network_info: [{"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:12 np0005603621 nova_compute[247399]: 2026-01-31 08:13:12.948 247403 DEBUG oslo_concurrency.lockutils [req-8e9d0e71-3397-4d0c-a7d6-7ee985d4eca3 req-ac4aedcf-accc-4810-9fdf-c6231beb7e17 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:13:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072070693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.126 247403 DEBUG oslo_concurrency.processutils [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.128 247403 DEBUG nova.virt.libvirt.vif [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2042269499',display_name='tempest-ServerDiskConfigTestJSON-server-2042269499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2042269499',id=90,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-619dusqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:59Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=9e88446e-2147-4f66-9f77-23949a27f7e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:39:73:b2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.128 247403 DEBUG nova.network.os_vif_util [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:39:73:b2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.129 247403 DEBUG nova.network.os_vif_util [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.131 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <uuid>9e88446e-2147-4f66-9f77-23949a27f7e6</uuid>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <name>instance-0000005a</name>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <memory>196608</memory>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-2042269499</nova:name>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:13:12</nova:creationTime>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.micro">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:memory>192</nova:memory>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:user uuid="111fdaf79c084a91902fe37a7a502020">tempest-ServerDiskConfigTestJSON-855158150-project-member</nova:user>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:project uuid="58e900992be7400fb940ca20f13e12d1">tempest-ServerDiskConfigTestJSON-855158150</nova:project>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <nova:port uuid="4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <entry name="serial">9e88446e-2147-4f66-9f77-23949a27f7e6</entry>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <entry name="uuid">9e88446e-2147-4f66-9f77-23949a27f7e6</entry>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/9e88446e-2147-4f66-9f77-23949a27f7e6_disk">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/9e88446e-2147-4f66-9f77-23949a27f7e6_disk.config">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:39:73:b2"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <target dev="tap4d96f0c7-92"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/9e88446e-2147-4f66-9f77-23949a27f7e6/console.log" append="off"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:13:13 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:13:13 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:13:13 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:13:13 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.133 247403 DEBUG nova.virt.libvirt.vif [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2042269499',display_name='tempest-ServerDiskConfigTestJSON-server-2042269499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2042269499',id=90,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:12:34Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-619dusqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:12:59Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=9e88446e-2147-4f66-9f77-23949a27f7e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:39:73:b2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.134 247403 DEBUG nova.network.os_vif_util [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:39:73:b2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.135 247403 DEBUG nova.network.os_vif_util [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.135 247403 DEBUG os_vif [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.136 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.136 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.137 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.140 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d96f0c7-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.141 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4d96f0c7-92, col_values=(('external_ids', {'iface-id': '4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:73:b2', 'vm-uuid': '9e88446e-2147-4f66-9f77-23949a27f7e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.1444] manager: (tap4d96f0c7-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.151 247403 INFO os_vif [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92')#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.247 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.248 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.248 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No VIF found with MAC fa:16:3e:39:73:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.248 247403 INFO nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Using config drive#033[00m
Jan 31 03:13:13 np0005603621 kernel: tap4d96f0c7-92: entered promiscuous mode
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.3406] manager: (tap4d96f0c7-92): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Jan 31 03:13:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:13Z|00283|binding|INFO|Claiming lport 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for this chassis.
Jan 31 03:13:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:13Z|00284|binding|INFO|4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb: Claiming fa:16:3e:39:73:b2 10.100.0.13
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 systemd-udevd[305076]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:13Z|00285|binding|INFO|Setting lport 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb ovn-installed in OVS
Jan 31 03:13:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:13Z|00286|binding|INFO|Setting lport 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb up in Southbound
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.372 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:73:b2 10.100.0.13'], port_security=['fa:16:3e:39:73:b2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9e88446e-2147-4f66-9f77-23949a27f7e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '7', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.373 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.376 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 bound to our chassis#033[00m
Jan 31 03:13:13 np0005603621 systemd-machined[212769]: New machine qemu-39-instance-0000005a.
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.380 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f218695f-c744-4bd8-b2d8-122a920c7ca0#033[00m
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.3858] device (tap4d96f0c7-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.3872] device (tap4d96f0c7-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.393 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[01d875a7-ffda-478c-a618-3f8664f00643]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.394 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf218695f-c1 in ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:13:13 np0005603621 systemd[1]: Started Virtual Machine qemu-39-instance-0000005a.
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.396 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf218695f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.396 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[af7d3897-7441-4d4d-8b93-7131e184b3a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.397 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[24939313-cba0-4897-a83d-86beef55770d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.410 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[254292ad-0460-483e-9fff-6a14adac4af7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.423 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d573e88-b7bc-4ac7-840c-54de6b578569]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.443 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3e6ede-28b3-4f11-8053-547d998be57a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.4503] manager: (tapf218695f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.449 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[924e456f-27e3-49b0-8c01-0136c3bea9fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 systemd-udevd[305082]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.486 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[95d26237-5211-4bbc-881b-dc164a948b33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.490 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e9be455f-c283-4169-8c2d-aa7d6044fff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.5135] device (tapf218695f-c0): carrier: link connected
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.523 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2cd501-a89a-415e-91e9-030505a6bc84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.540 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[82abf885-cb06-450c-99d5-5661c07ef123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649902, 'reachable_time': 24696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305112, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.556 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad6f3bc-aea5-4787-b804-f6d520029382]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:830'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 649902, 'tstamp': 649902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305113, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.573 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74e37b2f-4936-43b1-85fa-56dbd0e9c97f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649902, 'reachable_time': 24696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305114, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.614 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c47bc892-1814-49aa-85ce-8d58c1d6b6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:13:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:13:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:13:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:13.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.684 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cec39012-082c-4a23-8f26-5c9aa867f5ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.686 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.686 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.687 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf218695f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:13 np0005603621 kernel: tapf218695f-c0: entered promiscuous mode
Jan 31 03:13:13 np0005603621 NetworkManager[49013]: <info>  [1769847193.6903] manager: (tapf218695f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.689 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.695 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf218695f-c0, col_values=(('external_ids', {'iface-id': 'd3a551a2-38e3-48d3-bdee-f2493a79eca0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:13Z|00287|binding|INFO|Releasing lport d3a551a2-38e3-48d3-bdee-f2493a79eca0 from this chassis (sb_readonly=0)
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.699 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.700 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.701 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcc9a46-8e22-4167-8e7a-6ac5c8b4a3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.702 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:13:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:13.703 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'env', 'PROCESS_TAG=haproxy-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f218695f-c744-4bd8-b2d8-122a920c7ca0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.706 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 23 KiB/s wr, 24 op/s
Jan 31 03:13:13 np0005603621 nova_compute[247399]: 2026-01-31 08:13:13.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:14 np0005603621 podman[305183]: 2026-01-31 08:13:14.078779843 +0000 UTC m=+0.047063045 container create 6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:13:14 np0005603621 systemd[1]: Started libpod-conmon-6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d.scope.
Jan 31 03:13:14 np0005603621 podman[305183]: 2026-01-31 08:13:14.053229364 +0000 UTC m=+0.021512566 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:13:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:13:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/421c17ef34233753f42cc77bca93597a9cef37f3b7713ed0b89cf39728bd202f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.164 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847194.1633518, 9e88446e-2147-4f66-9f77-23949a27f7e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.166 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:13:14 np0005603621 podman[305183]: 2026-01-31 08:13:14.166622026 +0000 UTC m=+0.134905248 container init 6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.168 247403 DEBUG nova.compute.manager [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.173 247403 INFO nova.virt.libvirt.driver [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Instance running successfully.#033[00m
Jan 31 03:13:14 np0005603621 podman[305183]: 2026-01-31 08:13:14.173359357 +0000 UTC m=+0.141642559 container start 6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:13:14 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.176 247403 DEBUG nova.virt.libvirt.guest [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.177 247403 DEBUG nova.virt.libvirt.driver [None req-8ee045f2-a13d-45b8-8a67-60816e85fcf6 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:13:14 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [NOTICE]   (305208) : New worker (305210) forked
Jan 31 03:13:14 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [NOTICE]   (305208) : Loading success.
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.215 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.219 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.269 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.270 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847194.1636333, 9e88446e-2147-4f66-9f77-23949a27f7e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.271 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] VM Started (Lifecycle Event)#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.767 247403 DEBUG nova.compute.manager [req-d9a4943f-cea2-44f1-b901-9d323a3e897b req-446d9094-312d-4ef6-a6ab-620a4a86655d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.768 247403 DEBUG oslo_concurrency.lockutils [req-d9a4943f-cea2-44f1-b901-9d323a3e897b req-446d9094-312d-4ef6-a6ab-620a4a86655d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.768 247403 DEBUG oslo_concurrency.lockutils [req-d9a4943f-cea2-44f1-b901-9d323a3e897b req-446d9094-312d-4ef6-a6ab-620a4a86655d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.769 247403 DEBUG oslo_concurrency.lockutils [req-d9a4943f-cea2-44f1-b901-9d323a3e897b req-446d9094-312d-4ef6-a6ab-620a4a86655d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.769 247403 DEBUG nova.compute.manager [req-d9a4943f-cea2-44f1-b901-9d323a3e897b req-446d9094-312d-4ef6-a6ab-620a4a86655d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.769 247403 WARNING nova.compute.manager [req-d9a4943f-cea2-44f1-b901-9d323a3e897b req-446d9094-312d-4ef6-a6ab-620a4a86655d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.790 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:14 np0005603621 nova_compute[247399]: 2026-01-31 08:13:14.794 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:13:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:13:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:13:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:15.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 839 KiB/s rd, 14 KiB/s wr, 24 op/s
Jan 31 03:13:17 np0005603621 nova_compute[247399]: 2026-01-31 08:13:17.064 247403 DEBUG nova.compute.manager [req-b6bc0b19-8d13-433e-8349-b35acf2609bd req-8d6e0f5b-9b17-4270-8956-29dfa6332349 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:17 np0005603621 nova_compute[247399]: 2026-01-31 08:13:17.065 247403 DEBUG oslo_concurrency.lockutils [req-b6bc0b19-8d13-433e-8349-b35acf2609bd req-8d6e0f5b-9b17-4270-8956-29dfa6332349 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:17 np0005603621 nova_compute[247399]: 2026-01-31 08:13:17.065 247403 DEBUG oslo_concurrency.lockutils [req-b6bc0b19-8d13-433e-8349-b35acf2609bd req-8d6e0f5b-9b17-4270-8956-29dfa6332349 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:17 np0005603621 nova_compute[247399]: 2026-01-31 08:13:17.065 247403 DEBUG oslo_concurrency.lockutils [req-b6bc0b19-8d13-433e-8349-b35acf2609bd req-8d6e0f5b-9b17-4270-8956-29dfa6332349 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:17 np0005603621 nova_compute[247399]: 2026-01-31 08:13:17.065 247403 DEBUG nova.compute.manager [req-b6bc0b19-8d13-433e-8349-b35acf2609bd req-8d6e0f5b-9b17-4270-8956-29dfa6332349 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:17 np0005603621 nova_compute[247399]: 2026-01-31 08:13:17.065 247403 WARNING nova.compute.manager [req-b6bc0b19-8d13-433e-8349-b35acf2609bd req-8d6e0f5b-9b17-4270-8956-29dfa6332349 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:13:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:17.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:17.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 38 op/s
Jan 31 03:13:18 np0005603621 nova_compute[247399]: 2026-01-31 08:13:18.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:18 np0005603621 nova_compute[247399]: 2026-01-31 08:13:18.762 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:19.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:19.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.3 KiB/s wr, 113 op/s
Jan 31 03:13:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:21.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:21.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 279 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.3 KiB/s wr, 113 op/s
Jan 31 03:13:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 31 03:13:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 31 03:13:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 31 03:13:23 np0005603621 nova_compute[247399]: 2026-01-31 08:13:23.147 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:23.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 299 MiB data, 881 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.0 MiB/s wr, 119 op/s
Jan 31 03:13:23 np0005603621 nova_compute[247399]: 2026-01-31 08:13:23.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 314 MiB data, 884 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.3 MiB/s wr, 138 op/s
Jan 31 03:13:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:26Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:73:b2 10.100.0.13
Jan 31 03:13:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:27.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 2 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 293 active+clean; 326 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 149 op/s
Jan 31 03:13:28 np0005603621 nova_compute[247399]: 2026-01-31 08:13:28.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:28 np0005603621 nova_compute[247399]: 2026-01-31 08:13:28.766 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.621 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.622 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.622 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:13:29 np0005603621 nova_compute[247399]: 2026-01-31 08:13:29.622 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:29.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 326 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 31 03:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:30.496 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:30.496 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:30.498 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:31.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 326 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.369 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.369 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.370 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.370 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.370 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.372 247403 INFO nova.compute.manager [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Terminating instance#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.373 247403 DEBUG nova.compute.manager [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:13:32 np0005603621 kernel: tap4d96f0c7-92 (unregistering): left promiscuous mode
Jan 31 03:13:32 np0005603621 NetworkManager[49013]: <info>  [1769847212.4321] device (tap4d96f0c7-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.436 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:32Z|00288|binding|INFO|Releasing lport 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb from this chassis (sb_readonly=0)
Jan 31 03:13:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:32Z|00289|binding|INFO|Setting lport 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb down in Southbound
Jan 31 03:13:32 np0005603621 ovn_controller[149152]: 2026-01-31T08:13:32Z|00290|binding|INFO|Removing iface tap4d96f0c7-92 ovn-installed in OVS
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.438 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.446 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.471 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:73:b2 10.100.0.13'], port_security=['fa:16:3e:39:73:b2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9e88446e-2147-4f66-9f77-23949a27f7e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '9', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.473 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 unbound from our chassis#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.475 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f218695f-c744-4bd8-b2d8-122a920c7ca0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.476 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aed1bf12-4670-4921-b8d8-9f5d42ba498e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.476 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace which is not needed anymore#033[00m
Jan 31 03:13:32 np0005603621 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Jan 31 03:13:32 np0005603621 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005a.scope: Consumed 12.892s CPU time.
Jan 31 03:13:32 np0005603621 systemd-machined[212769]: Machine qemu-39-instance-0000005a terminated.
Jan 31 03:13:32 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [NOTICE]   (305208) : haproxy version is 2.8.14-c23fe91
Jan 31 03:13:32 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [NOTICE]   (305208) : path to executable is /usr/sbin/haproxy
Jan 31 03:13:32 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [WARNING]  (305208) : Exiting Master process...
Jan 31 03:13:32 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [ALERT]    (305208) : Current worker (305210) exited with code 143 (Terminated)
Jan 31 03:13:32 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[305203]: [WARNING]  (305208) : All workers exited. Exiting... (0)
Jan 31 03:13:32 np0005603621 systemd[1]: libpod-6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d.scope: Deactivated successfully.
Jan 31 03:13:32 np0005603621 podman[305303]: 2026-01-31 08:13:32.581521719 +0000 UTC m=+0.039733636 container died 6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.596 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:13:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-421c17ef34233753f42cc77bca93597a9cef37f3b7713ed0b89cf39728bd202f-merged.mount: Deactivated successfully.
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.625 247403 INFO nova.virt.libvirt.driver [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Instance destroyed successfully.#033[00m
Jan 31 03:13:32 np0005603621 podman[305303]: 2026-01-31 08:13:32.626792387 +0000 UTC m=+0.085004304 container cleanup 6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.627 247403 DEBUG nova.objects.instance [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'resources' on Instance uuid 9e88446e-2147-4f66-9f77-23949a27f7e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:13:32 np0005603621 systemd[1]: libpod-conmon-6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d.scope: Deactivated successfully.
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.665 247403 DEBUG nova.virt.libvirt.vif [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-2042269499',display_name='tempest-ServerDiskConfigTestJSON-server-2042269499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-2042269499',id=90,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:14Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-619dusqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:13:23Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=9e88446e-2147-4f66-9f77-23949a27f7e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.667 247403 DEBUG nova.network.os_vif_util [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.667 247403 DEBUG nova.network.os_vif_util [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.668 247403 DEBUG os_vif [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.670 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d96f0c7-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.671 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.674 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.676 247403 INFO os_vif [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:73:b2,bridge_name='br-int',has_traffic_filtering=True,id=4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4d96f0c7-92')#033[00m
Jan 31 03:13:32 np0005603621 podman[305342]: 2026-01-31 08:13:32.705938477 +0000 UTC m=+0.062236811 container remove 6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.711 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b024a9-6691-4694-8c58-9bc8e0c8876f]: (4, ('Sat Jan 31 08:13:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d)\n6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d\nSat Jan 31 08:13:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d)\n6f584ef0c2f24741461581742b7b8e3dfc283f83b84b9707aeba8b466127764d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.713 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6cac7d08-8b2a-40a1-9f9c-83bb30c3c208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.714 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.740 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 kernel: tapf218695f-c0: left promiscuous mode
Jan 31 03:13:32 np0005603621 nova_compute[247399]: 2026-01-31 08:13:32.747 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.749 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[318a3f08-f0be-4187-97ca-6c88850731a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.764 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[23142ddb-c746-48a1-a4e2-98029e8b293a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.765 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f4edd29d-0324-43fb-823d-8c224ec4c2eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.777 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1b4ca4-b29b-40f3-a510-8f7c2c823f42]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 649895, 'reachable_time': 38324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305375, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:32 np0005603621 systemd[1]: run-netns-ovnmeta\x2df218695f\x2dc744\x2d4bd8\x2db2d8\x2d122a920c7ca0.mount: Deactivated successfully.
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.782 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:13:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:13:32.782 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dc5f8f-1518-4787-869d-14b6f412292b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.498 247403 DEBUG nova.compute.manager [req-bb0dc59a-6a49-4417-8303-ceadc50db2fe req-751784ac-f6cb-46eb-b567-7723bcdc27d1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.499 247403 DEBUG oslo_concurrency.lockutils [req-bb0dc59a-6a49-4417-8303-ceadc50db2fe req-751784ac-f6cb-46eb-b567-7723bcdc27d1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.499 247403 DEBUG oslo_concurrency.lockutils [req-bb0dc59a-6a49-4417-8303-ceadc50db2fe req-751784ac-f6cb-46eb-b567-7723bcdc27d1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.499 247403 DEBUG oslo_concurrency.lockutils [req-bb0dc59a-6a49-4417-8303-ceadc50db2fe req-751784ac-f6cb-46eb-b567-7723bcdc27d1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.500 247403 DEBUG nova.compute.manager [req-bb0dc59a-6a49-4417-8303-ceadc50db2fe req-751784ac-f6cb-46eb-b567-7723bcdc27d1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.500 247403 DEBUG nova.compute.manager [req-bb0dc59a-6a49-4417-8303-ceadc50db2fe req-751784ac-f6cb-46eb-b567-7723bcdc27d1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-unplugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:13:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:33.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 31 03:13:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 327 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 244 op/s
Jan 31 03:13:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.767 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:33 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.830 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updating instance_info_cache with network_info: [{"id": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "address": "fa:16:3e:39:73:b2", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4d96f0c7-92", "ovs_interfaceid": "4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.981 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-9e88446e-2147-4f66-9f77-23949a27f7e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.981 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:13:33 np0005603621 nova_compute[247399]: 2026-01-31 08:13:33.981 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603621 nova_compute[247399]: 2026-01-31 08:13:34.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603621 nova_compute[247399]: 2026-01-31 08:13:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:34 np0005603621 nova_compute[247399]: 2026-01-31 08:13:34.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:13:34 np0005603621 nova_compute[247399]: 2026-01-31 08:13:34.967 247403 INFO nova.virt.libvirt.driver [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Deleting instance files /var/lib/nova/instances/9e88446e-2147-4f66-9f77-23949a27f7e6_del#033[00m
Jan 31 03:13:34 np0005603621 nova_compute[247399]: 2026-01-31 08:13:34.968 247403 INFO nova.virt.libvirt.driver [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Deletion of /var/lib/nova/instances/9e88446e-2147-4f66-9f77-23949a27f7e6_del complete#033[00m
Jan 31 03:13:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:35.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 302 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 886 KiB/s wr, 294 op/s
Jan 31 03:13:36 np0005603621 nova_compute[247399]: 2026-01-31 08:13:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:37 np0005603621 nova_compute[247399]: 2026-01-31 08:13:37.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:37.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:37.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 286 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 282 op/s
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.075 247403 DEBUG nova.compute.manager [req-9df4b7fb-1ec0-41d1-9831-78ed79629346 req-e43d9b6a-9d5f-409f-b764-5fea7f2903fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.076 247403 DEBUG oslo_concurrency.lockutils [req-9df4b7fb-1ec0-41d1-9831-78ed79629346 req-e43d9b6a-9d5f-409f-b764-5fea7f2903fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.076 247403 DEBUG oslo_concurrency.lockutils [req-9df4b7fb-1ec0-41d1-9831-78ed79629346 req-e43d9b6a-9d5f-409f-b764-5fea7f2903fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.076 247403 DEBUG oslo_concurrency.lockutils [req-9df4b7fb-1ec0-41d1-9831-78ed79629346 req-e43d9b6a-9d5f-409f-b764-5fea7f2903fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.076 247403 DEBUG nova.compute.manager [req-9df4b7fb-1ec0-41d1-9831-78ed79629346 req-e43d9b6a-9d5f-409f-b764-5fea7f2903fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] No waiting events found dispatching network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.076 247403 WARNING nova.compute.manager [req-9df4b7fb-1ec0-41d1-9831-78ed79629346 req-e43d9b6a-9d5f-409f-b764-5fea7f2903fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received unexpected event network-vif-plugged-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.194 247403 INFO nova.compute.manager [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Took 5.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.194 247403 DEBUG oslo.service.loopingcall [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.195 247403 DEBUG nova.compute.manager [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.195 247403 DEBUG nova.network.neutron [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.236 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.237 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.237 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.237 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.237 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:13:38
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:13:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:13:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3189500039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.698 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.771 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:13:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.879 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.880 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4500MB free_disk=20.875896453857422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.881 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:38 np0005603621 nova_compute[247399]: 2026-01-31 08:13:38.881 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.111 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 9e88446e-2147-4f66-9f77-23949a27f7e6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.112 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.112 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.154 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:13:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181034737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.592 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.597 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:13:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:39.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 264 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 664 KiB/s wr, 293 op/s
Jan 31 03:13:39 np0005603621 nova_compute[247399]: 2026-01-31 08:13:39.837 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:13:40 np0005603621 nova_compute[247399]: 2026-01-31 08:13:40.298 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:13:40 np0005603621 nova_compute[247399]: 2026-01-31 08:13:40.299 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:40 np0005603621 nova_compute[247399]: 2026-01-31 08:13:40.672 247403 DEBUG nova.network.neutron [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:13:40 np0005603621 nova_compute[247399]: 2026-01-31 08:13:40.769 247403 INFO nova.compute.manager [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Took 2.57 seconds to deallocate network for instance.#033[00m
Jan 31 03:13:40 np0005603621 nova_compute[247399]: 2026-01-31 08:13:40.907 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:13:40 np0005603621 nova_compute[247399]: 2026-01-31 08:13:40.908 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.025 247403 DEBUG oslo_concurrency.processutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.253 247403 DEBUG nova.compute.manager [req-51760730-89e2-4aba-a80e-3944230e7062 req-cc0bb37b-6cfd-4461-8385-837847b8fae7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Received event network-vif-deleted-4d96f0c7-92c2-4d4d-9f1b-b26f7d707dbb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.301 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:13:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3984223865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.442 247403 DEBUG oslo_concurrency.processutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.447 247403 DEBUG nova.compute.provider_tree [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.578 247403 DEBUG nova.scheduler.client.report [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:13:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:41.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:41.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.718 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 264 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 664 KiB/s wr, 293 op/s
Jan 31 03:13:41 np0005603621 nova_compute[247399]: 2026-01-31 08:13:41.942 247403 INFO nova.scheduler.client.report [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Deleted allocations for instance 9e88446e-2147-4f66-9f77-23949a27f7e6#033[00m
Jan 31 03:13:42 np0005603621 nova_compute[247399]: 2026-01-31 08:13:42.496 247403 DEBUG oslo_concurrency.lockutils [None req-7d24333d-088d-4e45-9c4c-ae68812dd051 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "9e88446e-2147-4f66-9f77-23949a27f7e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:13:42 np0005603621 podman[305450]: 2026-01-31 08:13:42.504618418 +0000 UTC m=+0.058480393 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 31 03:13:42 np0005603621 nova_compute[247399]: 2026-01-31 08:13:42.843 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:42 np0005603621 podman[305494]: 2026-01-31 08:13:42.863804159 +0000 UTC m=+0.243579460 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:13:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:43.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 293 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 31 03:13:43 np0005603621 nova_compute[247399]: 2026-01-31 08:13:43.771 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:44 np0005603621 nova_compute[247399]: 2026-01-31 08:13:44.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:13:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:45.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 293 MiB data, 871 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 31 03:13:47 np0005603621 nova_compute[247399]: 2026-01-31 08:13:47.625 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847212.623034, 9e88446e-2147-4f66-9f77-23949a27f7e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:13:47 np0005603621 nova_compute[247399]: 2026-01-31 08:13:47.626 247403 INFO nova.compute.manager [-] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:13:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:47 np0005603621 nova_compute[247399]: 2026-01-31 08:13:47.689 247403 DEBUG nova.compute.manager [None req-6c0e0beb-c9c0-4a2d-8ebc-d9352aa5a629 - - - - - -] [instance: 9e88446e-2147-4f66-9f77-23949a27f7e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:13:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:47.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 305 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 31 03:13:47 np0005603621 nova_compute[247399]: 2026-01-31 08:13:47.885 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:48 np0005603621 nova_compute[247399]: 2026-01-31 08:13:48.774 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004555474129908804 of space, bias 1.0, pg target 1.3666422389726411 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001978752268285874 of space, bias 1.0, pg target 0.5916469282174763 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:13:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:49.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:49.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.6 MiB/s wr, 78 op/s
Jan 31 03:13:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:51.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.0 MiB/s wr, 40 op/s
Jan 31 03:13:52 np0005603621 nova_compute[247399]: 2026-01-31 08:13:52.888 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:53.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:53.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 96 op/s
Jan 31 03:13:53 np0005603621 nova_compute[247399]: 2026-01-31 08:13:53.776 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:55.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:13:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:57.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:57.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 31 03:13:57 np0005603621 nova_compute[247399]: 2026-01-31 08:13:57.891 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.734401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847238734423, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1080, "num_deletes": 257, "total_data_size": 1609425, "memory_usage": 1640000, "flush_reason": "Manual Compaction"}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847238744136, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1590236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39882, "largest_seqno": 40960, "table_properties": {"data_size": 1585069, "index_size": 2627, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11503, "raw_average_key_size": 19, "raw_value_size": 1574440, "raw_average_value_size": 2695, "num_data_blocks": 116, "num_entries": 584, "num_filter_entries": 584, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847149, "oldest_key_time": 1769847149, "file_creation_time": 1769847238, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 9798 microseconds, and 2982 cpu microseconds.
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.744193) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1590236 bytes OK
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.744216) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.746286) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.746303) EVENT_LOG_v1 {"time_micros": 1769847238746298, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.746324) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1604476, prev total WAL file size 1604476, number of live WAL files 2.
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.747127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1552KB)], [86(8909KB)]
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847238747158, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 10713112, "oldest_snapshot_seqno": -1}
Jan 31 03:13:58 np0005603621 nova_compute[247399]: 2026-01-31 08:13:58.777 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6791 keys, 10575362 bytes, temperature: kUnknown
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847238888060, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10575362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10529945, "index_size": 27321, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 175102, "raw_average_key_size": 25, "raw_value_size": 10408631, "raw_average_value_size": 1532, "num_data_blocks": 1085, "num_entries": 6791, "num_filter_entries": 6791, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847238, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.888303) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10575362 bytes
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.889987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.0 rd, 75.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(13.4) write-amplify(6.7) OK, records in: 7322, records dropped: 531 output_compression: NoCompression
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.890004) EVENT_LOG_v1 {"time_micros": 1769847238889996, "job": 50, "event": "compaction_finished", "compaction_time_micros": 140982, "compaction_time_cpu_micros": 21317, "output_level": 6, "num_output_files": 1, "total_output_size": 10575362, "num_input_records": 7322, "num_output_records": 6791, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847238890214, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847238890960, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.747039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.891096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.891101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.891103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.891105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:13:58.891106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:13:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:13:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:13:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:13:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:13:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:13:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:13:59.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:13:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Jan 31 03:14:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:01.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:01.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 339 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 70 op/s
Jan 31 03:14:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:02.312 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:02.313 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:14:02 np0005603621 nova_compute[247399]: 2026-01-31 08:14:02.383 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:02 np0005603621 nova_compute[247399]: 2026-01-31 08:14:02.892 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:03.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 345 MiB data, 895 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 560 KiB/s wr, 80 op/s
Jan 31 03:14:03 np0005603621 nova_compute[247399]: 2026-01-31 08:14:03.779 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:04.315 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:05 np0005603621 nova_compute[247399]: 2026-01-31 08:14:05.760 247403 DEBUG nova.compute.manager [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:14:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 354 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 55 op/s
Jan 31 03:14:05 np0005603621 nova_compute[247399]: 2026-01-31 08:14:05.903 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:05 np0005603621 nova_compute[247399]: 2026-01-31 08:14:05.904 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.074 247403 DEBUG nova.objects.instance [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8a8d8223-9051-487a-a4d6-a33911813797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.141 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.141 247403 INFO nova.compute.claims [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.142 247403 DEBUG nova.objects.instance [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lazy-loading 'resources' on Instance uuid 8a8d8223-9051-487a-a4d6-a33911813797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.292 247403 DEBUG nova.objects.instance [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8a8d8223-9051-487a-a4d6-a33911813797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.503 247403 INFO nova.compute.resource_tracker [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating resource usage from migration a5724d08-b5e3-438b-b19e-d4b60aac6c87#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.503 247403 DEBUG nova.compute.resource_tracker [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Starting to track incoming migration a5724d08-b5e3-438b-b19e-d4b60aac6c87 with flavor f75c4aee-d826-4343-a7e3-f06a4b21de52 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:14:06 np0005603621 nova_compute[247399]: 2026-01-31 08:14:06.606 247403 DEBUG oslo_concurrency.processutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038731049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:07 np0005603621 nova_compute[247399]: 2026-01-31 08:14:07.029 247403 DEBUG oslo_concurrency.processutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:07 np0005603621 nova_compute[247399]: 2026-01-31 08:14:07.034 247403 DEBUG nova.compute.provider_tree [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:14:07 np0005603621 nova_compute[247399]: 2026-01-31 08:14:07.393 247403 DEBUG nova.scheduler.client.report [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:14:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:07.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:07.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 393 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 101 op/s
Jan 31 03:14:07 np0005603621 nova_compute[247399]: 2026-01-31 08:14:07.817 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:07 np0005603621 nova_compute[247399]: 2026-01-31 08:14:07.818 247403 INFO nova.compute.manager [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Migrating#033[00m
Jan 31 03:14:07 np0005603621 nova_compute[247399]: 2026-01-31 08:14:07.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:08 np0005603621 nova_compute[247399]: 2026-01-31 08:14:08.782 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:09.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 418 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 178 op/s
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4ee661be-0579-4ee0-bf60-3e5b7e4541fc does not exist
Jan 31 03:14:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 440ea003-daec-42e0-bf78-3f207041021f does not exist
Jan 31 03:14:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5eed6071-1468-472f-bcca-159b7b2d225b does not exist
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:14:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:11.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:11.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 418 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 176 op/s
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:14:11 np0005603621 podman[306028]: 2026-01-31 08:14:11.860764654 +0000 UTC m=+0.023700784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:14:11 np0005603621 podman[306028]: 2026-01-31 08:14:11.99788072 +0000 UTC m=+0.160816770 container create aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:14:12 np0005603621 systemd[1]: Started libpod-conmon-aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641.scope.
Jan 31 03:14:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:12 np0005603621 podman[306028]: 2026-01-31 08:14:12.149592594 +0000 UTC m=+0.312528674 container init aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:14:12 np0005603621 podman[306028]: 2026-01-31 08:14:12.157490541 +0000 UTC m=+0.320426611 container start aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:14:12 np0005603621 compassionate_margulis[306045]: 167 167
Jan 31 03:14:12 np0005603621 systemd[1]: libpod-aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641.scope: Deactivated successfully.
Jan 31 03:14:12 np0005603621 podman[306028]: 2026-01-31 08:14:12.200150808 +0000 UTC m=+0.363086858 container attach aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:14:12 np0005603621 podman[306028]: 2026-01-31 08:14:12.200600951 +0000 UTC m=+0.363537001 container died aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6b6749b96379820a82cace2f7af00e4248a4004b0940e8f3deb5288161613147-merged.mount: Deactivated successfully.
Jan 31 03:14:12 np0005603621 podman[306028]: 2026-01-31 08:14:12.369031679 +0000 UTC m=+0.531967739 container remove aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:14:12 np0005603621 systemd[1]: libpod-conmon-aa932866c0b7ab0cc5533c56c450c4669342f608fc7aec75e484e722164c8641.scope: Deactivated successfully.
Jan 31 03:14:12 np0005603621 nova_compute[247399]: 2026-01-31 08:14:12.461 247403 DEBUG nova.compute.manager [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:14:12 np0005603621 podman[306071]: 2026-01-31 08:14:12.479460759 +0000 UTC m=+0.035773512 container create 07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:14:12 np0005603621 systemd[1]: Started libpod-conmon-07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2.scope.
Jan 31 03:14:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edc52e1c03175549062878dad7e6345fc9b747829d2bdd44deafedfcc353014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edc52e1c03175549062878dad7e6345fc9b747829d2bdd44deafedfcc353014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edc52e1c03175549062878dad7e6345fc9b747829d2bdd44deafedfcc353014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edc52e1c03175549062878dad7e6345fc9b747829d2bdd44deafedfcc353014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edc52e1c03175549062878dad7e6345fc9b747829d2bdd44deafedfcc353014/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:12 np0005603621 podman[306071]: 2026-01-31 08:14:12.545545059 +0000 UTC m=+0.101857812 container init 07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:14:12 np0005603621 podman[306071]: 2026-01-31 08:14:12.463610643 +0000 UTC m=+0.019923416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:14:12 np0005603621 podman[306071]: 2026-01-31 08:14:12.559266599 +0000 UTC m=+0.115579352 container start 07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:14:12 np0005603621 podman[306071]: 2026-01-31 08:14:12.562457669 +0000 UTC m=+0.118770442 container attach 07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:14:12 np0005603621 podman[306087]: 2026-01-31 08:14:12.58034323 +0000 UTC m=+0.052639840 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 03:14:12 np0005603621 nova_compute[247399]: 2026-01-31 08:14:12.784 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:12 np0005603621 nova_compute[247399]: 2026-01-31 08:14:12.785 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:12 np0005603621 nova_compute[247399]: 2026-01-31 08:14:12.936 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:12 np0005603621 nova_compute[247399]: 2026-01-31 08:14:12.996 247403 DEBUG nova.objects.instance [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_requests' on Instance uuid e54ff9a1-d1c9-4792-a837-076e8289ee23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.058 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.058 247403 INFO nova.compute.claims [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.059 247403 DEBUG nova.objects.instance [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'resources' on Instance uuid e54ff9a1-d1c9-4792-a837-076e8289ee23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.142 247403 DEBUG nova.objects.instance [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid e54ff9a1-d1c9-4792-a837-076e8289ee23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:13 np0005603621 xenodochial_hermann[306085]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:14:13 np0005603621 xenodochial_hermann[306085]: --> relative data size: 1.0
Jan 31 03:14:13 np0005603621 xenodochial_hermann[306085]: --> All data devices are unavailable
Jan 31 03:14:13 np0005603621 podman[306071]: 2026-01-31 08:14:13.324859057 +0000 UTC m=+0.881171820 container died 07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:14:13 np0005603621 systemd[1]: libpod-07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2.scope: Deactivated successfully.
Jan 31 03:14:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5edc52e1c03175549062878dad7e6345fc9b747829d2bdd44deafedfcc353014-merged.mount: Deactivated successfully.
Jan 31 03:14:13 np0005603621 systemd-logind[818]: New session 61 of user nova.
Jan 31 03:14:13 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:14:13 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:14:13 np0005603621 podman[306071]: 2026-01-31 08:14:13.371276361 +0000 UTC m=+0.927589114 container remove 07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:14:13 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:14:13 np0005603621 systemd[1]: libpod-conmon-07c7646ebdf4dc0ab40227e767071db516e1bc5218d69db63f361191c34e7bf2.scope: Deactivated successfully.
Jan 31 03:14:13 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:14:13 np0005603621 podman[306120]: 2026-01-31 08:14:13.454626563 +0000 UTC m=+0.106934251 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.498 247403 INFO nova.compute.resource_tracker [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating resource usage from migration 573cb6b9-9e94-474a-9cc7-e9a6e0bcfd43#033[00m
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.498 247403 DEBUG nova.compute.resource_tracker [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Starting to track incoming migration 573cb6b9-9e94-474a-9cc7-e9a6e0bcfd43 with flavor f75c4aee-d826-4343-a7e3-f06a4b21de52 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:14:13 np0005603621 systemd[306149]: Queued start job for default target Main User Target.
Jan 31 03:14:13 np0005603621 systemd[306149]: Created slice User Application Slice.
Jan 31 03:14:13 np0005603621 systemd[306149]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:14:13 np0005603621 systemd[306149]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:14:13 np0005603621 systemd[306149]: Reached target Paths.
Jan 31 03:14:13 np0005603621 systemd[306149]: Reached target Timers.
Jan 31 03:14:13 np0005603621 systemd[306149]: Starting D-Bus User Message Bus Socket...
Jan 31 03:14:13 np0005603621 systemd[306149]: Starting Create User's Volatile Files and Directories...
Jan 31 03:14:13 np0005603621 systemd[306149]: Finished Create User's Volatile Files and Directories.
Jan 31 03:14:13 np0005603621 systemd[306149]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:14:13 np0005603621 systemd[306149]: Reached target Sockets.
Jan 31 03:14:13 np0005603621 systemd[306149]: Reached target Basic System.
Jan 31 03:14:13 np0005603621 systemd[306149]: Reached target Main User Target.
Jan 31 03:14:13 np0005603621 systemd[306149]: Startup finished in 123ms.
Jan 31 03:14:13 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:14:13 np0005603621 systemd[1]: Started Session 61 of User nova.
Jan 31 03:14:13 np0005603621 systemd[1]: session-61.scope: Deactivated successfully.
Jan 31 03:14:13 np0005603621 systemd-logind[818]: Session 61 logged out. Waiting for processes to exit.
Jan 31 03:14:13 np0005603621 systemd-logind[818]: Removed session 61.
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.655 247403 DEBUG oslo_concurrency.processutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:13.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:13.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:13 np0005603621 systemd-logind[818]: New session 63 of user nova.
Jan 31 03:14:13 np0005603621 systemd[1]: Started Session 63 of User nova.
Jan 31 03:14:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 418 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 177 op/s
Jan 31 03:14:13 np0005603621 nova_compute[247399]: 2026-01-31 08:14:13.784 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:13 np0005603621 systemd[1]: session-63.scope: Deactivated successfully.
Jan 31 03:14:13 np0005603621 systemd-logind[818]: Session 63 logged out. Waiting for processes to exit.
Jan 31 03:14:13 np0005603621 systemd-logind[818]: Removed session 63.
Jan 31 03:14:13 np0005603621 podman[306343]: 2026-01-31 08:14:13.906586453 +0000 UTC m=+0.032893891 container create d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kalam, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:13 np0005603621 systemd[1]: Started libpod-conmon-d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74.scope.
Jan 31 03:14:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:13 np0005603621 podman[306343]: 2026-01-31 08:14:13.975329108 +0000 UTC m=+0.101636566 container init d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kalam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:14:13 np0005603621 podman[306343]: 2026-01-31 08:14:13.980978725 +0000 UTC m=+0.107286153 container start d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:14:13 np0005603621 silly_kalam[306359]: 167 167
Jan 31 03:14:13 np0005603621 systemd[1]: libpod-d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74.scope: Deactivated successfully.
Jan 31 03:14:13 np0005603621 podman[306343]: 2026-01-31 08:14:13.986395745 +0000 UTC m=+0.112703403 container attach d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:14:13 np0005603621 podman[306343]: 2026-01-31 08:14:13.986843518 +0000 UTC m=+0.113150976 container died d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kalam, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:14:13 np0005603621 podman[306343]: 2026-01-31 08:14:13.891867802 +0000 UTC m=+0.018175260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:14:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-18bbbfb8122eb322a54eaa6218bba04d01daf2a2e95afefea068633ae9b6f8d0-merged.mount: Deactivated successfully.
Jan 31 03:14:14 np0005603621 podman[306343]: 2026-01-31 08:14:14.016297641 +0000 UTC m=+0.142605079 container remove d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:14:14 np0005603621 systemd[1]: libpod-conmon-d1442646324b4e94d81f827ae11a5aeee18731c3f48723b54226f51bdedafb74.scope: Deactivated successfully.
Jan 31 03:14:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623047336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:14 np0005603621 nova_compute[247399]: 2026-01-31 08:14:14.084 247403 DEBUG oslo_concurrency.processutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:14 np0005603621 nova_compute[247399]: 2026-01-31 08:14:14.091 247403 DEBUG nova.compute.provider_tree [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:14:14 np0005603621 podman[306386]: 2026-01-31 08:14:14.137089156 +0000 UTC m=+0.036857516 container create 9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:14:14 np0005603621 systemd[1]: Started libpod-conmon-9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129.scope.
Jan 31 03:14:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ae68e526328d00e1852f8a41695a1cf6009d4c7c70fb71610f441067bc5889/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ae68e526328d00e1852f8a41695a1cf6009d4c7c70fb71610f441067bc5889/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ae68e526328d00e1852f8a41695a1cf6009d4c7c70fb71610f441067bc5889/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38ae68e526328d00e1852f8a41695a1cf6009d4c7c70fb71610f441067bc5889/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:14 np0005603621 podman[306386]: 2026-01-31 08:14:14.120980791 +0000 UTC m=+0.020749171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:14:14 np0005603621 nova_compute[247399]: 2026-01-31 08:14:14.225 247403 DEBUG nova.scheduler.client.report [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:14:14 np0005603621 podman[306386]: 2026-01-31 08:14:14.250154808 +0000 UTC m=+0.149923188 container init 9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:14:14 np0005603621 podman[306386]: 2026-01-31 08:14:14.257420406 +0000 UTC m=+0.157188776 container start 9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:14:14 np0005603621 podman[306386]: 2026-01-31 08:14:14.26140658 +0000 UTC m=+0.161174940 container attach 9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:14:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:14:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433535383' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:14:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:14:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/433535383' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:14:14 np0005603621 nova_compute[247399]: 2026-01-31 08:14:14.778 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:14 np0005603621 nova_compute[247399]: 2026-01-31 08:14:14.779 247403 INFO nova.compute.manager [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Migrating#033[00m
Jan 31 03:14:14 np0005603621 great_vaughan[306402]: {
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:    "0": [
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:        {
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "devices": [
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "/dev/loop3"
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            ],
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "lv_name": "ceph_lv0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "lv_size": "7511998464",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "name": "ceph_lv0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "tags": {
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.cluster_name": "ceph",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.crush_device_class": "",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.encrypted": "0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.osd_id": "0",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.type": "block",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:                "ceph.vdo": "0"
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            },
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "type": "block",
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:            "vg_name": "ceph_vg0"
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:        }
Jan 31 03:14:14 np0005603621 great_vaughan[306402]:    ]
Jan 31 03:14:14 np0005603621 great_vaughan[306402]: }
Jan 31 03:14:15 np0005603621 systemd[1]: libpod-9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603621 podman[306386]: 2026-01-31 08:14:15.022088215 +0000 UTC m=+0.921856585 container died 9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:14:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-38ae68e526328d00e1852f8a41695a1cf6009d4c7c70fb71610f441067bc5889-merged.mount: Deactivated successfully.
Jan 31 03:14:15 np0005603621 podman[306386]: 2026-01-31 08:14:15.074472756 +0000 UTC m=+0.974241116 container remove 9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:14:15 np0005603621 systemd[1]: libpod-conmon-9d366ccec4b29284de146fbe62744e846d41a105ac5c785aab3da04d65818129.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.55590662 +0000 UTC m=+0.035050590 container create 6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:14:15 np0005603621 systemd[1]: Started libpod-conmon-6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337.scope.
Jan 31 03:14:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.621288469 +0000 UTC m=+0.100432469 container init 6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.626694668 +0000 UTC m=+0.105838638 container start 6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.629823626 +0000 UTC m=+0.108967616 container attach 6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:14:15 np0005603621 vigilant_yonath[306582]: 167 167
Jan 31 03:14:15 np0005603621 systemd[1]: libpod-6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.6318545 +0000 UTC m=+0.110998470 container died 6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.540348483 +0000 UTC m=+0.019492483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:14:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-53b5e25352d100ac980a8d84d3fdb9c3f9e1eba441523f718bc1d1899fbe0a71-merged.mount: Deactivated successfully.
Jan 31 03:14:15 np0005603621 podman[306565]: 2026-01-31 08:14:15.666806914 +0000 UTC m=+0.145950884 container remove 6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:14:15 np0005603621 systemd[1]: libpod-conmon-6beddf7bb5f54209e6c1a0f099b8b6253d5afc8b8da62444360717795f3cf337.scope: Deactivated successfully.
Jan 31 03:14:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:15.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 418 MiB data, 957 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 171 op/s
Jan 31 03:14:15 np0005603621 podman[306605]: 2026-01-31 08:14:15.782106897 +0000 UTC m=+0.037226048 container create 306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:15 np0005603621 systemd[1]: Started libpod-conmon-306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532.scope.
Jan 31 03:14:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5b60701b90d10ee743e56fc42e7ebd2a03bc492466235ba84f7fb54eba0e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5b60701b90d10ee743e56fc42e7ebd2a03bc492466235ba84f7fb54eba0e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5b60701b90d10ee743e56fc42e7ebd2a03bc492466235ba84f7fb54eba0e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d5b60701b90d10ee743e56fc42e7ebd2a03bc492466235ba84f7fb54eba0e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:15 np0005603621 podman[306605]: 2026-01-31 08:14:15.766698555 +0000 UTC m=+0.021817706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:14:15 np0005603621 podman[306605]: 2026-01-31 08:14:15.864038104 +0000 UTC m=+0.119157235 container init 306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:14:15 np0005603621 podman[306605]: 2026-01-31 08:14:15.869894098 +0000 UTC m=+0.125013229 container start 306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:14:15 np0005603621 podman[306605]: 2026-01-31 08:14:15.872779708 +0000 UTC m=+0.127898839 container attach 306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]: {
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:        "osd_id": 0,
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:        "type": "bluestore"
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]:    }
Jan 31 03:14:16 np0005603621 busy_lumiere[306621]: }
Jan 31 03:14:16 np0005603621 systemd[1]: libpod-306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532.scope: Deactivated successfully.
Jan 31 03:14:16 np0005603621 podman[306605]: 2026-01-31 08:14:16.6755427 +0000 UTC m=+0.930661831 container died 306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:14:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a9d5b60701b90d10ee743e56fc42e7ebd2a03bc492466235ba84f7fb54eba0e4-merged.mount: Deactivated successfully.
Jan 31 03:14:16 np0005603621 podman[306605]: 2026-01-31 08:14:16.721926813 +0000 UTC m=+0.977045944 container remove 306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:14:16 np0005603621 systemd[1]: libpod-conmon-306a6bcdc1129bbc5c9f1d8ba14a48d6ad6d5fa7687d1ea401dc0a8fb9352532.scope: Deactivated successfully.
Jan 31 03:14:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:14:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:14:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev db730abd-68c8-41b6-8da5-5a6e1bc761b0 does not exist
Jan 31 03:14:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b5d353a1-2889-4f5b-a3cc-ef5571ee2034 does not exist
Jan 31 03:14:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 54f4ea32-296d-4b46-b303-beab8a855d50 does not exist
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.527 247403 DEBUG nova.compute.manager [req-b8c142cd-7f56-4e1a-a12c-40ca6b02efe7 req-91c9f679-b17a-4028-9735-3be4f57fc29a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-unplugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.528 247403 DEBUG oslo_concurrency.lockutils [req-b8c142cd-7f56-4e1a-a12c-40ca6b02efe7 req-91c9f679-b17a-4028-9735-3be4f57fc29a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.528 247403 DEBUG oslo_concurrency.lockutils [req-b8c142cd-7f56-4e1a-a12c-40ca6b02efe7 req-91c9f679-b17a-4028-9735-3be4f57fc29a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.529 247403 DEBUG oslo_concurrency.lockutils [req-b8c142cd-7f56-4e1a-a12c-40ca6b02efe7 req-91c9f679-b17a-4028-9735-3be4f57fc29a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.529 247403 DEBUG nova.compute.manager [req-b8c142cd-7f56-4e1a-a12c-40ca6b02efe7 req-91c9f679-b17a-4028-9735-3be4f57fc29a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] No waiting events found dispatching network-vif-unplugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.529 247403 WARNING nova.compute.manager [req-b8c142cd-7f56-4e1a-a12c-40ca6b02efe7 req-91c9f679-b17a-4028-9735-3be4f57fc29a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received unexpected event network-vif-unplugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:14:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:17.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:17.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:14:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 425 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Jan 31 03:14:17 np0005603621 nova_compute[247399]: 2026-01-31 08:14:17.963 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:18 np0005603621 nova_compute[247399]: 2026-01-31 08:14:18.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.597 247403 INFO nova.network.neutron [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating port f1b92dea-fcbf-4fdc-a875-d4273610d4c5 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:14:19 np0005603621 systemd-logind[818]: New session 64 of user nova.
Jan 31 03:14:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:19.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:19 np0005603621 systemd[1]: Started Session 64 of User nova.
Jan 31 03:14:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:19.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 150 op/s
Jan 31 03:14:19 np0005603621 systemd[1]: session-64.scope: Deactivated successfully.
Jan 31 03:14:19 np0005603621 systemd-logind[818]: Session 64 logged out. Waiting for processes to exit.
Jan 31 03:14:19 np0005603621 systemd-logind[818]: Removed session 64.
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.793 247403 DEBUG nova.compute.manager [req-24beae67-d7a8-4637-b383-ce0eed0eac73 req-3be78623-ab1e-4f5c-af60-f815084b43bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.794 247403 DEBUG oslo_concurrency.lockutils [req-24beae67-d7a8-4637-b383-ce0eed0eac73 req-3be78623-ab1e-4f5c-af60-f815084b43bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.794 247403 DEBUG oslo_concurrency.lockutils [req-24beae67-d7a8-4637-b383-ce0eed0eac73 req-3be78623-ab1e-4f5c-af60-f815084b43bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.794 247403 DEBUG oslo_concurrency.lockutils [req-24beae67-d7a8-4637-b383-ce0eed0eac73 req-3be78623-ab1e-4f5c-af60-f815084b43bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.794 247403 DEBUG nova.compute.manager [req-24beae67-d7a8-4637-b383-ce0eed0eac73 req-3be78623-ab1e-4f5c-af60-f815084b43bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] No waiting events found dispatching network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:19 np0005603621 nova_compute[247399]: 2026-01-31 08:14:19.795 247403 WARNING nova.compute.manager [req-24beae67-d7a8-4637-b383-ce0eed0eac73 req-3be78623-ab1e-4f5c-af60-f815084b43bf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received unexpected event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:14:19 np0005603621 systemd-logind[818]: New session 65 of user nova.
Jan 31 03:14:19 np0005603621 systemd[1]: Started Session 65 of User nova.
Jan 31 03:14:19 np0005603621 systemd[1]: session-65.scope: Deactivated successfully.
Jan 31 03:14:19 np0005603621 systemd-logind[818]: Session 65 logged out. Waiting for processes to exit.
Jan 31 03:14:19 np0005603621 systemd-logind[818]: Removed session 65.
Jan 31 03:14:21 np0005603621 nova_compute[247399]: 2026-01-31 08:14:21.095 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquiring lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:21 np0005603621 nova_compute[247399]: 2026-01-31 08:14:21.096 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquired lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:21 np0005603621 nova_compute[247399]: 2026-01-31 08:14:21.096 247403 DEBUG nova.network.neutron [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:14:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:21.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 03:14:22 np0005603621 nova_compute[247399]: 2026-01-31 08:14:22.191 247403 DEBUG nova.compute.manager [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-changed-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:22 np0005603621 nova_compute[247399]: 2026-01-31 08:14:22.191 247403 DEBUG nova.compute.manager [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Refreshing instance network info cache due to event network-changed-f1b92dea-fcbf-4fdc-a875-d4273610d4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:14:22 np0005603621 nova_compute[247399]: 2026-01-31 08:14:22.192 247403 DEBUG oslo_concurrency.lockutils [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:22 np0005603621 nova_compute[247399]: 2026-01-31 08:14:22.965 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:23.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:23.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 03:14:23 np0005603621 nova_compute[247399]: 2026-01-31 08:14:23.790 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.361 247403 DEBUG nova.compute.manager [req-f91ef9c1-0a22-48f6-b3a9-634f28e52e23 req-9ca50edc-c8b8-425d-9c7d-4df7737f92df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-unplugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.361 247403 DEBUG oslo_concurrency.lockutils [req-f91ef9c1-0a22-48f6-b3a9-634f28e52e23 req-9ca50edc-c8b8-425d-9c7d-4df7737f92df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.362 247403 DEBUG oslo_concurrency.lockutils [req-f91ef9c1-0a22-48f6-b3a9-634f28e52e23 req-9ca50edc-c8b8-425d-9c7d-4df7737f92df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.362 247403 DEBUG oslo_concurrency.lockutils [req-f91ef9c1-0a22-48f6-b3a9-634f28e52e23 req-9ca50edc-c8b8-425d-9c7d-4df7737f92df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.362 247403 DEBUG nova.compute.manager [req-f91ef9c1-0a22-48f6-b3a9-634f28e52e23 req-9ca50edc-c8b8-425d-9c7d-4df7737f92df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] No waiting events found dispatching network-vif-unplugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.362 247403 WARNING nova.compute.manager [req-f91ef9c1-0a22-48f6-b3a9-634f28e52e23 req-9ca50edc-c8b8-425d-9c7d-4df7737f92df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received unexpected event network-vif-unplugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:14:24 np0005603621 nova_compute[247399]: 2026-01-31 08:14:24.833 247403 DEBUG nova.network.neutron [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating instance_info_cache with network_info: [{"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.041 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Releasing lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.047 247403 DEBUG oslo_concurrency.lockutils [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.047 247403 DEBUG nova.network.neutron [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Refreshing network info cache for port f1b92dea-fcbf-4fdc-a875-d4273610d4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.196 247403 INFO nova.network.neutron [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating port db231dc0-94bd-47c5-bc4c-f139648e2cfa with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.514 247403 DEBUG os_brick.utils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.516 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.526 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.526 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[c42d4949-c7d0-4ab4-85b2-8e785a75abed]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.528 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.536 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.537 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[8a948fbc-56a4-446b-b98a-46b5756ad3b8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.538 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.547 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.547 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[fdae4c74-8e5e-4277-890c-992d5f00257d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.549 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[77215ec5-4539-446c-be46-c40eeea8c374]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.550 247403 DEBUG oslo_concurrency.processutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.576 247403 DEBUG oslo_concurrency.processutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.580 247403 DEBUG os_brick.initiator.connectors.lightos [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.581 247403 DEBUG os_brick.initiator.connectors.lightos [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.581 247403 DEBUG os_brick.initiator.connectors.lightos [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:14:25 np0005603621 nova_compute[247399]: 2026-01-31 08:14:25.582 247403 DEBUG os_brick.utils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:14:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:25.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 03:14:26 np0005603621 nova_compute[247399]: 2026-01-31 08:14:26.683 247403 DEBUG nova.compute.manager [req-23960eff-7331-4ffc-aadc-54fecf0f461f req-47bab3ef-1b00-4e2f-9a16-34dc49f03ad5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:26 np0005603621 nova_compute[247399]: 2026-01-31 08:14:26.684 247403 DEBUG oslo_concurrency.lockutils [req-23960eff-7331-4ffc-aadc-54fecf0f461f req-47bab3ef-1b00-4e2f-9a16-34dc49f03ad5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:26 np0005603621 nova_compute[247399]: 2026-01-31 08:14:26.684 247403 DEBUG oslo_concurrency.lockutils [req-23960eff-7331-4ffc-aadc-54fecf0f461f req-47bab3ef-1b00-4e2f-9a16-34dc49f03ad5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:26 np0005603621 nova_compute[247399]: 2026-01-31 08:14:26.684 247403 DEBUG oslo_concurrency.lockutils [req-23960eff-7331-4ffc-aadc-54fecf0f461f req-47bab3ef-1b00-4e2f-9a16-34dc49f03ad5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:26 np0005603621 nova_compute[247399]: 2026-01-31 08:14:26.684 247403 DEBUG nova.compute.manager [req-23960eff-7331-4ffc-aadc-54fecf0f461f req-47bab3ef-1b00-4e2f-9a16-34dc49f03ad5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] No waiting events found dispatching network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:26 np0005603621 nova_compute[247399]: 2026-01-31 08:14:26.685 247403 WARNING nova.compute.manager [req-23960eff-7331-4ffc-aadc-54fecf0f461f req-47bab3ef-1b00-4e2f-9a16-34dc49f03ad5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received unexpected event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:14:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3590606405' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.665 247403 DEBUG nova.network.neutron [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updated VIF entry in instance network info cache for port f1b92dea-fcbf-4fdc-a875-d4273610d4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.666 247403 DEBUG nova.network.neutron [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating instance_info_cache with network_info: [{"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:27.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.760 247403 DEBUG oslo_concurrency.lockutils [req-30966721-4382-4f62-8300-367b0ad30715 req-deb04a2a-0fee-43bd-ac14-13651554520a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.771 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.772 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.773 247403 INFO nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Creating image(s)#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.773 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.773 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Ensure instance console log exists: /var/lib/nova/instances/8a8d8223-9051-487a-a4d6-a33911813797/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.774 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.774 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.775 247403 DEBUG oslo_concurrency.lockutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.777 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Start _get_guest_xml network_info=[{"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2006849245-network", "vif_mac": "fa:16:3e:33:f6:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a072271f-7f43-470f-93ee-c6396eaabeba', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a072271f-7f43-470f-93ee-c6396eaabeba', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '8a8d8223-9051-487a-a4d6-a33911813797', 'attached_at': '2026-01-31T08:14:26.000000', 'detached_at': '', 'volume_id': 'a072271f-7f43-470f-93ee-c6396eaabeba', 'serial': 'a072271f-7f43-470f-93ee-c6396eaabeba'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': True, 'attachment_id': '22941b24-fd9c-4016-a0c7-7f7c7feacdf9', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.781 247403 WARNING nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:14:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.789 247403 DEBUG nova.virt.libvirt.host [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.790 247403 DEBUG nova.virt.libvirt.host [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.793 247403 DEBUG nova.virt.libvirt.host [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.793 247403 DEBUG nova.virt.libvirt.host [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.795 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.795 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f75c4aee-d826-4343-a7e3-f06a4b21de52',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.796 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.796 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.796 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.796 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.796 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.797 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.797 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.797 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.798 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.798 247403 DEBUG nova.virt.hardware [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.798 247403 DEBUG nova.objects.instance [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8a8d8223-9051-487a-a4d6-a33911813797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:27 np0005603621 nova_compute[247399]: 2026-01-31 08:14:27.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.086 247403 DEBUG oslo_concurrency.processutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.176 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "refresh_cache-e54ff9a1-d1c9-4792-a837-076e8289ee23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.177 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquired lock "refresh_cache-e54ff9a1-d1c9-4792-a837-076e8289ee23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.177 247403 DEBUG nova.network.neutron [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.402 247403 DEBUG nova.compute.manager [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-changed-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.403 247403 DEBUG nova.compute.manager [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Refreshing instance network info cache due to event network-changed-db231dc0-94bd-47c5-bc4c-f139648e2cfa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.403 247403 DEBUG oslo_concurrency.lockutils [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e54ff9a1-d1c9-4792-a837-076e8289ee23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902552075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.510 247403 DEBUG oslo_concurrency.processutils [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
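[editor's note] The entry above shows nova shelling out to `ceph mon dump --format=json` (dispatched by the mon leader at 08:14:28 and returning 0 in 0.424s). What the RBD-backed storage path ultimately needs from that JSON is the set of monitor endpoints, which reappear later in this log as `<host name=... port="6789"/>` elements in the guest XML. A minimal, hypothetical sketch of extracting those endpoints — `parse_mon_hosts` and the sample monmap are illustrative, not nova's actual code, and the exact monmap fields can vary by Ceph release:

```python
import json

def parse_mon_hosts(monmap_json: str):
    """Extract (address, port) pairs from `ceph mon dump --format=json` output.

    Sketch of what an RBD client needs from the monmap; field names are an
    assumption based on common Ceph output, not taken from this deployment.
    """
    monmap = json.loads(monmap_json)
    hosts = []
    for mon in monmap.get("mons", []):
        # A v1 monitor address typically looks like "192.168.122.100:6789/0";
        # strip the trailing "/nonce" and split host from port.
        addr = mon["addr"].split("/")[0]
        host, _, port = addr.rpartition(":")
        hosts.append((host, int(port)))
    return hosts

# Sample shaped like the monitors this log later lists in the disk XML
sample = json.dumps({"mons": [
    {"name": "a", "addr": "192.168.122.100:6789/0"},
    {"name": "b", "addr": "192.168.122.101:6789/0"},
]})
print(parse_mon_hosts(sample))  # [('192.168.122.100', 6789), ('192.168.122.101', 6789)]
```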
Jan 31 03:14:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.793 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.892 247403 DEBUG nova.virt.libvirt.vif [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-865768110',display_name='tempest-ServerActionsTestOtherA-server-865768110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-865768110',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBmeoiqwLriErudm3CCwwTVCZJNSn8sMBdf3DG0cLKOiUOsjd6g3ELaDiv5VtlA1MtIeSB0EtvnrgQQVESwaz68a/c+EzXdmxnZNxj//jq+4bu6dBh/9tuewDagOu34T9w==',key_name='tempest-keypair-710425515',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9c03fec1b3664105996aa979e226d8f8',ramdisk_id='',reservation_id='r-y7gogkds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1768827668',owner_user_name='tempest-ServerActionsTestOtherA-1768827668-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='12a823bd7c6e4cf492ebf6c1d002a91f',uuid=8a8d8223-9051-487a-a4d6-a33911813797,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2006849245-network", "vif_mac": "fa:16:3e:33:f6:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.893 247403 DEBUG nova.network.os_vif_util [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Converting VIF {"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2006849245-network", "vif_mac": "fa:16:3e:33:f6:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.893 247403 DEBUG nova.network.os_vif_util [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.895 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <uuid>8a8d8223-9051-487a-a4d6-a33911813797</uuid>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <name>instance-0000005c</name>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <memory>196608</memory>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerActionsTestOtherA-server-865768110</nova:name>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:14:27</nova:creationTime>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.micro">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:memory>192</nova:memory>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:user uuid="12a823bd7c6e4cf492ebf6c1d002a91f">tempest-ServerActionsTestOtherA-1768827668-project-member</nova:user>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:project uuid="9c03fec1b3664105996aa979e226d8f8">tempest-ServerActionsTestOtherA-1768827668</nova:project>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <nova:port uuid="f1b92dea-fcbf-4fdc-a875-d4273610d4c5">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <entry name="serial">8a8d8223-9051-487a-a4d6-a33911813797</entry>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <entry name="uuid">8a8d8223-9051-487a-a4d6-a33911813797</entry>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/8a8d8223-9051-487a-a4d6-a33911813797_disk.config">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-a072271f-7f43-470f-93ee-c6396eaabeba">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <serial>a072271f-7f43-470f-93ee-c6396eaabeba</serial>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:33:f6:b6"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <target dev="tapf1b92dea-fc"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/8a8d8223-9051-487a-a4d6-a33911813797/console.log" append="off"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:14:28 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:14:28 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:14:28 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:14:28 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
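[editor's note] The `_get_guest_xml` dump above is the full libvirt domain definition nova generated for the resize: two RBD-backed disks (the config-drive CD-ROM on `sda` and the Cinder volume on `vda`), an OVS-backed tap interface, and the q35/PCIe controller layout. A small sketch of pulling the disk-to-RBD-image mapping out of such a document with the standard library — the trimmed `DOMAIN_XML` below is a hypothetical excerpt of the dump, kept to one disk for brevity:

```python
import xml.etree.ElementTree as ET

# Trimmed illustration of the domain XML logged above (one disk shown)
DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-a072271f-7f43-470f-93ee-c6396eaabeba">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def rbd_disks(domain_xml: str):
    """Map each guest disk target (e.g. 'vda') to its backing RBD image name."""
    root = ET.fromstring(domain_xml)
    out = {}
    for disk in root.iter("disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and src.get("protocol") == "rbd" and tgt is not None:
            out[tgt.get("dev")] = src.get("name")
    return out

print(rbd_disks(DOMAIN_XML))  # {'vda': 'volumes/volume-a072271f-7f43-470f-93ee-c6396eaabeba'}
```

Run against the full dump, this would also report `sda` backed by `vms/8a8d8223-9051-487a-a4d6-a33911813797_disk.config`, matching the BDM messages that follow at 08:14:29.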
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.896 247403 DEBUG nova.virt.libvirt.vif [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-865768110',display_name='tempest-ServerActionsTestOtherA-server-865768110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-865768110',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBmeoiqwLriErudm3CCwwTVCZJNSn8sMBdf3DG0cLKOiUOsjd6g3ELaDiv5VtlA1MtIeSB0EtvnrgQQVESwaz68a/c+EzXdmxnZNxj//jq+4bu6dBh/9tuewDagOu34T9w==',key_name='tempest-keypair-710425515',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:13:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9c03fec1b3664105996aa979e226d8f8',ramdisk_id='',reservation_id='r-y7gogkds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-1768827668',owner_user_name='tempest-ServerActionsTestOtherA-1768827668-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='12a823bd7c6e4cf492ebf6c1d002a91f',uuid=8a8d8223-9051-487a-a4d6-a33911813797,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2006849245-network", "vif_mac": "fa:16:3e:33:f6:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.897 247403 DEBUG nova.network.os_vif_util [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Converting VIF {"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2006849245-network", "vif_mac": "fa:16:3e:33:f6:b6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.898 247403 DEBUG nova.network.os_vif_util [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.898 247403 DEBUG os_vif [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.898 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.899 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.899 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.903 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1b92dea-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.903 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1b92dea-fc, col_values=(('external_ids', {'iface-id': 'f1b92dea-fcbf-4fdc-a875-d4273610d4c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:f6:b6', 'vm-uuid': '8a8d8223-9051-487a-a4d6-a33911813797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.905 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:28 np0005603621 NetworkManager[49013]: <info>  [1769847268.9057] manager: (tapf1b92dea-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:28 np0005603621 nova_compute[247399]: 2026-01-31 08:14:28.913 247403 INFO os_vif [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc')#033[00m
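[editor's note] The plug sequence above runs three OVSDB transactions through ovsdbapp's native IDL: `AddBridgeCommand` (a no-op here, "Transaction caused no change", since br-int exists), then `AddPortCommand` plus a `DbSetCommand` writing the `external_ids` that OVN uses to match the interface to its logical port. A rough sketch of the equivalent `ovs-vsctl` invocations — illustrative only, since os-vif talks OVSDB directly rather than spawning the CLI, and the helper name is made up:

```python
def ovs_plug_cmds(bridge, dev, iface_id, mac, vm_uuid):
    """Build ovs-vsctl command lines approximating the logged transactions.

    The external_ids keys mirror the DbSetCommand above; 'iface-id' is what
    ovn-controller matches against the logical switch port when claiming it.
    """
    ext_ids = {
        "iface-id": iface_id,
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }
    set_args = [f"external_ids:{k}={v}" for k, v in ext_ids.items()]
    return [
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        ["ovs-vsctl", "--may-exist", "add-br", bridge,
         "--", "set", "Bridge", bridge, "datapath_type=system"],
        # AddPortCommand + DbSetCommand on the Interface row
        ["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
         "--", "set", "Interface", dev] + set_args,
    ]

for cmd in ovs_plug_cmds("br-int", "tapf1b92dea-fc",
                         "f1b92dea-fcbf-4fdc-a875-d4273610d4c5",
                         "fa:16:3e:33:f6:b6",
                         "8a8d8223-9051-487a-a4d6-a33911813797"):
    print(" ".join(cmd))
```

Once the `iface-id` lands in OVSDB, ovn-controller's "Claiming lport f1b92dea-..." messages at 08:14:29 below are the expected follow-on: the chassis binds the logical port whose name equals that iface-id.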
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.159 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.160 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.160 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] No VIF found with MAC fa:16:3e:33:f6:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.161 247403 INFO nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Using config drive#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.204 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.2478] manager: (tapf1b92dea-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Jan 31 03:14:29 np0005603621 kernel: tapf1b92dea-fc: entered promiscuous mode
Jan 31 03:14:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:29Z|00291|binding|INFO|Claiming lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for this chassis.
Jan 31 03:14:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:29Z|00292|binding|INFO|f1b92dea-fcbf-4fdc-a875-d4273610d4c5: Claiming fa:16:3e:33:f6:b6 10.100.0.3
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.255 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.257 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.2722] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.2734] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 31 03:14:29 np0005603621 systemd-udevd[306846]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:14:29 np0005603621 systemd-machined[212769]: New machine qemu-40-instance-0000005c.
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.2833] device (tapf1b92dea-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.2838] device (tapf1b92dea-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.314 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:f6:b6 10.100.0.3'], port_security=['fa:16:3e:33:f6:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a8d8223-9051-487a-a4d6-a33911813797', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f564452-5f08-4a1c-921e-f2daee9ec936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c03fec1b3664105996aa979e226d8f8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4d0e926d-7b9d-4115-9719-7f0d71edaace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d620dc35-e1b1-4011-a8c1-0995d2048b09, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f1b92dea-fcbf-4fdc-a875-d4273610d4c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.315 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f1b92dea-fcbf-4fdc-a875-d4273610d4c5 in datapath 1f564452-5f08-4a1c-921e-f2daee9ec936 bound to our chassis#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.318 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f564452-5f08-4a1c-921e-f2daee9ec936#033[00m
Jan 31 03:14:29 np0005603621 systemd[1]: Started Virtual Machine qemu-40-instance-0000005c.
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.326 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[04928082-979e-44bd-873f-57914e618305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.327 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1f564452-51 in ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.329 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1f564452-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.329 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e0d347-317d-43b5-8098-4a2f6ee4cf91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.330 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[92bd8826-1c48-4453-a337-71c5370e7a76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.340 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[620feb11-6ef7-4597-9531-16e6eab2e209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.351 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[932a2009-7ad5-48c4-bb58-d80cc8a5429e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.363 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:29Z|00293|binding|INFO|Setting lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 ovn-installed in OVS
Jan 31 03:14:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:29Z|00294|binding|INFO|Setting lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 up in Southbound
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.374 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7874265c-b59a-4b57-bcf8-1ac4ea9678d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.378 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[48ffe104-7ee5-4b48-80d5-fe9e009a829c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.3795] manager: (tap1f564452-50): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.401 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1cd221-6a65-4e78-9a2f-9e1f117839f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.403 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a127c37e-627a-4f02-ad82-c1ff161ebb91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.4199] device (tap1f564452-50): carrier: link connected
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.424 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a12b7699-e26a-44ce-a5e6-7d5c89ae1e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.440 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc13ca3-c567-4445-8ad7-bfca47f7747c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f564452-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:23:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657493, 'reachable_time': 36277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306880, 'error': None, 'target': 'ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.456 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[576c1379-fe68-4953-9828-d641b44da2d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:23e8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657493, 'tstamp': 657493}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306881, 'error': None, 'target': 'ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.471 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[25cb31ae-a3e3-4caf-ab44-a8db53f43779]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f564452-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:23:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657493, 'reachable_time': 36277, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306882, 'error': None, 'target': 'ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.495 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f618b305-4019-4578-aa18-13fbaffae8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.542 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0e2c45-0ab9-4e1a-a7b8-7729990ce2af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.543 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f564452-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.543 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.543 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f564452-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.545 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 kernel: tap1f564452-50: entered promiscuous mode
Jan 31 03:14:29 np0005603621 NetworkManager[49013]: <info>  [1769847269.5472] manager: (tap1f564452-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.547 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.549 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f564452-50, col_values=(('external_ids', {'iface-id': '5bb8c1b5-edce-4f6a-8164-58b7d89a3330'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.550 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:29Z|00295|binding|INFO|Releasing lport 5bb8c1b5-edce-4f6a-8164-58b7d89a3330 from this chassis (sb_readonly=0)
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.554 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1f564452-5f08-4a1c-921e-f2daee9ec936.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1f564452-5f08-4a1c-921e-f2daee9ec936.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.555 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.556 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3de06740-e2e7-4896-9d09-41c3b688c932]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.557 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-1f564452-5f08-4a1c-921e-f2daee9ec936
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/1f564452-5f08-4a1c-921e-f2daee9ec936.pid.haproxy
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 1f564452-5f08-4a1c-921e-f2daee9ec936
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:14:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:29.558 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936', 'env', 'PROCESS_TAG=haproxy-1f564452-5f08-4a1c-921e-f2daee9ec936', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1f564452-5f08-4a1c-921e-f2daee9ec936.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:14:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:29.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 1.5 MiB/s wr, 42 op/s
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.835 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847269.8350766, 8a8d8223-9051-487a-a4d6-a33911813797 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.837 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.838 247403 DEBUG nova.compute.manager [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.842 247403 INFO nova.virt.libvirt.driver [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Instance running successfully.#033[00m
Jan 31 03:14:29 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.844 247403 DEBUG nova.virt.libvirt.guest [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.844 247403 DEBUG nova.virt.libvirt.driver [None req-660a93ce-32a9-4519-a769-89f3f0907077 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:14:29 np0005603621 podman[306957]: 2026-01-31 08:14:29.877122139 +0000 UTC m=+0.056507141 container create ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:14:29 np0005603621 systemd[1]: Started libpod-conmon-ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571.scope.
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.926 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:14:29 np0005603621 nova_compute[247399]: 2026-01-31 08:14:29.930 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:14:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/796e98b83e3272c8ef56570ddc54499f9e0d27376944780125efb0521e46b788/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:29 np0005603621 podman[306957]: 2026-01-31 08:14:29.842099272 +0000 UTC m=+0.021484294 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:14:29 np0005603621 podman[306957]: 2026-01-31 08:14:29.954634378 +0000 UTC m=+0.134019390 container init ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:14:29 np0005603621 podman[306957]: 2026-01-31 08:14:29.958904121 +0000 UTC m=+0.138289133 container start ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:14:29 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [NOTICE]   (306976) : New worker (306978) forked
Jan 31 03:14:29 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [NOTICE]   (306976) : Loading success.
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.042 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.043 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847269.8354633, 8a8d8223-9051-487a-a4d6-a33911813797 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.044 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] VM Started (Lifecycle Event)#033[00m
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.083 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.086 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:14:30 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:14:30 np0005603621 systemd[306149]: Activating special unit Exit the Session...
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped target Main User Target.
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped target Basic System.
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped target Paths.
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped target Sockets.
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped target Timers.
Jan 31 03:14:30 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:14:30 np0005603621 systemd[306149]: Closed D-Bus User Message Bus Socket.
Jan 31 03:14:30 np0005603621 systemd[306149]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:14:30 np0005603621 systemd[306149]: Removed slice User Application Slice.
Jan 31 03:14:30 np0005603621 systemd[306149]: Reached target Shutdown.
Jan 31 03:14:30 np0005603621 systemd[306149]: Finished Exit the Session.
Jan 31 03:14:30 np0005603621 systemd[306149]: Reached target Exit the Session.
Jan 31 03:14:30 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:14:30 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:14:30 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:14:30 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:14:30 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:14:30 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:14:30 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.316 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:14:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:30.498 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:30.499 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:30.500 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:30 np0005603621 nova_compute[247399]: 2026-01-31 08:14:30.881 247403 DEBUG nova.network.neutron [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating instance_info_cache with network_info: [{"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.622 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Releasing lock "refresh_cache-e54ff9a1-d1c9-4792-a837-076e8289ee23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.626 247403 DEBUG oslo_concurrency.lockutils [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e54ff9a1-d1c9-4792-a837-076e8289ee23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.626 247403 DEBUG nova.network.neutron [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Refreshing network info cache for port db231dc0-94bd-47c5-bc4c-f139648e2cfa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.687 247403 DEBUG nova.compute.manager [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.688 247403 DEBUG oslo_concurrency.lockutils [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.688 247403 DEBUG oslo_concurrency.lockutils [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.688 247403 DEBUG oslo_concurrency.lockutils [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.688 247403 DEBUG nova.compute.manager [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] No waiting events found dispatching network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.689 247403 WARNING nova.compute.manager [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received unexpected event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.689 247403 DEBUG nova.compute.manager [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.689 247403 DEBUG oslo_concurrency.lockutils [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.689 247403 DEBUG oslo_concurrency.lockutils [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.690 247403 DEBUG oslo_concurrency.lockutils [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.690 247403 DEBUG nova.compute.manager [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] No waiting events found dispatching network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.690 247403 WARNING nova.compute.manager [req-584e6873-c3a5-4c1c-9abd-dc23819e64e7 req-00abe8c0-7a82-4de6-aa3c-ab86629a23a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received unexpected event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:14:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:31.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:31.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 1.4 KiB/s rd, 24 KiB/s wr, 3 op/s
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.922 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.923 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.924 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:14:31 np0005603621 nova_compute[247399]: 2026-01-31 08:14:31.924 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8a8d8223-9051-487a-a4d6-a33911813797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:32 np0005603621 nova_compute[247399]: 2026-01-31 08:14:32.004 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:14:32 np0005603621 nova_compute[247399]: 2026-01-31 08:14:32.007 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:14:32 np0005603621 nova_compute[247399]: 2026-01-31 08:14:32.007 247403 INFO nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Creating image(s)#033[00m
Jan 31 03:14:32 np0005603621 nova_compute[247399]: 2026-01-31 08:14:32.053 247403 DEBUG nova.storage.rbd_utils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] creating snapshot(nova-resize) on rbd image(e54ff9a1-d1c9-4792-a837-076e8289ee23_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:14:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 31 03:14:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 31 03:14:32 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.240 247403 DEBUG nova.objects.instance [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e54ff9a1-d1c9-4792-a837-076e8289ee23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.659 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.660 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Ensure instance console log exists: /var/lib/nova/instances/e54ff9a1-d1c9-4792-a837-076e8289ee23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.660 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.661 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.661 247403 DEBUG oslo_concurrency.lockutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.664 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Start _get_guest_xml network_info=[{"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:0d:e9:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.668 247403 WARNING nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.674 247403 DEBUG nova.virt.libvirt.host [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.675 247403 DEBUG nova.virt.libvirt.host [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.678 247403 DEBUG nova.virt.libvirt.host [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.678 247403 DEBUG nova.virt.libvirt.host [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.679 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.679 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f75c4aee-d826-4343-a7e3-f06a4b21de52',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.680 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.680 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.680 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.681 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.681 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.681 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.682 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.682 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.682 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.683 247403 DEBUG nova.virt.hardware [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.683 247403 DEBUG nova.objects.instance [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e54ff9a1-d1c9-4792-a837-076e8289ee23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 KiB/s wr, 85 op/s
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.827 247403 DEBUG oslo_concurrency.processutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:33 np0005603621 nova_compute[247399]: 2026-01-31 08:14:33.905 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42254516' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.256 247403 DEBUG oslo_concurrency.processutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.298 247403 DEBUG oslo_concurrency.processutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.673 247403 DEBUG nova.network.neutron [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updated VIF entry in instance network info cache for port db231dc0-94bd-47c5-bc4c-f139648e2cfa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.674 247403 DEBUG nova.network.neutron [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating instance_info_cache with network_info: [{"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491720430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.741 247403 DEBUG oslo_concurrency.processutils [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.743 247403 DEBUG nova.virt.libvirt.vif [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-436373101',display_name='tempest-ServerDiskConfigTestJSON-server-436373101',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-436373101',id=93,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-0t50bqgi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:24Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=e54ff9a1-d1c9-4792-a837-076e8289ee23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:0d:e9:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.743 247403 DEBUG nova.network.os_vif_util [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:0d:e9:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.744 247403 DEBUG nova.network.os_vif_util [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.747 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <uuid>e54ff9a1-d1c9-4792-a837-076e8289ee23</uuid>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <name>instance-0000005d</name>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <memory>196608</memory>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-436373101</nova:name>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:14:33</nova:creationTime>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.micro">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:memory>192</nova:memory>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:user uuid="111fdaf79c084a91902fe37a7a502020">tempest-ServerDiskConfigTestJSON-855158150-project-member</nova:user>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:project uuid="58e900992be7400fb940ca20f13e12d1">tempest-ServerDiskConfigTestJSON-855158150</nova:project>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <nova:port uuid="db231dc0-94bd-47c5-bc4c-f139648e2cfa">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <entry name="serial">e54ff9a1-d1c9-4792-a837-076e8289ee23</entry>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <entry name="uuid">e54ff9a1-d1c9-4792-a837-076e8289ee23</entry>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e54ff9a1-d1c9-4792-a837-076e8289ee23_disk">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e54ff9a1-d1c9-4792-a837-076e8289ee23_disk.config">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:0d:e9:1e"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <target dev="tapdb231dc0-94"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e54ff9a1-d1c9-4792-a837-076e8289ee23/console.log" append="off"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:14:34 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:14:34 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:14:34 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:14:34 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.753 247403 DEBUG nova.virt.libvirt.vif [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-436373101',display_name='tempest-ServerDiskConfigTestJSON-server-436373101',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-436373101',id=93,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:04Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-0t50bqgi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:24Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=e54ff9a1-d1c9-4792-a837-076e8289ee23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:0d:e9:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.753 247403 DEBUG nova.network.os_vif_util [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "vif_mac": "fa:16:3e:0d:e9:1e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.754 247403 DEBUG nova.network.os_vif_util [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.755 247403 DEBUG os_vif [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.755 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.756 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.757 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.759 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.759 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb231dc0-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.760 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb231dc0-94, col_values=(('external_ids', {'iface-id': 'db231dc0-94bd-47c5-bc4c-f139648e2cfa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:e9:1e', 'vm-uuid': 'e54ff9a1-d1c9-4792-a837-076e8289ee23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:34 np0005603621 NetworkManager[49013]: <info>  [1769847274.7633] manager: (tapdb231dc0-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.767 247403 DEBUG oslo_concurrency.lockutils [req-4df344fc-5dce-4636-8716-d2408f03970b req-d688831b-5b79-42ea-9c05-132e1703f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e54ff9a1-d1c9-4792-a837-076e8289ee23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.769 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.771 247403 INFO os_vif [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94')#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.847 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating instance_info_cache with network_info: [{"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.961 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-8a8d8223-9051-487a-a4d6-a33911813797" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:34 np0005603621 nova_compute[247399]: 2026-01-31 08:14:34.962 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.031 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.031 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.032 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] No VIF found with MAC fa:16:3e:0d:e9:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.033 247403 INFO nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Using config drive#033[00m
Jan 31 03:14:35 np0005603621 kernel: tapdb231dc0-94: entered promiscuous mode
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.109 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 NetworkManager[49013]: <info>  [1769847275.1105] manager: (tapdb231dc0-94): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Jan 31 03:14:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:35Z|00296|binding|INFO|Claiming lport db231dc0-94bd-47c5-bc4c-f139648e2cfa for this chassis.
Jan 31 03:14:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:35Z|00297|binding|INFO|db231dc0-94bd-47c5-bc4c-f139648e2cfa: Claiming fa:16:3e:0d:e9:1e 10.100.0.9
Jan 31 03:14:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:35Z|00298|binding|INFO|Setting lport db231dc0-94bd-47c5-bc4c-f139648e2cfa ovn-installed in OVS
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.125 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.127 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.128 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:35Z|00299|binding|INFO|Setting lport db231dc0-94bd-47c5-bc4c-f139648e2cfa up in Southbound
Jan 31 03:14:35 np0005603621 systemd-machined[212769]: New machine qemu-41-instance-0000005d.
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.135 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:e9:1e 10.100.0.9'], port_security=['fa:16:3e:0d:e9:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e54ff9a1-d1c9-4792-a837-076e8289ee23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=db231dc0-94bd-47c5-bc4c-f139648e2cfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.136 159734 INFO neutron.agent.ovn.metadata.agent [-] Port db231dc0-94bd-47c5-bc4c-f139648e2cfa in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 bound to our chassis#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.138 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f218695f-c744-4bd8-b2d8-122a920c7ca0#033[00m
Jan 31 03:14:35 np0005603621 systemd[1]: Started Virtual Machine qemu-41-instance-0000005d.
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.149 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8af996-26de-433e-a1e6-c8c556ec833a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.150 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf218695f-c1 in ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.152 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf218695f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.152 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff51682-d6ed-47ee-ba46-f76b503d1f0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.154 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f0d95f-4481-4a5e-99bf-05cf94840606]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 systemd-udevd[307160]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.166 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c9ab32-6d10-4d26-8435-4793904389d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 NetworkManager[49013]: <info>  [1769847275.1772] device (tapdb231dc0-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:14:35 np0005603621 NetworkManager[49013]: <info>  [1769847275.1777] device (tapdb231dc0-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.181 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f26c6d33-f966-4c67-a464-46d14f1fffa1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.206 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c25da77f-1677-4ad5-94eb-05f0866c2361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.211 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8003893e-fcc7-4e4b-819f-b58a28797798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 NetworkManager[49013]: <info>  [1769847275.2187] manager: (tapf218695f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.244 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[352a7885-54e2-489b-a314-fccfe152b1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.247 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6cd957-56cb-4062-93b4-0c7eac6e3f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 NetworkManager[49013]: <info>  [1769847275.2676] device (tapf218695f-c0): carrier: link connected
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.274 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1241e58d-9688-4bf4-ace4-17ad73b6d1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.289 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fff4627f-bdae-4d4e-8c7c-11924fe7b62b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658078, 'reachable_time': 39096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307191, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.301 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f15254c6-cfde-4003-be79-8bd3f69bdd84]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:830'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 658078, 'tstamp': 658078}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307192, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.314 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d900e877-ea36-4cf7-a8a9-82455977ddad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf218695f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:08:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658078, 'reachable_time': 39096, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307193, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.339 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a9dcce95-3f77-4b22-90b3-6dfeda08c97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.387 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.399 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0814a77c-1ef9-4326-bfee-dd81e47aa233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.400 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.401 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.401 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf218695f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.403 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 NetworkManager[49013]: <info>  [1769847275.4038] manager: (tapf218695f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Jan 31 03:14:35 np0005603621 kernel: tapf218695f-c0: entered promiscuous mode
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.406 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.407 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf218695f-c0, col_values=(('external_ids', {'iface-id': 'd3a551a2-38e3-48d3-bdee-f2493a79eca0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.408 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.408 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:35Z|00300|binding|INFO|Releasing lport d3a551a2-38e3-48d3-bdee-f2493a79eca0 from this chassis (sb_readonly=0)
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.409 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.412 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d089fda4-39a9-4865-9cb8-abdf8b4cd29c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.413 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.414 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f218695f-c744-4bd8-b2d8-122a920c7ca0.pid.haproxy
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f218695f-c744-4bd8-b2d8-122a920c7ca0
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:14:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:35.416 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'env', 'PROCESS_TAG=haproxy-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f218695f-c744-4bd8-b2d8-122a920c7ca0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.528 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.529 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.535 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.535 247403 INFO nova.compute.claims [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:14:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:35.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:35.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.785 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847275.7850718, e54ff9a1-d1c9-4792-a837-076e8289ee23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.786 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] VM Resumed (Lifecycle Event)
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.788 247403 DEBUG nova.compute.manager [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:14:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 94 op/s
Jan 31 03:14:35 np0005603621 podman[307262]: 2026-01-31 08:14:35.789728642 +0000 UTC m=+0.102293715 container create dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.791 247403 INFO nova.virt.libvirt.driver [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Instance running successfully.
Jan 31 03:14:35 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.798 247403 DEBUG nova.virt.libvirt.guest [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.799 247403 DEBUG nova.virt.libvirt.driver [None req-b580fca4-b0aa-4549-bf87-a7b0125bdea7 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Jan 31 03:14:35 np0005603621 podman[307262]: 2026-01-31 08:14:35.709769447 +0000 UTC m=+0.022334520 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.823 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:14:35 np0005603621 systemd[1]: Started libpod-conmon-dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d.scope.
Jan 31 03:14:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:14:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb000535557209dbb2f8944b9c238a4d8beab661629be7fa4863a87cb84aaa71/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:14:35 np0005603621 podman[307262]: 2026-01-31 08:14:35.89688788 +0000 UTC m=+0.209452953 container init dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 03:14:35 np0005603621 podman[307262]: 2026-01-31 08:14:35.903711564 +0000 UTC m=+0.216276637 container start dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.971 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:14:35 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [NOTICE]   (307289) : New worker (307292) forked
Jan 31 03:14:35 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [NOTICE]   (307289) : Loading success.
Jan 31 03:14:35 np0005603621 nova_compute[247399]: 2026-01-31 08:14:35.979 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.058 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] During sync_power_state the instance has a pending task (resize_finish). Skip.
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.059 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847275.7858374, e54ff9a1-d1c9-4792-a837-076e8289ee23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.059 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] VM Started (Lifecycle Event)
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.087 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.091 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.294 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.294 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:14:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586491874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.343 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.350 247403 DEBUG nova.compute.provider_tree [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.397 247403 DEBUG nova.scheduler.client.report [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.562 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.563 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.716 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.717 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:14:36 np0005603621 nova_compute[247399]: 2026-01-31 08:14:36.775 247403 INFO nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.014 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.143 247403 INFO nova.virt.block_device [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Booting with volume 284ce110-dc52-4bb8-b8bc-a99864f7d576 at /dev/vda
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.313 247403 DEBUG os_brick.utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.315 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.323 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.324 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[98e456fd-f74b-4c32-beab-52602a741f70]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.324 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.331 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.331 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ad9b29-ccc7-4514-a494-cca10c9cddf9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.332 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.337 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.337 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[b5148a50-d345-441d-9797-0c895a727e09]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.338 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[89dbda9d-d5a4-4def-9901-77b42d6bd815]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.338 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.357 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "nvme version" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.359 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.359 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.359 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.359 247403 DEBUG os_brick.utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.360 247403 DEBUG nova.virt.block_device [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating existing volume attachment record: 1c386108-15cc-4ed2-a90d-edf67d939cb8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.457 247403 DEBUG nova.compute.manager [req-7099b6cd-85da-4623-aab8-e8935ba8f3fc req-a8532269-cf69-45b2-bd8c-58a192583cc2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.458 247403 DEBUG oslo_concurrency.lockutils [req-7099b6cd-85da-4623-aab8-e8935ba8f3fc req-a8532269-cf69-45b2-bd8c-58a192583cc2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.458 247403 DEBUG oslo_concurrency.lockutils [req-7099b6cd-85da-4623-aab8-e8935ba8f3fc req-a8532269-cf69-45b2-bd8c-58a192583cc2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.458 247403 DEBUG oslo_concurrency.lockutils [req-7099b6cd-85da-4623-aab8-e8935ba8f3fc req-a8532269-cf69-45b2-bd8c-58a192583cc2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.458 247403 DEBUG nova.compute.manager [req-7099b6cd-85da-4623-aab8-e8935ba8f3fc req-a8532269-cf69-45b2-bd8c-58a192583cc2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] No waiting events found dispatching network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.459 247403 WARNING nova.compute.manager [req-7099b6cd-85da-4623-aab8-e8935ba8f3fc req-a8532269-cf69-45b2-bd8c-58a192583cc2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received unexpected event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa for instance with vm_state resized and task_state None.
Jan 31 03:14:37 np0005603621 nova_compute[247399]: 2026-01-31 08:14:37.538 247403 DEBUG nova.policy [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8080000681f449c3a9754c876165d667', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a6c60c75300483aa07e13b08923b1a1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:14:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:37.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:37.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 307 B/s wr, 126 op/s
Jan 31 03:14:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3644107052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:38 np0005603621 nova_compute[247399]: 2026-01-31 08:14:38.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:14:38
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:14:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:14:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:14:38 np0005603621 nova_compute[247399]: 2026-01-31 08:14:38.798 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:38 np0005603621 nova_compute[247399]: 2026-01-31 08:14:38.935 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully created port: 43de799d-e636-43ba-89c3-cb3a6a2ed888 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:14:38 np0005603621 nova_compute[247399]: 2026-01-31 08:14:38.950 247403 INFO nova.virt.block_device [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Booting with volume f3cb4822-8f33-4837-8242-54aa49d653b7 at /dev/vdb#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.113 247403 DEBUG os_brick.utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.116 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.127 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.127 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[60f1a54b-aebc-4610-900b-5541d9c2e226]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.129 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.135 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.135 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[60fc1d71-0519-4bd1-9acc-a1d051dc9fa4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.136 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.143 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.143 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd855b0-5e29-4a61-a71d-7a64f2f5d193]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.144 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[d8732a73-415d-4fca-b9e7-457e951e11c5]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.145 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.167 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.169 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.170 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.170 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.170 247403 DEBUG os_brick.utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.171 247403 DEBUG nova.virt.block_device [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating existing volume attachment record: d453934a-08bc-4abf-8da8-a0387ae6dcf7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.231 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.232 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.233 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.616 247403 DEBUG nova.compute.manager [req-9e65d6c2-40b1-4565-89a5-7014cbb3a06d req-818e3a51-2d9f-4444-b0a5-cc9daeacf748 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.617 247403 DEBUG oslo_concurrency.lockutils [req-9e65d6c2-40b1-4565-89a5-7014cbb3a06d req-818e3a51-2d9f-4444-b0a5-cc9daeacf748 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.619 247403 DEBUG oslo_concurrency.lockutils [req-9e65d6c2-40b1-4565-89a5-7014cbb3a06d req-818e3a51-2d9f-4444-b0a5-cc9daeacf748 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.620 247403 DEBUG oslo_concurrency.lockutils [req-9e65d6c2-40b1-4565-89a5-7014cbb3a06d req-818e3a51-2d9f-4444-b0a5-cc9daeacf748 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.620 247403 DEBUG nova.compute.manager [req-9e65d6c2-40b1-4565-89a5-7014cbb3a06d req-818e3a51-2d9f-4444-b0a5-cc9daeacf748 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] No waiting events found dispatching network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.621 247403 WARNING nova.compute.manager [req-9e65d6c2-40b1-4565-89a5-7014cbb3a06d req-818e3a51-2d9f-4444-b0a5-cc9daeacf748 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received unexpected event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:14:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103111602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.673 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:39.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:39.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:39 np0005603621 nova_compute[247399]: 2026-01-31 08:14:39.762 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 511 B/s wr, 191 op/s
Jan 31 03:14:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/92432898' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.014 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.015 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.019 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.019 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.175 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.176 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4007MB free_disk=20.85150146484375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.176 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.177 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.405 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Applying migration context for instance e54ff9a1-d1c9-4792-a837-076e8289ee23 as it has an incoming, in-progress migration 573cb6b9-9e94-474a-9cc7-e9a6e0bcfd43. Migration status is confirming _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.406 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating resource usage from migration 573cb6b9-9e94-474a-9cc7-e9a6e0bcfd43#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.813 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 8a8d8223-9051-487a-a4d6-a33911813797 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.814 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e54ff9a1-d1c9-4792-a837-076e8289ee23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.814 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.814 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:14:40 np0005603621 nova_compute[247399]: 2026-01-31 08:14:40.814 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.014 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.062 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully created port: 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.100 247403 INFO nova.virt.block_device [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Booting with volume 8211a79e-7243-4aef-aa8b-c47974e4d749 at /dev/vdc#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.241 247403 DEBUG os_brick.utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.242 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.251 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.251 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecc00cd-19bb-44ea-bda1-e568391e3984]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.253 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.258 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.258 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee3df4e-b92c-4309-872a-d0dfe0f21f03]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.260 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.266 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.266 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[e04ae8f4-9dcf-44c2-8f51-0dc2e0841eb5]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.268 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cf8a7c-4247-4690-878a-94228ceb3b17]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.269 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.296 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.299 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.300 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.301 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.301 247403 DEBUG os_brick.utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.302 247403 DEBUG nova.virt.block_device [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating existing volume attachment record: a923619c-0654-4d34-b06f-aa3c0335a84b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:14:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110956027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.450 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.454 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.602 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:14:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:41.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:41.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 511 B/s wr, 191 op/s
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.847 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:14:41 np0005603621 nova_compute[247399]: 2026-01-31 08:14:41.849 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:42 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:42Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:f6:b6 10.100.0.3
Jan 31 03:14:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:14:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2293580052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:14:42 np0005603621 nova_compute[247399]: 2026-01-31 08:14:42.888 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully created port: e76773d3-8a3a-451d-a29e-d01474b5f82f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.139 247403 INFO nova.compute.manager [None req-15933cc8-1c78-4db6-a76f-75f5a405ec6b 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Get console output#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.144 247403 INFO oslo.privsep.daemon [None req-15933cc8-1c78-4db6-a76f-75f5a405ec6b 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1z21xzi6/privsep.sock']#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.501 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.503 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.503 247403 INFO nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Creating image(s)#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.504 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.504 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Ensure instance console log exists: /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.506 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.506 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.506 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:43 np0005603621 podman[307418]: 2026-01-31 08:14:43.559610188 +0000 UTC m=+0.095922217 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 03:14:43 np0005603621 podman[307455]: 2026-01-31 08:14:43.584357513 +0000 UTC m=+0.060659531 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:14:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:43.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:43.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 KiB/s wr, 130 op/s
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.802 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.978 247403 INFO oslo.privsep.daemon [None req-15933cc8-1c78-4db6-a76f-75f5a405ec6b 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.821 307490 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.825 307490 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.828 307490 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 31 03:14:43 np0005603621 nova_compute[247399]: 2026-01-31 08:14:43.828 307490 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307490#033[00m
Jan 31 03:14:44 np0005603621 nova_compute[247399]: 2026-01-31 08:14:44.101 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:14:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 31 03:14:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 31 03:14:44 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 31 03:14:44 np0005603621 nova_compute[247399]: 2026-01-31 08:14:44.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:45 np0005603621 nova_compute[247399]: 2026-01-31 08:14:45.066 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully created port: 51338d60-6e7f-4716-b025-60a809240bd3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:14:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:45.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 15 KiB/s wr, 150 op/s
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.091370) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287091554, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 784, "num_deletes": 251, "total_data_size": 1039151, "memory_usage": 1053728, "flush_reason": "Manual Compaction"}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287172486, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1016668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40961, "largest_seqno": 41744, "table_properties": {"data_size": 1012588, "index_size": 1796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9627, "raw_average_key_size": 20, "raw_value_size": 1004193, "raw_average_value_size": 2100, "num_data_blocks": 77, "num_entries": 478, "num_filter_entries": 478, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847238, "oldest_key_time": 1769847238, "file_creation_time": 1769847287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 81579 microseconds, and 5713 cpu microseconds.
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.172944) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1016668 bytes OK
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.173008) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.217601) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.217658) EVENT_LOG_v1 {"time_micros": 1769847287217648, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.217687) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1035212, prev total WAL file size 1035212, number of live WAL files 2.
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.218438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(992KB)], [89(10MB)]
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287218492, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11592030, "oldest_snapshot_seqno": -1}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6746 keys, 9708954 bytes, temperature: kUnknown
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287430029, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9708954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9664647, "index_size": 26339, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 174943, "raw_average_key_size": 25, "raw_value_size": 9544853, "raw_average_value_size": 1414, "num_data_blocks": 1039, "num_entries": 6746, "num_filter_entries": 6746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.430431) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9708954 bytes
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.438054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.8 rd, 45.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.1 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(21.0) write-amplify(9.5) OK, records in: 7269, records dropped: 523 output_compression: NoCompression
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.438116) EVENT_LOG_v1 {"time_micros": 1769847287438094, "job": 52, "event": "compaction_finished", "compaction_time_micros": 211642, "compaction_time_cpu_micros": 19423, "output_level": 6, "num_output_files": 1, "total_output_size": 9708954, "num_input_records": 7269, "num_output_records": 6746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287438483, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847287440412, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.218329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.440501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.440510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.440514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.440517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:14:47.440520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:14:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:47.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:14:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:14:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 451 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 25 KiB/s wr, 127 op/s
Jan 31 03:14:47 np0005603621 nova_compute[247399]: 2026-01-31 08:14:47.951 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully created port: d1a1fd13-040f-49ef-b94f-98bcea71df76 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:14:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:48Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:e9:1e 10.100.0.9
Jan 31 03:14:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:48Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:e9:1e 10.100.0.9
Jan 31 03:14:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.803 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.848 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.959 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.961 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.962 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.962 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.963 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.965 247403 INFO nova.compute.manager [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Terminating instance#033[00m
Jan 31 03:14:48 np0005603621 nova_compute[247399]: 2026-01-31 08:14:48.969 247403 DEBUG nova.compute.manager [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065167780453191885 of space, bias 1.0, pg target 1.9550334135957566 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004144892576307463 of space, bias 1.0, pg target 1.2393228803159315 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:14:49 np0005603621 kernel: tapf1b92dea-fc (unregistering): left promiscuous mode
Jan 31 03:14:49 np0005603621 NetworkManager[49013]: <info>  [1769847289.6493] device (tapf1b92dea-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00301|binding|INFO|Releasing lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 from this chassis (sb_readonly=0)
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00302|binding|INFO|Setting lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 down in Southbound
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.654 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00303|binding|INFO|Removing iface tapf1b92dea-fc ovn-installed in OVS
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.660 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:49 np0005603621 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Jan 31 03:14:49 np0005603621 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d0000005c.scope: Consumed 12.666s CPU time.
Jan 31 03:14:49 np0005603621 systemd-machined[212769]: Machine qemu-40-instance-0000005c terminated.
Jan 31 03:14:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:49.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:49.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.767 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:49 np0005603621 kernel: tapf1b92dea-fc: entered promiscuous mode
Jan 31 03:14:49 np0005603621 kernel: tapf1b92dea-fc (unregistering): left promiscuous mode
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.791 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00304|if_status|INFO|Not updating pb chassis for f1b92dea-fcbf-4fdc-a875-d4273610d4c5 now as sb is readonly
Jan 31 03:14:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 453 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 41 KiB/s wr, 107 op/s
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00305|binding|INFO|Releasing lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 from this chassis (sb_readonly=1)
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.803 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00306|if_status|INFO|Dropped 4 log messages in last 334 seconds (most recently, 334 seconds ago) due to excessive rate
Jan 31 03:14:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:49Z|00307|if_status|INFO|Not setting lport f1b92dea-fcbf-4fdc-a875-d4273610d4c5 down as sb is readonly
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.809 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.810 247403 INFO nova.virt.libvirt.driver [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Instance destroyed successfully.#033[00m
Jan 31 03:14:49 np0005603621 nova_compute[247399]: 2026-01-31 08:14:49.810 247403 DEBUG nova.objects.instance [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lazy-loading 'resources' on Instance uuid 8a8d8223-9051-487a-a4d6-a33911813797 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.249 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:f6:b6 10.100.0.3'], port_security=['fa:16:3e:33:f6:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8a8d8223-9051-487a-a4d6-a33911813797', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f564452-5f08-4a1c-921e-f2daee9ec936', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9c03fec1b3664105996aa979e226d8f8', 'neutron:revision_number': '8', 'neutron:security_group_ids': '4d0e926d-7b9d-4115-9719-7f0d71edaace', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d620dc35-e1b1-4011-a8c1-0995d2048b09, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f1b92dea-fcbf-4fdc-a875-d4273610d4c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.251 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f1b92dea-fcbf-4fdc-a875-d4273610d4c5 in datapath 1f564452-5f08-4a1c-921e-f2daee9ec936 unbound from our chassis#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.253 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1f564452-5f08-4a1c-921e-f2daee9ec936, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.255 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c524ccec-4d97-4e1c-b24f-c44368e0edbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.256 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936 namespace which is not needed anymore#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.296 247403 DEBUG nova.virt.libvirt.vif [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-865768110',display_name='tempest-ServerActionsTestOtherA-server-865768110',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-865768110',id=92,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBmeoiqwLriErudm3CCwwTVCZJNSn8sMBdf3DG0cLKOiUOsjd6g3ELaDiv5VtlA1MtIeSB0EtvnrgQQVESwaz68a/c+EzXdmxnZNxj//jq+4bu6dBh/9tuewDagOu34T9w==',key_name='tempest-keypair-710425515',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:30Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9c03fec1b3664105996aa979e226d8f8',ramdisk_id='',reservation_id='r-y7gogkds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServerActionsTestOtherA-1768827668',owner_user_name='tempest-ServerActionsTestOtherA-1768827668-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='12a823bd7c6e4cf492ebf6c1d002a91f',uuid=8a8d8223-9051-487a-a4d6-a33911813797,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", 
"version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.296 247403 DEBUG nova.network.os_vif_util [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Converting VIF {"id": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "address": "fa:16:3e:33:f6:b6", "network": {"id": "1f564452-5f08-4a1c-921e-f2daee9ec936", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2006849245-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9c03fec1b3664105996aa979e226d8f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1b92dea-fc", "ovs_interfaceid": "f1b92dea-fcbf-4fdc-a875-d4273610d4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.297 247403 DEBUG nova.network.os_vif_util [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.298 247403 DEBUG os_vif [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.300 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1b92dea-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.303 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.303 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.308 247403 INFO os_vif [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:f6:b6,bridge_name='br-int',has_traffic_filtering=True,id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5,network=Network(1f564452-5f08-4a1c-921e-f2daee9ec936),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1b92dea-fc')#033[00m
Jan 31 03:14:50 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [NOTICE]   (306976) : haproxy version is 2.8.14-c23fe91
Jan 31 03:14:50 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [NOTICE]   (306976) : path to executable is /usr/sbin/haproxy
Jan 31 03:14:50 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [WARNING]  (306976) : Exiting Master process...
Jan 31 03:14:50 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [ALERT]    (306976) : Current worker (306978) exited with code 143 (Terminated)
Jan 31 03:14:50 np0005603621 neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936[306972]: [WARNING]  (306976) : All workers exited. Exiting... (0)
Jan 31 03:14:50 np0005603621 systemd[1]: libpod-ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571.scope: Deactivated successfully.
Jan 31 03:14:50 np0005603621 podman[307533]: 2026-01-31 08:14:50.443687538 +0000 UTC m=+0.115424888 container died ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:14:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-796e98b83e3272c8ef56570ddc54499f9e0d27376944780125efb0521e46b788-merged.mount: Deactivated successfully.
Jan 31 03:14:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571-userdata-shm.mount: Deactivated successfully.
Jan 31 03:14:50 np0005603621 podman[307533]: 2026-01-31 08:14:50.674015874 +0000 UTC m=+0.345753204 container cleanup ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:14:50 np0005603621 systemd[1]: libpod-conmon-ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571.scope: Deactivated successfully.
Jan 31 03:14:50 np0005603621 podman[307575]: 2026-01-31 08:14:50.831673434 +0000 UTC m=+0.143388503 container remove ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.835 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[98118405-0018-4df4-8757-35f203ce530e]: (4, ('Sat Jan 31 08:14:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936 (ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571)\nab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571\nSat Jan 31 08:14:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936 (ab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571)\nab009c64ab00b429b68abe243b709fca1bc85a0c2c8e4406d5223006d01d8571\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.837 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f32e2b-5aa2-4c28-96e6-d06d3a8ea6eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.838 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f564452-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:50 np0005603621 kernel: tap1f564452-50: left promiscuous mode
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.846 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.849 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c23da910-0e35-48b8-b88e-151a8db913d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.869 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[791ea1f7-0555-4f18-bb69-7b14db6873d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.871 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9e38f6b3-ee46-468d-ac00-6c9fcd47dc2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 nova_compute[247399]: 2026-01-31 08:14:50.881 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: 43de799d-e636-43ba-89c3-cb3a6a2ed888 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.887 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[32cb17a9-8eb5-4609-b963-f38c208287f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657488, 'reachable_time': 24361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307591, 'error': None, 'target': 'ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.890 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1f564452-5f08-4a1c-921e-f2daee9ec936 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:14:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:50.891 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[1c737aa6-e8b2-4b3c-bc75-c029991496b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:50 np0005603621 systemd[1]: run-netns-ovnmeta\x2d1f564452\x2d5f08\x2d4a1c\x2d921e\x2df2daee9ec936.mount: Deactivated successfully.
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.054 247403 INFO nova.virt.libvirt.driver [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Deleting instance files /var/lib/nova/instances/8a8d8223-9051-487a-a4d6-a33911813797_del#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.055 247403 INFO nova.virt.libvirt.driver [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Deletion of /var/lib/nova/instances/8a8d8223-9051-487a-a4d6-a33911813797_del complete#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.309 247403 INFO nova.compute.manager [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Took 2.34 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.310 247403 DEBUG oslo.service.loopingcall [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.313 247403 DEBUG nova.compute.manager [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.313 247403 DEBUG nova.network.neutron [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.317 247403 DEBUG nova.compute.manager [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.318 247403 DEBUG nova.compute.manager [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-43de799d-e636-43ba-89c3-cb3a6a2ed888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.318 247403 DEBUG oslo_concurrency.lockutils [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.319 247403 DEBUG oslo_concurrency.lockutils [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.319 247403 DEBUG nova.network.neutron [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port 43de799d-e636-43ba-89c3-cb3a6a2ed888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.321 247403 DEBUG nova.compute.manager [req-d7ee6983-e302-4430-b96a-1697a6dee729 req-c92a08b4-926e-4224-9ad3-cfeeafd5906f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-unplugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.321 247403 DEBUG oslo_concurrency.lockutils [req-d7ee6983-e302-4430-b96a-1697a6dee729 req-c92a08b4-926e-4224-9ad3-cfeeafd5906f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.322 247403 DEBUG oslo_concurrency.lockutils [req-d7ee6983-e302-4430-b96a-1697a6dee729 req-c92a08b4-926e-4224-9ad3-cfeeafd5906f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.322 247403 DEBUG oslo_concurrency.lockutils [req-d7ee6983-e302-4430-b96a-1697a6dee729 req-c92a08b4-926e-4224-9ad3-cfeeafd5906f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.322 247403 DEBUG nova.compute.manager [req-d7ee6983-e302-4430-b96a-1697a6dee729 req-c92a08b4-926e-4224-9ad3-cfeeafd5906f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] No waiting events found dispatching network-vif-unplugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.323 247403 DEBUG nova.compute.manager [req-d7ee6983-e302-4430-b96a-1697a6dee729 req-c92a08b4-926e-4224-9ad3-cfeeafd5906f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-unplugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:14:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 453 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 41 KiB/s wr, 107 op/s
Jan 31 03:14:51 np0005603621 nova_compute[247399]: 2026-01-31 08:14:51.823 247403 DEBUG nova.network.neutron [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.649 247403 DEBUG nova.network.neutron [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.716 247403 DEBUG oslo_concurrency.lockutils [req-64561525-41b9-4a44-b6a2-b2c78695269c req-b3388a95-d4b4-4f28-af3b-db65317b5340 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.738 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.739 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.740 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.740 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.740 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.742 247403 INFO nova.compute.manager [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Terminating instance#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.750 247403 DEBUG nova.compute.manager [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:14:52 np0005603621 kernel: tapdb231dc0-94 (unregistering): left promiscuous mode
Jan 31 03:14:52 np0005603621 NetworkManager[49013]: <info>  [1769847292.8311] device (tapdb231dc0-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:14:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:52Z|00308|binding|INFO|Releasing lport db231dc0-94bd-47c5-bc4c-f139648e2cfa from this chassis (sb_readonly=0)
Jan 31 03:14:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:52Z|00309|binding|INFO|Setting lport db231dc0-94bd-47c5-bc4c-f139648e2cfa down in Southbound
Jan 31 03:14:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:14:52Z|00310|binding|INFO|Removing iface tapdb231dc0-94 ovn-installed in OVS
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.886 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.892 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:52 np0005603621 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 31 03:14:52 np0005603621 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d0000005d.scope: Consumed 12.459s CPU time.
Jan 31 03:14:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:52.925 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:e9:1e 10.100.0.9'], port_security=['fa:16:3e:0d:e9:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e54ff9a1-d1c9-4792-a837-076e8289ee23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '58e900992be7400fb940ca20f13e12d1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '596ab0fa-9144-4a59-97b9-1afd98634ee5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bae8797c-8cfa-434b-94e1-deeda92af05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=db231dc0-94bd-47c5-bc4c-f139648e2cfa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:14:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:52.927 159734 INFO neutron.agent.ovn.metadata.agent [-] Port db231dc0-94bd-47c5-bc4c-f139648e2cfa in datapath f218695f-c744-4bd8-b2d8-122a920c7ca0 unbound from our chassis#033[00m
Jan 31 03:14:52 np0005603621 systemd-machined[212769]: Machine qemu-41-instance-0000005d terminated.
Jan 31 03:14:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:52.929 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f218695f-c744-4bd8-b2d8-122a920c7ca0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:14:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:52.930 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1e5587-ec6f-4272-9b29-83735c1e5663]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:52.930 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 namespace which is not needed anymore#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.984 247403 INFO nova.virt.libvirt.driver [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Instance destroyed successfully.#033[00m
Jan 31 03:14:52 np0005603621 nova_compute[247399]: 2026-01-31 08:14:52.984 247403 DEBUG nova.objects.instance [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lazy-loading 'resources' on Instance uuid e54ff9a1-d1c9-4792-a837-076e8289ee23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:14:53 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [NOTICE]   (307289) : haproxy version is 2.8.14-c23fe91
Jan 31 03:14:53 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [NOTICE]   (307289) : path to executable is /usr/sbin/haproxy
Jan 31 03:14:53 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [WARNING]  (307289) : Exiting Master process...
Jan 31 03:14:53 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [ALERT]    (307289) : Current worker (307292) exited with code 143 (Terminated)
Jan 31 03:14:53 np0005603621 neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0[307285]: [WARNING]  (307289) : All workers exited. Exiting... (0)
Jan 31 03:14:53 np0005603621 systemd[1]: libpod-dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d.scope: Deactivated successfully.
Jan 31 03:14:53 np0005603621 podman[307627]: 2026-01-31 08:14:53.055604574 +0000 UTC m=+0.043175994 container died dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.063 247403 DEBUG nova.virt.libvirt.vif [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-436373101',display_name='tempest-ServerDiskConfigTestJSON-server-436373101',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-436373101',id=93,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:14:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='58e900992be7400fb940ca20f13e12d1',ramdisk_id='',reservation_id='r-0t50bqgi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-855158150',owner_user_name='tempest-ServerDiskConfigTestJSON-855158150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:14:48Z,user_data=None,user_id='111fdaf79c084a91902fe37a7a502020',uuid=e54ff9a1-d1c9-4792-a837-076e8289ee23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.064 247403 DEBUG nova.network.os_vif_util [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converting VIF {"id": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "address": "fa:16:3e:0d:e9:1e", "network": {"id": "f218695f-c744-4bd8-b2d8-122a920c7ca0", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1189208428-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "58e900992be7400fb940ca20f13e12d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb231dc0-94", "ovs_interfaceid": "db231dc0-94bd-47c5-bc4c-f139648e2cfa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.064 247403 DEBUG nova.network.os_vif_util [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.065 247403 DEBUG os_vif [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.066 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.067 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb231dc0-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.068 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.070 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.072 247403 INFO os_vif [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:e9:1e,bridge_name='br-int',has_traffic_filtering=True,id=db231dc0-94bd-47c5-bc4c-f139648e2cfa,network=Network(f218695f-c744-4bd8-b2d8-122a920c7ca0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb231dc0-94')#033[00m
Jan 31 03:14:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:14:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eb000535557209dbb2f8944b9c238a4d8beab661629be7fa4863a87cb84aaa71-merged.mount: Deactivated successfully.
Jan 31 03:14:53 np0005603621 podman[307627]: 2026-01-31 08:14:53.101101649 +0000 UTC m=+0.088673039 container cleanup dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:14:53 np0005603621 systemd[1]: libpod-conmon-dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d.scope: Deactivated successfully.
Jan 31 03:14:53 np0005603621 podman[307672]: 2026-01-31 08:14:53.22562709 +0000 UTC m=+0.109629085 container remove dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.229 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd8dbc88-510d-42fd-9524-8c4d5cd97809]: (4, ('Sat Jan 31 08:14:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d)\ndd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d\nSat Jan 31 08:14:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 (dd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d)\ndd2e609cce89ab62b7340235e8e75d815a1066237dc1a808cfa6c8fe077e842d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.230 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1373f3c2-b165-412f-aaf4-42792fa179ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.231 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf218695f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.232 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 kernel: tapf218695f-c0: left promiscuous mode
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.240 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.243 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6f9e8e-5d62-41e3-9785-13b8538020fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.262 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5dddb6-a85d-4a84-9e28-6f460736acd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.263 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b18cdec5-4e10-44e7-b615-164d48fc4fde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.276 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5aff78a0-4007-4ceb-b746-f6eb6c97c16f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 658071, 'reachable_time': 33656, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307691, 'error': None, 'target': 'ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 systemd[1]: run-netns-ovnmeta\x2df218695f\x2dc744\x2d4bd8\x2db2d8\x2d122a920c7ca0.mount: Deactivated successfully.
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.278 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f218695f-c744-4bd8-b2d8-122a920c7ca0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:14:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:14:53.279 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6b115e69-faa6-4f8d-ac6e-2ee819de209e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.299 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.612 247403 DEBUG nova.compute.manager [req-6089c79c-309e-47d9-bbb2-651ed78f19cd req-be3f2995-c2f6-4eb7-9d6c-27beb2d9b762 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.612 247403 DEBUG oslo_concurrency.lockutils [req-6089c79c-309e-47d9-bbb2-651ed78f19cd req-be3f2995-c2f6-4eb7-9d6c-27beb2d9b762 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8a8d8223-9051-487a-a4d6-a33911813797-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.613 247403 DEBUG oslo_concurrency.lockutils [req-6089c79c-309e-47d9-bbb2-651ed78f19cd req-be3f2995-c2f6-4eb7-9d6c-27beb2d9b762 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.613 247403 DEBUG oslo_concurrency.lockutils [req-6089c79c-309e-47d9-bbb2-651ed78f19cd req-be3f2995-c2f6-4eb7-9d6c-27beb2d9b762 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.613 247403 DEBUG nova.compute.manager [req-6089c79c-309e-47d9-bbb2-651ed78f19cd req-be3f2995-c2f6-4eb7-9d6c-27beb2d9b762 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] No waiting events found dispatching network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.613 247403 WARNING nova.compute.manager [req-6089c79c-309e-47d9-bbb2-651ed78f19cd req-be3f2995-c2f6-4eb7-9d6c-27beb2d9b762 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received unexpected event network-vif-plugged-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.614 247403 DEBUG nova.compute.manager [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.614 247403 DEBUG nova.compute.manager [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.614 247403 DEBUG oslo_concurrency.lockutils [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.615 247403 DEBUG oslo_concurrency.lockutils [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.615 247403 DEBUG nova.network.neutron [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.705 247403 DEBUG nova.network.neutron [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 31 03:14:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 31 03:14:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:53.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:53 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 31 03:14:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:53.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 453 MiB data, 1000 MiB used, 20 GiB / 21 GiB avail; 724 KiB/s rd, 30 KiB/s wr, 88 op/s
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.806 247403 DEBUG nova.compute.manager [req-c1dc7d80-46c1-4d22-942d-4337c6ca8966 req-1d8db3d6-5bff-48b5-ac73-bb83b7908225 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Received event network-vif-deleted-f1b92dea-fcbf-4fdc-a875-d4273610d4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.806 247403 INFO nova.compute.manager [req-c1dc7d80-46c1-4d22-942d-4337c6ca8966 req-1d8db3d6-5bff-48b5-ac73-bb83b7908225 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Neutron deleted interface f1b92dea-fcbf-4fdc-a875-d4273610d4c5; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.807 247403 DEBUG nova.network.neutron [req-c1dc7d80-46c1-4d22-942d-4337c6ca8966 req-1d8db3d6-5bff-48b5-ac73-bb83b7908225 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.892 247403 INFO nova.virt.libvirt.driver [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Deleting instance files /var/lib/nova/instances/e54ff9a1-d1c9-4792-a837-076e8289ee23_del#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.894 247403 INFO nova.virt.libvirt.driver [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Deletion of /var/lib/nova/instances/e54ff9a1-d1c9-4792-a837-076e8289ee23_del complete#033[00m
Jan 31 03:14:53 np0005603621 nova_compute[247399]: 2026-01-31 08:14:53.992 247403 INFO nova.compute.manager [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Took 2.68 seconds to deallocate network for instance.#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.002 247403 DEBUG nova.compute.manager [req-c1dc7d80-46c1-4d22-942d-4337c6ca8966 req-1d8db3d6-5bff-48b5-ac73-bb83b7908225 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Detach interface failed, port_id=f1b92dea-fcbf-4fdc-a875-d4273610d4c5, reason: Instance 8a8d8223-9051-487a-a4d6-a33911813797 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.138 247403 DEBUG nova.network.neutron [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.270 247403 INFO nova.compute.manager [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.271 247403 DEBUG oslo.service.loopingcall [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.272 247403 DEBUG nova.compute.manager [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.272 247403 DEBUG nova.network.neutron [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.535 247403 DEBUG nova.compute.manager [req-5dd52732-a094-4cb4-b254-0b2ad6a3e569 req-87e5113a-0a40-4672-ac61-ad3b1f75469f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-unplugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.536 247403 DEBUG oslo_concurrency.lockutils [req-5dd52732-a094-4cb4-b254-0b2ad6a3e569 req-87e5113a-0a40-4672-ac61-ad3b1f75469f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.536 247403 DEBUG oslo_concurrency.lockutils [req-5dd52732-a094-4cb4-b254-0b2ad6a3e569 req-87e5113a-0a40-4672-ac61-ad3b1f75469f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.536 247403 DEBUG oslo_concurrency.lockutils [req-5dd52732-a094-4cb4-b254-0b2ad6a3e569 req-87e5113a-0a40-4672-ac61-ad3b1f75469f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.537 247403 DEBUG nova.compute.manager [req-5dd52732-a094-4cb4-b254-0b2ad6a3e569 req-87e5113a-0a40-4672-ac61-ad3b1f75469f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] No waiting events found dispatching network-vif-unplugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.537 247403 DEBUG nova.compute.manager [req-5dd52732-a094-4cb4-b254-0b2ad6a3e569 req-87e5113a-0a40-4672-ac61-ad3b1f75469f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-unplugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.660 247403 INFO nova.compute.manager [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Took 0.67 seconds to detach 1 volumes for instance.#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.661 247403 DEBUG nova.compute.manager [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Deleting volume: a072271f-7f43-470f-93ee-c6396eaabeba _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.663 247403 DEBUG nova.network.neutron [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:14:54 np0005603621 nova_compute[247399]: 2026-01-31 08:14:54.807 247403 DEBUG oslo_concurrency.lockutils [req-a52ce7fc-95ce-4136-921c-45628b8bd041 req-0bfde90a-1abf-4e08-a7b0-136c65083fc6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:14:55 np0005603621 nova_compute[247399]: 2026-01-31 08:14:55.490 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:14:55 np0005603621 nova_compute[247399]: 2026-01-31 08:14:55.490 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:14:55 np0005603621 nova_compute[247399]: 2026-01-31 08:14:55.573 247403 DEBUG oslo_concurrency.processutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:14:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:55.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 419 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 28 KiB/s wr, 86 op/s
Jan 31 03:14:55 np0005603621 nova_compute[247399]: 2026-01-31 08:14:55.808 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:14:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:14:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2757331132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:14:56 np0005603621 nova_compute[247399]: 2026-01-31 08:14:56.012 247403 DEBUG oslo_concurrency.processutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:14:56 np0005603621 nova_compute[247399]: 2026-01-31 08:14:56.018 247403 DEBUG nova.compute.provider_tree [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.697 247403 DEBUG nova.compute.manager [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.698 247403 DEBUG nova.compute.manager [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.698 247403 DEBUG oslo_concurrency.lockutils [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.699 247403 DEBUG oslo_concurrency.lockutils [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.699 247403 DEBUG nova.network.neutron [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.701 247403 DEBUG nova.compute.manager [req-95759a80-6502-4061-9a42-cae3e3baee10 req-f1cbfc0a-16bc-4ea8-9c00-cb3e1e398410 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.702 247403 DEBUG oslo_concurrency.lockutils [req-95759a80-6502-4061-9a42-cae3e3baee10 req-f1cbfc0a-16bc-4ea8-9c00-cb3e1e398410 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.702 247403 DEBUG oslo_concurrency.lockutils [req-95759a80-6502-4061-9a42-cae3e3baee10 req-f1cbfc0a-16bc-4ea8-9c00-cb3e1e398410 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.702 247403 DEBUG oslo_concurrency.lockutils [req-95759a80-6502-4061-9a42-cae3e3baee10 req-f1cbfc0a-16bc-4ea8-9c00-cb3e1e398410 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.702 247403 DEBUG nova.compute.manager [req-95759a80-6502-4061-9a42-cae3e3baee10 req-f1cbfc0a-16bc-4ea8-9c00-cb3e1e398410 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] No waiting events found dispatching network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.703 247403 WARNING nova.compute.manager [req-95759a80-6502-4061-9a42-cae3e3baee10 req-f1cbfc0a-16bc-4ea8-9c00-cb3e1e398410 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received unexpected event network-vif-plugged-db231dc0-94bd-47c5-bc4c-f139648e2cfa for instance with vm_state active and task_state deleting.
Jan 31 03:14:57 np0005603621 nova_compute[247399]: 2026-01-31 08:14:57.704 247403 DEBUG nova.scheduler.client.report [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:14:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:57.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:57.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 362 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 599 KiB/s rd, 18 KiB/s wr, 93 op/s
Jan 31 03:14:58 np0005603621 nova_compute[247399]: 2026-01-31 08:14:58.070 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:14:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:14:58 np0005603621 nova_compute[247399]: 2026-01-31 08:14:58.807 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:14:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:14:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:14:59.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:14:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:14:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:14:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:14:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:14:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 3.6 KiB/s wr, 81 op/s
Jan 31 03:15:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:01 np0005603621 nova_compute[247399]: 2026-01-31 08:15:01.765 247403 DEBUG nova.network.neutron [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:15:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:01.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 3.6 KiB/s wr, 81 op/s
Jan 31 03:15:01 np0005603621 nova_compute[247399]: 2026-01-31 08:15:01.910 247403 DEBUG nova.network.neutron [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:15:02 np0005603621 nova_compute[247399]: 2026-01-31 08:15:02.919 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 7.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.075 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.437 247403 DEBUG nova.compute.manager [req-4d2c8fd0-b7ab-4f00-8669-835b5666a9a0 req-3c83792b-8877-4715-9acd-b1b32f37c752 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Received event network-vif-deleted-db231dc0-94bd-47c5-bc4c-f139648e2cfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.438 247403 INFO nova.compute.manager [req-4d2c8fd0-b7ab-4f00-8669-835b5666a9a0 req-3c83792b-8877-4715-9acd-b1b32f37c752 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Neutron deleted interface db231dc0-94bd-47c5-bc4c-f139648e2cfa; detaching it from the instance and deleting it from the info cache
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.438 247403 DEBUG nova.network.neutron [req-4d2c8fd0-b7ab-4f00-8669-835b5666a9a0 req-3c83792b-8877-4715-9acd-b1b32f37c752 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.441 247403 INFO nova.compute.manager [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Took 9.17 seconds to deallocate network for instance.
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.684 247403 DEBUG nova.compute.manager [req-4d2c8fd0-b7ab-4f00-8669-835b5666a9a0 req-3c83792b-8877-4715-9acd-b1b32f37c752 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Detach interface failed, port_id=db231dc0-94bd-47c5-bc4c-f139648e2cfa, reason: Instance e54ff9a1-d1c9-4792-a837-076e8289ee23 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 03:15:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:03.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.4 KiB/s wr, 54 op/s
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.809 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.909 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:03 np0005603621 nova_compute[247399]: 2026-01-31 08:15:03.910 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:04 np0005603621 nova_compute[247399]: 2026-01-31 08:15:04.789 247403 INFO nova.scheduler.client.report [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Deleted allocations for instance 8a8d8223-9051-487a-a4d6-a33911813797
Jan 31 03:15:04 np0005603621 nova_compute[247399]: 2026-01-31 08:15:04.807 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847289.8063846, 8a8d8223-9051-487a-a4d6-a33911813797 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:15:04 np0005603621 nova_compute[247399]: 2026-01-31 08:15:04.807 247403 INFO nova.compute.manager [-] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] VM Stopped (Lifecycle Event)
Jan 31 03:15:04 np0005603621 nova_compute[247399]: 2026-01-31 08:15:04.898 247403 DEBUG nova.compute.manager [None req-9fb4d9f7-5735-405e-ae4d-2665c7d66407 - - - - - -] [instance: 8a8d8223-9051-487a-a4d6-a33911813797] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:15:04 np0005603621 nova_compute[247399]: 2026-01-31 08:15:04.939 247403 DEBUG oslo_concurrency.processutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:15:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:15:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435088463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:15:05 np0005603621 nova_compute[247399]: 2026-01-31 08:15:05.371 247403 DEBUG oslo_concurrency.processutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:15:05 np0005603621 nova_compute[247399]: 2026-01-31 08:15:05.379 247403 DEBUG nova.compute.provider_tree [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:15:05 np0005603621 nova_compute[247399]: 2026-01-31 08:15:05.560 247403 DEBUG nova.scheduler.client.report [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:15:05 np0005603621 nova_compute[247399]: 2026-01-31 08:15:05.767 247403 DEBUG oslo_concurrency.lockutils [None req-8d7885e7-0879-46f3-9d4e-9cc51937386c 12a823bd7c6e4cf492ebf6c1d002a91f 9c03fec1b3664105996aa979e226d8f8 - - default default] Lock "8a8d8223-9051-487a-a4d6-a33911813797" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:05.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.0 KiB/s wr, 45 op/s
Jan 31 03:15:05 np0005603621 nova_compute[247399]: 2026-01-31 08:15:05.846 247403 DEBUG nova.network.neutron [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:15:05 np0005603621 nova_compute[247399]: 2026-01-31 08:15:05.981 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.012 247403 DEBUG oslo_concurrency.lockutils [req-2b0a2186-9d6c-4a8d-8a42-d50d49d0ffd2 req-cd832cab-1dff-441e-8777-bf1a2338928a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.101 247403 INFO nova.scheduler.client.report [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Deleted allocations for instance e54ff9a1-d1c9-4792-a837-076e8289ee23
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.368 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.525 247403 DEBUG oslo_concurrency.lockutils [None req-3c886d99-692a-4651-9120-617d05567820 111fdaf79c084a91902fe37a7a502020 58e900992be7400fb940ca20f13e12d1 - - default default] Lock "e54ff9a1-d1c9-4792-a837-076e8289ee23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.768 247403 DEBUG nova.compute.manager [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.768 247403 DEBUG nova.compute.manager [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-31ce268e-bfe2-4d8b-acd2-5a75e9725d95. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.768 247403 DEBUG oslo_concurrency.lockutils [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.769 247403 DEBUG oslo_concurrency.lockutils [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:15:06 np0005603621 nova_compute[247399]: 2026-01-31 08:15:06.769 247403 DEBUG nova.network.neutron [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:15:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:07.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Jan 31 03:15:07 np0005603621 nova_compute[247399]: 2026-01-31 08:15:07.983 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847292.9823496, e54ff9a1-d1c9-4792-a837-076e8289ee23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:15:07 np0005603621 nova_compute[247399]: 2026-01-31 08:15:07.984 247403 INFO nova.compute.manager [-] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] VM Stopped (Lifecycle Event)
Jan 31 03:15:08 np0005603621 nova_compute[247399]: 2026-01-31 08:15:08.078 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:08 np0005603621 nova_compute[247399]: 2026-01-31 08:15:08.424 247403 DEBUG nova.compute.manager [None req-a5ec5428-2044-4c84-a5bf-785dab0e710e - - - - - -] [instance: e54ff9a1-d1c9-4792-a837-076e8289ee23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:08 np0005603621 nova_compute[247399]: 2026-01-31 08:15:08.812 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:08 np0005603621 nova_compute[247399]: 2026-01-31 08:15:08.920 247403 DEBUG nova.network.neutron [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:15:09 np0005603621 nova_compute[247399]: 2026-01-31 08:15:09.752 247403 DEBUG nova.network.neutron [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:15:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:09.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:09.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 938 B/s wr, 28 op/s
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.048 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: e76773d3-8a3a-451d-a29e-d01474b5f82f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.184 247403 DEBUG oslo_concurrency.lockutils [req-1e56a3c4-4666-487d-ba02-667b72328ec2 req-62f4fd5e-1de9-4660-98a9-728138a50e79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.648 247403 DEBUG nova.compute.manager [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-e76773d3-8a3a-451d-a29e-d01474b5f82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.649 247403 DEBUG nova.compute.manager [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-e76773d3-8a3a-451d-a29e-d01474b5f82f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.649 247403 DEBUG oslo_concurrency.lockutils [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.649 247403 DEBUG oslo_concurrency.lockutils [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:15:10 np0005603621 nova_compute[247399]: 2026-01-31 08:15:10.649 247403 DEBUG nova.network.neutron [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port e76773d3-8a3a-451d-a29e-d01474b5f82f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:15:11 np0005603621 nova_compute[247399]: 2026-01-31 08:15:11.196 247403 DEBUG nova.network.neutron [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:15:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:11.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:11.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 293 MiB data, 909 MiB used, 20 GiB / 21 GiB avail
Jan 31 03:15:12 np0005603621 nova_compute[247399]: 2026-01-31 08:15:12.300 247403 DEBUG nova.network.neutron [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:12 np0005603621 nova_compute[247399]: 2026-01-31 08:15:12.826 247403 DEBUG oslo_concurrency.lockutils [req-d02dd818-56bb-4a9f-b871-ad578d57ede9 req-9a91d931-66fa-4ae4-81f5-4f35bb2b5d22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.081 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.320 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: 51338d60-6e7f-4716-b025-60a809240bd3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.476 247403 DEBUG nova.compute.manager [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-51338d60-6e7f-4716-b025-60a809240bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.477 247403 DEBUG nova.compute.manager [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-51338d60-6e7f-4716-b025-60a809240bd3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.477 247403 DEBUG oslo_concurrency.lockutils [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.477 247403 DEBUG oslo_concurrency.lockutils [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.477 247403 DEBUG nova.network.neutron [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port 51338d60-6e7f-4716-b025-60a809240bd3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:15:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:13.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:13.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 250 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 341 B/s wr, 13 op/s
Jan 31 03:15:13 np0005603621 nova_compute[247399]: 2026-01-31 08:15:13.814 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603621 nova_compute[247399]: 2026-01-31 08:15:14.097 247403 DEBUG nova.network.neutron [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:15:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:15:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809354953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:15:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:15:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809354953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:15:14 np0005603621 podman[307799]: 2026-01-31 08:15:14.502329822 +0000 UTC m=+0.063137458 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:15:14 np0005603621 podman[307798]: 2026-01-31 08:15:14.512597215 +0000 UTC m=+0.073677450 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 03:15:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:14.528 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:14 np0005603621 nova_compute[247399]: 2026-01-31 08:15:14.528 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:14.529 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:15:14 np0005603621 nova_compute[247399]: 2026-01-31 08:15:14.915 247403 DEBUG nova.network.neutron [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:15 np0005603621 nova_compute[247399]: 2026-01-31 08:15:15.240 247403 DEBUG oslo_concurrency.lockutils [req-7811f28a-3d8b-428d-9c1e-3b3213933f53 req-6ad66fa7-b03f-4eb2-9135-c1cffd0220ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:15.531 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:15.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 226 MiB data, 880 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 441 KiB/s wr, 29 op/s
Jan 31 03:15:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:17.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:17.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 243 MiB data, 879 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.2 MiB/s wr, 41 op/s
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:15:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 29f71a2b-5296-469d-a5eb-02c273a09329 does not exist
Jan 31 03:15:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 869788f9-7123-4572-abc6-22fe2bda1bea does not exist
Jan 31 03:15:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ed0065f5-0488-4355-bbb6-ea42ed612570 does not exist
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:15:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:15:18 np0005603621 nova_compute[247399]: 2026-01-31 08:15:18.084 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.324447976 +0000 UTC m=+0.037042531 container create 9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dirac, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.305299317 +0000 UTC m=+0.017893892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:15:18 np0005603621 systemd[1]: Started libpod-conmon-9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7.scope.
Jan 31 03:15:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.450984601 +0000 UTC m=+0.163579176 container init 9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dirac, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.456288487 +0000 UTC m=+0.168883042 container start 9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.459824608 +0000 UTC m=+0.172419163 container attach 9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:15:18 np0005603621 relaxed_dirac[308131]: 167 167
Jan 31 03:15:18 np0005603621 systemd[1]: libpod-9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7.scope: Deactivated successfully.
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.465981291 +0000 UTC m=+0.178575876 container died 9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dirac, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:15:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fe236be24d4a0cf5f251b589b79c7e5d64af907500231c4924ceb961d187c9b3-merged.mount: Deactivated successfully.
Jan 31 03:15:18 np0005603621 podman[308115]: 2026-01-31 08:15:18.507098419 +0000 UTC m=+0.219692974 container remove 9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:15:18 np0005603621 systemd[1]: libpod-conmon-9b1806dd7118374470c449f863ba7fe6059f87ed35a2cf439d1fe3943295dff7.scope: Deactivated successfully.
Jan 31 03:15:18 np0005603621 podman[308155]: 2026-01-31 08:15:18.608843328 +0000 UTC m=+0.034035448 container create 5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:15:18 np0005603621 systemd[1]: Started libpod-conmon-5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99.scope.
Jan 31 03:15:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea412f03935cb8e91ba56df934bafe836f3ae3d8fb082d383b50fee900c98dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea412f03935cb8e91ba56df934bafe836f3ae3d8fb082d383b50fee900c98dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea412f03935cb8e91ba56df934bafe836f3ae3d8fb082d383b50fee900c98dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea412f03935cb8e91ba56df934bafe836f3ae3d8fb082d383b50fee900c98dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea412f03935cb8e91ba56df934bafe836f3ae3d8fb082d383b50fee900c98dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:18 np0005603621 podman[308155]: 2026-01-31 08:15:18.677603772 +0000 UTC m=+0.102795912 container init 5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:18 np0005603621 podman[308155]: 2026-01-31 08:15:18.684099906 +0000 UTC m=+0.109292026 container start 5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:18 np0005603621 podman[308155]: 2026-01-31 08:15:18.687713598 +0000 UTC m=+0.112905728 container attach 5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:15:18 np0005603621 podman[308155]: 2026-01-31 08:15:18.594094155 +0000 UTC m=+0.019286295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:15:18 np0005603621 nova_compute[247399]: 2026-01-31 08:15:18.712 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Successfully updated port: d1a1fd13-040f-49ef-b94f-98bcea71df76 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:15:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:18 np0005603621 nova_compute[247399]: 2026-01-31 08:15:18.816 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:18 np0005603621 nova_compute[247399]: 2026-01-31 08:15:18.858 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:15:18 np0005603621 nova_compute[247399]: 2026-01-31 08:15:18.859 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:15:18 np0005603621 nova_compute[247399]: 2026-01-31 08:15:18.859 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:15:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:15:19 np0005603621 nova_compute[247399]: 2026-01-31 08:15:19.435 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:15:19 np0005603621 goofy_dijkstra[308171]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:15:19 np0005603621 goofy_dijkstra[308171]: --> relative data size: 1.0
Jan 31 03:15:19 np0005603621 goofy_dijkstra[308171]: --> All data devices are unavailable
Jan 31 03:15:19 np0005603621 systemd[1]: libpod-5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99.scope: Deactivated successfully.
Jan 31 03:15:19 np0005603621 podman[308155]: 2026-01-31 08:15:19.512903674 +0000 UTC m=+0.938095804 container died 5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:15:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3ea412f03935cb8e91ba56df934bafe836f3ae3d8fb082d383b50fee900c98dd-merged.mount: Deactivated successfully.
Jan 31 03:15:19 np0005603621 podman[308155]: 2026-01-31 08:15:19.566486603 +0000 UTC m=+0.991678723 container remove 5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:15:19 np0005603621 systemd[1]: libpod-conmon-5f7d73fe590f8951ab0f3c6ef2d610b9c17b61254ad9a40942193c9016ff1b99.scope: Deactivated successfully.
Jan 31 03:15:19 np0005603621 nova_compute[247399]: 2026-01-31 08:15:19.577 247403 DEBUG nova.compute.manager [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-d1a1fd13-040f-49ef-b94f-98bcea71df76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:19 np0005603621 nova_compute[247399]: 2026-01-31 08:15:19.577 247403 DEBUG nova.compute.manager [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-d1a1fd13-040f-49ef-b94f-98bcea71df76. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:15:19 np0005603621 nova_compute[247399]: 2026-01-31 08:15:19.578 247403 DEBUG oslo_concurrency.lockutils [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:15:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:15:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/580353688' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:15:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:19.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:19.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 260 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.044036775 +0000 UTC m=+0.034821043 container create 5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:15:20 np0005603621 systemd[1]: Started libpod-conmon-5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c.scope.
Jan 31 03:15:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.122776692 +0000 UTC m=+0.113560980 container init 5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.027318241 +0000 UTC m=+0.018102539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.127296683 +0000 UTC m=+0.118080951 container start 5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.131079912 +0000 UTC m=+0.121864210 container attach 5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:15:20 np0005603621 friendly_mestorf[308357]: 167 167
Jan 31 03:15:20 np0005603621 systemd[1]: libpod-5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c.scope: Deactivated successfully.
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.132889508 +0000 UTC m=+0.123673776 container died 5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:15:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-09a30011e8f789ff82e54eaa914a6a38d3964263fc69fca829ae47c1709ce443-merged.mount: Deactivated successfully.
Jan 31 03:15:20 np0005603621 podman[308341]: 2026-01-31 08:15:20.177330221 +0000 UTC m=+0.168114489 container remove 5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mestorf, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:15:20 np0005603621 systemd[1]: libpod-conmon-5d5f9865388cfb438999f94bfedfd506b918265d417319be0997663d901c097c.scope: Deactivated successfully.
Jan 31 03:15:20 np0005603621 podman[308380]: 2026-01-31 08:15:20.304544287 +0000 UTC m=+0.048364937 container create e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:15:20 np0005603621 systemd[1]: Started libpod-conmon-e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2.scope.
Jan 31 03:15:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecfd632c4bcb7539cc28200c21e06639be024800253e55fbf30ba284109f83a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecfd632c4bcb7539cc28200c21e06639be024800253e55fbf30ba284109f83a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecfd632c4bcb7539cc28200c21e06639be024800253e55fbf30ba284109f83a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aecfd632c4bcb7539cc28200c21e06639be024800253e55fbf30ba284109f83a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:20 np0005603621 podman[308380]: 2026-01-31 08:15:20.285114638 +0000 UTC m=+0.028935268 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:15:20 np0005603621 podman[308380]: 2026-01-31 08:15:20.393331098 +0000 UTC m=+0.137151728 container init e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:15:20 np0005603621 podman[308380]: 2026-01-31 08:15:20.408422632 +0000 UTC m=+0.152243242 container start e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:15:20 np0005603621 podman[308380]: 2026-01-31 08:15:20.411671214 +0000 UTC m=+0.155491864 container attach e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]: {
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:    "0": [
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:        {
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "devices": [
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "/dev/loop3"
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            ],
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "lv_name": "ceph_lv0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "lv_size": "7511998464",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "name": "ceph_lv0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "tags": {
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.cluster_name": "ceph",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.crush_device_class": "",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.encrypted": "0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.osd_id": "0",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.type": "block",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:                "ceph.vdo": "0"
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            },
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "type": "block",
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:            "vg_name": "ceph_vg0"
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:        }
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]:    ]
Jan 31 03:15:21 np0005603621 suspicious_hertz[308396]: }
Jan 31 03:15:21 np0005603621 systemd[1]: libpod-e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2.scope: Deactivated successfully.
Jan 31 03:15:21 np0005603621 podman[308380]: 2026-01-31 08:15:21.112930915 +0000 UTC m=+0.856751525 container died e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:15:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aecfd632c4bcb7539cc28200c21e06639be024800253e55fbf30ba284109f83a-merged.mount: Deactivated successfully.
Jan 31 03:15:21 np0005603621 podman[308380]: 2026-01-31 08:15:21.597545059 +0000 UTC m=+1.341365669 container remove e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_hertz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:15:21 np0005603621 systemd[1]: libpod-conmon-e2724b21fd34e58f27f996607af2e4fc8f397825a2b17b004591c97cf4f4f1d2.scope: Deactivated successfully.
Jan 31 03:15:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 260 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 03:15:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:21.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.085273041 +0000 UTC m=+0.038112126 container create 1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 03:15:22 np0005603621 systemd[1]: Started libpod-conmon-1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d.scope.
Jan 31 03:15:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.150853905 +0000 UTC m=+0.103693000 container init 1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.156618515 +0000 UTC m=+0.109457590 container start 1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 03:15:22 np0005603621 hungry_snyder[308576]: 167 167
Jan 31 03:15:22 np0005603621 systemd[1]: libpod-1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d.scope: Deactivated successfully.
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.162358055 +0000 UTC m=+0.115197140 container attach 1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.067446482 +0000 UTC m=+0.020285587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:15:22 np0005603621 conmon[308576]: conmon 1d49125db17b7df811b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d.scope/container/memory.events
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.163256214 +0000 UTC m=+0.116095299 container died 1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:15:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f41b7fa2b6fb40d435fe40e82a19d287e589c1bd276447f8f3e70f702538b3d7-merged.mount: Deactivated successfully.
Jan 31 03:15:22 np0005603621 podman[308560]: 2026-01-31 08:15:22.195949538 +0000 UTC m=+0.148788623 container remove 1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:15:22 np0005603621 systemd[1]: libpod-conmon-1d49125db17b7df811b724bfb3426ce4118f34c6d58bd6e971dad6472341a81d.scope: Deactivated successfully.
Jan 31 03:15:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:15:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865782832' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:15:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:15:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2865782832' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:15:22 np0005603621 podman[308598]: 2026-01-31 08:15:22.325356623 +0000 UTC m=+0.044927199 container create 5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:15:22 np0005603621 systemd[1]: Started libpod-conmon-5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6.scope.
Jan 31 03:15:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8741cfc403d4ea8a69267bbe5e6f63b7f14a165a736ff3560f0aab4aa70ae2fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8741cfc403d4ea8a69267bbe5e6f63b7f14a165a736ff3560f0aab4aa70ae2fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8741cfc403d4ea8a69267bbe5e6f63b7f14a165a736ff3560f0aab4aa70ae2fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8741cfc403d4ea8a69267bbe5e6f63b7f14a165a736ff3560f0aab4aa70ae2fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:22 np0005603621 podman[308598]: 2026-01-31 08:15:22.305625724 +0000 UTC m=+0.025196300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:15:22 np0005603621 podman[308598]: 2026-01-31 08:15:22.401360694 +0000 UTC m=+0.120931270 container init 5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:15:22 np0005603621 podman[308598]: 2026-01-31 08:15:22.408920421 +0000 UTC m=+0.128490977 container start 5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:15:22 np0005603621 podman[308598]: 2026-01-31 08:15:22.41433498 +0000 UTC m=+0.133905566 container attach 5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:15:23 np0005603621 nova_compute[247399]: 2026-01-31 08:15:23.086 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]: {
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:        "osd_id": 0,
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:        "type": "bluestore"
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]:    }
Jan 31 03:15:23 np0005603621 upbeat_goodall[308614]: }
Jan 31 03:15:23 np0005603621 systemd[1]: libpod-5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6.scope: Deactivated successfully.
Jan 31 03:15:23 np0005603621 podman[308598]: 2026-01-31 08:15:23.199915155 +0000 UTC m=+0.919485711 container died 5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:15:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8741cfc403d4ea8a69267bbe5e6f63b7f14a165a736ff3560f0aab4aa70ae2fd-merged.mount: Deactivated successfully.
Jan 31 03:15:23 np0005603621 podman[308598]: 2026-01-31 08:15:23.249222289 +0000 UTC m=+0.968792845 container remove 5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_goodall, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:15:23 np0005603621 systemd[1]: libpod-conmon-5b27f0558d6d1712ebc98ef231fb4628558f0018bd972868a37ca5f026eeb1d6.scope: Deactivated successfully.
Jan 31 03:15:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:15:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:15:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:15:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:15:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a5ca9092-ddb6-4c9f-aea3-52a9dec33646 does not exist
Jan 31 03:15:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7742877f-4e6f-4c9e-8269-5715058b051a does not exist
Jan 31 03:15:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 52976255-c70e-45ea-a93f-0c011a449f9e does not exist
Jan 31 03:15:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:15:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:23.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:15:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 260 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Jan 31 03:15:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:23 np0005603621 nova_compute[247399]: 2026-01-31 08:15:23.818 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:15:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:15:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:15:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:25.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:15:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 221 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 31 03:15:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:27.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 209 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.4 MiB/s wr, 67 op/s
Jan 31 03:15:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:28 np0005603621 nova_compute[247399]: 2026-01-31 08:15:28.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:28 np0005603621 nova_compute[247399]: 2026-01-31 08:15:28.821 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:15:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:29.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:15:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 180 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 594 KiB/s wr, 90 op/s
Jan 31 03:15:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:30 np0005603621 nova_compute[247399]: 2026-01-31 08:15:30.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:30 np0005603621 nova_compute[247399]: 2026-01-31 08:15:30.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:15:30 np0005603621 nova_compute[247399]: 2026-01-31 08:15:30.363 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:30.498 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:30.498 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:30.498 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:31 np0005603621 nova_compute[247399]: 2026-01-31 08:15:31.362 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:31.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 180 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 838 KiB/s rd, 14 KiB/s wr, 77 op/s
Jan 31 03:15:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:31.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.062 247403 DEBUG nova.network.neutron [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.423 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.424 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance network_info: |[{"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.424 247403 DEBUG oslo_concurrency.lockutils [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.424 247403 DEBUG nova.network.neutron [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port d1a1fd13-040f-49ef-b94f-98bcea71df76 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.433 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Start _get_guest_xml network_info=[{"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk', 'boot_index': '2'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk', 'boot_index': '3'}, 
'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-
Jan 31 03:15:32 np0005603621 nova_compute[247399]: =0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-284ce110-dc52-4bb8-b8bc-a99864f7d576', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '284ce110-dc52-4bb8-b8bc-a99864f7d576', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'attached_at': '', 'detached_at': '', 'volume_id': '284ce110-dc52-4bb8-b8bc-a99864f7d576', 'serial': '284ce110-dc52-4bb8-b8bc-a99864f7d576'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '1c386108-15cc-4ed2-a90d-edf67d939cb8', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f3cb4822-8f33-4837-8242-54aa49d653b7', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f3cb4822-8f33-4837-8242-54aa49d653b7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'attached_at': '', 'detached_at': '', 'volume_id': 
'f3cb4822-8f33-4837-8242-54aa49d653b7', 'serial': 'f3cb4822-8f33-4837-8242-54aa49d653b7'}, 'device_type': 'disk', 'boot_index': 1, 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'attachment_id': 'd453934a-08bc-4abf-8da8-a0387ae6dcf7', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}, {'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8211a79e-7243-4aef-aa8b-c47974e4d749', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8211a79e-7243-4aef-aa8b-c47974e4d749', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'attached_at': '', 'detached_at': '', 'volume_id': '8211a79e-7243-4aef-aa8b-c47974e4d749', 'serial': '8211a79e-7243-4aef-aa8b-c47974e4d749'}, 'device_type': 'disk', 'boot_index': 2, 'mount_device': '/dev/vdc', 'delete_on_termination': False, 'attachment_id': 'a923619c-0654-4d34-b06f-aa3c0335a84b', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.439 247403 WARNING nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.447 247403 DEBUG nova.virt.libvirt.host [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.448 247403 DEBUG nova.virt.libvirt.host [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.451 247403 DEBUG nova.virt.libvirt.host [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.452 247403 DEBUG nova.virt.libvirt.host [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.453 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.453 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.453 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.454 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.454 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.454 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.455 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.455 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.455 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.455 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.455 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.456 247403 DEBUG nova.virt.hardware [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.482 247403 DEBUG nova.storage.rbd_utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] rbd image 85e16f4c-d977-4032-9cbd-b904f1d789d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.486 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:15:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:15:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452244689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:15:32 np0005603621 rsyslogd[998]: message too long (8192) with configured size 8096, begin of message is: 2026-01-31 08:15:32.433 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 31 03:15:32 np0005603621 nova_compute[247399]: 2026-01-31 08:15:32.897 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.093 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.229 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.229 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.230 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.231 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.231 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.231 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.232 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.232 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.233 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.233 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.234 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.234 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.235 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.235 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.235 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.236 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.236 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.237 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.237 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',
image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.237 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.238 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.239 247403 DEBUG nova.objects.instance [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 85e16f4c-d977-4032-9cbd-b904f1d789d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.306 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.357 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <uuid>85e16f4c-d977-4032-9cbd-b904f1d789d4</uuid>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <name>instance-0000005e</name>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:name>tempest-device-tagging-server-200568798</nova:name>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:15:32</nova:creationTime>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:user uuid="8080000681f449c3a9754c876165d667">tempest-TaggedBootDevicesTest_v242-1562315172-project-member</nova:user>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:project uuid="5a6c60c75300483aa07e13b08923b1a1">tempest-TaggedBootDevicesTest_v242-1562315172</nova:project>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="43de799d-e636-43ba-89c3-cb3a6a2ed888">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.1.1.218" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="6e8e75b8-b681-4fc0-be0b-89a15bda2ac3">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.1.1.145" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="31ce268e-bfe2-4d8b-acd2-5a75e9725d95">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.1.1.39" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="e76773d3-8a3a-451d-a29e-d01474b5f82f">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.1.1.17" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="51338d60-6e7f-4716-b025-60a809240bd3">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.2.2.100" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <nova:port uuid="d1a1fd13-040f-49ef-b94f-98bcea71df76">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.2.2.200" ipVersion="4"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <entry name="serial">85e16f4c-d977-4032-9cbd-b904f1d789d4</entry>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <entry name="uuid">85e16f4c-d977-4032-9cbd-b904f1d789d4</entry>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/85e16f4c-d977-4032-9cbd-b904f1d789d4_disk.config">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-284ce110-dc52-4bb8-b8bc-a99864f7d576">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <serial>284ce110-dc52-4bb8-b8bc-a99864f7d576</serial>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-f3cb4822-8f33-4837-8242-54aa49d653b7">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <serial>f3cb4822-8f33-4837-8242-54aa49d653b7</serial>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-8211a79e-7243-4aef-aa8b-c47974e4d749">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="vdc" bus="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <serial>8211a79e-7243-4aef-aa8b-c47974e4d749</serial>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:77:02:fd"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tap43de799d-e6"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:6e:b4:87"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tapfd1f006d-b1"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:e2:af:a0"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tap6e8e75b8-b6"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:f4:7b:5a"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tap31ce268e-bf"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:76:b2:52"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tape76773d3-8a"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:63:b0:67"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tap51338d60-6e"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:d5:7d:39"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <target dev="tapd1a1fd13-04"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/console.log" append="off"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:15:33 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:15:33 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:15:33 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:15:33 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.358 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.358 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.359 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.359 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.359 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.360 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.360 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.360 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.360 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.361 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.361 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.361 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.361 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.362 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.362 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.362 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.362 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.363 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.363 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.363 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.363 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.364 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.364 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.364 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.364 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Preparing to wait for external event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.365 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.365 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.365 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.366 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.366 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.367 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.368 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.368 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.368 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.369 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.373 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.373 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43de799d-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.373 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43de799d-e6, col_values=(('external_ids', {'iface-id': '43de799d-e636-43ba-89c3-cb3a6a2ed888', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:02:fd', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.375 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.3765] manager: (tap43de799d-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.381 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.382 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6')#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.383 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.384 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.384 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.385 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.385 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.385 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.386 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.388 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.388 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd1f006d-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.388 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd1f006d-b1, col_values=(('external_ids', {'iface-id': 'fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:b4:87', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.389 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.3906] manager: (tapfd1f006d-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.394 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1')#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.395 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.395 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.396 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.396 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.397 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.397 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.399 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.399 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e8e75b8-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.399 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e8e75b8-b6, col_values=(('external_ids', {'iface-id': '6e8e75b8-b681-4fc0-be0b-89a15bda2ac3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:af:a0', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.400 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.4014] manager: (tap6e8e75b8-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.403 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.406 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.407 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6')#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.407 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.408 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.408 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.409 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.409 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.409 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.410 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.411 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.411 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31ce268e-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.412 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31ce268e-bf, col_values=(('external_ids', {'iface-id': '31ce268e-bfe2-4d8b-acd2-5a75e9725d95', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:7b:5a', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.413 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.4138] manager: (tap31ce268e-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.419 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.420 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf')#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.421 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.421 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.422 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.422 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.422 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.422 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.423 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.424 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape76773d3-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.425 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape76773d3-8a, col_values=(('external_ids', {'iface-id': 'e76773d3-8a3a-451d-a29e-d01474b5f82f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:b2:52', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.426 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.4269] manager: (tape76773d3-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.436 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.437 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a')#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.438 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.438 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.438 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.439 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.439 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.439 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.439 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.441 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.441 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51338d60-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.442 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51338d60-6e, col_values=(('external_ids', {'iface-id': '51338d60-6e7f-4716-b025-60a809240bd3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:b0:67', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.443 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.4438] manager: (tap51338d60-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/160)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.445 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.454 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.455 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e')#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.456 247403 DEBUG nova.virt.libvirt.vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model
='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:14:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.456 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.456 247403 DEBUG nova.network.os_vif_util [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.457 247403 DEBUG os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.457 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.457 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.457 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.459 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.459 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1a1fd13-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.460 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1a1fd13-04, col_values=(('external_ids', {'iface-id': 'd1a1fd13-040f-49ef-b94f-98bcea71df76', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:7d:39', 'vm-uuid': '85e16f4c-d977-4032-9cbd-b904f1d789d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.461 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 NetworkManager[49013]: <info>  [1769847333.4618] manager: (tapd1a1fd13-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.463 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.473 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.475 247403 INFO os_vif [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04')#033[00m
Jan 31 03:15:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 180 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 114 op/s
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.822 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.823 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.823 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] No VIF found with MAC fa:16:3e:77:02:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.823 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] No VIF found with MAC fa:16:3e:76:b2:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:15:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:33.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.824 247403 INFO nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Using config drive#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.850 247403 DEBUG nova.storage.rbd_utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] rbd image 85e16f4c-d977-4032-9cbd-b904f1d789d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:15:33 np0005603621 nova_compute[247399]: 2026-01-31 08:15:33.857 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.270 247403 INFO nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Creating config drive at /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/disk.config#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.275 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprmslc9ul execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.398 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprmslc9ul" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.426 247403 DEBUG nova.storage.rbd_utils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] rbd image 85e16f4c-d977-4032-9cbd-b904f1d789d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.429 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/disk.config 85e16f4c-d977-4032-9cbd-b904f1d789d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.796 247403 DEBUG nova.network.neutron [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updated VIF entry in instance network info cache for port d1a1fd13-040f-49ef-b94f-98bcea71df76. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.797 247403 DEBUG nova.network.neutron [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:34 np0005603621 nova_compute[247399]: 2026-01-31 08:15:34.898 247403 DEBUG oslo_concurrency.lockutils [req-b4339b10-8652-44c3-969e-d997c196a66b req-79d6401e-7b9a-4a6a-9ec8-f0677d8f95b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.128 247403 DEBUG oslo_concurrency.processutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/disk.config 85e16f4c-d977-4032-9cbd-b904f1d789d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.129 247403 INFO nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Deleting local config drive /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4/disk.config because it was imported into RBD.#033[00m
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.1692] manager: (tap43de799d-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Jan 31 03:15:35 np0005603621 kernel: tap43de799d-e6: entered promiscuous mode
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.1811] manager: (tapfd1f006d-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00311|binding|INFO|Claiming lport 43de799d-e636-43ba-89c3-cb3a6a2ed888 for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00312|binding|INFO|43de799d-e636-43ba-89c3-cb3a6a2ed888: Claiming fa:16:3e:77:02:fd 10.100.0.11
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 kernel: tapfd1f006d-b1: entered promiscuous mode
Jan 31 03:15:35 np0005603621 systemd-udevd[308904]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:15:35 np0005603621 systemd-udevd[308905]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.1989] manager: (tap6e8e75b8-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.198 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00313|if_status|INFO|Dropped 2 log messages in last 46 seconds (most recently, 46 seconds ago) due to excessive rate
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00314|if_status|INFO|Not updating pb chassis for fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b now as sb is readonly
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.201 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 systemd-udevd[308912]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.202 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:35 np0005603621 kernel: tap6e8e75b8-b6: entered promiscuous mode
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2075] device (tapfd1f006d-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2080] device (tapfd1f006d-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2109] device (tap43de799d-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2115] device (tap43de799d-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.207 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:02:fd 10.100.0.11'], port_security=['fa:16:3e:77:02:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af8b010-2cf7-4570-899f-0230a64afdc7, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=43de799d-e636-43ba-89c3-cb3a6a2ed888) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.208 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 43de799d-e636-43ba-89c3-cb3a6a2ed888 in datapath 370ef3ad-fbaf-4df3-ad34-de7587567f0e bound to our chassis#033[00m
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00315|binding|INFO|Claiming lport fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00316|binding|INFO|fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b: Claiming fa:16:3e:6e:b4:87 10.1.1.218
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00317|binding|INFO|Setting lport 43de799d-e636-43ba-89c3-cb3a6a2ed888 ovn-installed in OVS
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00318|binding|INFO|Setting lport 43de799d-e636-43ba-89c3-cb3a6a2ed888 up in Southbound
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.214 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 370ef3ad-fbaf-4df3-ad34-de7587567f0e#033[00m
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2155] manager: (tap31ce268e-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.216 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2177] device (tap6e8e75b8-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2183] device (tap6e8e75b8-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.224 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbb0f81-fc66-4b95-b90d-6b93faea661d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.225 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap370ef3ad-f1 in ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.228 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap370ef3ad-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.229 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec29e6b-cc77-406a-ac43-b74e7f4ef517]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.230 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6a634559-25c8-4e92-a753-86220fd697b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2313] manager: (tape76773d3-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.234 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:b4:87 10.1.1.218'], port_security=['fa:16:3e:6e:b4:87 10.1.1.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-486336001', 'neutron:cidrs': '10.1.1.218/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-486336001', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36d4c8c2-788b-4fc8-9051-a66dc5e53167', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.240 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[3971370f-65f1-4cba-a4cf-c0d547006267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2466] manager: (tap51338d60-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2581] manager: (tapd1a1fd13-04): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.263 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[93fd0b36-4aaa-4e39-b1dc-72e4d64a1c6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.283 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0239ac4f-cbc9-4fef-8f96-7db845346c53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 systemd-machined[212769]: New machine qemu-42-instance-0000005e.
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.2907] manager: (tap370ef3ad-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/169)
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.290 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e185759d-6fec-4cf8-9d90-fa86c8f385e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 systemd[1]: Started Virtual Machine qemu-42-instance-0000005e.
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3082] device (tap31ce268e-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 kernel: tap31ce268e-bf: entered promiscuous mode
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3093] device (tap31ce268e-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3100] device (tapd1a1fd13-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 kernel: tapd1a1fd13-04: entered promiscuous mode
Jan 31 03:15:35 np0005603621 kernel: tap51338d60-6e: entered promiscuous mode
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3112] device (tap51338d60-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 kernel: tape76773d3-8a: entered promiscuous mode
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3120] device (tapd1a1fd13-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3126] device (tap51338d60-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3130] device (tape76773d3-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3139] device (tape76773d3-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.315 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[44c7fee3-09ed-40d6-bb7a-225c3944b128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.319 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5e83dd-6385-473e-9e7a-960eebd1bb8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00319|binding|INFO|Claiming lport 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00320|binding|INFO|31ce268e-bfe2-4d8b-acd2-5a75e9725d95: Claiming fa:16:3e:f4:7b:5a 10.1.1.39
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00321|binding|INFO|Claiming lport e76773d3-8a3a-451d-a29e-d01474b5f82f for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00322|binding|INFO|e76773d3-8a3a-451d-a29e-d01474b5f82f: Claiming fa:16:3e:76:b2:52 10.1.1.17
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00323|binding|INFO|Claiming lport 51338d60-6e7f-4716-b025-60a809240bd3 for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00324|binding|INFO|51338d60-6e7f-4716-b025-60a809240bd3: Claiming fa:16:3e:63:b0:67 10.2.2.100
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00325|binding|INFO|Claiming lport 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00326|binding|INFO|6e8e75b8-b681-4fc0-be0b-89a15bda2ac3: Claiming fa:16:3e:e2:af:a0 10.1.1.145
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00327|binding|INFO|Claiming lport d1a1fd13-040f-49ef-b94f-98bcea71df76 for this chassis.
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00328|binding|INFO|d1a1fd13-040f-49ef-b94f-98bcea71df76: Claiming fa:16:3e:d5:7d:39 10.2.2.200
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00329|binding|INFO|Setting lport fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b ovn-installed in OVS
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.3366] device (tap370ef3ad-f0): carrier: link connected
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.341 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fe1183f0-59c5-4749-8df1-d1571e764031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.351 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:7b:5a 10.1.1.39'], port_security=['fa:16:3e:f4:7b:5a 10.1.1.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.39/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=31ce268e-bfe2-4d8b-acd2-5a75e9725d95) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00330|binding|INFO|Setting lport fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b up in Southbound
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.353 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b0:67 10.2.2.100'], port_security=['fa:16:3e:63:b0:67 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-411aa011-d813-4343-b297-43dfd39c905e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b392687b-637d-40d1-bb6d-50c20007db06, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=51338d60-6e7f-4716-b025-60a809240bd3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.354 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b2:52 10.1.1.17'], port_security=['fa:16:3e:76:b2:52 10.1.1.17'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.17/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e76773d3-8a3a-451d-a29e-d01474b5f82f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.355 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:af:a0 10.1.1.145'], port_security=['fa:16:3e:e2:af:a0 10.1.1.145'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1892014503', 'neutron:cidrs': '10.1.1.145/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1892014503', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36d4c8c2-788b-4fc8-9051-a66dc5e53167', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.357 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:7d:39 10.2.2.200'], port_security=['fa:16:3e:d5:7d:39 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-411aa011-d813-4343-b297-43dfd39c905e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b392687b-637d-40d1-bb6d-50c20007db06, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d1a1fd13-040f-49ef-b94f-98bcea71df76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.351 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd2d49a-f839-4b0a-90bf-0bd531b04c5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap370ef3ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:ed:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664085, 'reachable_time': 34153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308962, 'error': None, 'target': 'ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00331|binding|INFO|Setting lport 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 ovn-installed in OVS
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00332|binding|INFO|Setting lport 51338d60-6e7f-4716-b025-60a809240bd3 ovn-installed in OVS
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00333|binding|INFO|Setting lport e76773d3-8a3a-451d-a29e-d01474b5f82f ovn-installed in OVS
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00334|binding|INFO|Setting lport d1a1fd13-040f-49ef-b94f-98bcea71df76 ovn-installed in OVS
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00335|binding|INFO|Setting lport 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 ovn-installed in OVS
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00336|binding|INFO|Setting lport 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 up in Southbound
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00337|binding|INFO|Setting lport 51338d60-6e7f-4716-b025-60a809240bd3 up in Southbound
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00338|binding|INFO|Setting lport e76773d3-8a3a-451d-a29e-d01474b5f82f up in Southbound
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00339|binding|INFO|Setting lport d1a1fd13-040f-49ef-b94f-98bcea71df76 up in Southbound
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00340|binding|INFO|Setting lport 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 up in Southbound
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.415 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.416 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[97f32b57-4c7e-4bde-ac23-0184b33fe737]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:ed0b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664085, 'tstamp': 664085}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308967, 'error': None, 'target': 'ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.429 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[45f37272-d5a1-40de-999d-0f0e77e13f69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap370ef3ad-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:ed:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664085, 'reachable_time': 34153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308969, 'error': None, 'target': 'ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.449 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b5700d0f-4f12-4118-95b3-617ccc6fd6cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.488 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ae085e0f-0785-4baa-84cd-ef237ba4b6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.490 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap370ef3ad-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.490 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.490 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap370ef3ad-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.492 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 kernel: tap370ef3ad-f0: entered promiscuous mode
Jan 31 03:15:35 np0005603621 NetworkManager[49013]: <info>  [1769847335.4926] manager: (tap370ef3ad-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.494 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.495 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap370ef3ad-f0, col_values=(('external_ids', {'iface-id': '53bb0b66-6335-4be3-bfd6-e7890c25772d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.496 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:35Z|00341|binding|INFO|Releasing lport 53bb0b66-6335-4be3-bfd6-e7890c25772d from this chassis (sb_readonly=0)
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.497 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.497 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/370ef3ad-fbaf-4df3-ad34-de7587567f0e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/370ef3ad-fbaf-4df3-ad34-de7587567f0e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.498 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a7548004-520c-4ff3-9124-36402a922637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.499 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-370ef3ad-fbaf-4df3-ad34-de7587567f0e
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/370ef3ad-fbaf-4df3-ad34-de7587567f0e.pid.haproxy
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 370ef3ad-fbaf-4df3-ad34-de7587567f0e
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:15:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:35.500 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'env', 'PROCESS_TAG=haproxy-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/370ef3ad-fbaf-4df3-ad34-de7587567f0e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.501 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 180 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 103 op/s
Jan 31 03:15:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:35.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:35 np0005603621 podman[309071]: 2026-01-31 08:15:35.890991638 +0000 UTC m=+0.101644765 container create be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:15:35 np0005603621 podman[309071]: 2026-01-31 08:15:35.81157638 +0000 UTC m=+0.022229527 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:15:35 np0005603621 systemd[1]: Started libpod-conmon-be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01.scope.
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.950 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847335.9495523, 85e16f4c-d977-4032-9cbd-b904f1d789d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:15:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:35 np0005603621 nova_compute[247399]: 2026-01-31 08:15:35.952 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] VM Started (Lifecycle Event)#033[00m
Jan 31 03:15:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6d490567d2a7b101afccf046641b95b6a5c037a15d5572e794c37d8f36cb28b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:36 np0005603621 podman[309071]: 2026-01-31 08:15:36.048613427 +0000 UTC m=+0.259266584 container init be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:15:36 np0005603621 podman[309071]: 2026-01-31 08:15:36.052723616 +0000 UTC m=+0.263376743 container start be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:15:36 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [NOTICE]   (309105) : New worker (309107) forked
Jan 31 03:15:36 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [NOTICE]   (309105) : Loading success.
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.120 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.123 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.129 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[08cd1bd7-cba3-4586-bd15-ceb31466559f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.130 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e97a400-c1 in ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.131 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e97a400-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.131 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff4de6a-cd5b-40cd-83a1-f7e88b7235bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.132 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[40851709-e0c5-43d7-aa0e-b7186eb02121]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.139 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[ed142ea1-d85c-421f-a1e6-aa73a3379c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.148 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1d150306-3233-473a-b35e-cadbcfe0efb4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.178 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8298a0-353a-4a02-8097-9747524a411b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 NetworkManager[49013]: <info>  [1769847336.1873] manager: (tap3e97a400-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/171)
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.186 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b09e8963-2bf6-4036-816c-9cda67291158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 systemd-udevd[308946]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.207 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.211 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847335.9508364, 85e16f4c-d977-4032-9cbd-b904f1d789d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.212 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.214 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e1e82a-a32f-47ac-9171-346136512247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.216 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[476506e2-927a-420b-8a0c-2c786ca29bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 NetworkManager[49013]: <info>  [1769847336.2363] device (tap3e97a400-c0): carrier: link connected
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.241 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8cad9b-d784-4a1b-85b3-ba9018519216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.249 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.253 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.260 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d8680aff-342e-43a0-adda-2644cca23806]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e97a400-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:70:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664175, 'reachable_time': 44715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309126, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.280 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c8318a45-8871-45b6-998c-c21c50bd7407]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:708c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664175, 'tstamp': 664175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309127, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.288 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.299 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e0aea033-c815-49dc-900a-bd57d18a5638]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e97a400-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:70:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664175, 'reachable_time': 44715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309128, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.330 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ff43401f-ca52-49db-b02d-f9cb11e1db32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.387 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[63fa4a6a-1ec1-40a5-a0da-bee7b2273766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.389 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e97a400-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.389 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.390 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e97a400-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:36 np0005603621 NetworkManager[49013]: <info>  [1769847336.3930] manager: (tap3e97a400-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:36 np0005603621 kernel: tap3e97a400-c0: entered promiscuous mode
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.395 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.397 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e97a400-c0, col_values=(('external_ids', {'iface-id': 'bda09691-75c7-4a4d-82f5-6be74a2db008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:36Z|00342|binding|INFO|Releasing lport bda09691-75c7-4a4d-82f5-6be74a2db008 from this chassis (sb_readonly=0)
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.399 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.401 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e97a400-ca43-4c41-8017-9770a9ed4c8e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e97a400-ca43-4c41-8017-9770a9ed4c8e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.402 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[039210a7-bd14-4995-9a13-516cbfa65d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.403 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-3e97a400-ca43-4c41-8017-9770a9ed4c8e
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/3e97a400-ca43-4c41-8017-9770a9ed4c8e.pid.haproxy
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 3e97a400-ca43-4c41-8017-9770a9ed4c8e
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.403 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'env', 'PROCESS_TAG=haproxy-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e97a400-ca43-4c41-8017-9770a9ed4c8e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:15:36 np0005603621 nova_compute[247399]: 2026-01-31 08:15:36.406 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:36 np0005603621 podman[309160]: 2026-01-31 08:15:36.7649369 +0000 UTC m=+0.069531779 container create 779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:15:36 np0005603621 systemd[1]: Started libpod-conmon-779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba.scope.
Jan 31 03:15:36 np0005603621 podman[309160]: 2026-01-31 08:15:36.720734425 +0000 UTC m=+0.025329334 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:15:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/130d97de6e2de4d6d6ca859c8180804d0bf75067a58a01da95dac8586508986a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:36 np0005603621 podman[309160]: 2026-01-31 08:15:36.847700244 +0000 UTC m=+0.152295123 container init 779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:15:36 np0005603621 podman[309160]: 2026-01-31 08:15:36.853094753 +0000 UTC m=+0.157689632 container start 779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:15:36 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [NOTICE]   (309179) : New worker (309181) forked
Jan 31 03:15:36 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [NOTICE]   (309179) : Loading success.
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.944 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.947 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.961 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f79d2874-7df8-4866-98bb-38b2538cfc0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.980 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6a587b-2791-4a8b-9562-4b334cee1ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:36.984 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e74f9eae-41a5-409b-bbb3-544cc8587fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.010 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[cb307b08-df51-4f69-855d-eaa46ad8bba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.024 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bf19f967-58fc-469f-af18-1b037696e42e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e97a400-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:70:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664175, 'reachable_time': 44715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309195, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.037 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd4b1a9-3ec7-4ed0-85e8-222a866d80ef]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap3e97a400-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664187, 'tstamp': 664187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309196, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e97a400-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664189, 'tstamp': 664189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309196, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.039 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e97a400-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.043 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e97a400-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.043 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.044 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e97a400-c0, col_values=(('external_ids', {'iface-id': 'bda09691-75c7-4a4d-82f5-6be74a2db008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.044 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.045 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 51338d60-6e7f-4716-b025-60a809240bd3 in datapath 411aa011-d813-4343-b297-43dfd39c905e unbound from our chassis#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.047 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 411aa011-d813-4343-b297-43dfd39c905e#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.054 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c414f125-4274-4b71-8488-ff99564269ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.055 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap411aa011-d1 in ovnmeta-411aa011-d813-4343-b297-43dfd39c905e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.056 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap411aa011-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.057 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5349df0c-8118-4b5d-af6c-1a741f11d38e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.056 247403 DEBUG nova.compute.manager [req-8c36e790-8990-4a1d-9e52-98bb32da5c67 req-dab3a032-14cf-4cbd-946e-8587230fc038 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.057 247403 DEBUG oslo_concurrency.lockutils [req-8c36e790-8990-4a1d-9e52-98bb32da5c67 req-dab3a032-14cf-4cbd-946e-8587230fc038 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.057 247403 DEBUG oslo_concurrency.lockutils [req-8c36e790-8990-4a1d-9e52-98bb32da5c67 req-dab3a032-14cf-4cbd-946e-8587230fc038 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.057 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[72498494-7213-4b7a-9629-7e9338641d80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.058 247403 DEBUG oslo_concurrency.lockutils [req-8c36e790-8990-4a1d-9e52-98bb32da5c67 req-dab3a032-14cf-4cbd-946e-8587230fc038 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.058 247403 DEBUG nova.compute.manager [req-8c36e790-8990-4a1d-9e52-98bb32da5c67 req-dab3a032-14cf-4cbd-946e-8587230fc038 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.065 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d62ec6c3-5ffc-4713-8d28-c0c6b3123fdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.074 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[63caaf73-839a-41dd-93ea-0e7b63ecf75f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.103 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b82ff0-78c8-4408-a9b9-ebb92d47f992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 NetworkManager[49013]: <info>  [1769847337.1092] manager: (tap411aa011-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.109 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b44b0600-4959-48be-91af-42015df06887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.144 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7acfd9bd-aae9-4521-94ef-59a4ca671bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.146 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad90cea-9a90-44e9-b8d9-7fb15802172d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 NetworkManager[49013]: <info>  [1769847337.1702] device (tap411aa011-d0): carrier: link connected
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.176 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c67eddd8-8e13-4e66-9506-a582d458be7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.190 247403 DEBUG nova.compute.manager [req-e8b854f6-00a9-4be5-b5a9-5e9c34ec4460 req-44d4a2f7-5fb4-4856-a8c1-440583240510 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.191 247403 DEBUG oslo_concurrency.lockutils [req-e8b854f6-00a9-4be5-b5a9-5e9c34ec4460 req-44d4a2f7-5fb4-4856-a8c1-440583240510 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.191 247403 DEBUG oslo_concurrency.lockutils [req-e8b854f6-00a9-4be5-b5a9-5e9c34ec4460 req-44d4a2f7-5fb4-4856-a8c1-440583240510 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.191 247403 DEBUG oslo_concurrency.lockutils [req-e8b854f6-00a9-4be5-b5a9-5e9c34ec4460 req-44d4a2f7-5fb4-4856-a8c1-440583240510 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.191 247403 DEBUG nova.compute.manager [req-e8b854f6-00a9-4be5-b5a9-5e9c34ec4460 req-44d4a2f7-5fb4-4856-a8c1-440583240510 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.192 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8a37b7-d549-4b64-888e-1ff62f96c627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap411aa011-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:98:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664268, 'reachable_time': 31901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309207, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.209 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[38e2d00f-266d-496b-950f-8ef3c5d5fc49]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:98e7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664268, 'tstamp': 664268}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309208, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.222 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0a99cc-82da-40ad-ad12-234f92f2429d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap411aa011-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:98:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664268, 'reachable_time': 31901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309209, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.247 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[558cbfe3-5310-4da1-b969-15cc68710eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.288 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b25b4660-a355-424d-8c79-4221aee2bfbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.290 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap411aa011-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.290 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.291 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap411aa011-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:37 np0005603621 NetworkManager[49013]: <info>  [1769847337.2938] manager: (tap411aa011-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Jan 31 03:15:37 np0005603621 kernel: tap411aa011-d0: entered promiscuous mode
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.295 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.296 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap411aa011-d0, col_values=(('external_ids', {'iface-id': '27c9e779-94f2-4fd2-8a5c-bf488aa6eb99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.297 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:37Z|00343|binding|INFO|Releasing lport 27c9e779-94f2-4fd2-8a5c-bf488aa6eb99 from this chassis (sb_readonly=0)
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.299 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/411aa011-d813-4343-b297-43dfd39c905e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/411aa011-d813-4343-b297-43dfd39c905e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.300 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8be5ba83-c489-48e2-b6bf-4da7c16112a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.300 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-411aa011-d813-4343-b297-43dfd39c905e
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/411aa011-d813-4343-b297-43dfd39c905e.pid.haproxy
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 411aa011-d813-4343-b297-43dfd39c905e
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.301 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'env', 'PROCESS_TAG=haproxy-411aa011-d813-4343-b297-43dfd39c905e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/411aa011-d813-4343-b297-43dfd39c905e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:15:37 np0005603621 nova_compute[247399]: 2026-01-31 08:15:37.303 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:37 np0005603621 podman[309241]: 2026-01-31 08:15:37.588809964 +0000 UTC m=+0.020308498 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:15:37 np0005603621 podman[309241]: 2026-01-31 08:15:37.694417073 +0000 UTC m=+0.125915577 container create 8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:15:37 np0005603621 systemd[1]: Started libpod-conmon-8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2.scope.
Jan 31 03:15:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:15:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e09ee6ac7038831c5ccddc376356d03d25824ff3902b8e247cd0382613d431/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:15:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:37 np0005603621 podman[309241]: 2026-01-31 08:15:37.803001515 +0000 UTC m=+0.234500039 container init 8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:15:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:37 np0005603621 podman[309241]: 2026-01-31 08:15:37.808501318 +0000 UTC m=+0.239999822 container start 8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:15:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 181 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 85 op/s
Jan 31 03:15:37 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [NOTICE]   (309261) : New worker (309263) forked
Jan 31 03:15:37 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [NOTICE]   (309261) : Loading success.
Jan 31 03:15:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:37.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.899 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e76773d3-8a3a-451d-a29e-d01474b5f82f in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.904 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.918 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6920fa4a-6032-4bfd-ae2b-c58887ea1caf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.949 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ddeb1da6-6a0d-4bb2-b7ea-219028a2f1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.953 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[04b9b793-e8f8-4287-90f4-523ca085eecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:37.983 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[657ac0db-a888-4add-a010-c67296800232]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.001 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4e526a4a-4c5b-417a-a8f2-1b54f9b42d27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e97a400-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:70:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664175, 'reachable_time': 44715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309277, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.019 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f7707949-d78c-4e7e-912a-84d920cdf2d2]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap3e97a400-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664187, 'tstamp': 664187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309278, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e97a400-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664189, 'tstamp': 664189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309278, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.021 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e97a400-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.023 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.025 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.025 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e97a400-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.026 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.026 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e97a400-c0, col_values=(('external_ids', {'iface-id': 'bda09691-75c7-4a4d-82f5-6be74a2db008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.027 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.028 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.030 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.045 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e7af3fee-4faa-4b94-9215-17b6d02c7e8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.075 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0dfd50-9dbf-405b-a408-47e25152c780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.079 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[17f2ecc1-ad91-4ac3-b9b7-f3dc3e9ac1c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.106 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[298e117e-475e-4a0e-83b8-8c8e6001fe68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.125 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[93d3d97f-2a22-47c5-9c29-892a919416fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e97a400-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:70:8c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664175, 'reachable_time': 44715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309284, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.137 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c844c9e2-cf74-42f9-a3f3-c60d0bc79ef9]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.1.2'], ['IFA_LOCAL', '10.1.1.2'], ['IFA_BROADCAST', '10.1.1.255'], ['IFA_LABEL', 'tap3e97a400-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664187, 'tstamp': 664187}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309285, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3e97a400-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664189, 'tstamp': 664189}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309285, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.139 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e97a400-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.141 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e97a400-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.142 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.142 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e97a400-c0, col_values=(('external_ids', {'iface-id': 'bda09691-75c7-4a4d-82f5-6be74a2db008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.142 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.143 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d1a1fd13-040f-49ef-b94f-98bcea71df76 in datapath 411aa011-d813-4343-b297-43dfd39c905e unbound from our chassis
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.145 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 411aa011-d813-4343-b297-43dfd39c905e
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.156 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[06735ab7-38f8-4db0-8db5-0078e0decf3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.180 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[26130ceb-24de-4c74-93f9-8cf2562f65a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.183 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[32e115f5-f1f2-438e-9879-32b2d3bc4b8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.206 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e009ae77-1d10-49be-9511-ee7d0ac2df97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.220 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6deb5b-e80c-4e33-bc7d-8ec5f8ae87c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap411aa011-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:98:e7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664268, 'reachable_time': 31901, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309291, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.227 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.227 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.227 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.231 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a59d4238-76ab-4e56-bff9-025a0466c836]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.2.2.2'], ['IFA_LOCAL', '10.2.2.2'], ['IFA_BROADCAST', '10.2.2.255'], ['IFA_LABEL', 'tap411aa011-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664278, 'tstamp': 664278}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309292, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap411aa011-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664279, 'tstamp': 664279}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309292, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.233 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap411aa011-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.237 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.239 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap411aa011-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.239 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.240 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap411aa011-d0, col_values=(('external_ids', {'iface-id': '27c9e779-94f2-4fd2-8a5c-bf488aa6eb99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:15:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:15:38.240 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.461 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:15:38
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'vms', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:15:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:15:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:38 np0005603621 nova_compute[247399]: 2026-01-31 08:15:38.824 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.202 247403 DEBUG nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.202 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.202 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.203 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.203 247403 DEBUG nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No event matching network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 in dict_keys([('network-vif-plugged', 'fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b'), ('network-vif-plugged', '6e8e75b8-b681-4fc0-be0b-89a15bda2ac3'), ('network-vif-plugged', '31ce268e-bfe2-4d8b-acd2-5a75e9725d95'), ('network-vif-plugged', 'e76773d3-8a3a-451d-a29e-d01474b5f82f'), ('network-vif-plugged', '51338d60-6e7f-4716-b025-60a809240bd3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.203 247403 WARNING nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 for instance with vm_state building and task_state spawning.
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.204 247403 DEBUG nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.204 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.204 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.204 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.205 247403 DEBUG nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.205 247403 DEBUG nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.205 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.205 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.206 247403 DEBUG oslo_concurrency.lockutils [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.206 247403 DEBUG nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No event matching network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 in dict_keys([('network-vif-plugged', 'fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b'), ('network-vif-plugged', '6e8e75b8-b681-4fc0-be0b-89a15bda2ac3'), ('network-vif-plugged', 'e76773d3-8a3a-451d-a29e-d01474b5f82f'), ('network-vif-plugged', '51338d60-6e7f-4716-b025-60a809240bd3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.206 247403 WARNING nova.compute.manager [req-837365c2-7d44-454f-aa56-f5b02c54970f req-f85af098-fe2c-47ed-b99d-a8f624dce7fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 for instance with vm_state building and task_state spawning.
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.314 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.314 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.315 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.315 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.315 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No event matching network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 in dict_keys([('network-vif-plugged', 'fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b'), ('network-vif-plugged', '6e8e75b8-b681-4fc0-be0b-89a15bda2ac3'), ('network-vif-plugged', 'e76773d3-8a3a-451d-a29e-d01474b5f82f'), ('network-vif-plugged', '51338d60-6e7f-4716-b025-60a809240bd3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.316 247403 WARNING nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 for instance with vm_state building and task_state spawning.
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.316 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.316 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.317 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.317 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.317 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.318 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.318 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.318 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.318 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.319 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No event matching network-vif-plugged-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b in dict_keys([('network-vif-plugged', '6e8e75b8-b681-4fc0-be0b-89a15bda2ac3'), ('network-vif-plugged', 'e76773d3-8a3a-451d-a29e-d01474b5f82f'), ('network-vif-plugged', '51338d60-6e7f-4716-b025-60a809240bd3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.319 247403 WARNING nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b for instance with vm_state building and task_state spawning.
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.319 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.319 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.320 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.320 247403 DEBUG oslo_concurrency.lockutils [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:39 np0005603621 nova_compute[247399]: 2026-01-31 08:15:39.320 247403 DEBUG nova.compute.manager [req-03b9f364-37b9-4c30-aa32-ce78dc76eed2 req-1393fe06-7c47-4e13-9593-8b1592d0bae3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:15:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 168 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 98 op/s
Jan 31 03:15:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:15:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:39.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.258 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.260 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.260 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:15:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:15:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1396362008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.680 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.994 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.994 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.994 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:15:40 np0005603621 nova_compute[247399]: 2026-01-31 08:15:40.995 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000005e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.191 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.192 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4356MB free_disk=20.97223663330078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.192 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.192 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.451 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.452 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.452 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.453 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.453 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No event matching network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 in dict_keys([('network-vif-plugged', 'e76773d3-8a3a-451d-a29e-d01474b5f82f'), ('network-vif-plugged', '51338d60-6e7f-4716-b025-60a809240bd3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.454 247403 WARNING nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 for instance with vm_state building and task_state spawning.
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.454 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.454 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.455 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.455 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.456 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.456 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.457 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.457 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.458 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.458 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No event matching network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f in dict_keys([('network-vif-plugged', '51338d60-6e7f-4716-b025-60a809240bd3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.459 247403 WARNING nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f for instance with vm_state building and task_state spawning.
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.459 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.459 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.460 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.461 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.461 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Processing event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.462 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.462 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.462 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.463 247403 DEBUG oslo_concurrency.lockutils [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.463 247403 DEBUG nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.464 247403 WARNING nova.compute.manager [req-09191d8b-6533-445a-81bd-1c233b3f884f req-5ef3f357-e12d-48e2-89c8-7720d5ed9512 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 for instance with vm_state building and task_state spawning.
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.466 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.475 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847341.4742956, 85e16f4c-d977-4032-9cbd-b904f1d789d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.476 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] VM Resumed (Lifecycle Event)
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.478 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.484 247403 INFO nova.virt.libvirt.driver [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance spawned successfully.
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.487 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.511 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.512 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.512 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.546 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.550 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.573 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.638 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.638 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.639 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.639 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.639 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.640 247403 DEBUG nova.virt.libvirt.driver [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.690 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:15:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:41.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 168 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 16 KiB/s wr, 63 op/s
Jan 31 03:15:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:41.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:15:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772581038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.959 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:15:41 np0005603621 nova_compute[247399]: 2026-01-31 08:15:41.962 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.467 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.530 247403 INFO nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Took 59.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.531 247403 DEBUG nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.618 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.619 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.619 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.619 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.686 247403 INFO nova.compute.manager [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Took 67.18 seconds to build instance.#033[00m
Jan 31 03:15:42 np0005603621 nova_compute[247399]: 2026-01-31 08:15:42.792 247403 DEBUG oslo_concurrency.lockutils [None req-8a429518-04f0-4869-bdb2-7bdfe081cc88 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 67.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:15:43 np0005603621 nova_compute[247399]: 2026-01-31 08:15:43.463 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:43 np0005603621 nova_compute[247399]: 2026-01-31 08:15:43.695 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:43.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 134 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 17 KiB/s wr, 95 op/s
Jan 31 03:15:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:43 np0005603621 nova_compute[247399]: 2026-01-31 08:15:43.827 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:43.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:45 np0005603621 podman[309392]: 2026-01-31 08:15:45.503426414 +0000 UTC m=+0.054775198 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:15:45 np0005603621 podman[309393]: 2026-01-31 08:15:45.555595508 +0000 UTC m=+0.106954202 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:15:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:45.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 134 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 17 KiB/s wr, 81 op/s
Jan 31 03:15:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:47 np0005603621 nova_compute[247399]: 2026-01-31 08:15:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:15:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:15:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:15:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 134 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 17 KiB/s wr, 94 op/s
Jan 31 03:15:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:48 np0005603621 nova_compute[247399]: 2026-01-31 08:15:48.464 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:48 np0005603621 nova_compute[247399]: 2026-01-31 08:15:48.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.724176437744277e-06 of space, bias 1.0, pg target 0.0026172529313232833 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0019791157756374467 of space, bias 1.0, pg target 0.593734732691234 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:15:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 134 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 KiB/s wr, 102 op/s
Jan 31 03:15:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:49.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:50 np0005603621 nova_compute[247399]: 2026-01-31 08:15:50.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:50 np0005603621 NetworkManager[49013]: <info>  [1769847350.2222] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 31 03:15:50 np0005603621 NetworkManager[49013]: <info>  [1769847350.2231] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Jan 31 03:15:50 np0005603621 nova_compute[247399]: 2026-01-31 08:15:50.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:50Z|00344|binding|INFO|Releasing lport 53bb0b66-6335-4be3-bfd6-e7890c25772d from this chassis (sb_readonly=0)
Jan 31 03:15:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:50Z|00345|binding|INFO|Releasing lport bda09691-75c7-4a4d-82f5-6be74a2db008 from this chassis (sb_readonly=0)
Jan 31 03:15:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:50Z|00346|binding|INFO|Releasing lport 27c9e779-94f2-4fd2-8a5c-bf488aa6eb99 from this chassis (sb_readonly=0)
Jan 31 03:15:50 np0005603621 nova_compute[247399]: 2026-01-31 08:15:50.303 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:51 np0005603621 nova_compute[247399]: 2026-01-31 08:15:51.080 247403 DEBUG nova.compute.manager [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-changed-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:15:51 np0005603621 nova_compute[247399]: 2026-01-31 08:15:51.080 247403 DEBUG nova.compute.manager [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing instance network info cache due to event network-changed-43de799d-e636-43ba-89c3-cb3a6a2ed888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:15:51 np0005603621 nova_compute[247399]: 2026-01-31 08:15:51.081 247403 DEBUG oslo_concurrency.lockutils [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:15:51 np0005603621 nova_compute[247399]: 2026-01-31 08:15:51.081 247403 DEBUG oslo_concurrency.lockutils [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:15:51 np0005603621 nova_compute[247399]: 2026-01-31 08:15:51.081 247403 DEBUG nova.network.neutron [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Refreshing network info cache for port 43de799d-e636-43ba-89c3-cb3a6a2ed888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:15:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 134 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 79 op/s
Jan 31 03:15:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:51.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:52 np0005603621 nova_compute[247399]: 2026-01-31 08:15:52.754 247403 DEBUG nova.network.neutron [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updated VIF entry in instance network info cache for port 43de799d-e636-43ba-89c3-cb3a6a2ed888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:15:52 np0005603621 nova_compute[247399]: 2026-01-31 08:15:52.755 247403 DEBUG nova.network.neutron [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:15:52 np0005603621 nova_compute[247399]: 2026-01-31 08:15:52.988 247403 DEBUG oslo_concurrency.lockutils [req-93c96e67-199b-4ec9-b4ec-0c2408577783 req-f3f678cf-3938-418e-bd31-ea94a31a3f13 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-85e16f4c-d977-4032-9cbd-b904f1d789d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:15:53 np0005603621 nova_compute[247399]: 2026-01-31 08:15:53.466 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 134 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 682 B/s wr, 79 op/s
Jan 31 03:15:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:53 np0005603621 nova_compute[247399]: 2026-01-31 08:15:53.831 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:53.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:54 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 31 03:15:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:55Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:02:fd 10.100.0.11
Jan 31 03:15:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:55Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:02:fd 10.100.0.11
Jan 31 03:15:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:55.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 136 MiB data, 822 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 403 KiB/s wr, 55 op/s
Jan 31 03:15:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:55.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:7d:39 10.2.2.200
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:7d:39 10.2.2.200
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:b4:87 10.1.1.218
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:b4:87 10.1.1.218
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00347|binding|INFO|Releasing lport 53bb0b66-6335-4be3-bfd6-e7890c25772d from this chassis (sb_readonly=0)
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00348|binding|INFO|Releasing lport bda09691-75c7-4a4d-82f5-6be74a2db008 from this chassis (sb_readonly=0)
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00349|binding|INFO|Releasing lport 27c9e779-94f2-4fd2-8a5c-bf488aa6eb99 from this chassis (sb_readonly=0)
Jan 31 03:15:56 np0005603621 nova_compute[247399]: 2026-01-31 08:15:56.591 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:af:a0 10.1.1.145
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:af:a0 10.1.1.145
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:7b:5a 10.1.1.39
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:7b:5a 10.1.1.39
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:b0:67 10.2.2.100
Jan 31 03:15:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:56Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:b0:67 10.2.2.100
Jan 31 03:15:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:57Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:b2:52 10.1.1.17
Jan 31 03:15:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:15:57Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:b2:52 10.1.1.17
Jan 31 03:15:57 np0005603621 nova_compute[247399]: 2026-01-31 08:15:57.453 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:15:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 150 MiB data, 848 MiB used, 20 GiB / 21 GiB avail; 903 KiB/s rd, 1.1 MiB/s wr, 61 op/s
Jan 31 03:15:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:15:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:57.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:15:58 np0005603621 nova_compute[247399]: 2026-01-31 08:15:58.468 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:15:58 np0005603621 nova_compute[247399]: 2026-01-31 08:15:58.833 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:15:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 167 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 762 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 03:15:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:15:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:15:59.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:15:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:15:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:15:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:15:59.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 167 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 03:16:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:01.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:01.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:03 np0005603621 nova_compute[247399]: 2026-01-31 08:16:03.470 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 167 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 03:16:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:03.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:03 np0005603621 nova_compute[247399]: 2026-01-31 08:16:03.836 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:03.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 167 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 03:16:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:05.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 03:16:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:07.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:07.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:08 np0005603621 nova_compute[247399]: 2026-01-31 08:16:08.472 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:08 np0005603621 nova_compute[247399]: 2026-01-31 08:16:08.717 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:08.829 159995 DEBUG eventlet.wsgi.server [-] (159995) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:08.830 159995 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: Accept: */*#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: Connection: close#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: Content-Type: text/plain#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: Host: 169.254.169.254#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: User-Agent: curl/7.84.0#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: X-Forwarded-For: 10.100.0.11#015
Jan 31 03:16:08 np0005603621 ovn_metadata_agent[159729]: X-Ovn-Network-Id: 370ef3ad-fbaf-4df3-ad34-de7587567f0e __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 31 03:16:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:08 np0005603621 nova_compute[247399]: 2026-01-31 08:16:08.839 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Jan 31 03:16:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:09.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:09.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:10.703 159995 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 31 03:16:10 np0005603621 haproxy-metadata-proxy-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309107]: 10.100.0.11:52314 [31/Jan/2026:08:16:08.827] listener listener/metadata 0/0/0/1876/1876 200 2534 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 31 03:16:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:10.704 159995 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 2550 time: 1.8743401#033[00m
Jan 31 03:16:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 35 KiB/s wr, 15 op/s
Jan 31 03:16:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:11.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:11.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.475 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.728 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.729 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.730 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.731 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.732 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.733 247403 INFO nova.compute.manager [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Terminating instance#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.736 247403 DEBUG nova.compute.manager [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:16:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 35 KiB/s wr, 17 op/s
Jan 31 03:16:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:13.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.841 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:13 np0005603621 kernel: tap43de799d-e6 (unregistering): left promiscuous mode
Jan 31 03:16:13 np0005603621 NetworkManager[49013]: <info>  [1769847373.9522] device (tap43de799d-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00350|binding|INFO|Releasing lport 43de799d-e636-43ba-89c3-cb3a6a2ed888 from this chassis (sb_readonly=0)
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00351|binding|INFO|Setting lport 43de799d-e636-43ba-89c3-cb3a6a2ed888 down in Southbound
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00352|binding|INFO|Removing iface tap43de799d-e6 ovn-installed in OVS
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 kernel: tapfd1f006d-b1 (unregistering): left promiscuous mode
Jan 31 03:16:13 np0005603621 NetworkManager[49013]: <info>  [1769847373.9829] device (tapfd1f006d-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00353|binding|INFO|Releasing lport fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b from this chassis (sb_readonly=1)
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00354|binding|INFO|Removing iface tapfd1f006d-b1 ovn-installed in OVS
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00355|if_status|INFO|Dropped 5 log messages in last 84 seconds (most recently, 83 seconds ago) due to excessive rate
Jan 31 03:16:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:13Z|00356|if_status|INFO|Not setting lport fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b down as sb is readonly
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 nova_compute[247399]: 2026-01-31 08:16:13.993 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:13 np0005603621 kernel: tap6e8e75b8-b6 (unregistering): left promiscuous mode
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.0033] device (tap6e8e75b8-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00357|binding|INFO|Releasing lport 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 from this chassis (sb_readonly=1)
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00358|binding|INFO|Removing iface tap6e8e75b8-b6 ovn-installed in OVS
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.017 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.022 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 kernel: tap31ce268e-bf (unregistering): left promiscuous mode
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.0333] device (tap31ce268e-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.046 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00359|binding|INFO|Releasing lport 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 from this chassis (sb_readonly=1)
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00360|binding|INFO|Removing iface tap31ce268e-bf ovn-installed in OVS
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.051 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 kernel: tape76773d3-8a (unregistering): left promiscuous mode
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.0612] device (tape76773d3-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.074 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00361|binding|INFO|Releasing lport e76773d3-8a3a-451d-a29e-d01474b5f82f from this chassis (sb_readonly=1)
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00362|binding|INFO|Removing iface tape76773d3-8a ovn-installed in OVS
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.076 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 kernel: tap51338d60-6e (unregistering): left promiscuous mode
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.0891] device (tap51338d60-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.105 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00363|binding|INFO|Releasing lport 51338d60-6e7f-4716-b025-60a809240bd3 from this chassis (sb_readonly=1)
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00364|binding|INFO|Removing iface tap51338d60-6e ovn-installed in OVS
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.110 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 kernel: tapd1a1fd13-04 (unregistering): left promiscuous mode
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.1189] device (tapd1a1fd13-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.149 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00365|binding|INFO|Releasing lport d1a1fd13-040f-49ef-b94f-98bcea71df76 from this chassis (sb_readonly=1)
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00366|binding|INFO|Removing iface tapd1a1fd13-04 ovn-installed in OVS
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.151 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 31 03:16:14 np0005603621 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005e.scope: Consumed 15.332s CPU time.
Jan 31 03:16:14 np0005603621 systemd-machined[212769]: Machine qemu-42-instance-0000005e terminated.
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00367|binding|INFO|Setting lport 51338d60-6e7f-4716-b025-60a809240bd3 down in Southbound
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00368|binding|INFO|Setting lport e76773d3-8a3a-451d-a29e-d01474b5f82f down in Southbound
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00369|binding|INFO|Setting lport fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b down in Southbound
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00370|binding|INFO|Setting lport 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 down in Southbound
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00371|binding|INFO|Setting lport d1a1fd13-040f-49ef-b94f-98bcea71df76 down in Southbound
Jan 31 03:16:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:16:14Z|00372|binding|INFO|Setting lport 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 down in Southbound
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.329 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:02:fd 10.100.0.11'], port_security=['fa:16:3e:77:02:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7af8b010-2cf7-4570-899f-0230a64afdc7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=43de799d-e636-43ba-89c3-cb3a6a2ed888) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.330 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 43de799d-e636-43ba-89c3-cb3a6a2ed888 in datapath 370ef3ad-fbaf-4df3-ad34-de7587567f0e unbound from our chassis#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.332 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 370ef3ad-fbaf-4df3-ad34-de7587567f0e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.334 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b223455e-36fe-4a06-b7a3-53e6c42a81de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.335 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e namespace which is not needed anymore#033[00m
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.3779] manager: (tap43de799d-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.3913] manager: (tapfd1f006d-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.4014] manager: (tap6e8e75b8-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.409 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 NetworkManager[49013]: <info>  [1769847374.4208] manager: (tape76773d3-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.449 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:7b:5a 10.1.1.39'], port_security=['fa:16:3e:f4:7b:5a 10.1.1.39'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.39/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=31ce268e-bfe2-4d8b-acd2-5a75e9725d95) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.452 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:b4:87 10.1.1.218'], port_security=['fa:16:3e:6e:b4:87 10.1.1.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-486336001', 'neutron:cidrs': '10.1.1.218/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-486336001', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36d4c8c2-788b-4fc8-9051-a66dc5e53167', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.454 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:b0:67 10.2.2.100'], port_security=['fa:16:3e:63:b0:67 10.2.2.100'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.100/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-411aa011-d813-4343-b297-43dfd39c905e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b392687b-637d-40d1-bb6d-50c20007db06, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=51338d60-6e7f-4716-b025-60a809240bd3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.456 247403 INFO nova.virt.libvirt.driver [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Instance destroyed successfully.#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.456 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:b2:52 10.1.1.17'], port_security=['fa:16:3e:76:b2:52 10.1.1.17'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.1.17/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e76773d3-8a3a-451d-a29e-d01474b5f82f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.456 247403 DEBUG nova.objects.instance [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lazy-loading 'resources' on Instance uuid 85e16f4c-d977-4032-9cbd-b904f1d789d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.458 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:af:a0 10.1.1.145'], port_security=['fa:16:3e:e2:af:a0 10.1.1.145'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TaggedBootDevicesTest_v242-1892014503', 'neutron:cidrs': '10.1.1.145/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TaggedBootDevicesTest_v242-1892014503', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36d4c8c2-788b-4fc8-9051-a66dc5e53167', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15446aa2-c1f6-41f4-9a8c-cdfc10175a6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:14.460 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:7d:39 10.2.2.200'], port_security=['fa:16:3e:d5:7d:39 10.2.2.200'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.2.2.200/24', 'neutron:device_id': '85e16f4c-d977-4032-9cbd-b904f1d789d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-411aa011-d813-4343-b297-43dfd39c905e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a6c60c75300483aa07e13b08923b1a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf0daa5d-6be2-4eac-b0a6-f856334b73c4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b392687b-637d-40d1-bb6d-50c20007db06, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d1a1fd13-040f-49ef-b94f-98bcea71df76) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:16:14 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [NOTICE]   (309105) : haproxy version is 2.8.14-c23fe91
Jan 31 03:16:14 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [NOTICE]   (309105) : path to executable is /usr/sbin/haproxy
Jan 31 03:16:14 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [WARNING]  (309105) : Exiting Master process...
Jan 31 03:16:14 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [ALERT]    (309105) : Current worker (309107) exited with code 143 (Terminated)
Jan 31 03:16:14 np0005603621 neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e[309101]: [WARNING]  (309105) : All workers exited. Exiting... (0)
Jan 31 03:16:14 np0005603621 systemd[1]: libpod-be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01.scope: Deactivated successfully.
Jan 31 03:16:14 np0005603621 podman[309653]: 2026-01-31 08:16:14.60811528 +0000 UTC m=+0.151106455 container died be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:16:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01-userdata-shm.mount: Deactivated successfully.
Jan 31 03:16:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a6d490567d2a7b101afccf046641b95b6a5c037a15d5572e794c37d8f36cb28b-merged.mount: Deactivated successfully.
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.907 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.908 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.908 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.908 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.910 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43de799d-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.913 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.926 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.929 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:02:fd,bridge_name='br-int',has_traffic_filtering=True,id=43de799d-e636-43ba-89c3-cb3a6a2ed888,network=Network(370ef3ad-fbaf-4df3-ad34-de7587567f0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43de799d-e6')#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.931 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.931 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.932 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.932 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.933 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd1f006d-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.936 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:16:14 np0005603621 podman[309653]: 2026-01-31 08:16:14.939018768 +0000 UTC m=+0.482009933 container cleanup be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:16:14 np0005603621 systemd[1]: libpod-conmon-be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01.scope: Deactivated successfully.
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.949 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.950 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:b4:87,bridge_name='br-int',has_traffic_filtering=True,id=fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfd1f006d-b1')#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.951 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.951 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.952 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.952 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.953 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.953 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e8e75b8-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.955 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.969 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:af:a0,bridge_name='br-int',has_traffic_filtering=True,id=6e8e75b8-b681-4fc0-be0b-89a15bda2ac3,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e8e75b8-b6')#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.970 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.970 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.971 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.971 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.973 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.973 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31ce268e-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.976 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.984 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.986 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:7b:5a,bridge_name='br-int',has_traffic_filtering=True,id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31ce268e-bf')#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.987 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.987 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.988 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.988 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.989 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape76773d3-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.990 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.997 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:14 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.999 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:b2:52,bridge_name='br-int',has_traffic_filtering=True,id=e76773d3-8a3a-451d-a29e-d01474b5f82f,network=Network(3e97a400-ca43-4c41-8017-9770a9ed4c8e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76773d3-8a')#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:14.999 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.000 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.000 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.001 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.002 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51338d60-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.003 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.005 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.008 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:b0:67,bridge_name='br-int',has_traffic_filtering=True,id=51338d60-6e7f-4716-b025-60a809240bd3,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51338d60-6e')#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.009 247403 DEBUG nova.virt.libvirt.vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:14:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-200568798',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-200568798',id=94,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAqG8gvESiMwKqdhpFrVMwsaLKiHe6jSPi4LnLB2/aqkXYTQnS2K9Izr/z++gGNBhnfJYcW9Pzx6e2MYJDn2njHJHZvEZnbgftKKq65jYnEyTFqy+tkiiXRx+TPwtwJwwA==',key_name='tempest-keypair-1340926437',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a6c60c75300483aa07e13b08923b1a1',ramdisk_id='',reservation_id='r-g89h8qtg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_
bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedBootDevicesTest_v242-1562315172',owner_user_name='tempest-TaggedBootDevicesTest_v242-1562315172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8080000681f449c3a9754c876165d667',uuid=85e16f4c-d977-4032-9cbd-b904f1d789d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.009 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converting VIF {"id": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "address": "fa:16:3e:d5:7d:39", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.200", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1a1fd13-04", "ovs_interfaceid": "d1a1fd13-040f-49ef-b94f-98bcea71df76", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.009 247403 DEBUG nova.network.os_vif_util [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.009 247403 DEBUG os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.010 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1a1fd13-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.011 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.013 247403 INFO os_vif [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:7d:39,bridge_name='br-int',has_traffic_filtering=True,id=d1a1fd13-040f-49ef-b94f-98bcea71df76,network=Network(411aa011-d813-4343-b297-43dfd39c905e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1a1fd13-04')#033[00m
Jan 31 03:16:15 np0005603621 podman[309706]: 2026-01-31 08:16:15.200409007 +0000 UTC m=+0.239543816 container remove be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.204 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bb7eb5-564f-4727-ac8d-df09dde78df9]: (4, ('Sat Jan 31 08:16:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e (be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01)\nbe59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01\nSat Jan 31 08:16:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e (be59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01)\nbe59eed5cb83062bee50f1c4266aa37c59ac4e8bb7800a3959fd03c8fe616c01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.205 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[195d3107-09b7-494a-8841-266868ab37e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.206 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap370ef3ad-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.208 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 kernel: tap370ef3ad-f0: left promiscuous mode
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.214 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.218 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f68dd45b-3125-46f8-aa4c-db6e59a7ccf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.237 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6e8b97-c371-4b5e-9172-354ddb8f6faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.239 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b01998f1-faff-432e-a0f9-79db8371e844]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.249 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[004ceb89-cf1b-47b6-807d-3f14658ed2cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664079, 'reachable_time': 29681, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309752, 'error': None, 'target': 'ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.252 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-370ef3ad-fbaf-4df3-ad34-de7587567f0e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:16:15 np0005603621 systemd[1]: run-netns-ovnmeta\x2d370ef3ad\x2dfbaf\x2d4df3\x2dad34\x2dde7587567f0e.mount: Deactivated successfully.
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.253 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[cb17f6db-cdf9-40df-af2d-026641b99405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.254 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 31ce268e-bfe2-4d8b-acd2-5a75e9725d95 in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.256 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.257 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[674520e6-fc0f-4d83-8647-bb5bfba6e46b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:15.258 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e namespace which is not needed anymore#033[00m
Jan 31 03:16:15 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [NOTICE]   (309179) : haproxy version is 2.8.14-c23fe91
Jan 31 03:16:15 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [NOTICE]   (309179) : path to executable is /usr/sbin/haproxy
Jan 31 03:16:15 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [WARNING]  (309179) : Exiting Master process...
Jan 31 03:16:15 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [ALERT]    (309179) : Current worker (309181) exited with code 143 (Terminated)
Jan 31 03:16:15 np0005603621 neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e[309175]: [WARNING]  (309179) : All workers exited. Exiting... (0)
Jan 31 03:16:15 np0005603621 systemd[1]: libpod-779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba.scope: Deactivated successfully.
Jan 31 03:16:15 np0005603621 podman[309771]: 2026-01-31 08:16:15.478191881 +0000 UTC m=+0.128857808 container died 779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:16:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 33 KiB/s wr, 17 op/s
Jan 31 03:16:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba-userdata-shm.mount: Deactivated successfully.
Jan 31 03:16:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-130d97de6e2de4d6d6ca859c8180804d0bf75067a58a01da95dac8586508986a-merged.mount: Deactivated successfully.
Jan 31 03:16:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:15.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:15 np0005603621 podman[309800]: 2026-01-31 08:16:15.872587668 +0000 UTC m=+0.145820760 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:16:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:15.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:15 np0005603621 podman[309801]: 2026-01-31 08:16:15.894486434 +0000 UTC m=+0.166397644 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.989 247403 DEBUG nova.compute.manager [req-bce5a6af-cf2d-4dff-93a4-7c925c872279 req-e440dc6d-3530-480a-9db7-a21f4829cac4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.990 247403 DEBUG oslo_concurrency.lockutils [req-bce5a6af-cf2d-4dff-93a4-7c925c872279 req-e440dc6d-3530-480a-9db7-a21f4829cac4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.991 247403 DEBUG oslo_concurrency.lockutils [req-bce5a6af-cf2d-4dff-93a4-7c925c872279 req-e440dc6d-3530-480a-9db7-a21f4829cac4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.991 247403 DEBUG oslo_concurrency.lockutils [req-bce5a6af-cf2d-4dff-93a4-7c925c872279 req-e440dc6d-3530-480a-9db7-a21f4829cac4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.992 247403 DEBUG nova.compute.manager [req-bce5a6af-cf2d-4dff-93a4-7c925c872279 req-e440dc6d-3530-480a-9db7-a21f4829cac4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-unplugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:15 np0005603621 nova_compute[247399]: 2026-01-31 08:16:15.992 247403 DEBUG nova.compute.manager [req-bce5a6af-cf2d-4dff-93a4-7c925c872279 req-e440dc6d-3530-480a-9db7-a21f4829cac4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.173 247403 DEBUG nova.compute.manager [req-d7ae435a-e6e5-4a64-9fd1-02ef74cb66f2 req-dfa05063-6fec-4eea-80bf-289add041af6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.173 247403 DEBUG oslo_concurrency.lockutils [req-d7ae435a-e6e5-4a64-9fd1-02ef74cb66f2 req-dfa05063-6fec-4eea-80bf-289add041af6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.173 247403 DEBUG oslo_concurrency.lockutils [req-d7ae435a-e6e5-4a64-9fd1-02ef74cb66f2 req-dfa05063-6fec-4eea-80bf-289add041af6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.173 247403 DEBUG oslo_concurrency.lockutils [req-d7ae435a-e6e5-4a64-9fd1-02ef74cb66f2 req-dfa05063-6fec-4eea-80bf-289add041af6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.174 247403 DEBUG nova.compute.manager [req-d7ae435a-e6e5-4a64-9fd1-02ef74cb66f2 req-dfa05063-6fec-4eea-80bf-289add041af6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-unplugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.174 247403 DEBUG nova.compute.manager [req-d7ae435a-e6e5-4a64-9fd1-02ef74cb66f2 req-dfa05063-6fec-4eea-80bf-289add041af6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:16:16 np0005603621 podman[309771]: 2026-01-31 08:16:16.228982215 +0000 UTC m=+0.879648132 container cleanup 779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:16:16 np0005603621 podman[309848]: 2026-01-31 08:16:16.59971984 +0000 UTC m=+0.352824586 container remove 779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:16:16 np0005603621 systemd[1]: libpod-conmon-779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba.scope: Deactivated successfully.
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.603 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ef07cf-91ad-4cfe-83bf-b92e5e9b9add]: (4, ('Sat Jan 31 08:16:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e (779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba)\n779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba\nSat Jan 31 08:16:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e (779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba)\n779df18e2a3d010667f6bfda156a17a197fa7d149f3d02bd82b82b9e6cd74aba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.604 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[87b0d577-e9d1-4620-811e-ce178cf0abaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.605 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e97a400-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.606 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:16 np0005603621 kernel: tap3e97a400-c0: left promiscuous mode
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.610 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.613 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fafb844f-4eff-47cf-990c-c8ce96d520a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.633 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ae35d9f5-db3d-4f7a-b879-e425dffa271b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.634 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6eb340-0be2-43c5-b8ed-105a2ac4f41a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.643 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[94b69114-ee09-4d2c-826e-f9694223af2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664168, 'reachable_time': 28433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309866, 'error': None, 'target': 'ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.646 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e97a400-ca43-4c41-8017-9770a9ed4c8e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.646 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e2990931-b554-42fc-9e00-e068f545e2f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 systemd[1]: run-netns-ovnmeta\x2d3e97a400\x2dca43\x2d4c41\x2d8017\x2d9770a9ed4c8e.mount: Deactivated successfully.
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.647 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.649 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.649 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9b753272-374c-4b0f-a97d-55614ca46a04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.650 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 51338d60-6e7f-4716-b025-60a809240bd3 in datapath 411aa011-d813-4343-b297-43dfd39c905e unbound from our chassis#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.651 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 411aa011-d813-4343-b297-43dfd39c905e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.652 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a0e7fc-872b-415a-aa37-e1d235d70395]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:16.652 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-411aa011-d813-4343-b297-43dfd39c905e namespace which is not needed anymore#033[00m
Jan 31 03:16:16 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [NOTICE]   (309261) : haproxy version is 2.8.14-c23fe91
Jan 31 03:16:16 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [NOTICE]   (309261) : path to executable is /usr/sbin/haproxy
Jan 31 03:16:16 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [WARNING]  (309261) : Exiting Master process...
Jan 31 03:16:16 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [ALERT]    (309261) : Current worker (309263) exited with code 143 (Terminated)
Jan 31 03:16:16 np0005603621 neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e[309257]: [WARNING]  (309261) : All workers exited. Exiting... (0)
Jan 31 03:16:16 np0005603621 systemd[1]: libpod-8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2.scope: Deactivated successfully.
Jan 31 03:16:16 np0005603621 podman[309884]: 2026-01-31 08:16:16.753254931 +0000 UTC m=+0.048609085 container died 8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.755 247403 INFO nova.virt.libvirt.driver [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Deleting instance files /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4_del#033[00m
Jan 31 03:16:16 np0005603621 nova_compute[247399]: 2026-01-31 08:16:16.756 247403 INFO nova.virt.libvirt.driver [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Deletion of /var/lib/nova/instances/85e16f4c-d977-4032-9cbd-b904f1d789d4_del complete#033[00m
Jan 31 03:16:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2-userdata-shm.mount: Deactivated successfully.
Jan 31 03:16:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-11e09ee6ac7038831c5ccddc376356d03d25824ff3902b8e247cd0382613d431-merged.mount: Deactivated successfully.
Jan 31 03:16:16 np0005603621 podman[309884]: 2026-01-31 08:16:16.863605608 +0000 UTC m=+0.158959752 container cleanup 8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:16:16 np0005603621 systemd[1]: libpod-conmon-8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2.scope: Deactivated successfully.
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:17 np0005603621 podman[309916]: 2026-01-31 08:16:17.086966776 +0000 UTC m=+0.209040891 container remove 8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.093 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[92c9d1db-bcff-4baf-983c-13d209802f1f]: (4, ('Sat Jan 31 08:16:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e (8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2)\n8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2\nSat Jan 31 08:16:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-411aa011-d813-4343-b297-43dfd39c905e (8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2)\n8af05ae584371e51d7b71da3577cc614ee8b51d440016d8fdf9a0668e4665de2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.095 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a1bf104c-c555-4c62-9934-1be9241b26ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.096 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap411aa011-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.098 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:17 np0005603621 kernel: tap411aa011-d0: left promiscuous mode
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.103 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.106 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0d6024-f937-42b1-96cd-1571e89d784b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.123 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd71be1-807d-4e5f-abb5-1bf05e29c2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.125 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d42e9e-1768-48af-9414-967eb366b110]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.136 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dee745e1-de3e-485d-abbc-f64eaaac1e31]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664261, 'reachable_time': 24386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309932, 'error': None, 'target': 'ovnmeta-411aa011-d813-4343-b297-43dfd39c905e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 systemd[1]: run-netns-ovnmeta\x2d411aa011\x2dd813\x2d4343\x2db297\x2d43dfd39c905e.mount: Deactivated successfully.
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.137 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-411aa011-d813-4343-b297-43dfd39c905e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.137 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b22c0a12-1859-4ba4-b81a-f92473c09a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.138 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e76773d3-8a3a-451d-a29e-d01474b5f82f in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.139 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.140 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[17f2b2d2-353e-4480-90c8-385df9741f66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.141 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 in datapath 3e97a400-ca43-4c41-8017-9770a9ed4c8e unbound from our chassis#033[00m
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.142 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e97a400-ca43-4c41-8017-9770a9ed4c8e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.142 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2adfaea9-501e-4891-ade4-68f0925027e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.143 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d1a1fd13-040f-49ef-b94f-98bcea71df76 in datapath 411aa011-d813-4343-b297-43dfd39c905e unbound from our chassis
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.144 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 411aa011-d813-4343-b297-43dfd39c905e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:16:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:17.145 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[414bd5d4-3f21-4671-9c2c-b1929f09e59b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.551 247403 INFO nova.compute.manager [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Took 3.81 seconds to destroy the instance on the hypervisor.
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.552 247403 DEBUG oslo.service.loopingcall [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.552 247403 DEBUG nova.compute.manager [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 03:16:17 np0005603621 nova_compute[247399]: 2026-01-31 08:16:17.552 247403 DEBUG nova.network.neutron [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 03:16:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 22 KiB/s wr, 17 op/s
Jan 31 03:16:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:17.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:17.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:18 np0005603621 nova_compute[247399]: 2026-01-31 08:16:18.843 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 4.8 KiB/s wr, 26 op/s
Jan 31 03:16:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:19.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.856 247403 DEBUG nova.compute.manager [req-90ba2c2c-55fc-4db1-9280-fbe8907cfaf3 req-8863208b-f2c6-434a-a840-98b76339a7b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.856 247403 DEBUG oslo_concurrency.lockutils [req-90ba2c2c-55fc-4db1-9280-fbe8907cfaf3 req-8863208b-f2c6-434a-a840-98b76339a7b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.856 247403 DEBUG oslo_concurrency.lockutils [req-90ba2c2c-55fc-4db1-9280-fbe8907cfaf3 req-8863208b-f2c6-434a-a840-98b76339a7b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.856 247403 DEBUG oslo_concurrency.lockutils [req-90ba2c2c-55fc-4db1-9280-fbe8907cfaf3 req-8863208b-f2c6-434a-a840-98b76339a7b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.856 247403 DEBUG nova.compute.manager [req-90ba2c2c-55fc-4db1-9280-fbe8907cfaf3 req-8863208b-f2c6-434a-a840-98b76339a7b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:16:20 np0005603621 nova_compute[247399]: 2026-01-31 08:16:20.857 247403 WARNING nova.compute.manager [req-90ba2c2c-55fc-4db1-9280-fbe8907cfaf3 req-8863208b-f2c6-434a-a840-98b76339a7b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-d1a1fd13-040f-49ef-b94f-98bcea71df76 for instance with vm_state active and task_state deleting.
Jan 31 03:16:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:21.660 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:16:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:21.661 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:16:21 np0005603621 nova_compute[247399]: 2026-01-31 08:16:21.661 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 03:16:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:21.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:21.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 03:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:23 np0005603621 nova_compute[247399]: 2026-01-31 08:16:23.845 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:23.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:23.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:16:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 397004a0-e292-4a55-a92e-0298a2208fc2 does not exist
Jan 31 03:16:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 026999a7-acec-4e59-ae34-6d77941dd1c2 does not exist
Jan 31 03:16:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 02dfd5dc-b8b7-431f-b0ff-8fdcae173635 does not exist
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:16:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:25.007655156 +0000 UTC m=+0.041782640 container create 24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 31 03:16:25 np0005603621 nova_compute[247399]: 2026-01-31 08:16:25.015 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:25 np0005603621 systemd[1]: Started libpod-conmon-24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a.scope.
Jan 31 03:16:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:24.983654395 +0000 UTC m=+0.017781899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:25.092882337 +0000 UTC m=+0.127009841 container init 24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclean, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:25.099895156 +0000 UTC m=+0.134022640 container start 24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:25.104694206 +0000 UTC m=+0.138821730 container attach 24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclean, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:16:25 np0005603621 happy_mclean[310276]: 167 167
Jan 31 03:16:25 np0005603621 systemd[1]: libpod-24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a.scope: Deactivated successfully.
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:25.107534675 +0000 UTC m=+0.141662189 container died 24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclean, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:16:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3816fdffa1c71215164dba98eceaba2aa54aa95ad87eb57557e88d158a4e103e-merged.mount: Deactivated successfully.
Jan 31 03:16:25 np0005603621 podman[310259]: 2026-01-31 08:16:25.195002346 +0000 UTC m=+0.229129830 container remove 24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:16:25 np0005603621 systemd[1]: libpod-conmon-24e170b309b21426eb1a3b3d42be6c579af65f553470fdcaac784fd8ccd9287a.scope: Deactivated successfully.
Jan 31 03:16:25 np0005603621 podman[310301]: 2026-01-31 08:16:25.320292641 +0000 UTC m=+0.044230286 container create 1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:16:25 np0005603621 systemd[1]: Started libpod-conmon-1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a.scope.
Jan 31 03:16:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:16:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe8f7e96d49461389978000d416b3e1722d25d11bced0f77804e2eb29a9629f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe8f7e96d49461389978000d416b3e1722d25d11bced0f77804e2eb29a9629f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe8f7e96d49461389978000d416b3e1722d25d11bced0f77804e2eb29a9629f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe8f7e96d49461389978000d416b3e1722d25d11bced0f77804e2eb29a9629f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe8f7e96d49461389978000d416b3e1722d25d11bced0f77804e2eb29a9629f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:25 np0005603621 podman[310301]: 2026-01-31 08:16:25.296435654 +0000 UTC m=+0.020373319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:16:25 np0005603621 podman[310301]: 2026-01-31 08:16:25.422086281 +0000 UTC m=+0.146023946 container init 1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:16:25 np0005603621 podman[310301]: 2026-01-31 08:16:25.427418788 +0000 UTC m=+0.151356433 container start 1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:16:25 np0005603621 podman[310301]: 2026-01-31 08:16:25.470380855 +0000 UTC m=+0.194318520 container attach 1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:16:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:25.665 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:16:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 13 op/s
Jan 31 03:16:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:26 np0005603621 exciting_sanderson[310317]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:16:26 np0005603621 exciting_sanderson[310317]: --> relative data size: 1.0
Jan 31 03:16:26 np0005603621 exciting_sanderson[310317]: --> All data devices are unavailable
Jan 31 03:16:26 np0005603621 systemd[1]: libpod-1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a.scope: Deactivated successfully.
Jan 31 03:16:26 np0005603621 podman[310301]: 2026-01-31 08:16:26.194353648 +0000 UTC m=+0.918291293 container died 1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:16:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dfe8f7e96d49461389978000d416b3e1722d25d11bced0f77804e2eb29a9629f-merged.mount: Deactivated successfully.
Jan 31 03:16:26 np0005603621 podman[310301]: 2026-01-31 08:16:26.383662179 +0000 UTC m=+1.107599824 container remove 1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:16:26 np0005603621 systemd[1]: libpod-conmon-1c3aa0da83103c1e6d2d812da1d0a94549d3db9cacce0d0c33d4b7d518190f9a.scope: Deactivated successfully.
Jan 31 03:16:26 np0005603621 podman[310485]: 2026-01-31 08:16:26.902497765 +0000 UTC m=+0.058249486 container create 92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:26 np0005603621 podman[310485]: 2026-01-31 08:16:26.866856669 +0000 UTC m=+0.022608410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:16:26 np0005603621 systemd[1]: Started libpod-conmon-92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e.scope.
Jan 31 03:16:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:16:27 np0005603621 podman[310485]: 2026-01-31 08:16:27.001232749 +0000 UTC m=+0.156984500 container init 92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:16:27 np0005603621 podman[310485]: 2026-01-31 08:16:27.007249778 +0000 UTC m=+0.163001519 container start 92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:16:27 np0005603621 thirsty_yonath[310502]: 167 167
Jan 31 03:16:27 np0005603621 systemd[1]: libpod-92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e.scope: Deactivated successfully.
Jan 31 03:16:27 np0005603621 podman[310485]: 2026-01-31 08:16:27.022688581 +0000 UTC m=+0.178440302 container attach 92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:16:27 np0005603621 podman[310485]: 2026-01-31 08:16:27.023043882 +0000 UTC m=+0.178795603 container died 92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:16:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-97e4582c963b7dff0ac79c7f7a23761a5a12fc0a781e3cd8cea9afa53bcd552d-merged.mount: Deactivated successfully.
Jan 31 03:16:27 np0005603621 podman[310485]: 2026-01-31 08:16:27.224931637 +0000 UTC m=+0.380683358 container remove 92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:16:27 np0005603621 systemd[1]: libpod-conmon-92f7267624ec7a622ffc9059280a31e8858103fcfbe9a469a1319928ec4de71e.scope: Deactivated successfully.
Jan 31 03:16:27 np0005603621 podman[310528]: 2026-01-31 08:16:27.356344345 +0000 UTC m=+0.062064836 container create 572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 31 03:16:27 np0005603621 systemd[1]: Started libpod-conmon-572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b.scope.
Jan 31 03:16:27 np0005603621 podman[310528]: 2026-01-31 08:16:27.313302107 +0000 UTC m=+0.019022618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:16:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a587a58fedc7c716b0feaf87b8c4783d04b5c47e87bdde68d3b839e4b1159895/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a587a58fedc7c716b0feaf87b8c4783d04b5c47e87bdde68d3b839e4b1159895/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a587a58fedc7c716b0feaf87b8c4783d04b5c47e87bdde68d3b839e4b1159895/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a587a58fedc7c716b0feaf87b8c4783d04b5c47e87bdde68d3b839e4b1159895/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:27 np0005603621 podman[310528]: 2026-01-31 08:16:27.445105756 +0000 UTC m=+0.150826267 container init 572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 03:16:27 np0005603621 podman[310528]: 2026-01-31 08:16:27.449648168 +0000 UTC m=+0.155368659 container start 572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:16:27 np0005603621 podman[310528]: 2026-01-31 08:16:27.456943117 +0000 UTC m=+0.162663638 container attach 572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:16:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 12 op/s
Jan 31 03:16:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:27.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]: {
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:    "0": [
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:        {
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "devices": [
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "/dev/loop3"
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            ],
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "lv_name": "ceph_lv0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "lv_size": "7511998464",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "name": "ceph_lv0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "tags": {
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.cluster_name": "ceph",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.crush_device_class": "",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.encrypted": "0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.osd_id": "0",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.type": "block",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:                "ceph.vdo": "0"
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            },
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "type": "block",
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:            "vg_name": "ceph_vg0"
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:        }
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]:    ]
Jan 31 03:16:28 np0005603621 hopeful_mahavira[310545]: }
Jan 31 03:16:28 np0005603621 systemd[1]: libpod-572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b.scope: Deactivated successfully.
Jan 31 03:16:28 np0005603621 podman[310528]: 2026-01-31 08:16:28.165289081 +0000 UTC m=+0.871009592 container died 572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:16:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a587a58fedc7c716b0feaf87b8c4783d04b5c47e87bdde68d3b839e4b1159895-merged.mount: Deactivated successfully.
Jan 31 03:16:28 np0005603621 podman[310528]: 2026-01-31 08:16:28.302051225 +0000 UTC m=+1.007771716 container remove 572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:16:28 np0005603621 systemd[1]: libpod-conmon-572bf3305fe2957779f60c883565853355875de89452fdebd7cd460e27a8419b.scope: Deactivated successfully.
Jan 31 03:16:28 np0005603621 podman[310709]: 2026-01-31 08:16:28.731291185 +0000 UTC m=+0.021413292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:16:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:28 np0005603621 nova_compute[247399]: 2026-01-31 08:16:28.847 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:28 np0005603621 podman[310709]: 2026-01-31 08:16:28.920874544 +0000 UTC m=+0.210996591 container create 82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:16:29 np0005603621 systemd[1]: Started libpod-conmon-82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4.scope.
Jan 31 03:16:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:16:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:16:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:16:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:16:29 np0005603621 podman[310709]: 2026-01-31 08:16:29.400634146 +0000 UTC m=+0.690756203 container init 82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heyrovsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 03:16:29 np0005603621 podman[310709]: 2026-01-31 08:16:29.405033393 +0000 UTC m=+0.695155440 container start 82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:16:29 np0005603621 elated_heyrovsky[310725]: 167 167
Jan 31 03:16:29 np0005603621 systemd[1]: libpod-82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4.scope: Deactivated successfully.
Jan 31 03:16:29 np0005603621 conmon[310725]: conmon 82d7c29efa0be51ef2ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4.scope/container/memory.events
Jan 31 03:16:29 np0005603621 nova_compute[247399]: 2026-01-31 08:16:29.455 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847374.4536688, 85e16f4c-d977-4032-9cbd-b904f1d789d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:16:29 np0005603621 nova_compute[247399]: 2026-01-31 08:16:29.457 247403 INFO nova.compute.manager [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] VM Stopped (Lifecycle Event)
Jan 31 03:16:29 np0005603621 podman[310709]: 2026-01-31 08:16:29.575231377 +0000 UTC m=+0.865353444 container attach 82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:16:29 np0005603621 podman[310709]: 2026-01-31 08:16:29.575950119 +0000 UTC m=+0.866072206 container died 82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:16:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 12 op/s
Jan 31 03:16:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:29.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:29.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4f0360d2ccb6903997845da8c1483bdb38d8d36daf3d85969a573fa333fe3c5c-merged.mount: Deactivated successfully.
Jan 31 03:16:30 np0005603621 nova_compute[247399]: 2026-01-31 08:16:30.046 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:16:30 np0005603621 podman[310709]: 2026-01-31 08:16:30.202514512 +0000 UTC m=+1.492636559 container remove 82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_heyrovsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:16:30 np0005603621 systemd[1]: libpod-conmon-82d7c29efa0be51ef2ec22af9bc5f07cf2b290516f41498583eacf9b902b7dd4.scope: Deactivated successfully.
Jan 31 03:16:30 np0005603621 podman[310750]: 2026-01-31 08:16:30.304051852 +0000 UTC m=+0.021642378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:16:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:30.499 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:16:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:30.499 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:16:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:16:30.499 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:16:30 np0005603621 podman[310750]: 2026-01-31 08:16:30.586629656 +0000 UTC m=+0.304220162 container create c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:16:30 np0005603621 systemd[1]: Started libpod-conmon-c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3.scope.
Jan 31 03:16:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:16:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee8bddfc371e254a90f3c18c61836f4f9e27d0a154d901a7a7e6e9d46d078c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee8bddfc371e254a90f3c18c61836f4f9e27d0a154d901a7a7e6e9d46d078c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee8bddfc371e254a90f3c18c61836f4f9e27d0a154d901a7a7e6e9d46d078c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ee8bddfc371e254a90f3c18c61836f4f9e27d0a154d901a7a7e6e9d46d078c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:16:30 np0005603621 podman[310750]: 2026-01-31 08:16:30.869570031 +0000 UTC m=+0.587160547 container init c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:16:30 np0005603621 podman[310750]: 2026-01-31 08:16:30.875436535 +0000 UTC m=+0.593027041 container start c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:16:31 np0005603621 podman[310750]: 2026-01-31 08:16:31.004022564 +0000 UTC m=+0.721613070 container attach c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.040 247403 DEBUG nova.compute.manager [req-d82bfddc-9e8f-4e6f-936a-83271ddcc8e2 req-d384e90e-4e66-43ce-918c-b878f6f4a557 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.041 247403 DEBUG oslo_concurrency.lockutils [req-d82bfddc-9e8f-4e6f-936a-83271ddcc8e2 req-d384e90e-4e66-43ce-918c-b878f6f4a557 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.041 247403 DEBUG oslo_concurrency.lockutils [req-d82bfddc-9e8f-4e6f-936a-83271ddcc8e2 req-d384e90e-4e66-43ce-918c-b878f6f4a557 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.041 247403 DEBUG oslo_concurrency.lockutils [req-d82bfddc-9e8f-4e6f-936a-83271ddcc8e2 req-d384e90e-4e66-43ce-918c-b878f6f4a557 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.041 247403 DEBUG nova.compute.manager [req-d82bfddc-9e8f-4e6f-936a-83271ddcc8e2 req-d384e90e-4e66-43ce-918c-b878f6f4a557 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.041 247403 WARNING nova.compute.manager [req-d82bfddc-9e8f-4e6f-936a-83271ddcc8e2 req-d384e90e-4e66-43ce-918c-b878f6f4a557 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-43de799d-e636-43ba-89c3-cb3a6a2ed888 for instance with vm_state active and task_state deleting.
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.412 247403 DEBUG nova.compute.manager [req-0378fdbb-35c4-44d0-beed-c44decc866e5 req-671ee193-8554-432c-a50c-dfed3e17d028 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.413 247403 DEBUG oslo_concurrency.lockutils [req-0378fdbb-35c4-44d0-beed-c44decc866e5 req-671ee193-8554-432c-a50c-dfed3e17d028 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.413 247403 DEBUG oslo_concurrency.lockutils [req-0378fdbb-35c4-44d0-beed-c44decc866e5 req-671ee193-8554-432c-a50c-dfed3e17d028 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.413 247403 DEBUG oslo_concurrency.lockutils [req-0378fdbb-35c4-44d0-beed-c44decc866e5 req-671ee193-8554-432c-a50c-dfed3e17d028 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.414 247403 DEBUG nova.compute.manager [req-0378fdbb-35c4-44d0-beed-c44decc866e5 req-671ee193-8554-432c-a50c-dfed3e17d028 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-unplugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.414 247403 DEBUG nova.compute.manager [req-0378fdbb-35c4-44d0-beed-c44decc866e5 req-671ee193-8554-432c-a50c-dfed3e17d028 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:16:31 np0005603621 nova_compute[247399]: 2026-01-31 08:16:31.434 247403 DEBUG nova.compute.manager [None req-d9a9d4a9-884a-4ab6-a221-5dc0ba8b221d - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]: {
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:        "osd_id": 0,
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:        "type": "bluestore"
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]:    }
Jan 31 03:16:31 np0005603621 unruffled_mendel[310766]: }
Jan 31 03:16:31 np0005603621 systemd[1]: libpod-c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3.scope: Deactivated successfully.
Jan 31 03:16:31 np0005603621 conmon[310766]: conmon c4786f0cfb99c6f41465 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3.scope/container/memory.events
Jan 31 03:16:31 np0005603621 podman[310750]: 2026-01-31 08:16:31.654229626 +0000 UTC m=+1.371820132 container died c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:16:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail
Jan 31 03:16:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:31.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:31.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d6ee8bddfc371e254a90f3c18c61836f4f9e27d0a154d901a7a7e6e9d46d078c-merged.mount: Deactivated successfully.
Jan 31 03:16:33 np0005603621 podman[310750]: 2026-01-31 08:16:33.093565643 +0000 UTC m=+2.811156149 container remove c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:16:33 np0005603621 systemd[1]: libpod-conmon-c4786f0cfb99c6f414650af07865f2497e9d118602ceafce5d7365b56aa13da3.scope: Deactivated successfully.
Jan 31 03:16:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:16:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:16:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:16:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:16:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f3152924-0f85-4fde-b27f-c1f7ebf06b5f does not exist
Jan 31 03:16:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 460a638d-85b5-438d-b0e0-1bfc1c6e1585 does not exist
Jan 31 03:16:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 04148c60-3839-417c-beb1-0026163f00a2 does not exist
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.674 247403 DEBUG nova.compute.manager [req-0f74547e-5631-482c-9ab9-ab8b561d1576 req-6b28044e-f376-4207-b9e2-cfccd0a9ae9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.675 247403 DEBUG oslo_concurrency.lockutils [req-0f74547e-5631-482c-9ab9-ab8b561d1576 req-6b28044e-f376-4207-b9e2-cfccd0a9ae9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.675 247403 DEBUG oslo_concurrency.lockutils [req-0f74547e-5631-482c-9ab9-ab8b561d1576 req-6b28044e-f376-4207-b9e2-cfccd0a9ae9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.676 247403 DEBUG oslo_concurrency.lockutils [req-0f74547e-5631-482c-9ab9-ab8b561d1576 req-6b28044e-f376-4207-b9e2-cfccd0a9ae9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.676 247403 DEBUG nova.compute.manager [req-0f74547e-5631-482c-9ab9-ab8b561d1576 req-6b28044e-f376-4207-b9e2-cfccd0a9ae9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-unplugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.677 247403 DEBUG nova.compute.manager [req-0f74547e-5631-482c-9ab9-ab8b561d1576 req-6b28044e-f376-4207-b9e2-cfccd0a9ae9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.724 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.725 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:16:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail
Jan 31 03:16:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:33 np0005603621 nova_compute[247399]: 2026-01-31 08:16:33.849 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:33.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:33.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:16:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:16:34 np0005603621 nova_compute[247399]: 2026-01-31 08:16:34.614 247403 DEBUG nova.compute.manager [req-1a167abd-e71b-4fd7-a43e-bc1357b28ed6 req-b7550dd9-bcdb-424e-bd2d-87900899ce64 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:34 np0005603621 nova_compute[247399]: 2026-01-31 08:16:34.615 247403 DEBUG oslo_concurrency.lockutils [req-1a167abd-e71b-4fd7-a43e-bc1357b28ed6 req-b7550dd9-bcdb-424e-bd2d-87900899ce64 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:34 np0005603621 nova_compute[247399]: 2026-01-31 08:16:34.616 247403 DEBUG oslo_concurrency.lockutils [req-1a167abd-e71b-4fd7-a43e-bc1357b28ed6 req-b7550dd9-bcdb-424e-bd2d-87900899ce64 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:34 np0005603621 nova_compute[247399]: 2026-01-31 08:16:34.616 247403 DEBUG oslo_concurrency.lockutils [req-1a167abd-e71b-4fd7-a43e-bc1357b28ed6 req-b7550dd9-bcdb-424e-bd2d-87900899ce64 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:34 np0005603621 nova_compute[247399]: 2026-01-31 08:16:34.617 247403 DEBUG nova.compute.manager [req-1a167abd-e71b-4fd7-a43e-bc1357b28ed6 req-b7550dd9-bcdb-424e-bd2d-87900899ce64 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:34 np0005603621 nova_compute[247399]: 2026-01-31 08:16:34.617 247403 WARNING nova.compute.manager [req-1a167abd-e71b-4fd7-a43e-bc1357b28ed6 req-b7550dd9-bcdb-424e-bd2d-87900899ce64 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.049 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.848 247403 DEBUG nova.compute.manager [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.848 247403 DEBUG oslo_concurrency.lockutils [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.848 247403 DEBUG oslo_concurrency.lockutils [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.849 247403 DEBUG oslo_concurrency.lockutils [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.849 247403 DEBUG nova.compute.manager [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.849 247403 WARNING nova.compute.manager [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-6e8e75b8-b681-4fc0-be0b-89a15bda2ac3 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.849 247403 DEBUG nova.compute.manager [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-e76773d3-8a3a-451d-a29e-d01474b5f82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.849 247403 DEBUG oslo_concurrency.lockutils [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.849 247403 DEBUG oslo_concurrency.lockutils [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.850 247403 DEBUG oslo_concurrency.lockutils [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.850 247403 DEBUG nova.compute.manager [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-unplugged-e76773d3-8a3a-451d-a29e-d01474b5f82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:35 np0005603621 nova_compute[247399]: 2026-01-31 08:16:35.850 247403 DEBUG nova.compute.manager [req-25b3205d-f9bd-45d5-917a-a2f9f28bc167 req-aba504de-76f5-4c85-9808-59aa8b483984 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-e76773d3-8a3a-451d-a29e-d01474b5f82f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:16:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:37 np0005603621 nova_compute[247399]: 2026-01-31 08:16:37.721 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 188 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 713 KiB/s wr, 21 op/s
Jan 31 03:16:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:37.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:38 np0005603621 nova_compute[247399]: 2026-01-31 08:16:38.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:38 np0005603621 nova_compute[247399]: 2026-01-31 08:16:38.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:16:38
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', '.rgw.root']
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:16:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:16:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:38 np0005603621 nova_compute[247399]: 2026-01-31 08:16:38.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:39 np0005603621 nova_compute[247399]: 2026-01-31 08:16:39.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Jan 31 03:16:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:39.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.051 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.278 247403 DEBUG nova.compute.manager [req-80354931-fd02-4f91-98b4-376758102b1a req-f60a7649-08f9-44f5-912b-8fe439b7a2d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.278 247403 DEBUG oslo_concurrency.lockutils [req-80354931-fd02-4f91-98b4-376758102b1a req-f60a7649-08f9-44f5-912b-8fe439b7a2d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.278 247403 DEBUG oslo_concurrency.lockutils [req-80354931-fd02-4f91-98b4-376758102b1a req-f60a7649-08f9-44f5-912b-8fe439b7a2d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.278 247403 DEBUG oslo_concurrency.lockutils [req-80354931-fd02-4f91-98b4-376758102b1a req-f60a7649-08f9-44f5-912b-8fe439b7a2d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.279 247403 DEBUG nova.compute.manager [req-80354931-fd02-4f91-98b4-376758102b1a req-f60a7649-08f9-44f5-912b-8fe439b7a2d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.279 247403 WARNING nova.compute.manager [req-80354931-fd02-4f91-98b4-376758102b1a req-f60a7649-08f9-44f5-912b-8fe439b7a2d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-e76773d3-8a3a-451d-a29e-d01474b5f82f for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.309 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:40 np0005603621 nova_compute[247399]: 2026-01-31 08:16:40.309 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.499 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.499 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.500 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.500 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.500 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:16:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 23 op/s
Jan 31 03:16:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:41.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:16:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955414532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:16:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:41.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:41 np0005603621 nova_compute[247399]: 2026-01-31 08:16:41.928 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.057 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.059 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4432MB free_disk=20.96734619140625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.059 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.059 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.701 247403 DEBUG nova.compute.manager [req-a3d0badb-5186-43fd-b940-22a4f858a507 req-1dff59b3-0a6f-4530-81b7-250acd02e9df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-51338d60-6e7f-4716-b025-60a809240bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.701 247403 DEBUG oslo_concurrency.lockutils [req-a3d0badb-5186-43fd-b940-22a4f858a507 req-1dff59b3-0a6f-4530-81b7-250acd02e9df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.702 247403 DEBUG oslo_concurrency.lockutils [req-a3d0badb-5186-43fd-b940-22a4f858a507 req-1dff59b3-0a6f-4530-81b7-250acd02e9df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.702 247403 DEBUG oslo_concurrency.lockutils [req-a3d0badb-5186-43fd-b940-22a4f858a507 req-1dff59b3-0a6f-4530-81b7-250acd02e9df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.702 247403 DEBUG nova.compute.manager [req-a3d0badb-5186-43fd-b940-22a4f858a507 req-1dff59b3-0a6f-4530-81b7-250acd02e9df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-unplugged-51338d60-6e7f-4716-b025-60a809240bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:42 np0005603621 nova_compute[247399]: 2026-01-31 08:16:42.702 247403 DEBUG nova.compute.manager [req-a3d0badb-5186-43fd-b940-22a4f858a507 req-1dff59b3-0a6f-4530-81b7-250acd02e9df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-unplugged-51338d60-6e7f-4716-b025-60a809240bd3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.024 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.024 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.024 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.377 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.447 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.448 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.468 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.500 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.536 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:16:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.853 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:43.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:43.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:16:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25528227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.983 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:16:43 np0005603621 nova_compute[247399]: 2026-01-31 08:16:43.988 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:16:44 np0005603621 nova_compute[247399]: 2026-01-31 08:16:44.271 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:16:44 np0005603621 nova_compute[247399]: 2026-01-31 08:16:44.619 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:16:44 np0005603621 nova_compute[247399]: 2026-01-31 08:16:44.619 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.053 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.149 247403 DEBUG nova.compute.manager [req-363d1955-9748-4084-87a8-b83a190320c2 req-8527f255-2e7c-4210-8ed9-1f52516b7ec4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.149 247403 DEBUG oslo_concurrency.lockutils [req-363d1955-9748-4084-87a8-b83a190320c2 req-8527f255-2e7c-4210-8ed9-1f52516b7ec4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.150 247403 DEBUG oslo_concurrency.lockutils [req-363d1955-9748-4084-87a8-b83a190320c2 req-8527f255-2e7c-4210-8ed9-1f52516b7ec4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.150 247403 DEBUG oslo_concurrency.lockutils [req-363d1955-9748-4084-87a8-b83a190320c2 req-8527f255-2e7c-4210-8ed9-1f52516b7ec4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.150 247403 DEBUG nova.compute.manager [req-363d1955-9748-4084-87a8-b83a190320c2 req-8527f255-2e7c-4210-8ed9-1f52516b7ec4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] No waiting events found dispatching network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.150 247403 WARNING nova.compute.manager [req-363d1955-9748-4084-87a8-b83a190320c2 req-8527f255-2e7c-4210-8ed9-1f52516b7ec4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received unexpected event network-vif-plugged-51338d60-6e7f-4716-b025-60a809240bd3 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:16:45 np0005603621 nova_compute[247399]: 2026-01-31 08:16:45.621 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 808 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 31 03:16:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:45.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:45.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:46 np0005603621 podman[310955]: 2026-01-31 08:16:46.495304486 +0000 UTC m=+0.046896070 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:16:46 np0005603621 podman[310956]: 2026-01-31 08:16:46.52159397 +0000 UTC m=+0.073012840 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:16:46 np0005603621 nova_compute[247399]: 2026-01-31 08:16:46.971 247403 DEBUG nova.compute.manager [req-10fbe970-619c-4321-b7da-2f16f729f716 req-27d2064d-a3b0-4923-90a4-55a7ad119812 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-deleted-d1a1fd13-040f-49ef-b94f-98bcea71df76 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:46 np0005603621 nova_compute[247399]: 2026-01-31 08:16:46.972 247403 INFO nova.compute.manager [req-10fbe970-619c-4321-b7da-2f16f729f716 req-27d2064d-a3b0-4923-90a4-55a7ad119812 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Neutron deleted interface d1a1fd13-040f-49ef-b94f-98bcea71df76; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:16:46 np0005603621 nova_compute[247399]: 2026-01-31 08:16:46.972 247403 DEBUG nova.network.neutron [req-10fbe970-619c-4321-b7da-2f16f729f716 req-27d2064d-a3b0-4923-90a4-55a7ad119812 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "address": "fa:16:3e:77:02:fd", "network": {"id": "370ef3ad-fbaf-4df3-ad34-de7587567f0e", "bridge": "br-int", "label": "tempest-TaggedBootDevicesTest_v242-1476828789-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43de799d-e6", "ovs_interfaceid": "43de799d-e636-43ba-89c3-cb3a6a2ed888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:47 np0005603621 nova_compute[247399]: 2026-01-31 08:16:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:16:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 31 03:16:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:47.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:47.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:47 np0005603621 nova_compute[247399]: 2026-01-31 08:16:47.929 247403 DEBUG nova.compute.manager [req-10fbe970-619c-4321-b7da-2f16f729f716 req-27d2064d-a3b0-4923-90a4-55a7ad119812 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Detach interface failed, port_id=d1a1fd13-040f-49ef-b94f-98bcea71df76, reason: Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:16:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:48 np0005603621 nova_compute[247399]: 2026-01-31 08:16:48.854 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000997464172715429 of space, bias 1.0, pg target 0.2992392518146287 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003157970116787642 of space, bias 1.0, pg target 0.9473910350362926 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:16:49 np0005603621 nova_compute[247399]: 2026-01-31 08:16:49.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:49 np0005603621 nova_compute[247399]: 2026-01-31 08:16:49.543 247403 DEBUG nova.compute.manager [req-ffb49dff-8a55-4e66-b8a6-a497d17dca43 req-e256b8fa-8131-4da9-974f-c25f1741ef14 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-deleted-43de799d-e636-43ba-89c3-cb3a6a2ed888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:49 np0005603621 nova_compute[247399]: 2026-01-31 08:16:49.543 247403 INFO nova.compute.manager [req-ffb49dff-8a55-4e66-b8a6-a497d17dca43 req-e256b8fa-8131-4da9-974f-c25f1741ef14 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Neutron deleted interface 43de799d-e636-43ba-89c3-cb3a6a2ed888; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:16:49 np0005603621 nova_compute[247399]: 2026-01-31 08:16:49.543 247403 DEBUG nova.network.neutron [req-ffb49dff-8a55-4e66-b8a6-a497d17dca43 req-e256b8fa-8131-4da9-974f-c25f1741ef14 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "address": "fa:16:3e:76:b2:52", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76773d3-8a", "ovs_interfaceid": "e76773d3-8a3a-451d-a29e-d01474b5f82f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 69 op/s
Jan 31 03:16:49 np0005603621 nova_compute[247399]: 2026-01-31 08:16:49.847 247403 DEBUG nova.compute.manager [req-ffb49dff-8a55-4e66-b8a6-a497d17dca43 req-e256b8fa-8131-4da9-974f-c25f1741ef14 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Detach interface failed, port_id=43de799d-e636-43ba-89c3-cb3a6a2ed888, reason: Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:16:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:49.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:16:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:49.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:16:50 np0005603621 nova_compute[247399]: 2026-01-31 08:16:50.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:51 np0005603621 nova_compute[247399]: 2026-01-31 08:16:51.810 247403 DEBUG nova.compute.manager [req-5234f1c2-a858-4755-bb71-9d8fd4b95d88 req-bc60f16a-9738-4f1a-9509-729ec3faad73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-deleted-e76773d3-8a3a-451d-a29e-d01474b5f82f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:51 np0005603621 nova_compute[247399]: 2026-01-31 08:16:51.810 247403 INFO nova.compute.manager [req-5234f1c2-a858-4755-bb71-9d8fd4b95d88 req-bc60f16a-9738-4f1a-9509-729ec3faad73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Neutron deleted interface e76773d3-8a3a-451d-a29e-d01474b5f82f; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:16:51 np0005603621 nova_compute[247399]: 2026-01-31 08:16:51.810 247403 DEBUG nova.network.neutron [req-5234f1c2-a858-4755-bb71-9d8fd4b95d88 req-bc60f16a-9738-4f1a-9509-729ec3faad73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "address": "fa:16:3e:f4:7b:5a", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.39", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31ce268e-bf", "ovs_interfaceid": "31ce268e-bfe2-4d8b-acd2-5a75e9725d95", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 67 op/s
Jan 31 03:16:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:51.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:51.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:52 np0005603621 nova_compute[247399]: 2026-01-31 08:16:52.717 247403 DEBUG nova.compute.manager [req-5234f1c2-a858-4755-bb71-9d8fd4b95d88 req-bc60f16a-9738-4f1a-9509-729ec3faad73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Detach interface failed, port_id=e76773d3-8a3a-451d-a29e-d01474b5f82f, reason: Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:16:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 31 03:16:53 np0005603621 nova_compute[247399]: 2026-01-31 08:16:53.855 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:53.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:54 np0005603621 nova_compute[247399]: 2026-01-31 08:16:54.396 247403 DEBUG nova.network.neutron [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:55 np0005603621 nova_compute[247399]: 2026-01-31 08:16:55.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:55 np0005603621 nova_compute[247399]: 2026-01-31 08:16:55.662 247403 DEBUG nova.compute.manager [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-deleted-31ce268e-bfe2-4d8b-acd2-5a75e9725d95 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:55 np0005603621 nova_compute[247399]: 2026-01-31 08:16:55.662 247403 INFO nova.compute.manager [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Neutron deleted interface 31ce268e-bfe2-4d8b-acd2-5a75e9725d95; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:16:55 np0005603621 nova_compute[247399]: 2026-01-31 08:16:55.662 247403 DEBUG nova.network.neutron [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "51338d60-6e7f-4716-b025-60a809240bd3", "address": "fa:16:3e:63:b0:67", "network": {"id": "411aa011-d813-4343-b297-43dfd39c905e", "bridge": "br-int", "label": "tempest-device-tagging-net2-285119066", "subnets": [{"cidr": "10.2.2.0/24", "dns": [], "gateway": {"address": "10.2.2.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.2.2.100", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51338d60-6e", "ovs_interfaceid": "51338d60-6e7f-4716-b025-60a809240bd3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:55 np0005603621 nova_compute[247399]: 2026-01-31 08:16:55.816 247403 INFO nova.compute.manager [-] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Took 38.26 seconds to deallocate network for instance.#033[00m
Jan 31 03:16:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 214 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:16:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:55.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:55.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:56 np0005603621 nova_compute[247399]: 2026-01-31 08:16:56.496 247403 DEBUG nova.compute.manager [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Detach interface failed, port_id=31ce268e-bfe2-4d8b-acd2-5a75e9725d95, reason: Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:16:56 np0005603621 nova_compute[247399]: 2026-01-31 08:16:56.497 247403 DEBUG nova.compute.manager [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Received event network-vif-deleted-51338d60-6e7f-4716-b025-60a809240bd3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:16:56 np0005603621 nova_compute[247399]: 2026-01-31 08:16:56.497 247403 INFO nova.compute.manager [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Neutron deleted interface 51338d60-6e7f-4716-b025-60a809240bd3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:16:56 np0005603621 nova_compute[247399]: 2026-01-31 08:16:56.497 247403 DEBUG nova.network.neutron [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Updating instance_info_cache with network_info: [{"id": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "address": "fa:16:3e:6e:b4:87", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd1f006d-b1", "ovs_interfaceid": "fd1f006d-b119-4bc7-8bd9-bfdd207ffe5b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}, {"id": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "address": "fa:16:3e:e2:af:a0", "network": {"id": "3e97a400-ca43-4c41-8017-9770a9ed4c8e", "bridge": "br-int", "label": "tempest-device-tagging-net1-471014700", "subnets": [{"cidr": "10.1.1.0/24", "dns": [], "gateway": {"address": "10.1.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.1.145", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a6c60c75300483aa07e13b08923b1a1", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e8e75b8-b6", "ovs_interfaceid": "6e8e75b8-b681-4fc0-be0b-89a15bda2ac3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:16:56 np0005603621 nova_compute[247399]: 2026-01-31 08:16:56.980 247403 DEBUG nova.compute.manager [req-f4952600-dc97-42cf-bebc-c5a816924d0e req-6d1f30bf-f50d-4983-99d9-a6e3bf7b6cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Detach interface failed, port_id=51338d60-6e7f-4716-b025-60a809240bd3, reason: Instance 85e16f4c-d977-4032-9cbd-b904f1d789d4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:16:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 230 MiB data, 886 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 64 op/s
Jan 31 03:16:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:16:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:57.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:16:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:57.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:58 np0005603621 nova_compute[247399]: 2026-01-31 08:16:58.473 247403 INFO nova.compute.manager [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] [instance: 85e16f4c-d977-4032-9cbd-b904f1d789d4] Took 2.66 seconds to detach 3 volumes for instance.#033[00m
Jan 31 03:16:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:16:58 np0005603621 nova_compute[247399]: 2026-01-31 08:16:58.857 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:16:59 np0005603621 nova_compute[247399]: 2026-01-31 08:16:59.518 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:16:59 np0005603621 nova_compute[247399]: 2026-01-31 08:16:59.518 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:16:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:16:59.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:16:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:16:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:16:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:16:59.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 235 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 84 op/s
Jan 31 03:17:00 np0005603621 nova_compute[247399]: 2026-01-31 08:17:00.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:00 np0005603621 nova_compute[247399]: 2026-01-31 08:17:00.369 247403 DEBUG oslo_concurrency.processutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275380568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:00 np0005603621 nova_compute[247399]: 2026-01-31 08:17:00.820 247403 DEBUG oslo_concurrency.processutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:00 np0005603621 nova_compute[247399]: 2026-01-31 08:17:00.829 247403 DEBUG nova.compute.provider_tree [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:17:01 np0005603621 nova_compute[247399]: 2026-01-31 08:17:01.005 247403 DEBUG nova.scheduler.client.report [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:17:01 np0005603621 nova_compute[247399]: 2026-01-31 08:17:01.144 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:01 np0005603621 nova_compute[247399]: 2026-01-31 08:17:01.291 247403 INFO nova.scheduler.client.report [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Deleted allocations for instance 85e16f4c-d977-4032-9cbd-b904f1d789d4#033[00m
Jan 31 03:17:01 np0005603621 nova_compute[247399]: 2026-01-31 08:17:01.530 247403 DEBUG oslo_concurrency.lockutils [None req-7b106ebd-de43-4096-9cfd-ca40df012413 8080000681f449c3a9754c876165d667 5a6c60c75300483aa07e13b08923b1a1 - - default default] Lock "85e16f4c-d977-4032-9cbd-b904f1d789d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 47.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 235 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 998 KiB/s rd, 2.0 MiB/s wr, 69 op/s
Jan 31 03:17:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:01.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:01.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:02 np0005603621 nova_compute[247399]: 2026-01-31 08:17:02.297 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:02 np0005603621 nova_compute[247399]: 2026-01-31 08:17:02.298 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:02 np0005603621 nova_compute[247399]: 2026-01-31 08:17:02.463 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:17:03 np0005603621 nova_compute[247399]: 2026-01-31 08:17:03.177 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:03 np0005603621 nova_compute[247399]: 2026-01-31 08:17:03.178 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:03 np0005603621 nova_compute[247399]: 2026-01-31 08:17:03.186 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:17:03 np0005603621 nova_compute[247399]: 2026-01-31 08:17:03.186 247403 INFO nova.compute.claims [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:17:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 246 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 31 03:17:03 np0005603621 nova_compute[247399]: 2026-01-31 08:17:03.859 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:03.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:03 np0005603621 nova_compute[247399]: 2026-01-31 08:17:03.980 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:03.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244741714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:04 np0005603621 nova_compute[247399]: 2026-01-31 08:17:04.395 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:04 np0005603621 nova_compute[247399]: 2026-01-31 08:17:04.400 247403 DEBUG nova.compute.provider_tree [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:17:04 np0005603621 nova_compute[247399]: 2026-01-31 08:17:04.609 247403 DEBUG nova.scheduler.client.report [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.010 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.011 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.247 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.247 247403 DEBUG nova.network.neutron [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.346 247403 INFO nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.501 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.770 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.771 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.772 247403 INFO nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Creating image(s)#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.800 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.833 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 247 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.862 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.866 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.888 247403 DEBUG nova.policy [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8db5a8acb6d04c988f9dd4f74380c487', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'eafe22d6cfcb41d4b31597087498a565', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:17:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:05.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.923 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.925 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.925 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.926 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.952 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:05 np0005603621 nova_compute[247399]: 2026-01-31 08:17:05.955 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:05.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.333 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.402 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] resizing rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.514 247403 DEBUG nova.objects.instance [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lazy-loading 'migration_context' on Instance uuid f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.976 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.977 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Ensure instance console log exists: /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.977 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.978 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:06 np0005603621 nova_compute[247399]: 2026-01-31 08:17:06.978 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 270 MiB data, 913 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 142 op/s
Jan 31 03:17:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:07.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:07.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:08 np0005603621 nova_compute[247399]: 2026-01-31 08:17:08.427 247403 DEBUG nova.network.neutron [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Successfully created port: 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:08 np0005603621 nova_compute[247399]: 2026-01-31 08:17:08.862 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 378 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.1 MiB/s wr, 199 op/s
Jan 31 03:17:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:09.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:09.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:10 np0005603621 nova_compute[247399]: 2026-01-31 08:17:10.133 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 378 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.4 MiB/s wr, 155 op/s
Jan 31 03:17:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:11.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:11.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:12 np0005603621 nova_compute[247399]: 2026-01-31 08:17:12.778 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:12 np0005603621 nova_compute[247399]: 2026-01-31 08:17:12.800 247403 DEBUG nova.network.neutron [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Successfully updated port: 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:17:12 np0005603621 nova_compute[247399]: 2026-01-31 08:17:12.883 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:17:12 np0005603621 nova_compute[247399]: 2026-01-31 08:17:12.884 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquired lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:17:12 np0005603621 nova_compute[247399]: 2026-01-31 08:17:12.884 247403 DEBUG nova.network.neutron [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:17:12 np0005603621 nova_compute[247399]: 2026-01-31 08:17:12.940 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:13 np0005603621 nova_compute[247399]: 2026-01-31 08:17:13.409 247403 DEBUG nova.network.neutron [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:17:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 339 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.4 MiB/s wr, 179 op/s
Jan 31 03:17:13 np0005603621 nova_compute[247399]: 2026-01-31 08:17:13.861 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:13.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:13.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:15 np0005603621 nova_compute[247399]: 2026-01-31 08:17:15.133 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:15 np0005603621 nova_compute[247399]: 2026-01-31 08:17:15.676 247403 DEBUG nova.compute.manager [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-changed-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:17:15 np0005603621 nova_compute[247399]: 2026-01-31 08:17:15.677 247403 DEBUG nova.compute.manager [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Refreshing instance network info cache due to event network-changed-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:17:15 np0005603621 nova_compute[247399]: 2026-01-31 08:17:15.677 247403 DEBUG oslo_concurrency.lockutils [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:17:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 339 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 5.3 MiB/s wr, 112 op/s
Jan 31 03:17:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:15.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:16.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:17 np0005603621 nova_compute[247399]: 2026-01-31 08:17:17.032 247403 DEBUG nova.network.neutron [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Updating instance_info_cache with network_info: [{"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:17:17 np0005603621 podman[311278]: 2026-01-31 08:17:17.50559682 +0000 UTC m=+0.050996809 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 03:17:17 np0005603621 podman[311279]: 2026-01-31 08:17:17.551427177 +0000 UTC m=+0.095192254 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:17:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 339 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 5.3 MiB/s wr, 123 op/s
Jan 31 03:17:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:17.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:18.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:18 np0005603621 nova_compute[247399]: 2026-01-31 08:17:18.863 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.024 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Releasing lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.025 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Instance network_info: |[{"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.025 247403 DEBUG oslo_concurrency.lockutils [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.025 247403 DEBUG nova.network.neutron [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Refreshing network info cache for port 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.028 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Start _get_guest_xml network_info=[{"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.031 247403 WARNING nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.036 247403 DEBUG nova.virt.libvirt.host [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.037 247403 DEBUG nova.virt.libvirt.host [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.039 247403 DEBUG nova.virt.libvirt.host [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.040 247403 DEBUG nova.virt.libvirt.host [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.041 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.041 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.041 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.041 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.041 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.042 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.042 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.042 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.042 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.042 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.042 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.043 247403 DEBUG nova.virt.hardware [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.045 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:17:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198843850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.486 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.513 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.518 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 339 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 4.8 MiB/s wr, 123 op/s
Jan 31 03:17:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:19.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:17:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352106307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.954 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.957 247403 DEBUG nova.virt.libvirt.vif [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-603829389',display_name='tempest-ListServersNegativeTestJSON-server-603829389-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-603829389-1',id=98,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eafe22d6cfcb41d4b31597087498a565',ramdisk_id='',reservation_id='r-5jowh2i5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-79577656',owner_user_name='tempest-ListServersNegativeTestJSON-79577656-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:05Z,user_data=None,user_id='8db5a8acb6d04c988f9dd4f74380c487',uuid=f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.957 247403 DEBUG nova.network.os_vif_util [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Converting VIF {"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.958 247403 DEBUG nova.network.os_vif_util [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:17:19 np0005603621 nova_compute[247399]: 2026-01-31 08:17:19.960 247403 DEBUG nova.objects.instance [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lazy-loading 'pci_devices' on Instance uuid f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:17:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:20.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.049 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <uuid>f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b</uuid>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <name>instance-00000062</name>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:name>tempest-ListServersNegativeTestJSON-server-603829389-1</nova:name>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:17:19</nova:creationTime>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:user uuid="8db5a8acb6d04c988f9dd4f74380c487">tempest-ListServersNegativeTestJSON-79577656-project-member</nova:user>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:project uuid="eafe22d6cfcb41d4b31597087498a565">tempest-ListServersNegativeTestJSON-79577656</nova:project>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <nova:port uuid="418a8dcf-ff92-43bd-a0b6-1f7d9ae61527">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <entry name="serial">f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b</entry>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <entry name="uuid">f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b</entry>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk.config">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:f9:a0:63"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <target dev="tap418a8dcf-ff"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/console.log" append="off"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:17:20 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:17:20 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:17:20 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:17:20 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.050 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Preparing to wait for external event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.052 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.053 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.053 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.054 247403 DEBUG nova.virt.libvirt.vif [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-603829389',display_name='tempest-ListServersNegativeTestJSON-server-603829389-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-603829389-1',id=98,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='eafe22d6cfcb41d4b31597087498a565',ramdisk_id='',reservation_id='r-5jowh2i5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-79577656',owner_user_
name='tempest-ListServersNegativeTestJSON-79577656-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:17:05Z,user_data=None,user_id='8db5a8acb6d04c988f9dd4f74380c487',uuid=f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.055 247403 DEBUG nova.network.os_vif_util [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Converting VIF {"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.055 247403 DEBUG nova.network.os_vif_util [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.056 247403 DEBUG os_vif [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.057 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.058 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.062 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap418a8dcf-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.062 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap418a8dcf-ff, col_values=(('external_ids', {'iface-id': '418a8dcf-ff92-43bd-a0b6-1f7d9ae61527', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:a0:63', 'vm-uuid': 'f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:20 np0005603621 NetworkManager[49013]: <info>  [1769847440.0644] manager: (tap418a8dcf-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.066 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.069 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.070 247403 INFO os_vif [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff')#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.262 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.263 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.263 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] No VIF found with MAC fa:16:3e:f9:a0:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.264 247403 INFO nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Using config drive#033[00m
Jan 31 03:17:20 np0005603621 nova_compute[247399]: 2026-01-31 08:17:20.285 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:17:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1452837925' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:17:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:17:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1452837925' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:17:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 339 MiB data, 949 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 17 KiB/s wr, 50 op/s
Jan 31 03:17:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:21.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:22.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.272 247403 INFO nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Creating config drive at /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/disk.config#033[00m
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.275 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1mmu43by execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.403 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1mmu43by" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.432 247403 DEBUG nova.storage.rbd_utils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] rbd image f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.437 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/disk.config f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.581 247403 DEBUG oslo_concurrency.processutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/disk.config f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.582 247403 INFO nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Deleting local config drive /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b/disk.config because it was imported into RBD.#033[00m
Jan 31 03:17:22 np0005603621 kernel: tap418a8dcf-ff: entered promiscuous mode
Jan 31 03:17:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:22Z|00373|binding|INFO|Claiming lport 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 for this chassis.
Jan 31 03:17:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:22Z|00374|binding|INFO|418a8dcf-ff92-43bd-a0b6-1f7d9ae61527: Claiming fa:16:3e:f9:a0:63 10.100.0.14
Jan 31 03:17:22 np0005603621 NetworkManager[49013]: <info>  [1769847442.6261] manager: (tap418a8dcf-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.625 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:22 np0005603621 systemd-udevd[311459]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:17:22 np0005603621 systemd-machined[212769]: New machine qemu-43-instance-00000062.
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.652 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:22Z|00375|binding|INFO|Setting lport 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 ovn-installed in OVS
Jan 31 03:17:22 np0005603621 nova_compute[247399]: 2026-01-31 08:17:22.657 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:22 np0005603621 NetworkManager[49013]: <info>  [1769847442.6607] device (tap418a8dcf-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:17:22 np0005603621 NetworkManager[49013]: <info>  [1769847442.6614] device (tap418a8dcf-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:17:22 np0005603621 systemd[1]: Started Virtual Machine qemu-43-instance-00000062.
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.086 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847443.0860682, f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.087 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] VM Started (Lifecycle Event)#033[00m
Jan 31 03:17:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:23Z|00376|binding|INFO|Setting lport 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 up in Southbound
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.495 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a0:63 10.100.0.14'], port_security=['fa:16:3e:f9:a0:63 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eafe22d6cfcb41d4b31597087498a565', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2748ecb7-7ea4-47e3-84b3-eb7c4d3ddc31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d5be35-4a5e-4d77-b9b0-f9a41d41dd18, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.496 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 in datapath 4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 bound to our chassis#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.498 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.507 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[41a3c377-3272-4618-b391-4a559c6f8c34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.508 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ba4d6d9-c1 in ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.509 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ba4d6d9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.510 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[085a3b6b-46a4-41f0-9a0a-78f2488e53c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.510 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1e49b7-d8c1-490c-b749-eb9f7f2c51f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.518 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ed62f1-d718-4cac-b23f-15e7932926d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.527 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f1eb2559-2e26-49bb-b1ef-593ddf73cdb1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.550 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[dab4b354-3232-4ea3-88a9-0d1cf08c4491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.553 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.555 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0c6842-bce6-458a-8489-7aa9e9bdc789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 NetworkManager[49013]: <info>  [1769847443.5567] manager: (tap4ba4d6d9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Jan 31 03:17:23 np0005603621 systemd-udevd[311461]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.556 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847443.0871654, f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.557 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.579 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[13cc83cc-e659-409c-bdfe-5bee4a8ac93a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.582 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5d8c5c-9071-42fb-9728-1a82911b69e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 NetworkManager[49013]: <info>  [1769847443.6000] device (tap4ba4d6d9-c0): carrier: link connected
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.603 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[941d90de-5c71-4edd-96d0-a9cf433bc631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.615 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d033fe69-15fc-4867-ae1e-4fc542d5ef68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ba4d6d9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:09:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674911, 'reachable_time': 15978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311535, 'error': None, 'target': 'ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.626 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0ddce1e8-0a20-44f9-afd3-37ac71431b56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:951'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674911, 'tstamp': 674911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311536, 'error': None, 'target': 'ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.638 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a2856670-a226-45c7-b5ab-a2d2962dc2d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ba4d6d9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:09:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674911, 'reachable_time': 15978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311537, 'error': None, 'target': 'ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.653 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.656 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.658 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3083e62f-f733-4384-b941-5ae7b7cd89d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.693 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a0b60d-3154-4dac-8740-9340153d6bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.694 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ba4d6d9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.694 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.695 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ba4d6d9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:17:23 np0005603621 NetworkManager[49013]: <info>  [1769847443.6972] manager: (tap4ba4d6d9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 31 03:17:23 np0005603621 kernel: tap4ba4d6d9-c0: entered promiscuous mode
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.698 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.699 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ba4d6d9-c0, col_values=(('external_ids', {'iface-id': 'f9d388c2-0e9b-4991-9c20-42c713a8ba0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:23Z|00377|binding|INFO|Releasing lport f9d388c2-0e9b-4991-9c20-42c713a8ba0d from this chassis (sb_readonly=0)
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.701 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.702 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.703 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e95994ec-0035-4590-9f4c-13ecc2503ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.704 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9.pid.haproxy
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.705 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:23.705 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'env', 'PROCESS_TAG=haproxy-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.723 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:17:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 312 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 30 KiB/s wr, 71 op/s
Jan 31 03:17:23 np0005603621 nova_compute[247399]: 2026-01-31 08:17:23.865 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:24.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:24 np0005603621 podman[311570]: 2026-01-31 08:17:24.031197359 +0000 UTC m=+0.050640057 container create 73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:17:24 np0005603621 systemd[1]: Started libpod-conmon-73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2.scope.
Jan 31 03:17:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557404ae95650eb70a592d3a473bd2be1c6cb5b927ad421e8139f7cb6a2d1737/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:24 np0005603621 podman[311570]: 2026-01-31 08:17:23.99740068 +0000 UTC m=+0.016843398 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:17:24 np0005603621 podman[311570]: 2026-01-31 08:17:24.096432904 +0000 UTC m=+0.115875612 container init 73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:17:24 np0005603621 podman[311570]: 2026-01-31 08:17:24.100432738 +0000 UTC m=+0.119875436 container start 73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:17:24 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [NOTICE]   (311589) : New worker (311591) forked
Jan 31 03:17:24 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [NOTICE]   (311589) : Loading success.
Jan 31 03:17:24 np0005603621 nova_compute[247399]: 2026-01-31 08:17:24.296 247403 DEBUG nova.network.neutron [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Updated VIF entry in instance network info cache for port 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:17:24 np0005603621 nova_compute[247399]: 2026-01-31 08:17:24.297 247403 DEBUG nova.network.neutron [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Updating instance_info_cache with network_info: [{"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:17:24 np0005603621 nova_compute[247399]: 2026-01-31 08:17:24.382 247403 DEBUG oslo_concurrency.lockutils [req-a83d50c6-c569-4922-bc62-c2da429eef42 req-6094df42-d0fa-487d-bebd-fc47849d9403 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:17:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:17:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1643322517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:17:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:17:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1643322517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:17:25 np0005603621 nova_compute[247399]: 2026-01-31 08:17:25.065 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 285 MiB data, 917 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 27 KiB/s wr, 70 op/s
Jan 31 03:17:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:26.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 260 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 39 KiB/s wr, 75 op/s
Jan 31 03:17:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:27.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:28.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:28.068 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:17:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:28.069 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:17:28 np0005603621 nova_compute[247399]: 2026-01-31 08:17:28.113 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:28 np0005603621 nova_compute[247399]: 2026-01-31 08:17:28.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 260 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 39 KiB/s wr, 137 op/s
Jan 31 03:17:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:30.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.067 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.196 247403 DEBUG nova.compute.manager [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.196 247403 DEBUG oslo_concurrency.lockutils [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.197 247403 DEBUG oslo_concurrency.lockutils [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.197 247403 DEBUG oslo_concurrency.lockutils [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.197 247403 DEBUG nova.compute.manager [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Processing event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.197 247403 DEBUG nova.compute.manager [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.197 247403 DEBUG oslo_concurrency.lockutils [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.198 247403 DEBUG oslo_concurrency.lockutils [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.198 247403 DEBUG oslo_concurrency.lockutils [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.198 247403 DEBUG nova.compute.manager [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] No waiting events found dispatching network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.198 247403 WARNING nova.compute.manager [req-700e900a-185f-40f7-8bbd-c40c1b40b712 req-9f116f2b-f359-4c9f-ae91-289d7f920eea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received unexpected event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 for instance with vm_state building and task_state spawning.
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.199 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.203 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847450.2027764, f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.203 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] VM Resumed (Lifecycle Event)
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.204 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.207 247403 INFO nova.virt.libvirt.driver [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Instance spawned successfully.
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.207 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.306 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.311 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.312 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.312 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.313 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.313 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.313 247403 DEBUG nova.virt.libvirt.driver [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.318 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.420 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:17:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:30.500 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:30.500 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:30.501 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.581 247403 INFO nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Took 24.81 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.581 247403 DEBUG nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.739 247403 INFO nova.compute.manager [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Took 27.59 seconds to build instance.#033[00m
Jan 31 03:17:30 np0005603621 nova_compute[247399]: 2026-01-31 08:17:30.811 247403 DEBUG oslo_concurrency.lockutils [None req-9dd4deff-06fe-42a3-a8fd-88cce8b1934d 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 260 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 123 op/s
Jan 31 03:17:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:31.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:33.072 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:17:33 np0005603621 nova_compute[247399]: 2026-01-31 08:17:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:17:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 260 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 38 KiB/s wr, 214 op/s
Jan 31 03:17:33 np0005603621 nova_compute[247399]: 2026-01-31 08:17:33.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:33.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:35 np0005603621 nova_compute[247399]: 2026-01-31 08:17:35.070 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:17:35 np0005603621 nova_compute[247399]: 2026-01-31 08:17:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:17:35 np0005603621 nova_compute[247399]: 2026-01-31 08:17:35.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:17:35 np0005603621 nova_compute[247399]: 2026-01-31 08:17:35.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:17:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:17:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 260 MiB data, 903 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 230 op/s
Jan 31 03:17:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:35.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:17:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:17:36 np0005603621 nova_compute[247399]: 2026-01-31 08:17:36.082 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:17:36 np0005603621 nova_compute[247399]: 2026-01-31 08:17:36.082 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:17:36 np0005603621 nova_compute[247399]: 2026-01-31 08:17:36.082 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:17:36 np0005603621 nova_compute[247399]: 2026-01-31 08:17:36.083 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ce6c116e-23e6-4451-8775-77053bd0dbc7 does not exist
Jan 31 03:17:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a8b08833-7df9-428c-8d2d-29911999af93 does not exist
Jan 31 03:17:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c1ec98c0-a675-4bd4-bc39-5c9946393493 does not exist
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:17:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:17:37 np0005603621 podman[311926]: 2026-01-31 08:17:37.486898493 +0000 UTC m=+0.084857100 container create 5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bose, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:17:37 np0005603621 podman[311926]: 2026-01-31 08:17:37.424169437 +0000 UTC m=+0.022128064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:17:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:17:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:17:37 np0005603621 systemd[1]: Started libpod-conmon-5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0.scope.
Jan 31 03:17:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 261 MiB data, 905 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 240 KiB/s wr, 209 op/s
Jan 31 03:17:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:37.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:38 np0005603621 podman[311926]: 2026-01-31 08:17:38.068206716 +0000 UTC m=+0.666165343 container init 5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bose, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:17:38 np0005603621 podman[311926]: 2026-01-31 08:17:38.076238338 +0000 UTC m=+0.674196945 container start 5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:17:38 np0005603621 fervent_bose[311943]: 167 167
Jan 31 03:17:38 np0005603621 systemd[1]: libpod-5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0.scope: Deactivated successfully.
Jan 31 03:17:38 np0005603621 podman[311926]: 2026-01-31 08:17:38.208486622 +0000 UTC m=+0.806445229 container attach 5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:17:38 np0005603621 podman[311926]: 2026-01-31 08:17:38.210197945 +0000 UTC m=+0.808156562 container died 5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:17:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1372935f2d8a9921508406655073cffa589bb1785c8fda2151353ee9e57329d8-merged.mount: Deactivated successfully.
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:17:38
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'default.rgw.log', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:17:38 np0005603621 podman[311926]: 2026-01-31 08:17:38.564777265 +0000 UTC m=+1.162735882 container remove 5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bose, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 03:17:38 np0005603621 systemd[1]: libpod-conmon-5d15ff7e5024cd480e9772fa8e05541d3e4a4edde3e86f67cfad313c6de838b0.scope: Deactivated successfully.
Jan 31 03:17:38 np0005603621 podman[311967]: 2026-01-31 08:17:38.720342309 +0000 UTC m=+0.065665158 container create 5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:38 np0005603621 podman[311967]: 2026-01-31 08:17:38.675662019 +0000 UTC m=+0.020984898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:17:38 np0005603621 systemd[1]: Started libpod-conmon-5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6.scope.
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:17:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:17:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c48bd767ad124d3d3c34af3a64253610aab758db5f12d8d939ba6f790bb28b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c48bd767ad124d3d3c34af3a64253610aab758db5f12d8d939ba6f790bb28b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c48bd767ad124d3d3c34af3a64253610aab758db5f12d8d939ba6f790bb28b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c48bd767ad124d3d3c34af3a64253610aab758db5f12d8d939ba6f790bb28b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64c48bd767ad124d3d3c34af3a64253610aab758db5f12d8d939ba6f790bb28b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:38 np0005603621 nova_compute[247399]: 2026-01-31 08:17:38.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:17:38 np0005603621 podman[311967]: 2026-01-31 08:17:38.93486884 +0000 UTC m=+0.280191709 container init 5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:17:38 np0005603621 podman[311967]: 2026-01-31 08:17:38.94093135 +0000 UTC m=+0.286254199 container start 5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:17:39 np0005603621 podman[311967]: 2026-01-31 08:17:39.068965662 +0000 UTC m=+0.414288511 container attach 5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:17:39 np0005603621 unruffled_joliot[311983]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:17:39 np0005603621 unruffled_joliot[311983]: --> relative data size: 1.0
Jan 31 03:17:39 np0005603621 unruffled_joliot[311983]: --> All data devices are unavailable
Jan 31 03:17:39 np0005603621 systemd[1]: libpod-5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6.scope: Deactivated successfully.
Jan 31 03:17:39 np0005603621 podman[311967]: 2026-01-31 08:17:39.709280044 +0000 UTC m=+1.054602893 container died 5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:17:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-64c48bd767ad124d3d3c34af3a64253610aab758db5f12d8d939ba6f790bb28b-merged.mount: Deactivated successfully.
Jan 31 03:17:39 np0005603621 podman[311967]: 2026-01-31 08:17:39.764094181 +0000 UTC m=+1.109417030 container remove 5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_joliot, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:17:39 np0005603621 systemd[1]: libpod-conmon-5fcf37e9835d61d468b4d8cc1eeeb8d148ab182b64de68ffe037b6b1165027b6.scope: Deactivated successfully.
Jan 31 03:17:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 272 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.4 MiB/s wr, 229 op/s
Jan 31 03:17:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:39.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:40.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.285383725 +0000 UTC m=+0.036697901 container create 546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:17:40 np0005603621 systemd[1]: Started libpod-conmon-546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b.scope.
Jan 31 03:17:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.267710181 +0000 UTC m=+0.019024357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.36567213 +0000 UTC m=+0.116986326 container init 546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.371484272 +0000 UTC m=+0.122798448 container start 546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:17:40 np0005603621 epic_ride[312167]: 167 167
Jan 31 03:17:40 np0005603621 systemd[1]: libpod-546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b.scope: Deactivated successfully.
Jan 31 03:17:40 np0005603621 conmon[312167]: conmon 546345af0c3459ae8832 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b.scope/container/memory.events
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.379751061 +0000 UTC m=+0.131065267 container attach 546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.380562437 +0000 UTC m=+0.131876623 container died 546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:17:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a1edacec57ff5e3239d8513b0ec999c3ee5a3863ec40dceb0a0ce9281ef56d02-merged.mount: Deactivated successfully.
Jan 31 03:17:40 np0005603621 podman[312152]: 2026-01-31 08:17:40.436942164 +0000 UTC m=+0.188256340 container remove 546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:17:40 np0005603621 systemd[1]: libpod-conmon-546345af0c3459ae88326cbf0f0990cedbe82fa69ea4d2a07316028b595c005b.scope: Deactivated successfully.
Jan 31 03:17:40 np0005603621 podman[312190]: 2026-01-31 08:17:40.577020162 +0000 UTC m=+0.043749601 container create 1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:17:40 np0005603621 systemd[1]: Started libpod-conmon-1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723.scope.
Jan 31 03:17:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3a034f6eca0c4a020f011e25e102b4bb4a9bcebd1d2e4bb5b77749d7853300/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:40 np0005603621 podman[312190]: 2026-01-31 08:17:40.559219174 +0000 UTC m=+0.025948633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:17:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3a034f6eca0c4a020f011e25e102b4bb4a9bcebd1d2e4bb5b77749d7853300/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3a034f6eca0c4a020f011e25e102b4bb4a9bcebd1d2e4bb5b77749d7853300/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b3a034f6eca0c4a020f011e25e102b4bb4a9bcebd1d2e4bb5b77749d7853300/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:40 np0005603621 podman[312190]: 2026-01-31 08:17:40.676895421 +0000 UTC m=+0.143624880 container init 1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:17:40 np0005603621 podman[312190]: 2026-01-31 08:17:40.682134795 +0000 UTC m=+0.148864234 container start 1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:17:40 np0005603621 podman[312190]: 2026-01-31 08:17:40.691344854 +0000 UTC m=+0.158074293 container attach 1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.879 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.880 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.880 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.881 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.881 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.882 247403 INFO nova.compute.manager [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Terminating instance#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.883 247403 DEBUG nova.compute.manager [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:17:40 np0005603621 kernel: tap418a8dcf-ff (unregistering): left promiscuous mode
Jan 31 03:17:40 np0005603621 NetworkManager[49013]: <info>  [1769847460.9344] device (tap418a8dcf-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:17:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:40Z|00378|binding|INFO|Releasing lport 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 from this chassis (sb_readonly=0)
Jan 31 03:17:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:40Z|00379|binding|INFO|Setting lport 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 down in Southbound
Jan 31 03:17:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:17:40Z|00380|binding|INFO|Removing iface tap418a8dcf-ff ovn-installed in OVS
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.944 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:40 np0005603621 nova_compute[247399]: 2026-01-31 08:17:40.958 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:40 np0005603621 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 31 03:17:40 np0005603621 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000062.scope: Consumed 11.005s CPU time.
Jan 31 03:17:40 np0005603621 systemd-machined[212769]: Machine qemu-43-instance-00000062 terminated.
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.018 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:a0:63 10.100.0.14'], port_security=['fa:16:3e:f9:a0:63 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'eafe22d6cfcb41d4b31597087498a565', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2748ecb7-7ea4-47e3-84b3-eb7c4d3ddc31', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=27d5be35-4a5e-4d77-b9b0-f9a41d41dd18, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.020 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 in datapath 4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 unbound from our chassis#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.021 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.025 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Updating instance_info_cache with network_info: [{"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.023 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0969ca-5b27-4404-9815-04c96b8be1e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.023 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 namespace which is not needed anymore#033[00m
Jan 31 03:17:41 np0005603621 NetworkManager[49013]: <info>  [1769847461.1018] manager: (tap418a8dcf-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/185)
Jan 31 03:17:41 np0005603621 systemd-udevd[312214]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.112 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.119 247403 INFO nova.virt.libvirt.driver [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Instance destroyed successfully.#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.120 247403 DEBUG nova.objects.instance [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lazy-loading 'resources' on Instance uuid f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:17:41 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [NOTICE]   (311589) : haproxy version is 2.8.14-c23fe91
Jan 31 03:17:41 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [NOTICE]   (311589) : path to executable is /usr/sbin/haproxy
Jan 31 03:17:41 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [WARNING]  (311589) : Exiting Master process...
Jan 31 03:17:41 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [ALERT]    (311589) : Current worker (311591) exited with code 143 (Terminated)
Jan 31 03:17:41 np0005603621 neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9[311585]: [WARNING]  (311589) : All workers exited. Exiting... (0)
Jan 31 03:17:41 np0005603621 systemd[1]: libpod-73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2.scope: Deactivated successfully.
Jan 31 03:17:41 np0005603621 podman[312236]: 2026-01-31 08:17:41.149419726 +0000 UTC m=+0.051406981 container died 73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:17:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2-userdata-shm.mount: Deactivated successfully.
Jan 31 03:17:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-557404ae95650eb70a592d3a473bd2be1c6cb5b927ad421e8139f7cb6a2d1737-merged.mount: Deactivated successfully.
Jan 31 03:17:41 np0005603621 podman[312236]: 2026-01-31 08:17:41.238620272 +0000 UTC m=+0.140607557 container cleanup 73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:17:41 np0005603621 systemd[1]: libpod-conmon-73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2.scope: Deactivated successfully.
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.249 247403 DEBUG nova.virt.libvirt.vif [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:16:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-603829389',display_name='tempest-ListServersNegativeTestJSON-server-603829389-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-603829389-1',id=98,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:17:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='eafe22d6cfcb41d4b31597087498a565',ramdisk_id='',reservation_id='r-5jowh2i5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-79577656',owner_user_name='tempest-ListServersNegativeTestJSON-79577656-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:17:30Z,user_data=None,user_id='8db5a8acb6d04c988f9dd4f74380c487',uuid=f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.250 247403 DEBUG nova.network.os_vif_util [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Converting VIF {"id": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "address": "fa:16:3e:f9:a0:63", "network": {"id": "4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-399826021-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "eafe22d6cfcb41d4b31597087498a565", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap418a8dcf-ff", "ovs_interfaceid": "418a8dcf-ff92-43bd-a0b6-1f7d9ae61527", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.251 247403 DEBUG nova.network.os_vif_util [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.252 247403 DEBUG os_vif [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.256 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.257 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap418a8dcf-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.265 247403 INFO os_vif [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:a0:63,bridge_name='br-int',has_traffic_filtering=True,id=418a8dcf-ff92-43bd-a0b6-1f7d9ae61527,network=Network(4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap418a8dcf-ff')#033[00m
Jan 31 03:17:41 np0005603621 podman[312269]: 2026-01-31 08:17:41.305612181 +0000 UTC m=+0.049158852 container remove 73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.308 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[faa085db-4d8f-4eb3-9899-672fa45bb34c]: (4, ('Sat Jan 31 08:17:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 (73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2)\n73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2\nSat Jan 31 08:17:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 (73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2)\n73d9b47d2f3dcc5994e38ca376b1663e2205e7a877aff1cf7196ce6d5d2637e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.310 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f35ab64-1bae-4a25-875f-ba1118b21497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.311 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ba4d6d9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.313 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 kernel: tap4ba4d6d9-c0: left promiscuous mode
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.319 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4305b18b-490b-46f7-a6c0-562f20b89442]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.324 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.332 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.333 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.334 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.334 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.334 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.335 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.335 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.336 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c2fcc300-1a51-4fc8-9e74-ff2366a897aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.337 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[441d64cb-a458-427b-8a0c-a186ee3c5e41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.347 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc4b256-7cb2-42a8-b3ce-e56a99e9ef0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674906, 'reachable_time': 16933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312299, 'error': None, 'target': 'ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 systemd[1]: run-netns-ovnmeta\x2d4ba4d6d9\x2dcb51\x2d4c5e\x2d9b78\x2ddca15ca271c9.mount: Deactivated successfully.
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.350 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ba4d6d9-cb51-4c5e-9b78-dca15ca271c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:17:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:17:41.351 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[3000e976-da14-47f8-a548-3efbd146edf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]: {
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:    "0": [
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:        {
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "devices": [
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "/dev/loop3"
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            ],
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "lv_name": "ceph_lv0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "lv_size": "7511998464",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "name": "ceph_lv0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "tags": {
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.cluster_name": "ceph",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.crush_device_class": "",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.encrypted": "0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.osd_id": "0",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.type": "block",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:                "ceph.vdo": "0"
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            },
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "type": "block",
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:            "vg_name": "ceph_vg0"
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:        }
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]:    ]
Jan 31 03:17:41 np0005603621 friendly_engelbart[312206]: }
Jan 31 03:17:41 np0005603621 systemd[1]: libpod-1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723.scope: Deactivated successfully.
Jan 31 03:17:41 np0005603621 podman[312190]: 2026-01-31 08:17:41.478826238 +0000 UTC m=+0.945555677 container died 1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.480 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.481 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.481 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.481 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.482 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4b3a034f6eca0c4a020f011e25e102b4bb4a9bcebd1d2e4bb5b77749d7853300-merged.mount: Deactivated successfully.
Jan 31 03:17:41 np0005603621 podman[312190]: 2026-01-31 08:17:41.540127938 +0000 UTC m=+1.006857377 container remove 1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:17:41 np0005603621 systemd[1]: libpod-conmon-1df24fafb0f95580a41ba77623febb7a6e8b60ac133b2b81cfcde3015c443723.scope: Deactivated successfully.
Jan 31 03:17:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 272 MiB data, 932 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.4 MiB/s wr, 155 op/s
Jan 31 03:17:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881682163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:41.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:41 np0005603621 nova_compute[247399]: 2026-01-31 08:17:41.943 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:42.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.074644396 +0000 UTC m=+0.032217661 container create 639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:17:42 np0005603621 systemd[1]: Started libpod-conmon-639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e.scope.
Jan 31 03:17:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.150 247403 DEBUG nova.compute.manager [req-16ab37d6-3d41-4735-a3df-bcf4d27f269c req-0f5ccdc8-2211-4122-99bf-93dfc0d23bf7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-vif-unplugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.151 247403 DEBUG oslo_concurrency.lockutils [req-16ab37d6-3d41-4735-a3df-bcf4d27f269c req-0f5ccdc8-2211-4122-99bf-93dfc0d23bf7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.151 247403 DEBUG oslo_concurrency.lockutils [req-16ab37d6-3d41-4735-a3df-bcf4d27f269c req-0f5ccdc8-2211-4122-99bf-93dfc0d23bf7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.151 247403 DEBUG oslo_concurrency.lockutils [req-16ab37d6-3d41-4735-a3df-bcf4d27f269c req-0f5ccdc8-2211-4122-99bf-93dfc0d23bf7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.151 247403 DEBUG nova.compute.manager [req-16ab37d6-3d41-4735-a3df-bcf4d27f269c req-0f5ccdc8-2211-4122-99bf-93dfc0d23bf7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] No waiting events found dispatching network-vif-unplugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.152 247403 DEBUG nova.compute.manager [req-16ab37d6-3d41-4735-a3df-bcf4d27f269c req-0f5ccdc8-2211-4122-99bf-93dfc0d23bf7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-vif-unplugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.06040698 +0000 UTC m=+0.017980275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.248240165 +0000 UTC m=+0.205813460 container init 639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.255806022 +0000 UTC m=+0.213379287 container start 639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:17:42 np0005603621 infallible_jackson[312500]: 167 167
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.260 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.261 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:17:42 np0005603621 systemd[1]: libpod-639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e.scope: Deactivated successfully.
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.313785749 +0000 UTC m=+0.271359034 container attach 639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.316095771 +0000 UTC m=+0.273669036 container died 639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:17:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b56463d5cb5a8eb8615b5597e8a25708ab76a20d0a05204a122fcd565df32c95-merged.mount: Deactivated successfully.
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.390 247403 INFO nova.virt.libvirt.driver [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Deleting instance files /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_del#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.391 247403 INFO nova.virt.libvirt.driver [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Deletion of /var/lib/nova/instances/f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b_del complete#033[00m
Jan 31 03:17:42 np0005603621 podman[312482]: 2026-01-31 08:17:42.415883898 +0000 UTC m=+0.373457153 container remove 639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.416 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.417 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4396MB free_disk=20.909297943115234GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.418 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:42 np0005603621 nova_compute[247399]: 2026-01-31 08:17:42.418 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:42 np0005603621 systemd[1]: libpod-conmon-639ffc0b0ab6cf7df97626468e64200b095b5a98c5378090a19b791a8562846e.scope: Deactivated successfully.
Jan 31 03:17:42 np0005603621 podman[312523]: 2026-01-31 08:17:42.524596664 +0000 UTC m=+0.034187553 container create b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:17:42 np0005603621 systemd[1]: Started libpod-conmon-b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4.scope.
Jan 31 03:17:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:17:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92930f8f07e13ddaf3ac778757911c11d2500542e28ce3a4309004b9a93c93b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92930f8f07e13ddaf3ac778757911c11d2500542e28ce3a4309004b9a93c93b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92930f8f07e13ddaf3ac778757911c11d2500542e28ce3a4309004b9a93c93b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c92930f8f07e13ddaf3ac778757911c11d2500542e28ce3a4309004b9a93c93b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:17:42 np0005603621 podman[312523]: 2026-01-31 08:17:42.583791348 +0000 UTC m=+0.093382267 container init b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:17:42 np0005603621 podman[312523]: 2026-01-31 08:17:42.589433915 +0000 UTC m=+0.099024804 container start b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:17:42 np0005603621 podman[312523]: 2026-01-31 08:17:42.594659949 +0000 UTC m=+0.104250918 container attach b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:17:42 np0005603621 podman[312523]: 2026-01-31 08:17:42.510070948 +0000 UTC m=+0.019661857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.199 247403 INFO nova.compute.manager [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Took 2.32 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.200 247403 DEBUG oslo.service.loopingcall [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.200 247403 DEBUG nova.compute.manager [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.201 247403 DEBUG nova.network.neutron [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.363 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.363 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.363 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:17:43 np0005603621 hungry_newton[312539]: {
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:        "osd_id": 0,
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:        "type": "bluestore"
Jan 31 03:17:43 np0005603621 hungry_newton[312539]:    }
Jan 31 03:17:43 np0005603621 hungry_newton[312539]: }
Jan 31 03:17:43 np0005603621 systemd[1]: libpod-b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4.scope: Deactivated successfully.
Jan 31 03:17:43 np0005603621 conmon[312539]: conmon b8e078eb27f6209f58e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4.scope/container/memory.events
Jan 31 03:17:43 np0005603621 podman[312523]: 2026-01-31 08:17:43.392232658 +0000 UTC m=+0.901823547 container died b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.428 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c92930f8f07e13ddaf3ac778757911c11d2500542e28ce3a4309004b9a93c93b-merged.mount: Deactivated successfully.
Jan 31 03:17:43 np0005603621 podman[312523]: 2026-01-31 08:17:43.477538701 +0000 UTC m=+0.987129590 container remove b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_newton, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:17:43 np0005603621 systemd[1]: libpod-conmon-b8e078eb27f6209f58e01a89572fba3e0389bd730322df168eb4a37329842cb4.scope: Deactivated successfully.
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a761d30d-c16a-47ca-bbb4-8cae86177f16 does not exist
Jan 31 03:17:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e3527076-ebb6-4817-bb26-13d6e162769f does not exist
Jan 31 03:17:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7ed3e628-52ee-4257-89c5-462be90df10d does not exist
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 267 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.1 MiB/s wr, 223 op/s
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165428002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.899 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.904 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:17:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:43.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:43 np0005603621 nova_compute[247399]: 2026-01-31 08:17:43.980 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:17:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:44.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.243 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.837 247403 DEBUG nova.compute.manager [req-8ad3df72-e9a8-4e28-8755-4c86cd700e8b req-6cbe8892-9247-449a-9d7a-3f9f93af2d42 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.838 247403 DEBUG oslo_concurrency.lockutils [req-8ad3df72-e9a8-4e28-8755-4c86cd700e8b req-6cbe8892-9247-449a-9d7a-3f9f93af2d42 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.838 247403 DEBUG oslo_concurrency.lockutils [req-8ad3df72-e9a8-4e28-8755-4c86cd700e8b req-6cbe8892-9247-449a-9d7a-3f9f93af2d42 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.838 247403 DEBUG oslo_concurrency.lockutils [req-8ad3df72-e9a8-4e28-8755-4c86cd700e8b req-6cbe8892-9247-449a-9d7a-3f9f93af2d42 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.838 247403 DEBUG nova.compute.manager [req-8ad3df72-e9a8-4e28-8755-4c86cd700e8b req-6cbe8892-9247-449a-9d7a-3f9f93af2d42 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] No waiting events found dispatching network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:17:44 np0005603621 nova_compute[247399]: 2026-01-31 08:17:44.838 247403 WARNING nova.compute.manager [req-8ad3df72-e9a8-4e28-8755-4c86cd700e8b req-6cbe8892-9247-449a-9d7a-3f9f93af2d42 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received unexpected event network-vif-plugged-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:17:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 270 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 175 op/s
Jan 31 03:17:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:46 np0005603621 nova_compute[247399]: 2026-01-31 08:17:46.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:46 np0005603621 nova_compute[247399]: 2026-01-31 08:17:46.360 247403 DEBUG nova.network.neutron [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:17:46 np0005603621 nova_compute[247399]: 2026-01-31 08:17:46.478 247403 INFO nova.compute.manager [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Took 3.28 seconds to deallocate network for instance.#033[00m
Jan 31 03:17:46 np0005603621 nova_compute[247399]: 2026-01-31 08:17:46.613 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:17:46 np0005603621 nova_compute[247399]: 2026-01-31 08:17:46.613 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:17:46 np0005603621 nova_compute[247399]: 2026-01-31 08:17:46.703 247403 DEBUG oslo_concurrency.processutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:17:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:17:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180736397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:17:47 np0005603621 nova_compute[247399]: 2026-01-31 08:17:47.102 247403 DEBUG oslo_concurrency.processutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:17:47 np0005603621 nova_compute[247399]: 2026-01-31 08:17:47.107 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:47 np0005603621 nova_compute[247399]: 2026-01-31 08:17:47.108 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:47 np0005603621 nova_compute[247399]: 2026-01-31 08:17:47.109 247403 DEBUG nova.compute.provider_tree [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:17:47 np0005603621 nova_compute[247399]: 2026-01-31 08:17:47.437 247403 DEBUG nova.compute.manager [req-7984eff1-e4ff-4cab-97ba-3e462b39306f req-62a25e01-edd1-475d-8434-320dbf698f69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Received event network-vif-deleted-418a8dcf-ff92-43bd-a0b6-1f7d9ae61527 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:17:47 np0005603621 nova_compute[247399]: 2026-01-31 08:17:47.630 247403 DEBUG nova.scheduler.client.report [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:17:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 275 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 4.2 MiB/s wr, 149 op/s
Jan 31 03:17:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:47.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:48 np0005603621 nova_compute[247399]: 2026-01-31 08:17:48.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:17:48 np0005603621 podman[312718]: 2026-01-31 08:17:48.535287401 +0000 UTC m=+0.085856211 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 03:17:48 np0005603621 podman[312719]: 2026-01-31 08:17:48.54195575 +0000 UTC m=+0.086615355 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:17:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:48 np0005603621 nova_compute[247399]: 2026-01-31 08:17:48.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004317558568304485 of space, bias 1.0, pg target 1.2952675704913454 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002166867322724735 of space, bias 1.0, pg target 0.6478933294946957 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:17:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 718 KiB/s rd, 4.0 MiB/s wr, 153 op/s
Jan 31 03:17:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:50.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:50 np0005603621 nova_compute[247399]: 2026-01-31 08:17:50.567 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:51 np0005603621 nova_compute[247399]: 2026-01-31 08:17:51.262 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:51 np0005603621 nova_compute[247399]: 2026-01-31 08:17:51.459 247403 INFO nova.scheduler.client.report [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Deleted allocations for instance f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b#033[00m
Jan 31 03:17:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 667 KiB/s rd, 2.9 MiB/s wr, 128 op/s
Jan 31 03:17:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:51.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:17:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:52.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:17:53 np0005603621 nova_compute[247399]: 2026-01-31 08:17:53.106 247403 DEBUG oslo_concurrency.lockutils [None req-6ce7c591-46b6-4280-998b-02fe992b8faa 8db5a8acb6d04c988f9dd4f74380c487 eafe22d6cfcb41d4b31597087498a565 - - default default] Lock "f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:17:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 667 KiB/s rd, 2.9 MiB/s wr, 128 op/s
Jan 31 03:17:53 np0005603621 nova_compute[247399]: 2026-01-31 08:17:53.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:53.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:54.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 1.1 MiB/s wr, 61 op/s
Jan 31 03:17:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:56.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:56 np0005603621 nova_compute[247399]: 2026-01-31 08:17:56.118 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847461.117204, f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:17:56 np0005603621 nova_compute[247399]: 2026-01-31 08:17:56.119 247403 INFO nova.compute.manager [-] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:17:56 np0005603621 nova_compute[247399]: 2026-01-31 08:17:56.231 247403 DEBUG nova.compute.manager [None req-64453325-ac92-4b49-b881-f39b4291c2c7 - - - - - -] [instance: f91dfb36-a9c9-4ea7-b2c8-9de5eff24f6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:17:56 np0005603621 nova_compute[247399]: 2026-01-31 08:17:56.264 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 116 KiB/s wr, 18 op/s
Jan 31 03:17:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:57.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:17:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:17:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:17:58 np0005603621 nova_compute[247399]: 2026-01-31 08:17:58.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:17:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 33 KiB/s wr, 7 op/s
Jan 31 03:17:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:17:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:17:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:17:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:00.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:01 np0005603621 nova_compute[247399]: 2026-01-31 08:18:01.267 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 279 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 31 03:18:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:01.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 170 MiB data, 895 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 47 op/s
Jan 31 03:18:03 np0005603621 nova_compute[247399]: 2026-01-31 08:18:03.892 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:03.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:04.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 121 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 70 op/s
Jan 31 03:18:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:05.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:06 np0005603621 nova_compute[247399]: 2026-01-31 08:18:06.269 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 121 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 70 op/s
Jan 31 03:18:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:07.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:08.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:08 np0005603621 nova_compute[247399]: 2026-01-31 08:18:08.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 121 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 3.8 KiB/s wr, 82 op/s
Jan 31 03:18:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:09.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:10.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:11 np0005603621 nova_compute[247399]: 2026-01-31 08:18:11.271 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 121 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 3.8 KiB/s wr, 82 op/s
Jan 31 03:18:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:11.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:12.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 64 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 4.2 KiB/s wr, 97 op/s
Jan 31 03:18:13 np0005603621 nova_compute[247399]: 2026-01-31 08:18:13.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:13.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:14.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 47 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.7 KiB/s wr, 51 op/s
Jan 31 03:18:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:15.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:16.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:16 np0005603621 nova_compute[247399]: 2026-01-31 08:18:16.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 817 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 597 B/s wr, 28 op/s
Jan 31 03:18:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:17.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:18 np0005603621 nova_compute[247399]: 2026-01-31 08:18:18.897 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:19 np0005603621 podman[312826]: 2026-01-31 08:18:19.482321861 +0000 UTC m=+0.043745182 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:18:19 np0005603621 podman[312827]: 2026-01-31 08:18:19.529190279 +0000 UTC m=+0.088968168 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:18:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 597 B/s wr, 28 op/s
Jan 31 03:18:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:19.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:20.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:21 np0005603621 nova_compute[247399]: 2026-01-31 08:18:21.312 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Jan 31 03:18:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:21.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:22.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Jan 31 03:18:23 np0005603621 nova_compute[247399]: 2026-01-31 08:18:23.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:24.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:24 np0005603621 nova_compute[247399]: 2026-01-31 08:18:24.381 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.634427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847504634461, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2140, "num_deletes": 252, "total_data_size": 3811394, "memory_usage": 3863960, "flush_reason": "Manual Compaction"}
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847504709937, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3744132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41745, "largest_seqno": 43884, "table_properties": {"data_size": 3734529, "index_size": 6033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20336, "raw_average_key_size": 20, "raw_value_size": 3715173, "raw_average_value_size": 3760, "num_data_blocks": 263, "num_entries": 988, "num_filter_entries": 988, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847288, "oldest_key_time": 1769847288, "file_creation_time": 1769847504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 75565 microseconds, and 5869 cpu microseconds.
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.709985) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3744132 bytes OK
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.710006) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.783683) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.783733) EVENT_LOG_v1 {"time_micros": 1769847504783722, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.783805) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3802714, prev total WAL file size 3802714, number of live WAL files 2.
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.785212) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3656KB)], [92(9481KB)]
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847504785280, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13453086, "oldest_snapshot_seqno": -1}
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7210 keys, 11566570 bytes, temperature: kUnknown
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847504975936, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11566570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11517617, "index_size": 29833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 185460, "raw_average_key_size": 25, "raw_value_size": 11388292, "raw_average_value_size": 1579, "num_data_blocks": 1183, "num_entries": 7210, "num_filter_entries": 7210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847504, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:18:24 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.976173) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11566570 bytes
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.047351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.5 rd, 60.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.3 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7734, records dropped: 524 output_compression: NoCompression
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.047412) EVENT_LOG_v1 {"time_micros": 1769847505047389, "job": 54, "event": "compaction_finished", "compaction_time_micros": 190736, "compaction_time_cpu_micros": 18824, "output_level": 6, "num_output_files": 1, "total_output_size": 11566570, "num_input_records": 7734, "num_output_records": 7210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847505048360, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847505049432, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:24.784983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.049608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.049620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.049625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.049628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:18:25 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:18:25.049631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:18:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 255 B/s wr, 2 op/s
Jan 31 03:18:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:25.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:26 np0005603621 nova_compute[247399]: 2026-01-31 08:18:26.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 53 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 603 KiB/s wr, 13 op/s
Jan 31 03:18:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:27.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:28.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:18:28.528 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:18:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:18:28.529 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:18:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:18:28.530 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:18:28 np0005603621 nova_compute[247399]: 2026-01-31 08:18:28.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:28 np0005603621 nova_compute[247399]: 2026-01-31 08:18:28.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 03:18:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:29.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:30.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:18:30.500 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:18:30.501 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:18:30.501 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:31 np0005603621 nova_compute[247399]: 2026-01-31 08:18:31.318 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:18:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:31.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:32.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:18:33 np0005603621 nova_compute[247399]: 2026-01-31 08:18:33.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:33.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:34.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:34 np0005603621 nova_compute[247399]: 2026-01-31 08:18:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:18:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:36 np0005603621 nova_compute[247399]: 2026-01-31 08:18:36.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:37 np0005603621 nova_compute[247399]: 2026-01-31 08:18:37.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:37 np0005603621 nova_compute[247399]: 2026-01-31 08:18:37.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:18:37 np0005603621 nova_compute[247399]: 2026-01-31 08:18:37.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:18:37 np0005603621 nova_compute[247399]: 2026-01-31 08:18:37.318 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:18:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:18:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:18:38
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta', '.rgw.root']
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:18:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:18:38 np0005603621 nova_compute[247399]: 2026-01-31 08:18:38.906 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:39 np0005603621 nova_compute[247399]: 2026-01-31 08:18:39.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:39 np0005603621 nova_compute[247399]: 2026-01-31 08:18:39.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:18:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 8.1 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Jan 31 03:18:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:39.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:40.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:40 np0005603621 nova_compute[247399]: 2026-01-31 08:18:40.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:41 np0005603621 nova_compute[247399]: 2026-01-31 08:18:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:41 np0005603621 nova_compute[247399]: 2026-01-31 08:18:41.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Jan 31 03:18:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:41.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:42.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.279 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.280 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.353 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.353 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.354 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.354 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.354 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972322283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.758 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.885 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.886 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4444MB free_disk=20.967517852783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.887 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.887 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 56 op/s
Jan 31 03:18:43 np0005603621 nova_compute[247399]: 2026-01-31 08:18:43.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.055 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.055 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.074 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:44.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594332944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.604 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.609 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.676 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.802 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:18:44 np0005603621 nova_compute[247399]: 2026-01-31 08:18:44.803 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:45 np0005603621 podman[313147]: 2026-01-31 08:18:45.030089381 +0000 UTC m=+0.443399933 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:18:45 np0005603621 podman[313147]: 2026-01-31 08:18:45.243511408 +0000 UTC m=+0.656821970 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:18:45 np0005603621 nova_compute[247399]: 2026-01-31 08:18:45.772 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:45 np0005603621 nova_compute[247399]: 2026-01-31 08:18:45.773 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:45 np0005603621 nova_compute[247399]: 2026-01-31 08:18:45.819 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:18:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 70 op/s
Jan 31 03:18:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.012 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.013 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.025 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.026 247403 INFO nova.compute.claims [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:18:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:46.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.245 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.324 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:46 np0005603621 podman[313354]: 2026-01-31 08:18:46.50297642 +0000 UTC m=+0.140845514 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:18:46 np0005603621 podman[313391]: 2026-01-31 08:18:46.566904292 +0000 UTC m=+0.051009398 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:18:46 np0005603621 podman[313354]: 2026-01-31 08:18:46.675604809 +0000 UTC m=+0.313473893 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:18:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:18:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186356054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.708 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.715 247403 DEBUG nova.compute.provider_tree [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.752 247403 DEBUG nova.scheduler.client.report [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.803 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.804 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.963 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:18:46 np0005603621 nova_compute[247399]: 2026-01-31 08:18:46.963 247403 DEBUG nova.network.neutron [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.017 247403 INFO nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.101 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:18:47 np0005603621 podman[313436]: 2026-01-31 08:18:47.104032302 +0000 UTC m=+0.206180351 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, vcs-type=git, version=2.2.4)
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.338 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.340 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.340 247403 INFO nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Creating image(s)#033[00m
Jan 31 03:18:47 np0005603621 podman[313456]: 2026-01-31 08:18:47.354918983 +0000 UTC m=+0.234987684 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, description=keepalived for Ceph, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.buildah.version=1.28.2)
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.363 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:18:47 np0005603621 podman[313436]: 2026-01-31 08:18:47.387303847 +0000 UTC m=+0.489451876 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9)
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.389 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.415 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.419 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.471 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.471 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.472 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.472 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.500 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.503 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 60b7324e-952b-494a-9188-126d91c94d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:18:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:18:47 np0005603621 nova_compute[247399]: 2026-01-31 08:18:47.722 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:18:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:18:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 80 op/s
Jan 31 03:18:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:18:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:18:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:48.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:18:48 np0005603621 nova_compute[247399]: 2026-01-31 08:18:48.280 247403 DEBUG nova.policy [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '588d05c741d9418f8c5b5bfa0d0c887a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b88014f079bc4442acb657ee1d42b453', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:18:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 39b004da-d091-478f-bac4-b4fd8c0a82cf does not exist
Jan 31 03:18:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 85d56094-57f6-43f9-b901-cbf9c678b405 does not exist
Jan 31 03:18:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2c1b0c3d-2644-4fba-9197-b64135542c5b does not exist
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:18:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:18:48 np0005603621 nova_compute[247399]: 2026-01-31 08:18:48.909 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:18:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:18:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:18:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:18:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:18:49 np0005603621 nova_compute[247399]: 2026-01-31 08:18:49.108 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 60b7324e-952b-494a-9188-126d91c94d28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:18:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:18:49 np0005603621 nova_compute[247399]: 2026-01-31 08:18:49.229 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] resizing rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:18:49 np0005603621 podman[313868]: 2026-01-31 08:18:49.241614336 +0000 UTC m=+0.021929458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:18:49 np0005603621 podman[313868]: 2026-01-31 08:18:49.344645854 +0000 UTC m=+0.124960946 container create 062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:18:49 np0005603621 systemd[1]: Started libpod-conmon-062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d.scope.
Jan 31 03:18:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:18:49 np0005603621 podman[313868]: 2026-01-31 08:18:49.538951493 +0000 UTC m=+0.319266605 container init 062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:18:49 np0005603621 podman[313868]: 2026-01-31 08:18:49.546047594 +0000 UTC m=+0.326362726 container start 062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:18:49 np0005603621 tender_montalcini[313905]: 167 167
Jan 31 03:18:49 np0005603621 systemd[1]: libpod-062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d.scope: Deactivated successfully.
Jan 31 03:18:49 np0005603621 podman[313868]: 2026-01-31 08:18:49.606428707 +0000 UTC m=+0.386743799 container attach 062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:18:49 np0005603621 podman[313868]: 2026-01-31 08:18:49.60751082 +0000 UTC m=+0.387825922 container died 062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:18:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 63 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 723 KiB/s wr, 96 op/s
Jan 31 03:18:49 np0005603621 nova_compute[247399]: 2026-01-31 08:18:49.962 247403 DEBUG nova.objects.instance [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lazy-loading 'migration_context' on Instance uuid 60b7324e-952b-494a-9188-126d91c94d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:18:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:50.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.027 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.027 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Ensure instance console log exists: /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.028 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.028 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.029 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:18:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:50.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:18:50 np0005603621 nova_compute[247399]: 2026-01-31 08:18:50.413 247403 DEBUG nova.network.neutron [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Successfully created port: c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:18:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7e614e0c36e00143c1964ead6973c64b1243aed9f50d791eb06719e69318cb2f-merged.mount: Deactivated successfully.
Jan 31 03:18:51 np0005603621 podman[313868]: 2026-01-31 08:18:51.103719409 +0000 UTC m=+1.884034501 container remove 062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_montalcini, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:18:51 np0005603621 systemd[1]: libpod-conmon-062cf3b5fb40534f57d90e1651362cade820bf29a2c57f6231992e4a3c76965d.scope: Deactivated successfully.
Jan 31 03:18:51 np0005603621 podman[313910]: 2026-01-31 08:18:51.1707663 +0000 UTC m=+1.591335451 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 03:18:51 np0005603621 podman[313917]: 2026-01-31 08:18:51.2106612 +0000 UTC m=+1.631105537 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:18:51 np0005603621 nova_compute[247399]: 2026-01-31 08:18:51.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:51 np0005603621 podman[313992]: 2026-01-31 08:18:51.232135243 +0000 UTC m=+0.023626721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:18:51 np0005603621 podman[313992]: 2026-01-31 08:18:51.651512972 +0000 UTC m=+0.443004460 container create b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:18:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 63 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 723 KiB/s wr, 96 op/s
Jan 31 03:18:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:52.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:52 np0005603621 systemd[1]: Started libpod-conmon-b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9.scope.
Jan 31 03:18:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:18:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e026071e25023fce6fb4fd988ced3e9b09d5e48d7fc1b0a2962742c211960/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e026071e25023fce6fb4fd988ced3e9b09d5e48d7fc1b0a2962742c211960/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e026071e25023fce6fb4fd988ced3e9b09d5e48d7fc1b0a2962742c211960/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e026071e25023fce6fb4fd988ced3e9b09d5e48d7fc1b0a2962742c211960/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/927e026071e25023fce6fb4fd988ced3e9b09d5e48d7fc1b0a2962742c211960/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:52 np0005603621 podman[313992]: 2026-01-31 08:18:52.855869568 +0000 UTC m=+1.647361046 container init b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:18:52 np0005603621 podman[313992]: 2026-01-31 08:18:52.866187981 +0000 UTC m=+1.657679459 container start b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:18:53 np0005603621 podman[313992]: 2026-01-31 08:18:53.053920943 +0000 UTC m=+1.845412451 container attach b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:18:53 np0005603621 boring_mclaren[314012]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:18:53 np0005603621 boring_mclaren[314012]: --> relative data size: 1.0
Jan 31 03:18:53 np0005603621 boring_mclaren[314012]: --> All data devices are unavailable
Jan 31 03:18:53 np0005603621 systemd[1]: libpod-b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9.scope: Deactivated successfully.
Jan 31 03:18:53 np0005603621 podman[313992]: 2026-01-31 08:18:53.636732924 +0000 UTC m=+2.428224382 container died b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:18:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Jan 31 03:18:53 np0005603621 nova_compute[247399]: 2026-01-31 08:18:53.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:54.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:54 np0005603621 nova_compute[247399]: 2026-01-31 08:18:54.210 247403 DEBUG nova.network.neutron [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Successfully updated port: c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:18:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-927e026071e25023fce6fb4fd988ced3e9b09d5e48d7fc1b0a2962742c211960-merged.mount: Deactivated successfully.
Jan 31 03:18:54 np0005603621 nova_compute[247399]: 2026-01-31 08:18:54.998 247403 DEBUG nova.compute.manager [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-changed-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:18:54 np0005603621 nova_compute[247399]: 2026-01-31 08:18:54.999 247403 DEBUG nova.compute.manager [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Refreshing instance network info cache due to event network-changed-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:18:55 np0005603621 nova_compute[247399]: 2026-01-31 08:18:55.000 247403 DEBUG oslo_concurrency.lockutils [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-60b7324e-952b-494a-9188-126d91c94d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:18:55 np0005603621 nova_compute[247399]: 2026-01-31 08:18:55.000 247403 DEBUG oslo_concurrency.lockutils [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-60b7324e-952b-494a-9188-126d91c94d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:18:55 np0005603621 nova_compute[247399]: 2026-01-31 08:18:55.001 247403 DEBUG nova.network.neutron [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Refreshing network info cache for port c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:18:55 np0005603621 nova_compute[247399]: 2026-01-31 08:18:55.026 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "refresh_cache-60b7324e-952b-494a-9188-126d91c94d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:18:55 np0005603621 podman[313992]: 2026-01-31 08:18:55.287831846 +0000 UTC m=+4.079323314 container remove b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 03:18:55 np0005603621 systemd[1]: libpod-conmon-b3ffed614364e759977876a6f9a7476ae0c22383c56035d1c37cdc70b1de3da9.scope: Deactivated successfully.
Jan 31 03:18:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 565 KiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 31 03:18:55 np0005603621 podman[314180]: 2026-01-31 08:18:55.814881209 +0000 UTC m=+0.021484403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:18:55 np0005603621 podman[314180]: 2026-01-31 08:18:55.984104982 +0000 UTC m=+0.190708196 container create 224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:18:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:56.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:56 np0005603621 systemd[1]: Started libpod-conmon-224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a.scope.
Jan 31 03:18:56 np0005603621 nova_compute[247399]: 2026-01-31 08:18:56.188 247403 DEBUG nova.network.neutron [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:18:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:18:56 np0005603621 nova_compute[247399]: 2026-01-31 08:18:56.328 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:56 np0005603621 podman[314180]: 2026-01-31 08:18:56.565689074 +0000 UTC m=+0.772292258 container init 224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 03:18:56 np0005603621 podman[314180]: 2026-01-31 08:18:56.572998263 +0000 UTC m=+0.779601467 container start 224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:18:56 np0005603621 systemd[1]: libpod-224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a.scope: Deactivated successfully.
Jan 31 03:18:56 np0005603621 determined_thompson[314196]: 167 167
Jan 31 03:18:56 np0005603621 conmon[314196]: conmon 224cce7efd95bead1fa0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a.scope/container/memory.events
Jan 31 03:18:56 np0005603621 podman[314180]: 2026-01-31 08:18:56.660918298 +0000 UTC m=+0.867521472 container attach 224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:18:56 np0005603621 podman[314180]: 2026-01-31 08:18:56.661219147 +0000 UTC m=+0.867822321 container died 224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:18:56 np0005603621 nova_compute[247399]: 2026-01-31 08:18:56.885 247403 DEBUG nova.network.neutron [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:18:56 np0005603621 nova_compute[247399]: 2026-01-31 08:18:56.971 247403 DEBUG oslo_concurrency.lockutils [req-ca414562-6124-4eb8-b196-8f4c59485d1e req-6f590e5a-7464-4478-b3a9-8bf8e08b0fc3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-60b7324e-952b-494a-9188-126d91c94d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:18:56 np0005603621 nova_compute[247399]: 2026-01-31 08:18:56.971 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquired lock "refresh_cache-60b7324e-952b-494a-9188-126d91c94d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:18:56 np0005603621 nova_compute[247399]: 2026-01-31 08:18:56.972 247403 DEBUG nova.network.neutron [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:18:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ddccc0cf79c316e05343aab2da73fab69402c917b630a83fbee18524dc211feb-merged.mount: Deactivated successfully.
Jan 31 03:18:57 np0005603621 nova_compute[247399]: 2026-01-31 08:18:57.279 247403 DEBUG nova.network.neutron [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:18:57 np0005603621 podman[314180]: 2026-01-31 08:18:57.404298458 +0000 UTC m=+1.610901632 container remove 224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_thompson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:18:57 np0005603621 systemd[1]: libpod-conmon-224cce7efd95bead1fa014246b8bbaddbbf69c4575d3985dc97432083159bf1a.scope: Deactivated successfully.
Jan 31 03:18:57 np0005603621 podman[314221]: 2026-01-31 08:18:57.512455657 +0000 UTC m=+0.023467626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:18:57 np0005603621 podman[314221]: 2026-01-31 08:18:57.885570377 +0000 UTC m=+0.396582326 container create 3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:18:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 127 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 31 03:18:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:18:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:18:58.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:18:58 np0005603621 systemd[1]: Started libpod-conmon-3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47.scope.
Jan 31 03:18:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:18:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae78ed07242c3a623a4d741d33e5ff18f08ce82728fa55da35ace5169036055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae78ed07242c3a623a4d741d33e5ff18f08ce82728fa55da35ace5169036055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae78ed07242c3a623a4d741d33e5ff18f08ce82728fa55da35ace5169036055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ae78ed07242c3a623a4d741d33e5ff18f08ce82728fa55da35ace5169036055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:18:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:18:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:18:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:18:58.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:18:58 np0005603621 podman[314221]: 2026-01-31 08:18:58.171594579 +0000 UTC m=+0.682606548 container init 3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:18:58 np0005603621 podman[314221]: 2026-01-31 08:18:58.177479393 +0000 UTC m=+0.688491342 container start 3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:18:58 np0005603621 podman[314221]: 2026-01-31 08:18:58.366150265 +0000 UTC m=+0.877162234 container attach 3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:18:58 np0005603621 angry_feynman[314239]: {
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:    "0": [
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:        {
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "devices": [
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "/dev/loop3"
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            ],
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "lv_name": "ceph_lv0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "lv_size": "7511998464",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "name": "ceph_lv0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "tags": {
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.cluster_name": "ceph",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.crush_device_class": "",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.encrypted": "0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.osd_id": "0",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.type": "block",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:                "ceph.vdo": "0"
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            },
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "type": "block",
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:            "vg_name": "ceph_vg0"
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:        }
Jan 31 03:18:58 np0005603621 angry_feynman[314239]:    ]
Jan 31 03:18:58 np0005603621 angry_feynman[314239]: }
Jan 31 03:18:58 np0005603621 systemd[1]: libpod-3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47.scope: Deactivated successfully.
Jan 31 03:18:58 np0005603621 podman[314221]: 2026-01-31 08:18:58.886428546 +0000 UTC m=+1.397440495 container died 3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 03:18:58 np0005603621 nova_compute[247399]: 2026-01-31 08:18:58.913 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:18:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:18:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2ae78ed07242c3a623a4d741d33e5ff18f08ce82728fa55da35ace5169036055-merged.mount: Deactivated successfully.
Jan 31 03:18:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Jan 31 03:19:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:00.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.312 247403 DEBUG nova.network.neutron [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Updating instance_info_cache with network_info: [{"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:00 np0005603621 podman[314221]: 2026-01-31 08:19:00.378285349 +0000 UTC m=+2.889297288 container remove 3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:19:00 np0005603621 systemd[1]: libpod-conmon-3657e832584a78fb23b0fdb12a9b1e3302affaf84c89fd351a6d3a1a46ca8c47.scope: Deactivated successfully.
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.461 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Releasing lock "refresh_cache-60b7324e-952b-494a-9188-126d91c94d28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.462 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Instance network_info: |[{"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.464 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Start _get_guest_xml network_info=[{"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.469 247403 WARNING nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.480 247403 DEBUG nova.virt.libvirt.host [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.481 247403 DEBUG nova.virt.libvirt.host [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.484 247403 DEBUG nova.virt.libvirt.host [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.484 247403 DEBUG nova.virt.libvirt.host [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.485 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.485 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.486 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.486 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.486 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.486 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.487 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.487 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.487 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.487 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.487 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.487 247403 DEBUG nova.virt.hardware [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:19:00 np0005603621 nova_compute[247399]: 2026-01-31 08:19:00.490 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:00 np0005603621 podman[314420]: 2026-01-31 08:19:00.840733618 +0000 UTC m=+0.019570154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:19:01 np0005603621 podman[314420]: 2026-01-31 08:19:01.203867795 +0000 UTC m=+0.382704311 container create d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.329 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:01 np0005603621 systemd[1]: Started libpod-conmon-d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc.scope.
Jan 31 03:19:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:19:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:19:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2075758687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.459 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.970s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.482 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.485 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:01 np0005603621 podman[314420]: 2026-01-31 08:19:01.511928818 +0000 UTC m=+0.690765364 container init d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:19:01 np0005603621 podman[314420]: 2026-01-31 08:19:01.517470512 +0000 UTC m=+0.696307028 container start d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:19:01 np0005603621 nostalgic_hermann[314440]: 167 167
Jan 31 03:19:01 np0005603621 systemd[1]: libpod-d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc.scope: Deactivated successfully.
Jan 31 03:19:01 np0005603621 podman[314420]: 2026-01-31 08:19:01.655212408 +0000 UTC m=+0.834049264 container attach d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:19:01 np0005603621 podman[314420]: 2026-01-31 08:19:01.655635361 +0000 UTC m=+0.834471877 container died d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hermann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:19:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:19:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387797596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.894 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.896 247403 DEBUG nova.virt.libvirt.vif [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:18:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-506257151',display_name='tempest-InstanceActionsNegativeTestJSON-server-506257151',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-506257151',id=102,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b88014f079bc4442acb657ee1d42b453',ramdisk_id='',reservation_id='r-m7hwniok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-691358202',owner_user_name='tempest-InstanceActionsNegativeTestJSON-691358202-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:18:47Z,user_data=None,user_id='588d05c741d9418f8c5b5bfa0d0c887a',uuid=60b7324e-952b-494a-9188-126d91c94d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.896 247403 DEBUG nova.network.os_vif_util [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Converting VIF {"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.897 247403 DEBUG nova.network.os_vif_util [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:01 np0005603621 nova_compute[247399]: 2026-01-31 08:19:01.898 247403 DEBUG nova.objects.instance [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lazy-loading 'pci_devices' on Instance uuid 60b7324e-952b-494a-9188-126d91c94d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Jan 31 03:19:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:02.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-90d0b23e9aaf1fcc22a87edeb6b8d61a399d3c43e576f47d246a8bc600e58793-merged.mount: Deactivated successfully.
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.102 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <uuid>60b7324e-952b-494a-9188-126d91c94d28</uuid>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <name>instance-00000066</name>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-506257151</nova:name>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:19:00</nova:creationTime>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:user uuid="588d05c741d9418f8c5b5bfa0d0c887a">tempest-InstanceActionsNegativeTestJSON-691358202-project-member</nova:user>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:project uuid="b88014f079bc4442acb657ee1d42b453">tempest-InstanceActionsNegativeTestJSON-691358202</nova:project>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <nova:port uuid="c824f9c2-363a-4225-9e4e-e5f8a52ce1e9">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <entry name="serial">60b7324e-952b-494a-9188-126d91c94d28</entry>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <entry name="uuid">60b7324e-952b-494a-9188-126d91c94d28</entry>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/60b7324e-952b-494a-9188-126d91c94d28_disk">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/60b7324e-952b-494a-9188-126d91c94d28_disk.config">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:91:ad:3d"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <target dev="tapc824f9c2-36"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/console.log" append="off"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:19:02 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:19:02 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:19:02 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:19:02 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.104 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Preparing to wait for external event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.104 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.105 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.105 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.105 247403 DEBUG nova.virt.libvirt.vif [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:18:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-506257151',display_name='tempest-InstanceActionsNegativeTestJSON-server-506257151',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-506257151',id=102,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b88014f079bc4442acb657ee1d42b453',ramdisk_id='',reservation_id='r-m7hwniok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-691358202',owner_user_name='tempest-InstanceActionsNegativeTestJSON-691358202-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:18:47Z,user_data=None,user_id='588d05c741d9418f8c5b5bfa0d0c887a',uuid=60b7324e-952b-494a-9188-126d91c94d28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.106 247403 DEBUG nova.network.os_vif_util [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Converting VIF {"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.107 247403 DEBUG nova.network.os_vif_util [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.107 247403 DEBUG os_vif [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.108 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.109 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.112 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.112 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc824f9c2-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.112 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc824f9c2-36, col_values=(('external_ids', {'iface-id': 'c824f9c2-363a-4225-9e4e-e5f8a52ce1e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:ad:3d', 'vm-uuid': '60b7324e-952b-494a-9188-126d91c94d28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.114 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:02 np0005603621 NetworkManager[49013]: <info>  [1769847542.1160] manager: (tapc824f9c2-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.120 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.121 247403 INFO os_vif [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36')#033[00m
Jan 31 03:19:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:02.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:02 np0005603621 podman[314420]: 2026-01-31 08:19:02.377922172 +0000 UTC m=+1.556758688 container remove d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:19:02 np0005603621 systemd[1]: libpod-conmon-d1f3f26a57e65d3b8793d01310e4bbcb2e56f452ab6a91bd0d1a5e59746e86bc.scope: Deactivated successfully.
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.572 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.573 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.574 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] No VIF found with MAC fa:16:3e:91:ad:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.574 247403 INFO nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Using config drive#033[00m
Jan 31 03:19:02 np0005603621 nova_compute[247399]: 2026-01-31 08:19:02.598 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:02 np0005603621 podman[314508]: 2026-01-31 08:19:02.505704405 +0000 UTC m=+0.029759413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:19:02 np0005603621 podman[314508]: 2026-01-31 08:19:02.641612824 +0000 UTC m=+0.165667812 container create dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ride, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:19:02 np0005603621 systemd[1]: Started libpod-conmon-dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f.scope.
Jan 31 03:19:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:19:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e06afdede9eacd8e68b9b9f64bb0bf6f13a13f2622b871dd6aa0404588a4d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e06afdede9eacd8e68b9b9f64bb0bf6f13a13f2622b871dd6aa0404588a4d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e06afdede9eacd8e68b9b9f64bb0bf6f13a13f2622b871dd6aa0404588a4d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78e06afdede9eacd8e68b9b9f64bb0bf6f13a13f2622b871dd6aa0404588a4d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:02 np0005603621 podman[314508]: 2026-01-31 08:19:02.915269958 +0000 UTC m=+0.439324966 container init dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ride, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:19:02 np0005603621 podman[314508]: 2026-01-31 08:19:02.92109106 +0000 UTC m=+0.445146048 container start dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:19:02 np0005603621 podman[314508]: 2026-01-31 08:19:02.982276577 +0000 UTC m=+0.506331595 container attach dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ride, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:19:03 np0005603621 priceless_ride[314542]: {
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:        "osd_id": 0,
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:        "type": "bluestore"
Jan 31 03:19:03 np0005603621 priceless_ride[314542]:    }
Jan 31 03:19:03 np0005603621 priceless_ride[314542]: }
Jan 31 03:19:03 np0005603621 systemd[1]: libpod-dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f.scope: Deactivated successfully.
Jan 31 03:19:03 np0005603621 conmon[314542]: conmon dd8db3138b74d5311e5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f.scope/container/memory.events
Jan 31 03:19:03 np0005603621 podman[314508]: 2026-01-31 08:19:03.763774023 +0000 UTC m=+1.287829021 container died dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ride, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:19:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.1 MiB/s wr, 30 op/s
Jan 31 03:19:03 np0005603621 nova_compute[247399]: 2026-01-31 08:19:03.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-78e06afdede9eacd8e68b9b9f64bb0bf6f13a13f2622b871dd6aa0404588a4d4-merged.mount: Deactivated successfully.
Jan 31 03:19:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:04.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:04.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:04 np0005603621 podman[314508]: 2026-01-31 08:19:04.541619344 +0000 UTC m=+2.065674352 container remove dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:19:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:19:04 np0005603621 systemd[1]: libpod-conmon-dd8db3138b74d5311e5f4e9a5e222963fedcb52c49f6c30e0d379707cb99ca7f.scope: Deactivated successfully.
Jan 31 03:19:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:19:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:19:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:19:05 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2be7d9b0-7f92-468d-9111-a9111100c761 does not exist
Jan 31 03:19:05 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 98b5991c-c43f-4d11-b1b5-db3211446233 does not exist
Jan 31 03:19:05 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 014fb1fa-8fc3-4524-b67e-a5c513d60c9e does not exist
Jan 31 03:19:05 np0005603621 nova_compute[247399]: 2026-01-31 08:19:05.509 247403 INFO nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Creating config drive at /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/disk.config#033[00m
Jan 31 03:19:05 np0005603621 nova_compute[247399]: 2026-01-31 08:19:05.513 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_a0cvq5p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:05 np0005603621 nova_compute[247399]: 2026-01-31 08:19:05.633 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_a0cvq5p" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:05 np0005603621 nova_compute[247399]: 2026-01-31 08:19:05.754 247403 DEBUG nova.storage.rbd_utils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] rbd image 60b7324e-952b-494a-9188-126d91c94d28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:05 np0005603621 nova_compute[247399]: 2026-01-31 08:19:05.759 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/disk.config 60b7324e-952b-494a-9188-126d91c94d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 3.2 KiB/s rd, 426 B/s wr, 4 op/s
Jan 31 03:19:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:06.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:19:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:19:06 np0005603621 nova_compute[247399]: 2026-01-31 08:19:06.799 247403 DEBUG oslo_concurrency.processutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/disk.config 60b7324e-952b-494a-9188-126d91c94d28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:06 np0005603621 nova_compute[247399]: 2026-01-31 08:19:06.801 247403 INFO nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Deleting local config drive /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28/disk.config because it was imported into RBD.#033[00m
Jan 31 03:19:06 np0005603621 kernel: tapc824f9c2-36: entered promiscuous mode
Jan 31 03:19:06 np0005603621 NetworkManager[49013]: <info>  [1769847546.8438] manager: (tapc824f9c2-36): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Jan 31 03:19:06 np0005603621 nova_compute[247399]: 2026-01-31 08:19:06.843 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:06Z|00381|binding|INFO|Claiming lport c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 for this chassis.
Jan 31 03:19:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:06Z|00382|binding|INFO|c824f9c2-363a-4225-9e4e-e5f8a52ce1e9: Claiming fa:16:3e:91:ad:3d 10.100.0.14
Jan 31 03:19:06 np0005603621 nova_compute[247399]: 2026-01-31 08:19:06.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:06 np0005603621 nova_compute[247399]: 2026-01-31 08:19:06.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:06 np0005603621 systemd-udevd[314729]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:19:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:06Z|00383|binding|INFO|Setting lport c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 ovn-installed in OVS
Jan 31 03:19:06 np0005603621 nova_compute[247399]: 2026-01-31 08:19:06.875 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:06 np0005603621 systemd-machined[212769]: New machine qemu-44-instance-00000066.
Jan 31 03:19:06 np0005603621 NetworkManager[49013]: <info>  [1769847546.8859] device (tapc824f9c2-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:19:06 np0005603621 NetworkManager[49013]: <info>  [1769847546.8869] device (tapc824f9c2-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:19:06 np0005603621 systemd[1]: Started Virtual Machine qemu-44-instance-00000066.
Jan 31 03:19:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:06Z|00384|binding|INFO|Setting lport c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 up in Southbound
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.940 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:ad:3d 10.100.0.14'], port_security=['fa:16:3e:91:ad:3d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '60b7324e-952b-494a-9188-126d91c94d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6150ea7-560a-4332-ba22-b0a784771dc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b88014f079bc4442acb657ee1d42b453', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3d6867c8-5eb2-4dbd-bb27-acbae9b3c2fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20afb08b-6e82-473a-855a-e75f7cf1c6da, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.942 159734 INFO neutron.agent.ovn.metadata.agent [-] Port c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 in datapath d6150ea7-560a-4332-ba22-b0a784771dc4 bound to our chassis#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.943 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6150ea7-560a-4332-ba22-b0a784771dc4#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.952 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d576a133-c1ce-4d39-bcbd-d52fa73a4357]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.953 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6150ea7-51 in ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.954 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6150ea7-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.954 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b780ff50-55ba-46db-973b-ec6938c853f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.955 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[57f9183e-9644-4152-b8a1-38c9f142c9de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.963 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c85d6ed9-14cb-44f7-8d23-684363257c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.974 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dd80cd99-b18a-4bf8-b7ae-cc0eefda2c2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:06.994 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[22687b23-b7f6-4863-b642-1147e79b3a12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 NetworkManager[49013]: <info>  [1769847547.0011] manager: (tapd6150ea7-50): new Veth device (/org/freedesktop/NetworkManager/Devices/188)
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.002 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[900e42e3-878f-4444-846d-1f1ead5b0773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 systemd-udevd[314732]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.031 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7d83c05d-d094-430a-9eb9-92d3af3faecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.033 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[500be263-c7ae-4e28-84c8-8410335cfaa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 NetworkManager[49013]: <info>  [1769847547.0492] device (tapd6150ea7-50): carrier: link connected
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.053 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9971c992-55fe-496c-9612-c2a21c459a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.064 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[82410b2e-2bba-443d-b4c6-4f2661b78d64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6150ea7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:77:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685256, 'reachable_time': 28762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314763, 'error': None, 'target': 'ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.074 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2a20f9-0a68-459e-b6d5-8f697e718d2b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:77a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685256, 'tstamp': 685256}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314764, 'error': None, 'target': 'ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.088 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbd51e5-3e65-4089-9502-5b8bbcfe0444]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6150ea7-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:77:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685256, 'reachable_time': 28762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314765, 'error': None, 'target': 'ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.112 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[903ec05a-7305-4499-aea2-23e7829aa929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.115 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.153 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6400ff13-22be-49cf-8099-3bd2ea45c1a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.155 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6150ea7-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.155 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.155 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6150ea7-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:07 np0005603621 NetworkManager[49013]: <info>  [1769847547.1575] manager: (tapd6150ea7-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/189)
Jan 31 03:19:07 np0005603621 kernel: tapd6150ea7-50: entered promiscuous mode
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.158 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6150ea7-50, col_values=(('external_ids', {'iface-id': 'd95dfa0f-8443-4c24-b5b1-e846783e0a79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.158 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:07Z|00385|binding|INFO|Releasing lport d95dfa0f-8443-4c24-b5b1-e846783e0a79 from this chassis (sb_readonly=0)
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.160 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6150ea7-560a-4332-ba22-b0a784771dc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6150ea7-560a-4332-ba22-b0a784771dc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.161 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc4e543-bfe8-4f97-8875-0d2be3f6d627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.162 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-d6150ea7-560a-4332-ba22-b0a784771dc4
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/d6150ea7-560a-4332-ba22-b0a784771dc4.pid.haproxy
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID d6150ea7-560a-4332-ba22-b0a784771dc4
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:19:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:07.163 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4', 'env', 'PROCESS_TAG=haproxy-d6150ea7-560a-4332-ba22-b0a784771dc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6150ea7-560a-4332-ba22-b0a784771dc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.164 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:07 np0005603621 podman[314833]: 2026-01-31 08:19:07.505061015 +0000 UTC m=+0.065832034 container create 05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.526 247403 DEBUG nova.compute.manager [req-cb5fad26-1d26-4aa4-a690-0a35e4fc7463 req-502976dc-a25b-48eb-b4dd-fa8c8a53e179 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.526 247403 DEBUG oslo_concurrency.lockutils [req-cb5fad26-1d26-4aa4-a690-0a35e4fc7463 req-502976dc-a25b-48eb-b4dd-fa8c8a53e179 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.526 247403 DEBUG oslo_concurrency.lockutils [req-cb5fad26-1d26-4aa4-a690-0a35e4fc7463 req-502976dc-a25b-48eb-b4dd-fa8c8a53e179 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.526 247403 DEBUG oslo_concurrency.lockutils [req-cb5fad26-1d26-4aa4-a690-0a35e4fc7463 req-502976dc-a25b-48eb-b4dd-fa8c8a53e179 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.527 247403 DEBUG nova.compute.manager [req-cb5fad26-1d26-4aa4-a690-0a35e4fc7463 req-502976dc-a25b-48eb-b4dd-fa8c8a53e179 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Processing event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:19:07 np0005603621 systemd[1]: Started libpod-conmon-05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d.scope.
Jan 31 03:19:07 np0005603621 podman[314833]: 2026-01-31 08:19:07.456102181 +0000 UTC m=+0.016873120 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.559 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847547.558915, 60b7324e-952b-494a-9188-126d91c94d28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.559 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] VM Started (Lifecycle Event)#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.562 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:19:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.567 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:19:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8b681bff580bb3ffd5b12fd157e9a248c338f77523ed39a5085681aa3d0e96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.572 247403 INFO nova.virt.libvirt.driver [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Instance spawned successfully.#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.573 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:19:07 np0005603621 podman[314833]: 2026-01-31 08:19:07.59266796 +0000 UTC m=+0.153438879 container init 05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:19:07 np0005603621 podman[314833]: 2026-01-31 08:19:07.597726148 +0000 UTC m=+0.158497067 container start 05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:19:07 np0005603621 neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4[314855]: [NOTICE]   (314860) : New worker (314862) forked
Jan 31 03:19:07 np0005603621 neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4[314855]: [NOTICE]   (314860) : Loading success.
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.746 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.748 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.749 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.749 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.750 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.750 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.751 247403 DEBUG nova.virt.libvirt.driver [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.756 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.866 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.867 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847547.559035, 60b7324e-952b-494a-9188-126d91c94d28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.867 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.910 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 88 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.915 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847547.5655107, 60b7324e-952b-494a-9188-126d91c94d28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:07 np0005603621 nova_compute[247399]: 2026-01-31 08:19:07.915 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.000 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.004 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:19:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.033 247403 INFO nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Took 20.69 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.033 247403 DEBUG nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.084 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:19:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:08.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.399 247403 INFO nova.compute.manager [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Took 22.41 seconds to build instance.#033[00m
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.461 247403 DEBUG oslo_concurrency.lockutils [None req-dcc3de3b-7863-49ce-90f5-a546e5070982 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:08 np0005603621 nova_compute[247399]: 2026-01-31 08:19:08.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:09 np0005603621 nova_compute[247399]: 2026-01-31 08:19:09.749 247403 DEBUG nova.compute.manager [req-cfd39c27-f8f2-43a8-aae5-add93a81f1d0 req-b1556487-27c1-4dbe-9498-1cb72d8fa4fc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:09 np0005603621 nova_compute[247399]: 2026-01-31 08:19:09.749 247403 DEBUG oslo_concurrency.lockutils [req-cfd39c27-f8f2-43a8-aae5-add93a81f1d0 req-b1556487-27c1-4dbe-9498-1cb72d8fa4fc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:09 np0005603621 nova_compute[247399]: 2026-01-31 08:19:09.750 247403 DEBUG oslo_concurrency.lockutils [req-cfd39c27-f8f2-43a8-aae5-add93a81f1d0 req-b1556487-27c1-4dbe-9498-1cb72d8fa4fc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:09 np0005603621 nova_compute[247399]: 2026-01-31 08:19:09.750 247403 DEBUG oslo_concurrency.lockutils [req-cfd39c27-f8f2-43a8-aae5-add93a81f1d0 req-b1556487-27c1-4dbe-9498-1cb72d8fa4fc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:09 np0005603621 nova_compute[247399]: 2026-01-31 08:19:09.750 247403 DEBUG nova.compute.manager [req-cfd39c27-f8f2-43a8-aae5-add93a81f1d0 req-b1556487-27c1-4dbe-9498-1cb72d8fa4fc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] No waiting events found dispatching network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:09 np0005603621 nova_compute[247399]: 2026-01-31 08:19:09.750 247403 WARNING nova.compute.manager [req-cfd39c27-f8f2-43a8-aae5-add93a81f1d0 req-b1556487-27c1-4dbe-9498-1cb72d8fa4fc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received unexpected event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:19:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 750 KiB/s rd, 13 KiB/s wr, 34 op/s
Jan 31 03:19:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:10.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:10.267 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.267 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:10.268 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.787 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.788 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.788 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.788 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.788 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.790 247403 INFO nova.compute.manager [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Terminating instance#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.791 247403 DEBUG nova.compute.manager [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:19:10 np0005603621 kernel: tapc824f9c2-36 (unregistering): left promiscuous mode
Jan 31 03:19:10 np0005603621 NetworkManager[49013]: <info>  [1769847550.9129] device (tapc824f9c2-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.917 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:10Z|00386|binding|INFO|Releasing lport c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 from this chassis (sb_readonly=0)
Jan 31 03:19:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:10Z|00387|binding|INFO|Setting lport c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 down in Southbound
Jan 31 03:19:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:19:10Z|00388|binding|INFO|Removing iface tapc824f9c2-36 ovn-installed in OVS
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.919 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603621 nova_compute[247399]: 2026-01-31 08:19:10.923 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:10 np0005603621 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 31 03:19:10 np0005603621 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000066.scope: Consumed 3.877s CPU time.
Jan 31 03:19:10 np0005603621 systemd-machined[212769]: Machine qemu-44-instance-00000066 terminated.
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.018 247403 INFO nova.virt.libvirt.driver [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Instance destroyed successfully.#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.018 247403 DEBUG nova.objects.instance [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lazy-loading 'resources' on Instance uuid 60b7324e-952b-494a-9188-126d91c94d28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.041 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:ad:3d 10.100.0.14'], port_security=['fa:16:3e:91:ad:3d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '60b7324e-952b-494a-9188-126d91c94d28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6150ea7-560a-4332-ba22-b0a784771dc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b88014f079bc4442acb657ee1d42b453', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3d6867c8-5eb2-4dbd-bb27-acbae9b3c2fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=20afb08b-6e82-473a-855a-e75f7cf1c6da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.042 159734 INFO neutron.agent.ovn.metadata.agent [-] Port c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 in datapath d6150ea7-560a-4332-ba22-b0a784771dc4 unbound from our chassis#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.044 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6150ea7-560a-4332-ba22-b0a784771dc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.045 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[898dc7d8-77fc-4e68-a3dc-6533c7a7d20a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.045 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4 namespace which is not needed anymore#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.052 247403 DEBUG nova.virt.libvirt.vif [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:18:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-506257151',display_name='tempest-InstanceActionsNegativeTestJSON-server-506257151',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-506257151',id=102,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:19:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b88014f079bc4442acb657ee1d42b453',ramdisk_id='',reservation_id='r-m7hwniok',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_
hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-691358202',owner_user_name='tempest-InstanceActionsNegativeTestJSON-691358202-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:19:08Z,user_data=None,user_id='588d05c741d9418f8c5b5bfa0d0c887a',uuid=60b7324e-952b-494a-9188-126d91c94d28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.053 247403 DEBUG nova.network.os_vif_util [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Converting VIF {"id": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "address": "fa:16:3e:91:ad:3d", "network": {"id": "d6150ea7-560a-4332-ba22-b0a784771dc4", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-476419557-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b88014f079bc4442acb657ee1d42b453", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc824f9c2-36", "ovs_interfaceid": "c824f9c2-363a-4225-9e4e-e5f8a52ce1e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.053 247403 DEBUG nova.network.os_vif_util [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.054 247403 DEBUG os_vif [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.055 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc824f9c2-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.058 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.060 247403 INFO os_vif [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:ad:3d,bridge_name='br-int',has_traffic_filtering=True,id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9,network=Network(d6150ea7-560a-4332-ba22-b0a784771dc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc824f9c2-36')#033[00m
Jan 31 03:19:11 np0005603621 neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4[314855]: [NOTICE]   (314860) : haproxy version is 2.8.14-c23fe91
Jan 31 03:19:11 np0005603621 neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4[314855]: [NOTICE]   (314860) : path to executable is /usr/sbin/haproxy
Jan 31 03:19:11 np0005603621 neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4[314855]: [ALERT]    (314860) : Current worker (314862) exited with code 143 (Terminated)
Jan 31 03:19:11 np0005603621 neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4[314855]: [WARNING]  (314860) : All workers exited. Exiting... (0)
Jan 31 03:19:11 np0005603621 systemd[1]: libpod-05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d.scope: Deactivated successfully.
Jan 31 03:19:11 np0005603621 podman[314922]: 2026-01-31 08:19:11.181594178 +0000 UTC m=+0.067060792 container died 05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:19:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:19:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0e8b681bff580bb3ffd5b12fd157e9a248c338f77523ed39a5085681aa3d0e96-merged.mount: Deactivated successfully.
Jan 31 03:19:11 np0005603621 podman[314922]: 2026-01-31 08:19:11.268327635 +0000 UTC m=+0.153794249 container cleanup 05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:19:11 np0005603621 systemd[1]: libpod-conmon-05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d.scope: Deactivated successfully.
Jan 31 03:19:11 np0005603621 podman[314951]: 2026-01-31 08:19:11.32689882 +0000 UTC m=+0.042393969 container remove 05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.330 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[40816f57-51a4-4741-925d-83ea19e495ad]: (4, ('Sat Jan 31 08:19:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4 (05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d)\n05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d\nSat Jan 31 08:19:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4 (05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d)\n05309fd8c445bb7f8e87833269af7b7a0c54144f0104821e66c2a76311a5167d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.331 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0b45a9-62fe-424e-add6-d40e083537f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.332 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6150ea7-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.333 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:11 np0005603621 kernel: tapd6150ea7-50: left promiscuous mode
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.341 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8478f9-9375-44a2-b2e5-aa94eb60c480]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.351 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[43beb7cd-67d5-401b-b631-e4ab05202b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.352 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6946d484-d1dc-4221-8e26-647233f39d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.363 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cf7400-eb38-4791-83d4-f831d2e9e7b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685250, 'reachable_time': 26887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314967, 'error': None, 'target': 'ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.365 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6150ea7-560a-4332-ba22-b0a784771dc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:19:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:11.365 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[240e7bb9-614e-487f-9199-05085952e898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:19:11 np0005603621 systemd[1]: run-netns-ovnmeta\x2dd6150ea7\x2d560a\x2d4332\x2dba22\x2db0a784771dc4.mount: Deactivated successfully.
Jan 31 03:19:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 88 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 749 KiB/s rd, 12 KiB/s wr, 33 op/s
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.943 247403 DEBUG nova.compute.manager [req-e0b5f415-4433-426d-9413-643b292d63a9 req-71e5a71c-4db4-4554-bca0-4f54061e1e60 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-vif-unplugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.944 247403 DEBUG oslo_concurrency.lockutils [req-e0b5f415-4433-426d-9413-643b292d63a9 req-71e5a71c-4db4-4554-bca0-4f54061e1e60 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.944 247403 DEBUG oslo_concurrency.lockutils [req-e0b5f415-4433-426d-9413-643b292d63a9 req-71e5a71c-4db4-4554-bca0-4f54061e1e60 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.944 247403 DEBUG oslo_concurrency.lockutils [req-e0b5f415-4433-426d-9413-643b292d63a9 req-71e5a71c-4db4-4554-bca0-4f54061e1e60 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.944 247403 DEBUG nova.compute.manager [req-e0b5f415-4433-426d-9413-643b292d63a9 req-71e5a71c-4db4-4554-bca0-4f54061e1e60 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] No waiting events found dispatching network-vif-unplugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:11 np0005603621 nova_compute[247399]: 2026-01-31 08:19:11.945 247403 DEBUG nova.compute.manager [req-e0b5f415-4433-426d-9413-643b292d63a9 req-71e5a71c-4db4-4554-bca0-4f54061e1e60 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-vif-unplugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:19:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:12.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:12 np0005603621 nova_compute[247399]: 2026-01-31 08:19:12.168 247403 INFO nova.virt.libvirt.driver [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Deleting instance files /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28_del#033[00m
Jan 31 03:19:12 np0005603621 nova_compute[247399]: 2026-01-31 08:19:12.169 247403 INFO nova.virt.libvirt.driver [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Deletion of /var/lib/nova/instances/60b7324e-952b-494a-9188-126d91c94d28_del complete#033[00m
Jan 31 03:19:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:12.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:12 np0005603621 nova_compute[247399]: 2026-01-31 08:19:12.371 247403 INFO nova.compute.manager [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Took 1.58 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:19:12 np0005603621 nova_compute[247399]: 2026-01-31 08:19:12.372 247403 DEBUG oslo.service.loopingcall [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:19:12 np0005603621 nova_compute[247399]: 2026-01-31 08:19:12.372 247403 DEBUG nova.compute.manager [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:19:12 np0005603621 nova_compute[247399]: 2026-01-31 08:19:12.372 247403 DEBUG nova.network.neutron [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:19:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 92 MiB data, 849 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 94 op/s
Jan 31 03:19:13 np0005603621 nova_compute[247399]: 2026-01-31 08:19:13.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:14.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:14.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:19:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426585274' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:19:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:19:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426585274' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.398 247403 DEBUG nova.compute.manager [req-f7971ffb-d6d7-453d-ae05-cb363dd7281a req-6ba84df5-ffb3-47e8-b8c1-860122bc770d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.398 247403 DEBUG oslo_concurrency.lockutils [req-f7971ffb-d6d7-453d-ae05-cb363dd7281a req-6ba84df5-ffb3-47e8-b8c1-860122bc770d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60b7324e-952b-494a-9188-126d91c94d28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.399 247403 DEBUG oslo_concurrency.lockutils [req-f7971ffb-d6d7-453d-ae05-cb363dd7281a req-6ba84df5-ffb3-47e8-b8c1-860122bc770d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.399 247403 DEBUG oslo_concurrency.lockutils [req-f7971ffb-d6d7-453d-ae05-cb363dd7281a req-6ba84df5-ffb3-47e8-b8c1-860122bc770d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.399 247403 DEBUG nova.compute.manager [req-f7971ffb-d6d7-453d-ae05-cb363dd7281a req-6ba84df5-ffb3-47e8-b8c1-860122bc770d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] No waiting events found dispatching network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.399 247403 WARNING nova.compute.manager [req-f7971ffb-d6d7-453d-ae05-cb363dd7281a req-6ba84df5-ffb3-47e8-b8c1-860122bc770d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received unexpected event network-vif-plugged-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.688 247403 DEBUG nova.network.neutron [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.933 247403 DEBUG nova.compute.manager [req-e13040d0-8469-482e-bd3f-d060bfabaec6 req-c91bedac-632d-4541-8a6f-9f3081f821af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Received event network-vif-deleted-c824f9c2-363a-4225-9e4e-e5f8a52ce1e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.933 247403 INFO nova.compute.manager [req-e13040d0-8469-482e-bd3f-d060bfabaec6 req-c91bedac-632d-4541-8a6f-9f3081f821af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Neutron deleted interface c824f9c2-363a-4225-9e4e-e5f8a52ce1e9; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:19:14 np0005603621 nova_compute[247399]: 2026-01-31 08:19:14.933 247403 DEBUG nova.network.neutron [req-e13040d0-8469-482e-bd3f-d060bfabaec6 req-c91bedac-632d-4541-8a6f-9f3081f821af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.089 247403 INFO nova.compute.manager [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Took 2.72 seconds to deallocate network for instance.#033[00m
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.103 247403 DEBUG nova.compute.manager [req-e13040d0-8469-482e-bd3f-d060bfabaec6 req-c91bedac-632d-4541-8a6f-9f3081f821af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Detach interface failed, port_id=c824f9c2-363a-4225-9e4e-e5f8a52ce1e9, reason: Instance 60b7324e-952b-494a-9188-126d91c94d28 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.284 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.284 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.360 247403 DEBUG oslo_concurrency.processutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854808080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.794 247403 DEBUG oslo_concurrency.processutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:15 np0005603621 nova_compute[247399]: 2026-01-31 08:19:15.802 247403 DEBUG nova.compute.provider_tree [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:19:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 88 MiB data, 843 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 31 03:19:16 np0005603621 nova_compute[247399]: 2026-01-31 08:19:16.011 247403 DEBUG nova.scheduler.client.report [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:19:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:16.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:16 np0005603621 nova_compute[247399]: 2026-01-31 08:19:16.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:16.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:16 np0005603621 nova_compute[247399]: 2026-01-31 08:19:16.208 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:16.270 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:16 np0005603621 nova_compute[247399]: 2026-01-31 08:19:16.479 247403 INFO nova.scheduler.client.report [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Deleted allocations for instance 60b7324e-952b-494a-9188-126d91c94d28#033[00m
Jan 31 03:19:16 np0005603621 nova_compute[247399]: 2026-01-31 08:19:16.698 247403 DEBUG oslo_concurrency.lockutils [None req-1d4f18a2-ea0d-457f-8be0-746e54dba023 588d05c741d9418f8c5b5bfa0d0c887a b88014f079bc4442acb657ee1d42b453 - - default default] Lock "60b7324e-952b-494a-9188-126d91c94d28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 100 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Jan 31 03:19:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:18.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:18 np0005603621 nova_compute[247399]: 2026-01-31 08:19:18.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 134 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 227 op/s
Jan 31 03:19:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:20.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:20.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:21 np0005603621 nova_compute[247399]: 2026-01-31 08:19:21.059 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:21 np0005603621 podman[314997]: 2026-01-31 08:19:21.51646296 +0000 UTC m=+0.070200961 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:19:21 np0005603621 podman[314996]: 2026-01-31 08:19:21.520549387 +0000 UTC m=+0.074972110 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 31 03:19:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 134 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Jan 31 03:19:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:22.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:22.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 93 MiB data, 841 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Jan 31 03:19:24 np0005603621 nova_compute[247399]: 2026-01-31 08:19:23.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:24.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:24.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 88 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Jan 31 03:19:26 np0005603621 nova_compute[247399]: 2026-01-31 08:19:26.017 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847551.0160408, 60b7324e-952b-494a-9188-126d91c94d28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:19:26 np0005603621 nova_compute[247399]: 2026-01-31 08:19:26.018 247403 INFO nova.compute.manager [-] [instance: 60b7324e-952b-494a-9188-126d91c94d28] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:19:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:19:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:19:26 np0005603621 nova_compute[247399]: 2026-01-31 08:19:26.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:26 np0005603621 nova_compute[247399]: 2026-01-31 08:19:26.073 247403 DEBUG nova.compute.manager [None req-5e3b3d7f-3574-43be-abca-40df9bd7cae1 - - - - - -] [instance: 60b7324e-952b-494a-9188-126d91c94d28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:19:26 np0005603621 nova_compute[247399]: 2026-01-31 08:19:26.171 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 88 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Jan 31 03:19:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:28.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:28.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:29 np0005603621 nova_compute[247399]: 2026-01-31 08:19:29.001 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 88 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.5 MiB/s wr, 146 op/s
Jan 31 03:19:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:30.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:30.501 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:30.502 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:30.502 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:31 np0005603621 nova_compute[247399]: 2026-01-31 08:19:31.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 88 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 16 KiB/s wr, 71 op/s
Jan 31 03:19:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:32.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:32.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 88 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 31 03:19:34 np0005603621 nova_compute[247399]: 2026-01-31 08:19:34.003 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:34.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:34.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:35 np0005603621 nova_compute[247399]: 2026-01-31 08:19:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 88 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 91 op/s
Jan 31 03:19:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:36 np0005603621 nova_compute[247399]: 2026-01-31 08:19:36.066 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:36.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 88 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 03:19:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:19:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:38.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:19:38 np0005603621 nova_compute[247399]: 2026-01-31 08:19:38.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:38 np0005603621 nova_compute[247399]: 2026-01-31 08:19:38.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:19:38 np0005603621 nova_compute[247399]: 2026-01-31 08:19:38.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:19:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:38.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:38 np0005603621 nova_compute[247399]: 2026-01-31 08:19:38.285 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:19:38
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'images', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr']
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:19:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:19:39 np0005603621 nova_compute[247399]: 2026-01-31 08:19:39.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 93 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 640 KiB/s wr, 80 op/s
Jan 31 03:19:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:40.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:40 np0005603621 nova_compute[247399]: 2026-01-31 08:19:40.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:40 np0005603621 nova_compute[247399]: 2026-01-31 08:19:40.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:19:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:40.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:41 np0005603621 nova_compute[247399]: 2026-01-31 08:19:41.068 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:41 np0005603621 nova_compute[247399]: 2026-01-31 08:19:41.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:41 np0005603621 nova_compute[247399]: 2026-01-31 08:19:41.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 93 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 883 KiB/s rd, 626 KiB/s wr, 40 op/s
Jan 31 03:19:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:19:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:19:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:43 np0005603621 nova_compute[247399]: 2026-01-31 08:19:43.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 119 MiB data, 858 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.008 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:44.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:44.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.264 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.265 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.265 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.265 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.265 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101301774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.697 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.853 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.855 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4441MB free_disk=20.942874908447266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.856 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:44 np0005603621 nova_compute[247399]: 2026-01-31 08:19:44.856 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.177 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.177 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.364 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.373 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance b6bf273c-d5a3-4f02-bddd-465a846a764d has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.373 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.374 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.456 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.734 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484635018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.906 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:45 np0005603621 nova_compute[247399]: 2026-01-31 08:19:45.910 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:19:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 121 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.061 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:19:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.070 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.126 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.126 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.127 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.132 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.132 247403 INFO nova.compute.claims [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:19:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:46.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.332 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:19:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/714679761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.791 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.797 247403 DEBUG nova.compute.provider_tree [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.832 247403 DEBUG nova.scheduler.client.report [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.961 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:46 np0005603621 nova_compute[247399]: 2026-01-31 08:19:46.962 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:19:47 np0005603621 nova_compute[247399]: 2026-01-31 08:19:47.484 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:19:47 np0005603621 nova_compute[247399]: 2026-01-31 08:19:47.484 247403 DEBUG nova.network.neutron [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:19:47 np0005603621 nova_compute[247399]: 2026-01-31 08:19:47.602 247403 INFO nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:19:47 np0005603621 nova_compute[247399]: 2026-01-31 08:19:47.856 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:19:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 121 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 03:19:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:48.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.128 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.128 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:48.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.331 247403 DEBUG nova.policy [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8897cd859ff4a79a1a16eaee71d22ed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '29d7f464a8694725aa9692aac772c256', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1597 writes, 6802 keys, 1597 commit groups, 1.0 writes per commit group, ingest: 10.34 MB, 0.02 MB/s#012Interval WAL: 1597 writes, 1597 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     32.4      1.80              0.14        27    0.067       0      0       0.0       0.0#012  L6      1/0   11.03 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1     51.3     42.9      5.60              0.53        26    0.215    150K    14K       0.0       0.0#012 Sum      1/0   11.03 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     38.9     40.4      7.39              0.67        53    0.139    150K    14K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.0     60.2     61.3      0.99              0.13        10    0.099     36K   2596       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     51.3     42.9      5.60              0.53        26    0.215    150K    14K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     32.4      1.79              0.14        26    0.069       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.057, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 7.4 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 33.64 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000315 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1957,32.49 MB,10.6878%) FilterBlock(54,429.67 KB,0.138027%) IndexBlock(54,748.89 KB,0.240572%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.748 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.750 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.750 247403 INFO nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Creating image(s)#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.777 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.809 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.841 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.845 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.904 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.905 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.906 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.906 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.934 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:19:48 np0005603621 nova_compute[247399]: 2026-01-31 08:19:48.937 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b6bf273c-d5a3-4f02-bddd-465a846a764d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021703206425646754 of space, bias 1.0, pg target 0.6510961927694027 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.240 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b6bf273c-d5a3-4f02-bddd-465a846a764d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.325 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] resizing rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.457 247403 DEBUG nova.objects.instance [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'migration_context' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.636 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.636 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Ensure instance console log exists: /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.637 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.637 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:19:49 np0005603621 nova_compute[247399]: 2026-01-31 08:19:49.637 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:19:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 138 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.9 MiB/s wr, 77 op/s
Jan 31 03:19:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:50.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:50.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:51 np0005603621 nova_compute[247399]: 2026-01-31 08:19:51.072 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:51 np0005603621 nova_compute[247399]: 2026-01-31 08:19:51.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:19:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 138 MiB data, 883 MiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 2.3 MiB/s wr, 65 op/s
Jan 31 03:19:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:52.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:19:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:19:52 np0005603621 podman[315392]: 2026-01-31 08:19:52.527488665 +0000 UTC m=+0.082006411 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 03:19:52 np0005603621 podman[315393]: 2026-01-31 08:19:52.527651139 +0000 UTC m=+0.080560285 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:19:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 167 MiB data, 895 MiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 3.3 MiB/s wr, 79 op/s
Jan 31 03:19:54 np0005603621 nova_compute[247399]: 2026-01-31 08:19:54.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:54.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:54 np0005603621 nova_compute[247399]: 2026-01-31 08:19:54.902 247403 DEBUG nova.network.neutron [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Successfully created port: f45c5fd8-45be-479f-bfb8-5305390417f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:19:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:54.960 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:19:54 np0005603621 nova_compute[247399]: 2026-01-31 08:19:54.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:54.961 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:19:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 167 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 31 03:19:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:19:55.964 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:19:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:19:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:56.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:19:56 np0005603621 nova_compute[247399]: 2026-01-31 08:19:56.074 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:56.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 167 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:19:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:19:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:58 np0005603621 nova_compute[247399]: 2026-01-31 08:19:58.089 247403 DEBUG nova.network.neutron [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Successfully updated port: f45c5fd8-45be-479f-bfb8-5305390417f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:19:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:19:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:19:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:19:58.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:19:58 np0005603621 nova_compute[247399]: 2026-01-31 08:19:58.639 247403 DEBUG nova.compute.manager [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-changed-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:19:58 np0005603621 nova_compute[247399]: 2026-01-31 08:19:58.639 247403 DEBUG nova.compute.manager [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Refreshing instance network info cache due to event network-changed-f45c5fd8-45be-479f-bfb8-5305390417f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:19:58 np0005603621 nova_compute[247399]: 2026-01-31 08:19:58.639 247403 DEBUG oslo_concurrency.lockutils [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:58 np0005603621 nova_compute[247399]: 2026-01-31 08:19:58.639 247403 DEBUG oslo_concurrency.lockutils [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:19:58 np0005603621 nova_compute[247399]: 2026-01-31 08:19:58.640 247403 DEBUG nova.network.neutron [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Refreshing network info cache for port f45c5fd8-45be-479f-bfb8-5305390417f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:19:59 np0005603621 nova_compute[247399]: 2026-01-31 08:19:59.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:19:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:19:59 np0005603621 nova_compute[247399]: 2026-01-31 08:19:59.377 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:19:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 167 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:20:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 03:20:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:00.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:00.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 03:20:00 np0005603621 nova_compute[247399]: 2026-01-31 08:20:00.480 247403 DEBUG nova.network.neutron [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:20:01 np0005603621 nova_compute[247399]: 2026-01-31 08:20:01.075 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:01 np0005603621 nova_compute[247399]: 2026-01-31 08:20:01.129 247403 DEBUG nova.network.neutron [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:01 np0005603621 nova_compute[247399]: 2026-01-31 08:20:01.456 247403 DEBUG oslo_concurrency.lockutils [req-e01fe774-3f79-4fb4-98e7-6619b7da4f83 req-44edd222-8938-449f-9ad8-cf4582b67c5b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:01 np0005603621 nova_compute[247399]: 2026-01-31 08:20:01.456 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquired lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:01 np0005603621 nova_compute[247399]: 2026-01-31 08:20:01.457 247403 DEBUG nova.network.neutron [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:20:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 167 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 7.7 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.198 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.200 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.200 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:02.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:02.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.395 247403 DEBUG nova.network.neutron [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.609 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.609 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Image id 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 yields fingerprint b1c202daae0a5d5b639e0239462ea0d46fe633d6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.609 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] image 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 at (/var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6): checking#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.610 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] image 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 at (/var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.611 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.612 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] b6bf273c-d5a3-4f02-bddd-465a846a764d is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.612 247403 WARNING nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.612 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Active base files: /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.612 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Removable base files: /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.612 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.613 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.613 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 31 03:20:02 np0005603621 nova_compute[247399]: 2026-01-31 08:20:02.613 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 31 03:20:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:03Z|00389|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 31 03:20:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 205 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Jan 31 03:20:04 np0005603621 nova_compute[247399]: 2026-01-31 08:20:04.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:20:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:04.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:20:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:04.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 213 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 31 03:20:06 np0005603621 nova_compute[247399]: 2026-01-31 08:20:06.076 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:06.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:20:06 np0005603621 nova_compute[247399]: 2026-01-31 08:20:06.675 247403 DEBUG nova.network.neutron [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updating instance_info_cache with network_info: [{"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:20:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2c9e83ee-8333-4548-ab83-898882b25dd0 does not exist
Jan 31 03:20:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 98742f72-4106-461e-9ce2-324a2649ac95 does not exist
Jan 31 03:20:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0967436d-9f5d-4962-9a1b-c3062772fe31 does not exist
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:20:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.173 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Releasing lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.175 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance network_info: |[{"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.178 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Start _get_guest_xml network_info=[{"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.184 247403 WARNING nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.188 247403 DEBUG nova.virt.libvirt.host [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.190 247403 DEBUG nova.virt.libvirt.host [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.192 247403 DEBUG nova.virt.libvirt.host [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.192 247403 DEBUG nova.virt.libvirt.host [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.193 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.194 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.194 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.194 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.195 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.195 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.195 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.195 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.196 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.197 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.197 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.197 247403 DEBUG nova.virt.hardware [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.200 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.212297526 +0000 UTC m=+0.058156083 container create 40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.174589055 +0000 UTC m=+0.020447622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:20:07 np0005603621 systemd[1]: Started libpod-conmon-40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd.scope.
Jan 31 03:20:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.339135101 +0000 UTC m=+0.184993708 container init 40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.345615294 +0000 UTC m=+0.191473851 container start 40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:20:07 np0005603621 lucid_wiles[315782]: 167 167
Jan 31 03:20:07 np0005603621 systemd[1]: libpod-40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd.scope: Deactivated successfully.
Jan 31 03:20:07 np0005603621 conmon[315782]: conmon 40f3d54c5b4a5a3589af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd.scope/container/memory.events
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.355481883 +0000 UTC m=+0.201340460 container attach 40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.357907979 +0000 UTC m=+0.203766546 container died 40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:20:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a2afd86c160cb2eb5d284298944e4ab1f2549c1ea54ca9bb0bde754aee222352-merged.mount: Deactivated successfully.
Jan 31 03:20:07 np0005603621 podman[315764]: 2026-01-31 08:20:07.561982732 +0000 UTC m=+0.407841289 container remove 40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:20:07 np0005603621 systemd[1]: libpod-conmon-40f3d54c5b4a5a3589afc3b9e83f4c0433cb0f6db9cd1efa241487c2fb4a28bd.scope: Deactivated successfully.
Jan 31 03:20:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583468854' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.624 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.653 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:07 np0005603621 nova_compute[247399]: 2026-01-31 08:20:07.657 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:20:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:20:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:20:07 np0005603621 podman[315842]: 2026-01-31 08:20:07.688247379 +0000 UTC m=+0.040368696 container create 27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:20:07 np0005603621 podman[315842]: 2026-01-31 08:20:07.668089757 +0000 UTC m=+0.020211094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:20:07 np0005603621 systemd[1]: Started libpod-conmon-27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34.scope.
Jan 31 03:20:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9ab9730507bab5f165bf9c19aeb80f1be2398fc2ae84247db6cc179b1618c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9ab9730507bab5f165bf9c19aeb80f1be2398fc2ae84247db6cc179b1618c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9ab9730507bab5f165bf9c19aeb80f1be2398fc2ae84247db6cc179b1618c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9ab9730507bab5f165bf9c19aeb80f1be2398fc2ae84247db6cc179b1618c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9ab9730507bab5f165bf9c19aeb80f1be2398fc2ae84247db6cc179b1618c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:07 np0005603621 podman[315842]: 2026-01-31 08:20:07.924212072 +0000 UTC m=+0.276333429 container init 27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:20:07 np0005603621 podman[315842]: 2026-01-31 08:20:07.930454218 +0000 UTC m=+0.282575545 container start 27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:20:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 226 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 2.2 MiB/s wr, 38 op/s
Jan 31 03:20:07 np0005603621 podman[315842]: 2026-01-31 08:20:07.956776412 +0000 UTC m=+0.308897739 container attach 27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:20:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4029064444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.102 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.104 247403 DEBUG nova.virt.libvirt.vif [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1598984785',display_name='tempest-ServerRescueTestJSON-server-1598984785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1598984785',id=105,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-7pc5icog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:48Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=b6bf273c-d5a3-4f02-bddd-465a846a764d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.105 247403 DEBUG nova.network.os_vif_util [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.106 247403 DEBUG nova.network.os_vif_util [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.107 247403 DEBUG nova.objects.instance [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:08.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:08 np0005603621 sad_tu[315882]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:20:08 np0005603621 sad_tu[315882]: --> relative data size: 1.0
Jan 31 03:20:08 np0005603621 sad_tu[315882]: --> All data devices are unavailable
Jan 31 03:20:08 np0005603621 systemd[1]: libpod-27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34.scope: Deactivated successfully.
Jan 31 03:20:08 np0005603621 conmon[315882]: conmon 27be841e3a19e4871748 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34.scope/container/memory.events
Jan 31 03:20:08 np0005603621 podman[315899]: 2026-01-31 08:20:08.727951985 +0000 UTC m=+0.023766326 container died 27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.867 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <uuid>b6bf273c-d5a3-4f02-bddd-465a846a764d</uuid>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <name>instance-00000069</name>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerRescueTestJSON-server-1598984785</nova:name>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:20:07</nova:creationTime>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:user uuid="a8897cd859ff4a79a1a16eaee71d22ed">tempest-ServerRescueTestJSON-476946386-project-member</nova:user>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:project uuid="29d7f464a8694725aa9692aac772c256">tempest-ServerRescueTestJSON-476946386</nova:project>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <nova:port uuid="f45c5fd8-45be-479f-bfb8-5305390417f3">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <entry name="serial">b6bf273c-d5a3-4f02-bddd-465a846a764d</entry>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <entry name="uuid">b6bf273c-d5a3-4f02-bddd-465a846a764d</entry>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b6bf273c-d5a3-4f02-bddd-465a846a764d_disk">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:78:55:8c"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <target dev="tapf45c5fd8-45"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/console.log" append="off"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:20:08 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:20:08 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:20:08 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:20:08 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.871 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Preparing to wait for external event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.871 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.871 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.872 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.873 247403 DEBUG nova.virt.libvirt.vif [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:19:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1598984785',display_name='tempest-ServerRescueTestJSON-server-1598984785',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1598984785',id=105,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-7pc5icog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:19:48Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=b6bf273c-d5a3-4f02-bddd-465a846a764d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.873 247403 DEBUG nova.network.os_vif_util [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.874 247403 DEBUG nova.network.os_vif_util [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.875 247403 DEBUG os_vif [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.875 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.876 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.877 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.881 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.881 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf45c5fd8-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.882 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf45c5fd8-45, col_values=(('external_ids', {'iface-id': 'f45c5fd8-45be-479f-bfb8-5305390417f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:55:8c', 'vm-uuid': 'b6bf273c-d5a3-4f02-bddd-465a846a764d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:08 np0005603621 NetworkManager[49013]: <info>  [1769847608.8848] manager: (tapf45c5fd8-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.884 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.890 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:08 np0005603621 nova_compute[247399]: 2026-01-31 08:20:08.891 247403 INFO os_vif [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45')#033[00m
Jan 31 03:20:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-db9ab9730507bab5f165bf9c19aeb80f1be2398fc2ae84247db6cc179b1618c5-merged.mount: Deactivated successfully.
Jan 31 03:20:09 np0005603621 nova_compute[247399]: 2026-01-31 08:20:09.059 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:09 np0005603621 podman[315899]: 2026-01-31 08:20:09.376597178 +0000 UTC m=+0.672411499 container remove 27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:20:09 np0005603621 systemd[1]: libpod-conmon-27be841e3a19e48717481adb07d8b0dca6b7fee68c0e8894ccc257957f5d4b34.scope: Deactivated successfully.
Jan 31 03:20:09 np0005603621 nova_compute[247399]: 2026-01-31 08:20:09.458 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:09 np0005603621 nova_compute[247399]: 2026-01-31 08:20:09.459 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:09 np0005603621 nova_compute[247399]: 2026-01-31 08:20:09.459 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No VIF found with MAC fa:16:3e:78:55:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:20:09 np0005603621 nova_compute[247399]: 2026-01-31 08:20:09.460 247403 INFO nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Using config drive#033[00m
Jan 31 03:20:09 np0005603621 nova_compute[247399]: 2026-01-31 08:20:09.483 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:09 np0005603621 podman[316075]: 2026-01-31 08:20:09.867471529 +0000 UTC m=+0.101215263 container create 2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:20:09 np0005603621 podman[316075]: 2026-01-31 08:20:09.784016424 +0000 UTC m=+0.017760168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:20:09 np0005603621 systemd[1]: Started libpod-conmon-2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c.scope.
Jan 31 03:20:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:09 np0005603621 podman[316075]: 2026-01-31 08:20:09.933465586 +0000 UTC m=+0.167209360 container init 2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:20:09 np0005603621 podman[316075]: 2026-01-31 08:20:09.9387203 +0000 UTC m=+0.172464034 container start 2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:20:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 260 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 31 03:20:09 np0005603621 systemd[1]: libpod-2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c.scope: Deactivated successfully.
Jan 31 03:20:09 np0005603621 loving_heyrovsky[316092]: 167 167
Jan 31 03:20:09 np0005603621 podman[316075]: 2026-01-31 08:20:09.959967757 +0000 UTC m=+0.193711491 container attach 2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:20:09 np0005603621 podman[316075]: 2026-01-31 08:20:09.961460363 +0000 UTC m=+0.195204107 container died 2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:20:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2dbf098c82112de7ec90e75f7d1cd18e2573099a2e295efd6b846fc42e66e933-merged.mount: Deactivated successfully.
Jan 31 03:20:10 np0005603621 podman[316075]: 2026-01-31 08:20:10.173798066 +0000 UTC m=+0.407541800 container remove 2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 03:20:10 np0005603621 systemd[1]: libpod-conmon-2fa3d11809e38ec268496d7a96e153797ef09317384bf965cb7d01fc2a2aac9c.scope: Deactivated successfully.
Jan 31 03:20:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:10.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:10 np0005603621 podman[316117]: 2026-01-31 08:20:10.363659324 +0000 UTC m=+0.106978652 container create 9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:20:10 np0005603621 podman[316117]: 2026-01-31 08:20:10.279504619 +0000 UTC m=+0.022823967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:20:10 np0005603621 systemd[1]: Started libpod-conmon-9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97.scope.
Jan 31 03:20:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:10 np0005603621 nova_compute[247399]: 2026-01-31 08:20:10.470 247403 INFO nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Creating config drive at /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config#033[00m
Jan 31 03:20:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc9119e6e369c266ebd54044e9b97585afbdd8c36b214ba5f943949cab9a749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc9119e6e369c266ebd54044e9b97585afbdd8c36b214ba5f943949cab9a749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc9119e6e369c266ebd54044e9b97585afbdd8c36b214ba5f943949cab9a749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbc9119e6e369c266ebd54044e9b97585afbdd8c36b214ba5f943949cab9a749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:10 np0005603621 nova_compute[247399]: 2026-01-31 08:20:10.476 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7bkn80ks execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:10 np0005603621 podman[316117]: 2026-01-31 08:20:10.519281701 +0000 UTC m=+0.262601059 container init 9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:20:10 np0005603621 podman[316117]: 2026-01-31 08:20:10.524812194 +0000 UTC m=+0.268131522 container start 9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 03:20:10 np0005603621 nova_compute[247399]: 2026-01-31 08:20:10.601 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7bkn80ks" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:10 np0005603621 podman[316117]: 2026-01-31 08:20:10.658797632 +0000 UTC m=+0.402116980 container attach 9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:20:10 np0005603621 nova_compute[247399]: 2026-01-31 08:20:10.725 247403 DEBUG nova.storage.rbd_utils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:10 np0005603621 nova_compute[247399]: 2026-01-31 08:20:10.731 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.190 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.190 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]: {
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:    "0": [
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:        {
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "devices": [
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "/dev/loop3"
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            ],
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "lv_name": "ceph_lv0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "lv_size": "7511998464",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "name": "ceph_lv0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "tags": {
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.cluster_name": "ceph",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.crush_device_class": "",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.encrypted": "0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.osd_id": "0",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.type": "block",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:                "ceph.vdo": "0"
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            },
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "type": "block",
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:            "vg_name": "ceph_vg0"
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:        }
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]:    ]
Jan 31 03:20:11 np0005603621 busy_dijkstra[316134]: }
Jan 31 03:20:11 np0005603621 systemd[1]: libpod-9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97.scope: Deactivated successfully.
Jan 31 03:20:11 np0005603621 podman[316117]: 2026-01-31 08:20:11.275887007 +0000 UTC m=+1.019206325 container died 9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:20:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bbc9119e6e369c266ebd54044e9b97585afbdd8c36b214ba5f943949cab9a749-merged.mount: Deactivated successfully.
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.493 247403 DEBUG oslo_concurrency.processutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.762s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.494 247403 INFO nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Deleting local config drive /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config because it was imported into RBD.#033[00m
Jan 31 03:20:11 np0005603621 kernel: tapf45c5fd8-45: entered promiscuous mode
Jan 31 03:20:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:11Z|00390|binding|INFO|Claiming lport f45c5fd8-45be-479f-bfb8-5305390417f3 for this chassis.
Jan 31 03:20:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:11Z|00391|binding|INFO|f45c5fd8-45be-479f-bfb8-5305390417f3: Claiming fa:16:3e:78:55:8c 10.100.0.10
Jan 31 03:20:11 np0005603621 NetworkManager[49013]: <info>  [1769847611.5493] manager: (tapf45c5fd8-45): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.558 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:11 np0005603621 systemd-udevd[316206]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:20:11 np0005603621 podman[316117]: 2026-01-31 08:20:11.577329251 +0000 UTC m=+1.320648579 container remove 9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:20:11 np0005603621 NetworkManager[49013]: <info>  [1769847611.5815] device (tapf45c5fd8-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:20:11 np0005603621 NetworkManager[49013]: <info>  [1769847611.5823] device (tapf45c5fd8-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:20:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:11Z|00392|binding|INFO|Setting lport f45c5fd8-45be-479f-bfb8-5305390417f3 ovn-installed in OVS
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.588 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:11 np0005603621 systemd-machined[212769]: New machine qemu-45-instance-00000069.
Jan 31 03:20:11 np0005603621 systemd[1]: Started Virtual Machine qemu-45-instance-00000069.
Jan 31 03:20:11 np0005603621 systemd[1]: libpod-conmon-9aa2e5a6abce2ac1b4eda3d5e646b3b50dd08920f8bd657566fc77bcfec02d97.scope: Deactivated successfully.
Jan 31 03:20:11 np0005603621 nova_compute[247399]: 2026-01-31 08:20:11.685 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:20:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:11Z|00393|binding|INFO|Setting lport f45c5fd8-45be-479f-bfb8-5305390417f3 up in Southbound
Jan 31 03:20:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:11.735 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:55:8c 10.100.0.10'], port_security=['fa:16:3e:78:55:8c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b6bf273c-d5a3-4f02-bddd-465a846a764d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f45c5fd8-45be-479f-bfb8-5305390417f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:11.736 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f45c5fd8-45be-479f-bfb8-5305390417f3 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 bound to our chassis#033[00m
Jan 31 03:20:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:11.737 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:20:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:11.738 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[714fe24a-2795-4117-a844-c9095fa1c511]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 260 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 31 03:20:12 np0005603621 podman[316376]: 2026-01-31 08:20:12.052615953 +0000 UTC m=+0.018220152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:20:12 np0005603621 podman[316376]: 2026-01-31 08:20:12.230095064 +0000 UTC m=+0.195699263 container create 984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.323 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.325 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.337 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.338 247403 INFO nova.compute.claims [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:20:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:12.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.409 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847612.4088247, b6bf273c-d5a3-4f02-bddd-465a846a764d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.412 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] VM Started (Lifecycle Event)#033[00m
Jan 31 03:20:12 np0005603621 systemd[1]: Started libpod-conmon-984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac.scope.
Jan 31 03:20:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.575 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.581 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847612.409011, b6bf273c-d5a3-4f02-bddd-465a846a764d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.581 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:20:12 np0005603621 podman[316376]: 2026-01-31 08:20:12.672279898 +0000 UTC m=+0.637884137 container init 984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:20:12 np0005603621 podman[316376]: 2026-01-31 08:20:12.684562132 +0000 UTC m=+0.650166301 container start 984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:20:12 np0005603621 quizzical_yonath[316417]: 167 167
Jan 31 03:20:12 np0005603621 systemd[1]: libpod-984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac.scope: Deactivated successfully.
Jan 31 03:20:12 np0005603621 conmon[316417]: conmon 984fa86a07b765b7514d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac.scope/container/memory.events
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.705 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.711 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:12 np0005603621 podman[316376]: 2026-01-31 08:20:12.822620959 +0000 UTC m=+0.788225168 container attach 984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:20:12 np0005603621 podman[316376]: 2026-01-31 08:20:12.823383173 +0000 UTC m=+0.788987372 container died 984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.903 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:20:12 np0005603621 nova_compute[247399]: 2026-01-31 08:20:12.993 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3279866ddca042322b18395132ec0e8803e06d662eb2fa87c5afb26bf585146d-merged.mount: Deactivated successfully.
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.025 247403 DEBUG nova.compute.manager [req-654e6492-a315-476d-8ce4-f28f84f10b5c req-b66af982-8ec0-4f14-afd9-904b88276b1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.027 247403 DEBUG oslo_concurrency.lockutils [req-654e6492-a315-476d-8ce4-f28f84f10b5c req-b66af982-8ec0-4f14-afd9-904b88276b1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.027 247403 DEBUG oslo_concurrency.lockutils [req-654e6492-a315-476d-8ce4-f28f84f10b5c req-b66af982-8ec0-4f14-afd9-904b88276b1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.028 247403 DEBUG oslo_concurrency.lockutils [req-654e6492-a315-476d-8ce4-f28f84f10b5c req-b66af982-8ec0-4f14-afd9-904b88276b1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.028 247403 DEBUG nova.compute.manager [req-654e6492-a315-476d-8ce4-f28f84f10b5c req-b66af982-8ec0-4f14-afd9-904b88276b1c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Processing event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.030 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.035 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847613.033967, b6bf273c-d5a3-4f02-bddd-465a846a764d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.036 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] VM Resumed (Lifecycle Event)
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.039 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.046 247403 INFO nova.virt.libvirt.driver [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance spawned successfully.
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.047 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:20:13 np0005603621 podman[316376]: 2026-01-31 08:20:13.084498784 +0000 UTC m=+1.050102953 container remove 984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:20:13 np0005603621 systemd[1]: libpod-conmon-984fa86a07b765b7514d821e361a1dbbd42da34999530684b5805ba9d16da3ac.scope: Deactivated successfully.
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.202 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.213 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.216 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.217 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.217 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.218 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.218 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.219 247403 DEBUG nova.virt.libvirt.driver [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:20:13 np0005603621 podman[316463]: 2026-01-31 08:20:13.291002424 +0000 UTC m=+0.072573464 container create 48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:20:13 np0005603621 podman[316463]: 2026-01-31 08:20:13.242629608 +0000 UTC m=+0.024200648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:20:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762837829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.476 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:20:13 np0005603621 systemd[1]: Started libpod-conmon-48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d.scope.
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.489 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.496 247403 DEBUG nova.compute.provider_tree [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:20:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a941982382587f099aefc9cd88fe9a9764ec6ef27c9a95ebec80a173d92d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a941982382587f099aefc9cd88fe9a9764ec6ef27c9a95ebec80a173d92d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a941982382587f099aefc9cd88fe9a9764ec6ef27c9a95ebec80a173d92d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a941982382587f099aefc9cd88fe9a9764ec6ef27c9a95ebec80a173d92d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:13 np0005603621 podman[316463]: 2026-01-31 08:20:13.569434058 +0000 UTC m=+0.351005108 container init 48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:20:13 np0005603621 podman[316463]: 2026-01-31 08:20:13.576063326 +0000 UTC m=+0.357634366 container start 48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:20:13 np0005603621 podman[316463]: 2026-01-31 08:20:13.838411265 +0000 UTC m=+0.619982335 container attach 48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:20:13 np0005603621 nova_compute[247399]: 2026-01-31 08:20:13.884 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:20:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 294 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 5.2 MiB/s wr, 75 op/s
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:20:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.296 247403 INFO nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Took 25.55 seconds to spawn the instance on the hypervisor.
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.297 247403 DEBUG nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:20:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:14.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:14.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]: {
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:        "osd_id": 0,
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:        "type": "bluestore"
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]:    }
Jan 31 03:20:14 np0005603621 charming_mirzakhani[316481]: }
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.379 247403 DEBUG nova.scheduler.client.report [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:20:14 np0005603621 systemd[1]: libpod-48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d.scope: Deactivated successfully.
Jan 31 03:20:14 np0005603621 podman[316463]: 2026-01-31 08:20:14.390996949 +0000 UTC m=+1.172567989 container died 48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.556 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.558 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:20:14 np0005603621 nova_compute[247399]: 2026-01-31 08:20:14.831 247403 INFO nova.compute.manager [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Took 29.12 seconds to build instance.
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.000 247403 DEBUG oslo_concurrency.lockutils [None req-1ad1a518-3627-4b9a-b0ac-d7f23494084a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 29.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.073 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.074 247403 DEBUG nova.network.neutron [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:20:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2b8a941982382587f099aefc9cd88fe9a9764ec6ef27c9a95ebec80a173d92d4-merged.mount: Deactivated successfully.
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.327 247403 DEBUG nova.policy [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4d66dd0b7ff443cbcdb6e2c9f5c4c8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf024d54545b4af882a87c721105742a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.371 247403 DEBUG nova.compute.manager [req-6759a191-4617-4074-9e6e-dc2d074a7b9b req-785056b2-4add-439a-b45a-2e32f17cb441 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.371 247403 DEBUG oslo_concurrency.lockutils [req-6759a191-4617-4074-9e6e-dc2d074a7b9b req-785056b2-4add-439a-b45a-2e32f17cb441 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.372 247403 DEBUG oslo_concurrency.lockutils [req-6759a191-4617-4074-9e6e-dc2d074a7b9b req-785056b2-4add-439a-b45a-2e32f17cb441 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.372 247403 DEBUG oslo_concurrency.lockutils [req-6759a191-4617-4074-9e6e-dc2d074a7b9b req-785056b2-4add-439a-b45a-2e32f17cb441 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.372 247403 DEBUG nova.compute.manager [req-6759a191-4617-4074-9e6e-dc2d074a7b9b req-785056b2-4add-439a-b45a-2e32f17cb441 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.373 247403 WARNING nova.compute.manager [req-6759a191-4617-4074-9e6e-dc2d074a7b9b req-785056b2-4add-439a-b45a-2e32f17cb441 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received unexpected event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with vm_state active and task_state None.
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.404 247403 INFO nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:20:15 np0005603621 podman[316463]: 2026-01-31 08:20:15.450910008 +0000 UTC m=+2.232481048 container remove 48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_mirzakhani, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:20:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:20:15 np0005603621 systemd[1]: libpod-conmon-48ed4d8ce8abd64678885d43dcb8adbaf8ed1ca18ea85e60be1d0293b2a4a37d.scope: Deactivated successfully.
Jan 31 03:20:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:20:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:20:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:20:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7a65a7f5-0071-4a16-acc3-1b5d1ad06512 does not exist
Jan 31 03:20:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9c1f6de6-edcb-4599-96f0-b0599a0fc989 does not exist
Jan 31 03:20:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 20409606-20ac-4189-beeb-dd9b19691340 does not exist
Jan 31 03:20:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 306 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 31 03:20:15 np0005603621 nova_compute[247399]: 2026-01-31 08:20:15.992 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:20:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:20:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:20:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:16.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:20:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:16.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.453 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.455 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.455 247403 INFO nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Creating image(s)
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.533 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.561 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.584 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.587 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.636 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.638 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.638 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.639 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.661 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:16 np0005603621 nova_compute[247399]: 2026-01-31 08:20:16.665 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f02cbbe1-1133-4659-a065-630c53ee2683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:17 np0005603621 nova_compute[247399]: 2026-01-31 08:20:17.121 247403 DEBUG nova.network.neutron [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Successfully created port: cc10268d-b3b3-404e-ba33-00ef9ef3ce4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:20:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:20:17 np0005603621 nova_compute[247399]: 2026-01-31 08:20:17.738 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f02cbbe1-1133-4659-a065-630c53ee2683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:17 np0005603621 nova_compute[247399]: 2026-01-31 08:20:17.809 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] resizing rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:20:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 306 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Jan 31 03:20:17 np0005603621 nova_compute[247399]: 2026-01-31 08:20:17.968 247403 DEBUG nova.objects.instance [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'migration_context' on Instance uuid f02cbbe1-1133-4659-a065-630c53ee2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:18 np0005603621 nova_compute[247399]: 2026-01-31 08:20:18.147 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:20:18 np0005603621 nova_compute[247399]: 2026-01-31 08:20:18.148 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Ensure instance console log exists: /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:20:18 np0005603621 nova_compute[247399]: 2026-01-31 08:20:18.148 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:18 np0005603621 nova_compute[247399]: 2026-01-31 08:20:18.148 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:18 np0005603621 nova_compute[247399]: 2026-01-31 08:20:18.149 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:18.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:18 np0005603621 nova_compute[247399]: 2026-01-31 08:20:18.888 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603621 nova_compute[247399]: 2026-01-31 08:20:19.101 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:19 np0005603621 nova_compute[247399]: 2026-01-31 08:20:19.416 247403 INFO nova.compute.manager [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Rescuing#033[00m
Jan 31 03:20:19 np0005603621 nova_compute[247399]: 2026-01-31 08:20:19.417 247403 DEBUG oslo_concurrency.lockutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:19 np0005603621 nova_compute[247399]: 2026-01-31 08:20:19.417 247403 DEBUG oslo_concurrency.lockutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquired lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:19 np0005603621 nova_compute[247399]: 2026-01-31 08:20:19.417 247403 DEBUG nova.network.neutron [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:20:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 333 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.3 MiB/s wr, 223 op/s
Jan 31 03:20:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:20.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:20.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:20 np0005603621 nova_compute[247399]: 2026-01-31 08:20:20.634 247403 DEBUG nova.network.neutron [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Successfully updated port: cc10268d-b3b3-404e-ba33-00ef9ef3ce4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:20:21 np0005603621 nova_compute[247399]: 2026-01-31 08:20:21.186 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:21 np0005603621 nova_compute[247399]: 2026-01-31 08:20:21.187 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquired lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:21 np0005603621 nova_compute[247399]: 2026-01-31 08:20:21.187 247403 DEBUG nova.network.neutron [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:20:21 np0005603621 nova_compute[247399]: 2026-01-31 08:20:21.209 247403 DEBUG nova.compute.manager [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-changed-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:21 np0005603621 nova_compute[247399]: 2026-01-31 08:20:21.210 247403 DEBUG nova.compute.manager [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Refreshing instance network info cache due to event network-changed-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:20:21 np0005603621 nova_compute[247399]: 2026-01-31 08:20:21.210 247403 DEBUG oslo_concurrency.lockutils [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 333 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.9 MiB/s wr, 207 op/s
Jan 31 03:20:22 np0005603621 nova_compute[247399]: 2026-01-31 08:20:22.340 247403 DEBUG nova.network.neutron [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:20:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:22.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:22.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:23 np0005603621 nova_compute[247399]: 2026-01-31 08:20:23.022 247403 DEBUG nova.network.neutron [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updating instance_info_cache with network_info: [{"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:23 np0005603621 nova_compute[247399]: 2026-01-31 08:20:23.260 247403 DEBUG oslo_concurrency.lockutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Releasing lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:23 np0005603621 podman[316735]: 2026-01-31 08:20:23.500422355 +0000 UTC m=+0.053650322 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:20:23 np0005603621 podman[316736]: 2026-01-31 08:20:23.520833795 +0000 UTC m=+0.072710649 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:20:23 np0005603621 nova_compute[247399]: 2026-01-31 08:20:23.845 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:20:23 np0005603621 nova_compute[247399]: 2026-01-31 08:20:23.848 247403 DEBUG nova.network.neutron [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updating instance_info_cache with network_info: [{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:23 np0005603621 nova_compute[247399]: 2026-01-31 08:20:23.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 353 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 273 op/s
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.190 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Releasing lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.190 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Instance network_info: |[{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.190 247403 DEBUG oslo_concurrency.lockutils [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.191 247403 DEBUG nova.network.neutron [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Refreshing network info cache for port cc10268d-b3b3-404e-ba33-00ef9ef3ce4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.193 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Start _get_guest_xml network_info=[{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.196 247403 WARNING nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.203 247403 DEBUG nova.virt.libvirt.host [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.204 247403 DEBUG nova.virt.libvirt.host [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.206 247403 DEBUG nova.virt.libvirt.host [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.207 247403 DEBUG nova.virt.libvirt.host [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.208 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.208 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.209 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.209 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.209 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.209 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.210 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.210 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.210 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.210 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.210 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.211 247403 DEBUG nova.virt.hardware [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.214 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:20:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:24.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:20:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:24.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451398308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.644 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.667 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:24 np0005603621 nova_compute[247399]: 2026-01-31 08:20:24.671 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2269187978' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.125 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.127 247403 DEBUG nova.virt.libvirt.vif [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1916861428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1916861428',id=109,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-7u9orvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:16Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=f02cbbe1-1133-4659-a065-630c53ee2683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.127 247403 DEBUG nova.network.os_vif_util [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.128 247403 DEBUG nova.network.os_vif_util [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.130 247403 DEBUG nova.objects.instance [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'pci_devices' on Instance uuid f02cbbe1-1133-4659-a065-630c53ee2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.267 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <uuid>f02cbbe1-1133-4659-a065-630c53ee2683</uuid>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <name>instance-0000006d</name>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1916861428</nova:name>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:20:24</nova:creationTime>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:user uuid="f4d66dd0b7ff443cbcdb6e2c9f5c4c8c">tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member</nova:user>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:project uuid="cf024d54545b4af882a87c721105742a">tempest-ServerBootFromVolumeStableRescueTest-468517745</nova:project>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <nova:port uuid="cc10268d-b3b3-404e-ba33-00ef9ef3ce4f">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <entry name="serial">f02cbbe1-1133-4659-a065-630c53ee2683</entry>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <entry name="uuid">f02cbbe1-1133-4659-a065-630c53ee2683</entry>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f02cbbe1-1133-4659-a065-630c53ee2683_disk">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f02cbbe1-1133-4659-a065-630c53ee2683_disk.config">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:fb:99:c6"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <target dev="tapcc10268d-b3"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/console.log" append="off"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:20:25 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:20:25 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:20:25 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:20:25 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.268 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Preparing to wait for external event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.268 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.269 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.269 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.270 247403 DEBUG nova.virt.libvirt.vif [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1916861428',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1916861428',id=109,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-7u9orvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:16Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=f02cbbe1-1133-4659-a065-630c53ee2683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.270 247403 DEBUG nova.network.os_vif_util [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.271 247403 DEBUG nova.network.os_vif_util [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.271 247403 DEBUG os_vif [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.271 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.272 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.272 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.274 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.274 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcc10268d-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.275 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcc10268d-b3, col_values=(('external_ids', {'iface-id': 'cc10268d-b3b3-404e-ba33-00ef9ef3ce4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:99:c6', 'vm-uuid': 'f02cbbe1-1133-4659-a065-630c53ee2683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:25 np0005603621 NetworkManager[49013]: <info>  [1769847625.2773] manager: (tapcc10268d-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.276 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.282 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.283 247403 INFO os_vif [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3')#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.534 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.535 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.535 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No VIF found with MAC fa:16:3e:fb:99:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.536 247403 INFO nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Using config drive#033[00m
Jan 31 03:20:25 np0005603621 nova_compute[247399]: 2026-01-31 08:20:25.563 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 353 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.1 MiB/s wr, 293 op/s
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.164 247403 INFO nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Creating config drive at /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/disk.config#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.171 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpa3r4ug12 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.191 247403 DEBUG nova.network.neutron [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updated VIF entry in instance network info cache for port cc10268d-b3b3-404e-ba33-00ef9ef3ce4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.192 247403 DEBUG nova.network.neutron [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updating instance_info_cache with network_info: [{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.294 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpa3r4ug12" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:20:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.393 247403 DEBUG nova.storage.rbd_utils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image f02cbbe1-1133-4659-a065-630c53ee2683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.397 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/disk.config f02cbbe1-1133-4659-a065-630c53ee2683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.418 247403 DEBUG oslo_concurrency.lockutils [req-cfe02923-1983-445c-980a-e1993c01fdd2 req-b9af464e-9581-497d-af8a-9d8a285894b8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.737 247403 DEBUG oslo_concurrency.processutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/disk.config f02cbbe1-1133-4659-a065-630c53ee2683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.738 247403 INFO nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Deleting local config drive /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683/disk.config because it was imported into RBD.#033[00m
Jan 31 03:20:26 np0005603621 kernel: tapcc10268d-b3: entered promiscuous mode
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.777 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:26 np0005603621 NetworkManager[49013]: <info>  [1769847626.7789] manager: (tapcc10268d-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Jan 31 03:20:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:26Z|00394|binding|INFO|Claiming lport cc10268d-b3b3-404e-ba33-00ef9ef3ce4f for this chassis.
Jan 31 03:20:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:26Z|00395|binding|INFO|cc10268d-b3b3-404e-ba33-00ef9ef3ce4f: Claiming fa:16:3e:fb:99:c6 10.100.0.4
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.801 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:26 np0005603621 systemd-machined[212769]: New machine qemu-46-instance-0000006d.
Jan 31 03:20:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:26Z|00396|binding|INFO|Setting lport cc10268d-b3b3-404e-ba33-00ef9ef3ce4f ovn-installed in OVS
Jan 31 03:20:26 np0005603621 nova_compute[247399]: 2026-01-31 08:20:26.807 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:26 np0005603621 systemd[1]: Started Virtual Machine qemu-46-instance-0000006d.
Jan 31 03:20:26 np0005603621 systemd-udevd[316967]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:20:26 np0005603621 NetworkManager[49013]: <info>  [1769847626.8396] device (tapcc10268d-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:20:26 np0005603621 NetworkManager[49013]: <info>  [1769847626.8406] device (tapcc10268d-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:20:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:26Z|00397|binding|INFO|Setting lport cc10268d-b3b3-404e-ba33-00ef9ef3ce4f up in Southbound
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.892 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:99:c6 10.100.0.4'], port_security=['fa:16:3e:fb:99:c6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f02cbbe1-1133-4659-a065-630c53ee2683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.893 159734 INFO neutron.agent.ovn.metadata.agent [-] Port cc10268d-b3b3-404e-ba33-00ef9ef3ce4f in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab bound to our chassis#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.895 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.904 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1a029c-c2fa-41ba-a28e-1ee06f06d29f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.904 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap98be5db6-51 in ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.906 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap98be5db6-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.906 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[216151e3-5558-4176-b0f7-f553d80cf76f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.907 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d646b2cd-6856-4673-8e86-ea5459e80dc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.917 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f909bd4b-5072-4ad9-a678-3f060d897e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.928 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[33e885b3-b4c3-4086-a02e-c4d3e6f9bdf6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.951 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fe9db1-081c-4400-89ea-285e5f94f2b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.956 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[203047f7-deed-43ee-98f9-961947606333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 NetworkManager[49013]: <info>  [1769847626.9575] manager: (tap98be5db6-50): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.980 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[58860645-53c6-4657-9210-0d326d4503a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:26.983 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f4549961-4367-4f41-bd6c-d6996d566a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 NetworkManager[49013]: <info>  [1769847627.0010] device (tap98be5db6-50): carrier: link connected
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.005 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f269841e-5704-4dbb-987c-deeaf5d59d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.018 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[012a5b92-d48b-459e-b831-5b4bf3a367e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 19643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317000, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.030 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ead16038-10a9-4cba-bdb5-6f1b529320d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:3a3f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693251, 'tstamp': 693251}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317001, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.043 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f22cd8f8-285b-4a8a-82fa-30c2c2b75271]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 19643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317002, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.064 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc9ec59-75f5-41e6-8a86-1f80438b48c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.109 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e928691-fffa-485f-8d03-feeee92937cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:27 np0005603621 kernel: tap98be5db6-50: entered promiscuous mode
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.152 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:27 np0005603621 NetworkManager[49013]: <info>  [1769847627.1582] manager: (tap98be5db6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.158 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.159 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.160 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:27 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:27Z|00398|binding|INFO|Releasing lport dad27cfe-7e8a-4f55-a945-07f9cae848c1 from this chassis (sb_readonly=0)
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.166 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.166 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/98be5db6-5633-4d23-b9a9-16382d8e99ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/98be5db6-5633-4d23-b9a9-16382d8e99ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.167 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[70593d0f-8cd4-41a2-8af8-7a7e6c8f84ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.168 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-98be5db6-5633-4d23-b9a9-16382d8e99ab
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/98be5db6-5633-4d23-b9a9-16382d8e99ab.pid.haproxy
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 98be5db6-5633-4d23-b9a9-16382d8e99ab
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:20:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:27.169 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'env', 'PROCESS_TAG=haproxy-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/98be5db6-5633-4d23-b9a9-16382d8e99ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.393 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847627.392849, f02cbbe1-1133-4659-a065-630c53ee2683 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.393 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] VM Started (Lifecycle Event)#033[00m
Jan 31 03:20:27 np0005603621 podman[317076]: 2026-01-31 08:20:27.478895909 +0000 UTC m=+0.044454263 container create a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:20:27 np0005603621 systemd[1]: Started libpod-conmon-a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724.scope.
Jan 31 03:20:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:20:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6177bf735eff8a7e9956e732c3d523a0a76c36ee9257961eea414ab6962a474/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:20:27 np0005603621 podman[317076]: 2026-01-31 08:20:27.45306793 +0000 UTC m=+0.018626304 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:20:27 np0005603621 podman[317076]: 2026-01-31 08:20:27.557565624 +0000 UTC m=+0.123124028 container init a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:20:27 np0005603621 podman[317076]: 2026-01-31 08:20:27.562005413 +0000 UTC m=+0.127563767 container start a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:20:27 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [NOTICE]   (317095) : New worker (317097) forked
Jan 31 03:20:27 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [NOTICE]   (317095) : Loading success.
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.625 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.629 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847627.3929777, f02cbbe1-1133-4659-a065-630c53ee2683 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:27 np0005603621 nova_compute[247399]: 2026-01-31 08:20:27.629 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:20:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 355 MiB data, 990 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.1 MiB/s wr, 277 op/s
Jan 31 03:20:28 np0005603621 nova_compute[247399]: 2026-01-31 08:20:28.063 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:28 np0005603621 nova_compute[247399]: 2026-01-31 08:20:28.066 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:28 np0005603621 nova_compute[247399]: 2026-01-31 08:20:28.239 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:20:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.104 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.883 247403 DEBUG nova.compute.manager [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.883 247403 DEBUG oslo_concurrency.lockutils [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.884 247403 DEBUG oslo_concurrency.lockutils [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.884 247403 DEBUG oslo_concurrency.lockutils [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.884 247403 DEBUG nova.compute.manager [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Processing event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.884 247403 DEBUG nova.compute.manager [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.884 247403 DEBUG oslo_concurrency.lockutils [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.885 247403 DEBUG oslo_concurrency.lockutils [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.885 247403 DEBUG oslo_concurrency.lockutils [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.885 247403 DEBUG nova.compute.manager [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] No waiting events found dispatching network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.885 247403 WARNING nova.compute.manager [req-fd76e93f-1581-4052-a738-2c7aaf78ca7b req-5868b9c3-fdfd-422d-ac47-db98f98189eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received unexpected event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.886 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.891 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.892 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847629.8910449, f02cbbe1-1133-4659-a065-630c53ee2683 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.892 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.896 247403 INFO nova.virt.libvirt.driver [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Instance spawned successfully.#033[00m
Jan 31 03:20:29 np0005603621 nova_compute[247399]: 2026-01-31 08:20:29.897 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:20:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 385 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.9 MiB/s wr, 400 op/s
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.264 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.268 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.269 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.269 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.270 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.270 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.270 247403 DEBUG nova.virt.libvirt.driver [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.273 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:20:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:30.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:20:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.452 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:20:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:30.502 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:30.503 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:30.503 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:30 np0005603621 nova_compute[247399]: 2026-01-31 08:20:30.918 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:20:31 np0005603621 nova_compute[247399]: 2026-01-31 08:20:31.285 247403 INFO nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Took 14.83 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:20:31 np0005603621 nova_compute[247399]: 2026-01-31 08:20:31.286 247403 DEBUG nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:31 np0005603621 nova_compute[247399]: 2026-01-31 08:20:31.574 247403 INFO nova.compute.manager [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Took 19.29 seconds to build instance.#033[00m
Jan 31 03:20:31 np0005603621 nova_compute[247399]: 2026-01-31 08:20:31.771 247403 DEBUG oslo_concurrency.lockutils [None req-af0da27f-244e-4a3a-afed-d6b262df51f6 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 385 MiB data, 998 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 2.8 MiB/s wr, 330 op/s
Jan 31 03:20:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:33 np0005603621 nova_compute[247399]: 2026-01-31 08:20:33.886 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:20:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 407 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 4.5 MiB/s wr, 414 op/s
Jan 31 03:20:34 np0005603621 nova_compute[247399]: 2026-01-31 08:20:34.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:20:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:34.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:20:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:35 np0005603621 nova_compute[247399]: 2026-01-31 08:20:35.280 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:35 np0005603621 nova_compute[247399]: 2026-01-31 08:20:35.452 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 416 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.3 MiB/s wr, 386 op/s
Jan 31 03:20:36 np0005603621 kernel: tapf45c5fd8-45 (unregistering): left promiscuous mode
Jan 31 03:20:36 np0005603621 NetworkManager[49013]: <info>  [1769847636.1739] device (tapf45c5fd8-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:20:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:36Z|00399|binding|INFO|Releasing lport f45c5fd8-45be-479f-bfb8-5305390417f3 from this chassis (sb_readonly=0)
Jan 31 03:20:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:36Z|00400|binding|INFO|Setting lport f45c5fd8-45be-479f-bfb8-5305390417f3 down in Southbound
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.181 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:36Z|00401|binding|INFO|Removing iface tapf45c5fd8-45 ovn-installed in OVS
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.187 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:36.234 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:55:8c 10.100.0.10'], port_security=['fa:16:3e:78:55:8c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b6bf273c-d5a3-4f02-bddd-465a846a764d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f45c5fd8-45be-479f-bfb8-5305390417f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:36 np0005603621 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 31 03:20:36 np0005603621 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000069.scope: Consumed 12.899s CPU time.
Jan 31 03:20:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:36.237 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f45c5fd8-45be-479f-bfb8-5305390417f3 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 unbound from our chassis#033[00m
Jan 31 03:20:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:36.238 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:20:36 np0005603621 systemd-machined[212769]: Machine qemu-45-instance-00000069 terminated.
Jan 31 03:20:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:36.239 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b73c6676-db9a-44f5-a9c6-707ded96ff2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:36.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:36.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.404 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.782 247403 DEBUG nova.compute.manager [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.899 247403 INFO nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance shutdown successfully after 13 seconds.#033[00m
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.907 247403 INFO nova.virt.libvirt.driver [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance destroyed successfully.#033[00m
Jan 31 03:20:36 np0005603621 nova_compute[247399]: 2026-01-31 08:20:36.907 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'numa_topology' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.614 247403 INFO nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Attempting rescue#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.615 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.619 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.619 247403 INFO nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Creating image(s)#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.646 247403 DEBUG nova.storage.rbd_utils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.649 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:37 np0005603621 nova_compute[247399]: 2026-01-31 08:20:37.714 247403 INFO nova.compute.manager [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] instance snapshotting#033[00m
Jan 31 03:20:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 378 op/s
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.099 247403 DEBUG nova.compute.manager [req-8f70ae15-74e1-4302-92d8-232c521f18be req-83f787f8-3e13-48fc-bed3-93d6b2d73dd7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-unplugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.100 247403 DEBUG oslo_concurrency.lockutils [req-8f70ae15-74e1-4302-92d8-232c521f18be req-83f787f8-3e13-48fc-bed3-93d6b2d73dd7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.100 247403 DEBUG oslo_concurrency.lockutils [req-8f70ae15-74e1-4302-92d8-232c521f18be req-83f787f8-3e13-48fc-bed3-93d6b2d73dd7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.100 247403 DEBUG oslo_concurrency.lockutils [req-8f70ae15-74e1-4302-92d8-232c521f18be req-83f787f8-3e13-48fc-bed3-93d6b2d73dd7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.100 247403 DEBUG nova.compute.manager [req-8f70ae15-74e1-4302-92d8-232c521f18be req-83f787f8-3e13-48fc-bed3-93d6b2d73dd7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-unplugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.100 247403 WARNING nova.compute.manager [req-8f70ae15-74e1-4302-92d8-232c521f18be req-83f787f8-3e13-48fc-bed3-93d6b2d73dd7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received unexpected event network-vif-unplugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.326 247403 DEBUG nova.storage.rbd_utils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.350 247403 DEBUG nova.storage.rbd_utils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.355 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.374 247403 INFO nova.virt.libvirt.driver [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Beginning live snapshot process#033[00m
Jan 31 03:20:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:38.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:38.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.406 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.408 247403 DEBUG oslo_concurrency.lockutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.408 247403 DEBUG oslo_concurrency.lockutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.409 247403 DEBUG oslo_concurrency.lockutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.433 247403 DEBUG nova.storage.rbd_utils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.436 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:20:38
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'volumes', '.mgr']
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.753 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:38 np0005603621 nova_compute[247399]: 2026-01-31 08:20:38.754 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'migration_context' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:20:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.004 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.005 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Start _get_guest_xml network_info=[{"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1486638082-network", "vif_mac": "fa:16:3e:78:55:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.005 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'resources' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3835187053
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.769 247403 DEBUG nova.virt.libvirt.imagebackend [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.857 247403 WARNING nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.865 247403 DEBUG nova.virt.libvirt.host [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.866 247403 DEBUG nova.virt.libvirt.host [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.868 247403 DEBUG nova.virt.libvirt.host [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.869 247403 DEBUG nova.virt.libvirt.host [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.871 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.871 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.872 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.873 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.873 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.873 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.874 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.874 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.875 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.875 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.876 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.876 247403 DEBUG nova.virt.hardware [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:20:39 np0005603621 nova_compute[247399]: 2026-01-31 08:20:39.876 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 9.2 MiB/s wr, 375 op/s
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.034 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.142 247403 DEBUG nova.storage.rbd_utils [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] creating snapshot(d38bb74ccd114b75bc85772434d16f21) on rbd image(f02cbbe1-1133-4659-a065-630c53ee2683_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.302 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.303 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.303 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.305 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 31 03:20:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:40.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:40.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.478 247403 DEBUG nova.compute.manager [req-280b1fba-1afa-4b40-9285-766852f2f9c2 req-5c1eb0e5-42f0-4658-a5c0-7e350dd53358 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.479 247403 DEBUG oslo_concurrency.lockutils [req-280b1fba-1afa-4b40-9285-766852f2f9c2 req-5c1eb0e5-42f0-4658-a5c0-7e350dd53358 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.479 247403 DEBUG oslo_concurrency.lockutils [req-280b1fba-1afa-4b40-9285-766852f2f9c2 req-5c1eb0e5-42f0-4658-a5c0-7e350dd53358 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.479 247403 DEBUG oslo_concurrency.lockutils [req-280b1fba-1afa-4b40-9285-766852f2f9c2 req-5c1eb0e5-42f0-4658-a5c0-7e350dd53358 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.480 247403 DEBUG nova.compute.manager [req-280b1fba-1afa-4b40-9285-766852f2f9c2 req-5c1eb0e5-42f0-4658-a5c0-7e350dd53358 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.480 247403 WARNING nova.compute.manager [req-280b1fba-1afa-4b40-9285-766852f2f9c2 req-5c1eb0e5-42f0-4658-a5c0-7e350dd53358 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received unexpected event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.502 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.502 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.502 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.503 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:40.511 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.511 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:40.513 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728413451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.596 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.597 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 31 03:20:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 31 03:20:40 np0005603621 nova_compute[247399]: 2026-01-31 08:20:40.837 247403 DEBUG nova.storage.rbd_utils [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] cloning vms/f02cbbe1-1133-4659-a065-630c53ee2683_disk@d38bb74ccd114b75bc85772434d16f21 to images/4659652d-d50c-40d4-aa17-bb68b7e73113 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/716657344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.053 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.054 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.141 247403 DEBUG nova.storage.rbd_utils [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] flattening images/4659652d-d50c-40d4-aa17-bb68b7e73113 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972679225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.514 247403 DEBUG nova.storage.rbd_utils [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] removing snapshot(d38bb74ccd114b75bc85772434d16f21) on rbd image(f02cbbe1-1133-4659-a065-630c53ee2683_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.517 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.518 247403 DEBUG nova.virt.libvirt.vif [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1598984785',display_name='tempest-ServerRescueTestJSON-server-1598984785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1598984785',id=105,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-7pc5icog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:14Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=b6bf273c-d5a3-4f02-bddd-465a846a764d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1486638082-network", "vif_mac": "fa:16:3e:78:55:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.518 247403 DEBUG nova.network.os_vif_util [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1486638082-network", "vif_mac": "fa:16:3e:78:55:8c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.519 247403 DEBUG nova.network.os_vif_util [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.520 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.702 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <uuid>b6bf273c-d5a3-4f02-bddd-465a846a764d</uuid>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <name>instance-00000069</name>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerRescueTestJSON-server-1598984785</nova:name>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:20:39</nova:creationTime>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:user uuid="a8897cd859ff4a79a1a16eaee71d22ed">tempest-ServerRescueTestJSON-476946386-project-member</nova:user>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:project uuid="29d7f464a8694725aa9692aac772c256">tempest-ServerRescueTestJSON-476946386</nova:project>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <nova:port uuid="f45c5fd8-45be-479f-bfb8-5305390417f3">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <entry name="serial">b6bf273c-d5a3-4f02-bddd-465a846a764d</entry>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <entry name="uuid">b6bf273c-d5a3-4f02-bddd-465a846a764d</entry>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.rescue">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b6bf273c-d5a3-4f02-bddd-465a846a764d_disk">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config.rescue">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:78:55:8c"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <target dev="tapf45c5fd8-45"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/console.log" append="off"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:20:41 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:20:41 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:20:41 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:20:41 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:20:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.712 247403 INFO nova.virt.libvirt.driver [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance destroyed successfully.#033[00m
Jan 31 03:20:41 np0005603621 nova_compute[247399]: 2026-01-31 08:20:41.728 247403 DEBUG nova.storage.rbd_utils [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] creating snapshot(snap) on rbd image(4659652d-d50c-40d4-aa17-bb68b7e73113) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:20:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 501 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.3 MiB/s wr, 200 op/s
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.257 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.257 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.258 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.258 247403 DEBUG nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No VIF found with MAC fa:16:3e:78:55:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.258 247403 INFO nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Using config drive#033[00m
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.285 247403 DEBUG nova.storage.rbd_utils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.355 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:42 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:42Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fb:99:c6 10.100.0.4
Jan 31 03:20:42 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:42Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fb:99:c6 10.100.0.4
Jan 31 03:20:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:42.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:42 np0005603621 nova_compute[247399]: 2026-01-31 08:20:42.504 247403 DEBUG nova.objects.instance [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'keypairs' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:20:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 31 03:20:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 31 03:20:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 31 03:20:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 573 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 16 MiB/s wr, 348 op/s
Jan 31 03:20:44 np0005603621 nova_compute[247399]: 2026-01-31 08:20:44.110 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:44.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:44.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:44 np0005603621 nova_compute[247399]: 2026-01-31 08:20:44.450 247403 INFO nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Creating config drive at /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config.rescue#033[00m
Jan 31 03:20:44 np0005603621 nova_compute[247399]: 2026-01-31 08:20:44.454 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx3yqh4pa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:44 np0005603621 nova_compute[247399]: 2026-01-31 08:20:44.577 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpx3yqh4pa" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.593041) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847644593103, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1522, "num_deletes": 256, "total_data_size": 2543305, "memory_usage": 2581872, "flush_reason": "Manual Compaction"}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 31 03:20:44 np0005603621 nova_compute[247399]: 2026-01-31 08:20:44.604 247403 DEBUG nova.storage.rbd_utils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:44 np0005603621 nova_compute[247399]: 2026-01-31 08:20:44.607 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config.rescue b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847644616347, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2491355, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43886, "largest_seqno": 45406, "table_properties": {"data_size": 2484388, "index_size": 3974, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15083, "raw_average_key_size": 19, "raw_value_size": 2470159, "raw_average_value_size": 3271, "num_data_blocks": 175, "num_entries": 755, "num_filter_entries": 755, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847505, "oldest_key_time": 1769847505, "file_creation_time": 1769847644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 23352 microseconds, and 4636 cpu microseconds.
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.616390) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2491355 bytes OK
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.616408) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.618148) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.618162) EVENT_LOG_v1 {"time_micros": 1769847644618158, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.618182) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2536751, prev total WAL file size 2536751, number of live WAL files 2.
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.618825) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373538' seq:0, type:0; will stop at (end)
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2432KB)], [95(11MB)]
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847644618899, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 14057925, "oldest_snapshot_seqno": -1}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7436 keys, 13922257 bytes, temperature: kUnknown
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847644890458, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 13922257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13869245, "index_size": 33316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 191256, "raw_average_key_size": 25, "raw_value_size": 13733591, "raw_average_value_size": 1846, "num_data_blocks": 1328, "num_entries": 7436, "num_filter_entries": 7436, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847644, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.890724) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13922257 bytes
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.922095) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.8 rd, 51.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 11.0 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 7965, records dropped: 529 output_compression: NoCompression
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.922146) EVENT_LOG_v1 {"time_micros": 1769847644922127, "job": 56, "event": "compaction_finished", "compaction_time_micros": 271644, "compaction_time_cpu_micros": 29227, "output_level": 6, "num_output_files": 1, "total_output_size": 13922257, "num_input_records": 7965, "num_output_records": 7436, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847644922659, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847644924777, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.618690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.924856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.924863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.924867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.924871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:44 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:44.924875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:45 np0005603621 nova_compute[247399]: 2026-01-31 08:20:45.307 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603621 nova_compute[247399]: 2026-01-31 08:20:45.443 247403 DEBUG oslo_concurrency.processutils [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config.rescue b6bf273c-d5a3-4f02-bddd-465a846a764d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.836s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:45 np0005603621 nova_compute[247399]: 2026-01-31 08:20:45.446 247403 INFO nova.virt.libvirt.driver [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Deleting local config drive /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:20:45 np0005603621 kernel: tapf45c5fd8-45: entered promiscuous mode
Jan 31 03:20:45 np0005603621 NetworkManager[49013]: <info>  [1769847645.4872] manager: (tapf45c5fd8-45): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 31 03:20:45 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:45Z|00402|binding|INFO|Claiming lport f45c5fd8-45be-479f-bfb8-5305390417f3 for this chassis.
Jan 31 03:20:45 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:45Z|00403|binding|INFO|f45c5fd8-45be-479f-bfb8-5305390417f3: Claiming fa:16:3e:78:55:8c 10.100.0.10
Jan 31 03:20:45 np0005603621 nova_compute[247399]: 2026-01-31 08:20:45.523 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:45Z|00404|binding|INFO|Setting lport f45c5fd8-45be-479f-bfb8-5305390417f3 ovn-installed in OVS
Jan 31 03:20:45 np0005603621 nova_compute[247399]: 2026-01-31 08:20:45.528 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603621 nova_compute[247399]: 2026-01-31 08:20:45.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:45 np0005603621 systemd-udevd[317507]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:20:45 np0005603621 systemd-machined[212769]: New machine qemu-47-instance-00000069.
Jan 31 03:20:45 np0005603621 NetworkManager[49013]: <info>  [1769847645.5504] device (tapf45c5fd8-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:20:45 np0005603621 NetworkManager[49013]: <info>  [1769847645.5509] device (tapf45c5fd8-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:20:45 np0005603621 systemd[1]: Started Virtual Machine qemu-47-instance-00000069.
Jan 31 03:20:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 600 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.5 MiB/s wr, 338 op/s
Jan 31 03:20:45 np0005603621 ovn_controller[149152]: 2026-01-31T08:20:45Z|00405|binding|INFO|Setting lport f45c5fd8-45be-479f-bfb8-5305390417f3 up in Southbound
Jan 31 03:20:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:45.987 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:55:8c 10.100.0.10'], port_security=['fa:16:3e:78:55:8c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b6bf273c-d5a3-4f02-bddd-465a846a764d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f45c5fd8-45be-479f-bfb8-5305390417f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:20:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:45.988 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f45c5fd8-45be-479f-bfb8-5305390417f3 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 bound to our chassis#033[00m
Jan 31 03:20:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:45.989 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:20:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:45.990 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a0160dea-3a3d-4f3f-afc7-6fb127924167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:20:46 np0005603621 nova_compute[247399]: 2026-01-31 08:20:46.392 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updating instance_info_cache with network_info: [{"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:20:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:46 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:20:46.516 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.061 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for b6bf273c-d5a3-4f02-bddd-465a846a764d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.062 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847647.0611029, b6bf273c-d5a3-4f02-bddd-465a846a764d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.062 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.067 247403 DEBUG nova.compute.manager [None req-ca24d5c2-59c6-4381-9594-b4742bb29f31 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.737 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.740 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.796 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-b6bf273c-d5a3-4f02-bddd-465a846a764d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.796 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.797 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.797 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.797 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.797 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.798 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.798 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.847 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.848 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847647.0616462, b6bf273c-d5a3-4f02-bddd-465a846a764d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.848 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] VM Started (Lifecycle Event)#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.935 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.935 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.935 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.936 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.936 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 606 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.9 MiB/s wr, 291 op/s
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.956 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:20:47 np0005603621 nova_compute[247399]: 2026-01-31 08:20:47.964 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.148 247403 DEBUG nova.compute.manager [req-eb1d5be4-2cf7-41ff-9df6-d4fe6f314e6f req-494b0617-8680-47ae-b252-64cfbef755fb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.148 247403 DEBUG oslo_concurrency.lockutils [req-eb1d5be4-2cf7-41ff-9df6-d4fe6f314e6f req-494b0617-8680-47ae-b252-64cfbef755fb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.149 247403 DEBUG oslo_concurrency.lockutils [req-eb1d5be4-2cf7-41ff-9df6-d4fe6f314e6f req-494b0617-8680-47ae-b252-64cfbef755fb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.149 247403 DEBUG oslo_concurrency.lockutils [req-eb1d5be4-2cf7-41ff-9df6-d4fe6f314e6f req-494b0617-8680-47ae-b252-64cfbef755fb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.149 247403 DEBUG nova.compute.manager [req-eb1d5be4-2cf7-41ff-9df6-d4fe6f314e6f req-494b0617-8680-47ae-b252-64cfbef755fb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.149 247403 WARNING nova.compute.manager [req-eb1d5be4-2cf7-41ff-9df6-d4fe6f314e6f req-494b0617-8680-47ae-b252-64cfbef755fb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received unexpected event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:20:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:48.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/905532931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:48.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.428 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.882 247403 INFO nova.virt.libvirt.driver [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Snapshot image upload complete#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.882 247403 INFO nova.compute.manager [None req-aa7eb418-5e37-4e1f-a9ed-2a54d7531a40 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Took 11.17 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.972 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.973 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.976 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.976 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:48 np0005603621 nova_compute[247399]: 2026-01-31 08:20:48.977 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.108 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.109 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4029MB free_disk=20.694782257080078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.109 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.109 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.112 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013983946061325143 of space, bias 1.0, pg target 4.195183818397543 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.3799088032756375e-05 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8560510887772195 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.294 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance b6bf273c-d5a3-4f02-bddd-465a846a764d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.294 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.294 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.294 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:20:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 31 03:20:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 31 03:20:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.388 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855756503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.826 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:49 np0005603621 nova_compute[247399]: 2026-01-31 08:20:49.830 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:20:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 7.3 MiB/s wr, 381 op/s
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.106 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:50.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:50.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.425 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.426 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.427 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.427 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.821 247403 DEBUG nova.compute.manager [req-ef72a49a-a281-40cf-9bd6-70f76ded1fa8 req-30f9ee5a-fafd-476a-b11d-7178c969d983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.822 247403 DEBUG oslo_concurrency.lockutils [req-ef72a49a-a281-40cf-9bd6-70f76ded1fa8 req-30f9ee5a-fafd-476a-b11d-7178c969d983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.822 247403 DEBUG oslo_concurrency.lockutils [req-ef72a49a-a281-40cf-9bd6-70f76ded1fa8 req-30f9ee5a-fafd-476a-b11d-7178c969d983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.822 247403 DEBUG oslo_concurrency.lockutils [req-ef72a49a-a281-40cf-9bd6-70f76ded1fa8 req-30f9ee5a-fafd-476a-b11d-7178c969d983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.822 247403 DEBUG nova.compute.manager [req-ef72a49a-a281-40cf-9bd6-70f76ded1fa8 req-30f9ee5a-fafd-476a-b11d-7178c969d983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.823 247403 WARNING nova.compute.manager [req-ef72a49a-a281-40cf-9bd6-70f76ded1fa8 req-30f9ee5a-fafd-476a-b11d-7178c969d983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received unexpected event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:20:50 np0005603621 nova_compute[247399]: 2026-01-31 08:20:50.831 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:51 np0005603621 nova_compute[247399]: 2026-01-31 08:20:51.923 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:51 np0005603621 nova_compute[247399]: 2026-01-31 08:20:51.924 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.0 MiB/s wr, 268 op/s
Jan 31 03:20:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:52.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:52.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:53 np0005603621 nova_compute[247399]: 2026-01-31 08:20:53.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:20:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 195 op/s
Jan 31 03:20:54 np0005603621 nova_compute[247399]: 2026-01-31 08:20:54.114 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.333076) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847654333110, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 252, "total_data_size": 169714, "memory_usage": 175728, "flush_reason": "Manual Compaction"}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847654335332, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 167472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45407, "largest_seqno": 45760, "table_properties": {"data_size": 165314, "index_size": 322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 6317, "raw_average_key_size": 20, "raw_value_size": 160767, "raw_average_value_size": 527, "num_data_blocks": 14, "num_entries": 305, "num_filter_entries": 305, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847645, "oldest_key_time": 1769847645, "file_creation_time": 1769847654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 2281 microseconds, and 810 cpu microseconds.
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.335359) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 167472 bytes OK
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.335371) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.337019) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.337035) EVENT_LOG_v1 {"time_micros": 1769847654337029, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.337052) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 167332, prev total WAL file size 167332, number of live WAL files 2.
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.337421) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(163KB)], [98(13MB)]
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847654337473, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 14089729, "oldest_snapshot_seqno": -1}
Jan 31 03:20:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:54.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:54.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7224 keys, 10248052 bytes, temperature: kUnknown
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847654480874, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10248052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10201267, "index_size": 27601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 187064, "raw_average_key_size": 25, "raw_value_size": 10074060, "raw_average_value_size": 1394, "num_data_blocks": 1089, "num_entries": 7224, "num_filter_entries": 7224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.481086) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10248052 bytes
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.483423) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.2 rd, 71.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(145.3) write-amplify(61.2) OK, records in: 7741, records dropped: 517 output_compression: NoCompression
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.483469) EVENT_LOG_v1 {"time_micros": 1769847654483453, "job": 58, "event": "compaction_finished", "compaction_time_micros": 143456, "compaction_time_cpu_micros": 20018, "output_level": 6, "num_output_files": 1, "total_output_size": 10248052, "num_input_records": 7741, "num_output_records": 7224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847654483754, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847654485250, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.337333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.485378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.485387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.485388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.485390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:54 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:20:54.485392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:20:54 np0005603621 podman[317677]: 2026-01-31 08:20:54.509569854 +0000 UTC m=+0.059477165 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:20:54 np0005603621 podman[317678]: 2026-01-31 08:20:54.540450222 +0000 UTC m=+0.090582240 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:20:55 np0005603621 nova_compute[247399]: 2026-01-31 08:20:55.313 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 165 KiB/s wr, 127 op/s
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.252 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.252 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.313 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:20:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.504 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.505 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.513 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.513 247403 INFO nova.compute.claims [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:20:56 np0005603621 nova_compute[247399]: 2026-01-31 08:20:56.945 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.004 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.005 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.061 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.176 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404367917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.407 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.412 247403 DEBUG nova.compute.provider_tree [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.439 247403 DEBUG nova.scheduler.client.report [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.544 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.545 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.547 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.556 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.556 247403 INFO nova.compute.claims [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.757 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.758 247403 DEBUG nova.network.neutron [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.849 247403 INFO nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.912 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:57 np0005603621 nova_compute[247399]: 2026-01-31 08:20:57.934 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:20:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 86 KiB/s wr, 144 op/s
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.028 247403 INFO nova.virt.block_device [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Booting with blank volume at /dev/vda#033[00m
Jan 31 03:20:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:20:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822955377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.336 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.340 247403 DEBUG nova.compute.provider_tree [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.408 247403 DEBUG nova.policy [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4d66dd0b7ff443cbcdb6e2c9f5c4c8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf024d54545b4af882a87c721105742a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:20:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:20:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:20:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:20:58.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:20:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:20:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:20:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:20:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.449 247403 DEBUG nova.scheduler.client.report [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.570 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.571 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.759 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.759 247403 DEBUG nova.network.neutron [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:20:58 np0005603621 nova_compute[247399]: 2026-01-31 08:20:58.960 247403 INFO nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.013 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.115 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.167 247403 DEBUG nova.policy [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a8897cd859ff4a79a1a16eaee71d22ed', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '29d7f464a8694725aa9692aac772c256', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:20:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.379 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.381 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.381 247403 INFO nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Creating image(s)#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.406 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.434 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.464 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.467 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.526 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.527 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.527 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.528 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.553 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.557 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 26e4f031-8730-4987-a2df-239ce4b73191_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:20:59 np0005603621 nova_compute[247399]: 2026-01-31 08:20:59.577 247403 DEBUG nova.network.neutron [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Successfully created port: fb9bab50-6b70-4dec-9b6d-9d961083b257 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:21:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 39 KiB/s wr, 183 op/s
Jan 31 03:21:00 np0005603621 nova_compute[247399]: 2026-01-31 08:21:00.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:00.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:00 np0005603621 nova_compute[247399]: 2026-01-31 08:21:00.562 247403 DEBUG nova.network.neutron [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Successfully created port: 30e3cd2c-30a0-4349-817c-7651950c3869 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:21:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 612 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 111 op/s
Jan 31 03:21:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:02.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:02.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:02 np0005603621 nova_compute[247399]: 2026-01-31 08:21:02.508 247403 DEBUG nova.network.neutron [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Successfully updated port: fb9bab50-6b70-4dec-9b6d-9d961083b257 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:21:02 np0005603621 nova_compute[247399]: 2026-01-31 08:21:02.928 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:02 np0005603621 nova_compute[247399]: 2026-01-31 08:21:02.928 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquired lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:02 np0005603621 nova_compute[247399]: 2026-01-31 08:21:02.928 247403 DEBUG nova.network.neutron [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.211 247403 DEBUG nova.network.neutron [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.324 247403 DEBUG nova.network.neutron [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Successfully updated port: 30e3cd2c-30a0-4349-817c-7651950c3869 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.460 247403 DEBUG os_brick.utils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.462 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.472 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.472 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[19c3afb7-5cf0-4da0-bdf3-f3ba45ea52b0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.473 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.480 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.480 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquired lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.480 247403 DEBUG nova.network.neutron [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.479 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.480 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[89a24c49-4153-4341-bbba-e4b51e88e2a3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.483 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.489 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.489 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[8199973d-2a18-4002-91cf-4c99c62cef2e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.490 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3c84d9-3d3b-41d5-bb36-704e4aa700dd]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.491 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.512 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.515 247403 DEBUG os_brick.initiator.connectors.lightos [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.515 247403 DEBUG os_brick.initiator.connectors.lightos [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.515 247403 DEBUG os_brick.initiator.connectors.lightos [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.515 247403 DEBUG os_brick.utils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.516 247403 DEBUG nova.virt.block_device [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating existing volume attachment record: d62d58cb-62c8-4814-ab07-94c3742a7f65 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.522 247403 DEBUG nova.compute.manager [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-changed-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.522 247403 DEBUG nova.compute.manager [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Refreshing instance network info cache due to event network-changed-30e3cd2c-30a0-4349-817c-7651950c3869. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.522 247403 DEBUG oslo_concurrency.lockutils [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.545 247403 DEBUG nova.compute.manager [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-changed-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.545 247403 DEBUG nova.compute.manager [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Refreshing instance network info cache due to event network-changed-fb9bab50-6b70-4dec-9b6d-9d961083b257. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.545 247403 DEBUG oslo_concurrency.lockutils [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:03 np0005603621 nova_compute[247399]: 2026-01-31 08:21:03.787 247403 DEBUG nova.network.neutron [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:21:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 613 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 31 KiB/s wr, 220 op/s
Jan 31 03:21:04 np0005603621 nova_compute[247399]: 2026-01-31 08:21:04.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:04.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:04.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:04 np0005603621 nova_compute[247399]: 2026-01-31 08:21:04.710 247403 DEBUG nova.network.neutron [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:04 np0005603621 nova_compute[247399]: 2026-01-31 08:21:04.781 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Releasing lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:04 np0005603621 nova_compute[247399]: 2026-01-31 08:21:04.782 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance network_info: |[{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:21:04 np0005603621 nova_compute[247399]: 2026-01-31 08:21:04.782 247403 DEBUG oslo_concurrency.lockutils [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:04 np0005603621 nova_compute[247399]: 2026-01-31 08:21:04.782 247403 DEBUG nova.network.neutron [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Refreshing network info cache for port fb9bab50-6b70-4dec-9b6d-9d961083b257 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:21:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4161913083' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:05 np0005603621 nova_compute[247399]: 2026-01-31 08:21:05.319 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:05 np0005603621 nova_compute[247399]: 2026-01-31 08:21:05.694 247403 DEBUG nova.network.neutron [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updating instance_info_cache with network_info: [{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:05 np0005603621 nova_compute[247399]: 2026-01-31 08:21:05.763 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Releasing lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:05 np0005603621 nova_compute[247399]: 2026-01-31 08:21:05.763 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance network_info: |[{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:21:05 np0005603621 nova_compute[247399]: 2026-01-31 08:21:05.764 247403 DEBUG oslo_concurrency.lockutils [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:05 np0005603621 nova_compute[247399]: 2026-01-31 08:21:05.764 247403 DEBUG nova.network.neutron [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Refreshing network info cache for port 30e3cd2c-30a0-4349-817c-7651950c3869 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:21:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 613 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 19 KiB/s wr, 199 op/s
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.208 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.209 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.210 247403 INFO nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Creating image(s)#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.210 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.210 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Ensure instance console log exists: /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.211 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.211 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.211 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.213 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Start _get_guest_xml network_info=[{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afd3af33-f7a2-401c-9400-47dff922d4f4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afd3af33-f7a2-401c-9400-47dff922d4f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'attached_at': '', 'detached_at': '', 'volume_id': 'afd3af33-f7a2-401c-9400-47dff922d4f4', 'serial': 'afd3af33-f7a2-401c-9400-47dff922d4f4'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': 'd62d58cb-62c8-4814-ab07-94c3742a7f65', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.218 247403 WARNING nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.227 247403 DEBUG nova.virt.libvirt.host [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.227 247403 DEBUG nova.virt.libvirt.host [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.236 247403 DEBUG nova.virt.libvirt.host [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.237 247403 DEBUG nova.virt.libvirt.host [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.237 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.238 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.238 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.238 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.238 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.238 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.239 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.239 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.239 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.239 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.239 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.239 247403 DEBUG nova.virt.hardware [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.264 247403 DEBUG nova.storage.rbd_utils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:06 np0005603621 nova_compute[247399]: 2026-01-31 08:21:06.268 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:06.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870923636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 628 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 495 KiB/s wr, 200 op/s
Jan 31 03:21:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:08.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:08 np0005603621 nova_compute[247399]: 2026-01-31 08:21:08.997 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.102 247403 DEBUG nova.virt.libvirt.vif [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-668266523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-668266523',id=110,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-6l4wap3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeSt
ableRescueTest-468517745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:57Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=23200b4a-e522-43bf-a83e-cb2f9bb31571,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.102 247403 DEBUG nova.network.os_vif_util [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.103 247403 DEBUG nova.network.os_vif_util [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.104 247403 DEBUG nova.objects.instance [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'pci_devices' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.118 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.242 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <uuid>23200b4a-e522-43bf-a83e-cb2f9bb31571</uuid>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <name>instance-0000006e</name>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-668266523</nova:name>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:21:06</nova:creationTime>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:user uuid="f4d66dd0b7ff443cbcdb6e2c9f5c4c8c">tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member</nova:user>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:project uuid="cf024d54545b4af882a87c721105742a">tempest-ServerBootFromVolumeStableRescueTest-468517745</nova:project>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <nova:port uuid="fb9bab50-6b70-4dec-9b6d-9d961083b257">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <entry name="serial">23200b4a-e522-43bf-a83e-cb2f9bb31571</entry>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <entry name="uuid">23200b4a-e522-43bf-a83e-cb2f9bb31571</entry>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-afd3af33-f7a2-401c-9400-47dff922d4f4">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <serial>afd3af33-f7a2-401c-9400-47dff922d4f4</serial>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:69:49:09"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <target dev="tapfb9bab50-6b"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/console.log" append="off"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:21:09 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:21:09 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:21:09 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:21:09 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.242 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Preparing to wait for external event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.242 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.243 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.243 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.243 247403 DEBUG nova.virt.libvirt.vif [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-668266523',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-668266523',id=110,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-6l4wap3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFr
omVolumeStableRescueTest-468517745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:57Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=23200b4a-e522-43bf-a83e-cb2f9bb31571,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.244 247403 DEBUG nova.network.os_vif_util [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.244 247403 DEBUG nova.network.os_vif_util [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.245 247403 DEBUG os_vif [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.246 247403 DEBUG nova.network.neutron [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updated VIF entry in instance network info cache for port fb9bab50-6b70-4dec-9b6d-9d961083b257. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.246 247403 DEBUG nova.network.neutron [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.248 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.248 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.248 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.252 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.252 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb9bab50-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.253 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb9bab50-6b, col_values=(('external_ids', {'iface-id': 'fb9bab50-6b70-4dec-9b6d-9d961083b257', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:49:09', 'vm-uuid': '23200b4a-e522-43bf-a83e-cb2f9bb31571'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.254 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:09 np0005603621 NetworkManager[49013]: <info>  [1769847669.2556] manager: (tapfb9bab50-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/197)
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.256 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.261 247403 INFO os_vif [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b')#033[00m
Jan 31 03:21:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.541 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 26e4f031-8730-4987-a2df-239ce4b73191_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.574 247403 DEBUG oslo_concurrency.lockutils [req-461505c5-a8fd-40b2-b88c-8aebb4e1af2d req-a984a226-e3a5-41de-9c9d-1ea2bf83116f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.611 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] resizing rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.950 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.950 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.951 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No VIF found with MAC fa:16:3e:69:49:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.951 247403 INFO nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Using config drive#033[00m
Jan 31 03:21:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.2 MiB/s wr, 196 op/s
Jan 31 03:21:09 np0005603621 nova_compute[247399]: 2026-01-31 08:21:09.973 247403 DEBUG nova.storage.rbd_utils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:10.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:10.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:10 np0005603621 nova_compute[247399]: 2026-01-31 08:21:10.757 247403 DEBUG nova.objects.instance [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'migration_context' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:10 np0005603621 nova_compute[247399]: 2026-01-31 08:21:10.790 247403 INFO nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Creating config drive at /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config#033[00m
Jan 31 03:21:10 np0005603621 nova_compute[247399]: 2026-01-31 08:21:10.795 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7iezbo09 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:10 np0005603621 nova_compute[247399]: 2026-01-31 08:21:10.923 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp7iezbo09" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:10 np0005603621 nova_compute[247399]: 2026-01-31 08:21:10.950 247403 DEBUG nova.storage.rbd_utils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:10 np0005603621 nova_compute[247399]: 2026-01-31 08:21:10.953 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.042 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.043 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Ensure instance console log exists: /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.043 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.043 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.044 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.046 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Start _get_guest_xml network_info=[{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.050 247403 WARNING nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.059 247403 DEBUG nova.virt.libvirt.host [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.060 247403 DEBUG nova.virt.libvirt.host [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.064 247403 DEBUG nova.virt.libvirt.host [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.064 247403 DEBUG nova.virt.libvirt.host [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.065 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.065 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.066 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.066 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.066 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.066 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.066 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.067 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.067 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.067 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.067 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.067 247403 DEBUG nova.virt.hardware [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.070 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402963487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.518 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.549 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:11 np0005603621 nova_compute[247399]: 2026-01-31 08:21:11.556 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 652 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 128 op/s
Jan 31 03:21:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210134680' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.005 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.007 247403 DEBUG nova.virt.libvirt.vif [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-716187142',display_name='tempest-ServerRescueTestJSON-server-716187142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-716187142',id=111,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-06wkdhgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:59Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=26e4f031-8730-4987-a2df-239ce4b73191,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.007 247403 DEBUG nova.network.os_vif_util [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.008 247403 DEBUG nova.network.os_vif_util [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.009 247403 DEBUG nova.objects.instance [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.110 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <uuid>26e4f031-8730-4987-a2df-239ce4b73191</uuid>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <name>instance-0000006f</name>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerRescueTestJSON-server-716187142</nova:name>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:21:11</nova:creationTime>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:user uuid="a8897cd859ff4a79a1a16eaee71d22ed">tempest-ServerRescueTestJSON-476946386-project-member</nova:user>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:project uuid="29d7f464a8694725aa9692aac772c256">tempest-ServerRescueTestJSON-476946386</nova:project>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <nova:port uuid="30e3cd2c-30a0-4349-817c-7651950c3869">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <entry name="serial">26e4f031-8730-4987-a2df-239ce4b73191</entry>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <entry name="uuid">26e4f031-8730-4987-a2df-239ce4b73191</entry>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/26e4f031-8730-4987-a2df-239ce4b73191_disk">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/26e4f031-8730-4987-a2df-239ce4b73191_disk.config">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:54:09:f8"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <target dev="tap30e3cd2c-30"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/console.log" append="off"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:21:12 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:21:12 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:21:12 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:21:12 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.111 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Preparing to wait for external event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.112 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.112 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.112 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.113 247403 DEBUG nova.virt.libvirt.vif [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-716187142',display_name='tempest-ServerRescueTestJSON-server-716187142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-716187142',id=111,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-06wkdhgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTest
JSON-476946386-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:20:59Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=26e4f031-8730-4987-a2df-239ce4b73191,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.113 247403 DEBUG nova.network.os_vif_util [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.114 247403 DEBUG nova.network.os_vif_util [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.114 247403 DEBUG os_vif [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.115 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.115 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.116 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.118 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.118 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30e3cd2c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.119 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30e3cd2c-30, col_values=(('external_ids', {'iface-id': '30e3cd2c-30a0-4349-817c-7651950c3869', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:09:f8', 'vm-uuid': '26e4f031-8730-4987-a2df-239ce4b73191'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 NetworkManager[49013]: <info>  [1769847672.1844] manager: (tap30e3cd2c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.187 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.189 247403 INFO os_vif [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30')#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.219 247403 DEBUG oslo_concurrency.processutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.220 247403 INFO nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Deleting local config drive /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config because it was imported into RBD.#033[00m
Jan 31 03:21:12 np0005603621 kernel: tapfb9bab50-6b: entered promiscuous mode
Jan 31 03:21:12 np0005603621 NetworkManager[49013]: <info>  [1769847672.2593] manager: (tapfb9bab50-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/199)
Jan 31 03:21:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:12Z|00406|binding|INFO|Claiming lport fb9bab50-6b70-4dec-9b6d-9d961083b257 for this chassis.
Jan 31 03:21:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:12Z|00407|binding|INFO|fb9bab50-6b70-4dec-9b6d-9d961083b257: Claiming fa:16:3e:69:49:09 10.100.0.8
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:12Z|00408|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 ovn-installed in OVS
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.271 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.274 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 systemd-udevd[318177]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:12 np0005603621 systemd-machined[212769]: New machine qemu-48-instance-0000006e.
Jan 31 03:21:12 np0005603621 NetworkManager[49013]: <info>  [1769847672.2899] device (tapfb9bab50-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:21:12 np0005603621 NetworkManager[49013]: <info>  [1769847672.2906] device (tapfb9bab50-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.290 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:49:09 10.100.0.8'], port_security=['fa:16:3e:69:49:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb9bab50-6b70-4dec-9b6d-9d961083b257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:12Z|00409|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 up in Southbound
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.291 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb9bab50-6b70-4dec-9b6d-9d961083b257 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab bound to our chassis#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.293 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:21:12 np0005603621 systemd[1]: Started Virtual Machine qemu-48-instance-0000006e.
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.302 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[698b5b17-0d15-4e7b-8279-663ec40b71a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.320 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ad380e20-15b8-486f-b098-416df548199d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.323 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[499bdb40-6f1e-4559-b4f7-3733af7da69e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.341 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8ce7b5-8942-4832-86c8-0f8126202b26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.353 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bc763605-a15d-47c3-831a-fe7f8c8fff7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 19643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318191, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.363 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e49abf86-f800-4368-b191-6c17defa549e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318192, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318192, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.364 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.366 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.370 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.370 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.371 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.371 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:12.372 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:21:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:12.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:12.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.447 247403 DEBUG nova.network.neutron [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updated VIF entry in instance network info cache for port 30e3cd2c-30a0-4349-817c-7651950c3869. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.447 247403 DEBUG nova.network.neutron [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updating instance_info_cache with network_info: [{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.545 247403 DEBUG oslo_concurrency.lockutils [req-33c0f4a3-e290-41b2-8fe0-8fc3959eaee1 req-f6981419-a2fe-4099-9fa4-5b639cf22c50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.554 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.554 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.554 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No VIF found with MAC fa:16:3e:54:09:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.555 247403 INFO nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Using config drive#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.777 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.781 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847672.7344253, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.782 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Started (Lifecycle Event)#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.909 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.913 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847672.7346194, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:12 np0005603621 nova_compute[247399]: 2026-01-31 08:21:12.913 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:21:13 np0005603621 nova_compute[247399]: 2026-01-31 08:21:13.249 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:13 np0005603621 nova_compute[247399]: 2026-01-31 08:21:13.253 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:13 np0005603621 nova_compute[247399]: 2026-01-31 08:21:13.311 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.120 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.377 247403 INFO nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Creating config drive at /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.382 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvl__85g5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:14.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:14.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.507 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvl__85g5" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.538 247403 DEBUG nova.storage.rbd_utils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.542 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config 26e4f031-8730-4987-a2df-239ce4b73191_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.818 247403 DEBUG nova.compute.manager [req-4920be67-8238-41ae-b27f-cc39bfec3bf9 req-7a5456a1-17ee-47f9-9cc1-8644b788ba9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.818 247403 DEBUG oslo_concurrency.lockutils [req-4920be67-8238-41ae-b27f-cc39bfec3bf9 req-7a5456a1-17ee-47f9-9cc1-8644b788ba9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.819 247403 DEBUG oslo_concurrency.lockutils [req-4920be67-8238-41ae-b27f-cc39bfec3bf9 req-7a5456a1-17ee-47f9-9cc1-8644b788ba9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.819 247403 DEBUG oslo_concurrency.lockutils [req-4920be67-8238-41ae-b27f-cc39bfec3bf9 req-7a5456a1-17ee-47f9-9cc1-8644b788ba9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.820 247403 DEBUG nova.compute.manager [req-4920be67-8238-41ae-b27f-cc39bfec3bf9 req-7a5456a1-17ee-47f9-9cc1-8644b788ba9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Processing event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.820 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.823 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847674.823397, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.824 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.826 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.829 247403 INFO nova.virt.libvirt.driver [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance spawned successfully.#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.829 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.872 247403 DEBUG oslo_concurrency.processutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config 26e4f031-8730-4987-a2df-239ce4b73191_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.873 247403 INFO nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Deleting local config drive /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config because it was imported into RBD.#033[00m
Jan 31 03:21:14 np0005603621 kernel: tap30e3cd2c-30: entered promiscuous mode
Jan 31 03:21:14 np0005603621 systemd-udevd[318179]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:14 np0005603621 NetworkManager[49013]: <info>  [1769847674.9137] manager: (tap30e3cd2c-30): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.914 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:14Z|00410|binding|INFO|Claiming lport 30e3cd2c-30a0-4349-817c-7651950c3869 for this chassis.
Jan 31 03:21:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:14Z|00411|binding|INFO|30e3cd2c-30a0-4349-817c-7651950c3869: Claiming fa:16:3e:54:09:f8 10.100.0.3
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.921 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:14Z|00412|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 ovn-installed in OVS
Jan 31 03:21:14 np0005603621 NetworkManager[49013]: <info>  [1769847674.9245] device (tap30e3cd2c-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:21:14 np0005603621 NetworkManager[49013]: <info>  [1769847674.9251] device (tap30e3cd2c-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.926 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.931 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.932 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.932 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.933 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.933 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.933 247403 DEBUG nova.virt.libvirt.driver [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:21:14 np0005603621 nova_compute[247399]: 2026-01-31 08:21:14.936 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:14 np0005603621 systemd-machined[212769]: New machine qemu-49-instance-0000006f.
Jan 31 03:21:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:14Z|00413|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 up in Southbound
Jan 31 03:21:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:14.951 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:09:f8 10.100.0.3'], port_security=['fa:16:3e:54:09:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26e4f031-8730-4987-a2df-239ce4b73191', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=30e3cd2c-30a0-4349-817c-7651950c3869) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:14.952 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 30e3cd2c-30a0-4349-817c-7651950c3869 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 bound to our chassis#033[00m
Jan 31 03:21:14 np0005603621 systemd[1]: Started Virtual Machine qemu-49-instance-0000006f.
Jan 31 03:21:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:14.953 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:21:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:14.953 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[17de1360-398e-4380-9c85-f3dd4d17091d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.083 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.356 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847675.3558798, 26e4f031-8730-4987-a2df-239ce4b73191 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.356 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Started (Lifecycle Event)#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.550 247403 INFO nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Took 9.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.550 247403 DEBUG nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.639 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.643 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847675.3559778, 26e4f031-8730-4987-a2df-239ce4b73191 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.643 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.695 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.698 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.778 247403 INFO nova.compute.manager [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Took 19.31 seconds to build instance.#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.832 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:21:15 np0005603621 nova_compute[247399]: 2026-01-31 08:21:15.921 247403 DEBUG oslo_concurrency.lockutils [None req-20f467a8-e174-4503-a29f-b68b9bcf279f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 660 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 764 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 03:21:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:16.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:16.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.185 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.234 247403 DEBUG nova.compute.manager [req-fbd49946-f0a5-402e-9a63-60e46f37c4d3 req-893a1a42-1b05-476c-b078-e8c5f43b2f91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.234 247403 DEBUG oslo_concurrency.lockutils [req-fbd49946-f0a5-402e-9a63-60e46f37c4d3 req-893a1a42-1b05-476c-b078-e8c5f43b2f91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.234 247403 DEBUG oslo_concurrency.lockutils [req-fbd49946-f0a5-402e-9a63-60e46f37c4d3 req-893a1a42-1b05-476c-b078-e8c5f43b2f91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.235 247403 DEBUG oslo_concurrency.lockutils [req-fbd49946-f0a5-402e-9a63-60e46f37c4d3 req-893a1a42-1b05-476c-b078-e8c5f43b2f91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.235 247403 DEBUG nova.compute.manager [req-fbd49946-f0a5-402e-9a63-60e46f37c4d3 req-893a1a42-1b05-476c-b078-e8c5f43b2f91 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Processing event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.236 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.239 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847677.2390935, 26e4f031-8730-4987-a2df-239ce4b73191 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.239 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Resumed (Lifecycle Event)
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.241 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.244 247403 INFO nova.virt.libvirt.driver [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance spawned successfully.
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.244 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.529 247403 DEBUG nova.compute.manager [req-940985c4-a21d-4353-a7a8-8aec125b885f req-9c36df70-76fd-4803-8c55-dad1acb6c8de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.529 247403 DEBUG oslo_concurrency.lockutils [req-940985c4-a21d-4353-a7a8-8aec125b885f req-9c36df70-76fd-4803-8c55-dad1acb6c8de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.530 247403 DEBUG oslo_concurrency.lockutils [req-940985c4-a21d-4353-a7a8-8aec125b885f req-9c36df70-76fd-4803-8c55-dad1acb6c8de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.530 247403 DEBUG oslo_concurrency.lockutils [req-940985c4-a21d-4353-a7a8-8aec125b885f req-9c36df70-76fd-4803-8c55-dad1acb6c8de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.530 247403 DEBUG nova.compute.manager [req-940985c4-a21d-4353-a7a8-8aec125b885f req-9c36df70-76fd-4803-8c55-dad1acb6c8de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.530 247403 WARNING nova.compute.manager [req-940985c4-a21d-4353-a7a8-8aec125b885f req-9c36df70-76fd-4803-8c55-dad1acb6c8de fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state None.
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.545 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.546 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.546 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.546 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.547 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.547 247403 DEBUG nova.virt.libvirt.driver [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.551 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.553 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:21:17 np0005603621 nova_compute[247399]: 2026-01-31 08:21:17.824 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:21:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 642 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 31 03:21:18 np0005603621 nova_compute[247399]: 2026-01-31 08:21:18.292 247403 INFO nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Took 18.91 seconds to spawn the instance on the hypervisor.
Jan 31 03:21:18 np0005603621 nova_compute[247399]: 2026-01-31 08:21:18.293 247403 DEBUG nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:21:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:18.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:18.584 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:21:18 np0005603621 nova_compute[247399]: 2026-01-31 08:21:18.585 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:18.586 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:21:18 np0005603621 nova_compute[247399]: 2026-01-31 08:21:18.709 247403 INFO nova.compute.manager [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Took 21.56 seconds to build instance.
Jan 31 03:21:18 np0005603621 nova_compute[247399]: 2026-01-31 08:21:18.983 247403 DEBUG oslo_concurrency.lockutils [None req-a03e6631-221c-4e1a-be7c-3a6f0cd92bba a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.122 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.437 247403 DEBUG nova.compute.manager [req-9ea4b6b2-7cd0-4134-af8f-9d49411a6664 req-fa8d291d-c88f-4de6-9530-ab0ff13cb8d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.437 247403 DEBUG oslo_concurrency.lockutils [req-9ea4b6b2-7cd0-4134-af8f-9d49411a6664 req-fa8d291d-c88f-4de6-9530-ab0ff13cb8d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.438 247403 DEBUG oslo_concurrency.lockutils [req-9ea4b6b2-7cd0-4134-af8f-9d49411a6664 req-fa8d291d-c88f-4de6-9530-ab0ff13cb8d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.438 247403 DEBUG oslo_concurrency.lockutils [req-9ea4b6b2-7cd0-4134-af8f-9d49411a6664 req-fa8d291d-c88f-4de6-9530-ab0ff13cb8d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.438 247403 DEBUG nova.compute.manager [req-9ea4b6b2-7cd0-4134-af8f-9d49411a6664 req-fa8d291d-c88f-4de6-9530-ab0ff13cb8d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:21:19 np0005603621 nova_compute[247399]: 2026-01-31 08:21:19.438 247403 WARNING nova.compute.manager [req-9ea4b6b2-7cd0-4134-af8f-9d49411a6664 req-fa8d291d-c88f-4de6-9530-ab0ff13cb8d6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state None.
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 30925163-ba0a-4bfb-8391-72327af224fb does not exist
Jan 31 03:21:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3d03a60a-63c8-4743-a71e-28e9c38994d5 does not exist
Jan 31 03:21:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d1c5c334-35bb-49b9-a40d-129d58ddb8dd does not exist
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:21:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:21:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 583 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 213 op/s
Jan 31 03:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.440670523 +0000 UTC m=+0.049597785 container create 8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cartwright, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:21:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:20.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:20 np0005603621 systemd[1]: Started libpod-conmon-8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf.scope.
Jan 31 03:21:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.410933731 +0000 UTC m=+0.019861023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.609113021 +0000 UTC m=+0.218040273 container init 8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.615873183 +0000 UTC m=+0.224800445 container start 8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cartwright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:21:20 np0005603621 wizardly_cartwright[318649]: 167 167
Jan 31 03:21:20 np0005603621 systemd[1]: libpod-8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf.scope: Deactivated successfully.
Jan 31 03:21:20 np0005603621 conmon[318649]: conmon 8496bac52d57d8555084 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf.scope/container/memory.events
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.642901369 +0000 UTC m=+0.251828831 container attach 8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.644017985 +0000 UTC m=+0.252945287 container died 8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:21:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ec4f66b8aaf28e1f87a2c30157bf0de7ec50b3071dfd187663f123f1376afe3c-merged.mount: Deactivated successfully.
Jan 31 03:21:20 np0005603621 podman[318633]: 2026-01-31 08:21:20.813460913 +0000 UTC m=+0.422388175 container remove 8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_cartwright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:21:20 np0005603621 systemd[1]: libpod-conmon-8496bac52d57d855508422aa9c07e5060616ce455def37b60f094700bf24ecdf.scope: Deactivated successfully.
Jan 31 03:21:20 np0005603621 podman[318673]: 2026-01-31 08:21:20.975862742 +0000 UTC m=+0.066640899 container create 5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:21 np0005603621 podman[318673]: 2026-01-31 08:21:20.939291466 +0000 UTC m=+0.030069643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:21:21 np0005603621 systemd[1]: Started libpod-conmon-5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e.scope.
Jan 31 03:21:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:21:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d2365de54ec06b4bd071057687e1ab971f0cc269d4bcd2341aca7d6f7827af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d2365de54ec06b4bd071057687e1ab971f0cc269d4bcd2341aca7d6f7827af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d2365de54ec06b4bd071057687e1ab971f0cc269d4bcd2341aca7d6f7827af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d2365de54ec06b4bd071057687e1ab971f0cc269d4bcd2341aca7d6f7827af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9d2365de54ec06b4bd071057687e1ab971f0cc269d4bcd2341aca7d6f7827af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:21 np0005603621 podman[318673]: 2026-01-31 08:21:21.109643173 +0000 UTC m=+0.200421330 container init 5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:21:21 np0005603621 podman[318673]: 2026-01-31 08:21:21.117506429 +0000 UTC m=+0.208284586 container start 5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:21 np0005603621 podman[318673]: 2026-01-31 08:21:21.124173028 +0000 UTC m=+0.214951215 container attach 5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shtern, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:21:21 np0005603621 boring_shtern[318690]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:21:21 np0005603621 boring_shtern[318690]: --> relative data size: 1.0
Jan 31 03:21:21 np0005603621 boring_shtern[318690]: --> All data devices are unavailable
Jan 31 03:21:21 np0005603621 systemd[1]: libpod-5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e.scope: Deactivated successfully.
Jan 31 03:21:21 np0005603621 podman[318673]: 2026-01-31 08:21:21.93554668 +0000 UTC m=+1.026324847 container died 5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 583 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 644 KiB/s wr, 194 op/s
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.102 247403 INFO nova.compute.manager [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Rescuing#033[00m
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.107 247403 DEBUG oslo_concurrency.lockutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.107 247403 DEBUG oslo_concurrency.lockutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquired lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.107 247403 DEBUG nova.network.neutron [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d9d2365de54ec06b4bd071057687e1ab971f0cc269d4bcd2341aca7d6f7827af-merged.mount: Deactivated successfully.
Jan 31 03:21:22 np0005603621 podman[318673]: 2026-01-31 08:21:22.148074489 +0000 UTC m=+1.238852646 container remove 5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shtern, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:21:22 np0005603621 systemd[1]: libpod-conmon-5fc11f0a0e2cbc2704b88a77a1fce69fdd0aecfa2773441f16a7f18d5ee28e1e.scope: Deactivated successfully.
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:21:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:22.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:21:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:22.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.490 247403 INFO nova.compute.manager [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Rescuing#033[00m
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.491 247403 DEBUG oslo_concurrency.lockutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.491 247403 DEBUG oslo_concurrency.lockutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquired lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:22 np0005603621 nova_compute[247399]: 2026-01-31 08:21:22.491 247403 DEBUG nova.network.neutron [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.624867518 +0000 UTC m=+0.031558760 container create 48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_herschel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:21:22 np0005603621 systemd[1]: Started libpod-conmon-48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f.scope.
Jan 31 03:21:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.693848479 +0000 UTC m=+0.100539751 container init 48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_herschel, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.698881638 +0000 UTC m=+0.105572890 container start 48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_herschel, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:21:22 np0005603621 exciting_herschel[318876]: 167 167
Jan 31 03:21:22 np0005603621 systemd[1]: libpod-48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f.scope: Deactivated successfully.
Jan 31 03:21:22 np0005603621 conmon[318876]: conmon 48429be357747af23016 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f.scope/container/memory.events
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.60993218 +0000 UTC m=+0.016623452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.70695376 +0000 UTC m=+0.113645002 container attach 48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.707305092 +0000 UTC m=+0.113996344 container died 48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:21:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7c902b11c3d6b8edfe10982c74eb1b215d80731ef2d3fdd829edafae8d6df287-merged.mount: Deactivated successfully.
Jan 31 03:21:22 np0005603621 podman[318859]: 2026-01-31 08:21:22.793253624 +0000 UTC m=+0.199944876 container remove 48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_herschel, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:21:22 np0005603621 systemd[1]: libpod-conmon-48429be357747af23016cd0a3fa800b0e7dd337c054289550ea9775a4b9d362f.scope: Deactivated successfully.
Jan 31 03:21:22 np0005603621 podman[318902]: 2026-01-31 08:21:22.918488318 +0000 UTC m=+0.039539510 container create 9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:21:22 np0005603621 systemd[1]: Started libpod-conmon-9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d.scope.
Jan 31 03:21:22 np0005603621 podman[318902]: 2026-01-31 08:21:22.898715688 +0000 UTC m=+0.019766910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:21:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:21:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0519543524ec1d57217802f4742f68ba4c62fdfae4352c2c56f6ede65d346b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0519543524ec1d57217802f4742f68ba4c62fdfae4352c2c56f6ede65d346b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0519543524ec1d57217802f4742f68ba4c62fdfae4352c2c56f6ede65d346b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0519543524ec1d57217802f4742f68ba4c62fdfae4352c2c56f6ede65d346b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:23 np0005603621 podman[318902]: 2026-01-31 08:21:23.036934859 +0000 UTC m=+0.157986071 container init 9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:21:23 np0005603621 podman[318902]: 2026-01-31 08:21:23.043050621 +0000 UTC m=+0.164101813 container start 9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:21:23 np0005603621 podman[318902]: 2026-01-31 08:21:23.128711315 +0000 UTC m=+0.249762527 container attach 9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:21:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:23.589 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]: {
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:    "0": [
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:        {
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "devices": [
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "/dev/loop3"
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            ],
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "lv_name": "ceph_lv0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "lv_size": "7511998464",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "name": "ceph_lv0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "tags": {
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.cluster_name": "ceph",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.crush_device_class": "",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.encrypted": "0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.osd_id": "0",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.type": "block",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:                "ceph.vdo": "0"
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            },
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "type": "block",
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:            "vg_name": "ceph_vg0"
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:        }
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]:    ]
Jan 31 03:21:23 np0005603621 mystifying_edison[318918]: }
Jan 31 03:21:23 np0005603621 systemd[1]: libpod-9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d.scope: Deactivated successfully.
Jan 31 03:21:23 np0005603621 podman[318928]: 2026-01-31 08:21:23.827594322 +0000 UTC m=+0.020935657 container died 9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:21:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1f0519543524ec1d57217802f4742f68ba4c62fdfae4352c2c56f6ede65d346b-merged.mount: Deactivated successfully.
Jan 31 03:21:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 583 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 679 KiB/s wr, 217 op/s
Jan 31 03:21:23 np0005603621 podman[318928]: 2026-01-31 08:21:23.984211339 +0000 UTC m=+0.177552674 container remove 9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:21:23 np0005603621 systemd[1]: libpod-conmon-9eb007887a44308f625b1a044e77b6e5a3ce9a55268b351d36e765054ac0533d.scope: Deactivated successfully.
Jan 31 03:21:24 np0005603621 nova_compute[247399]: 2026-01-31 08:21:24.123 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:24.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:24.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:24 np0005603621 podman[319082]: 2026-01-31 08:21:24.465577251 +0000 UTC m=+0.019593935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:21:24 np0005603621 podman[319082]: 2026-01-31 08:21:24.565617836 +0000 UTC m=+0.119634490 container create f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:21:24 np0005603621 systemd[1]: Started libpod-conmon-f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe.scope.
Jan 31 03:21:24 np0005603621 podman[319096]: 2026-01-31 08:21:24.676926283 +0000 UTC m=+0.075341211 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:21:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:21:24 np0005603621 podman[319082]: 2026-01-31 08:21:24.8274679 +0000 UTC m=+0.381484594 container init f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ardinghelli, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 03:21:24 np0005603621 podman[319082]: 2026-01-31 08:21:24.833651884 +0000 UTC m=+0.387668538 container start f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:21:24 np0005603621 stoic_ardinghelli[319132]: 167 167
Jan 31 03:21:24 np0005603621 systemd[1]: libpod-f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe.scope: Deactivated successfully.
Jan 31 03:21:24 np0005603621 conmon[319132]: conmon f5380395709c7e53acea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe.scope/container/memory.events
Jan 31 03:21:24 np0005603621 podman[319082]: 2026-01-31 08:21:24.894169379 +0000 UTC m=+0.448186063 container attach f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:21:24 np0005603621 podman[319082]: 2026-01-31 08:21:24.896047149 +0000 UTC m=+0.450063803 container died f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:21:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6c3238bac76b798c79ea841caf946a4a0f62d1582c8632f0b0fa739486aa19a1-merged.mount: Deactivated successfully.
Jan 31 03:21:25 np0005603621 podman[319082]: 2026-01-31 08:21:25.179501929 +0000 UTC m=+0.733518593 container remove f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:21:25 np0005603621 systemd[1]: libpod-conmon-f5380395709c7e53aceace9a6f23720dc3908473dd924c5a102d36d7d082c4fe.scope: Deactivated successfully.
Jan 31 03:21:25 np0005603621 podman[319097]: 2026-01-31 08:21:25.237369523 +0000 UTC m=+0.635982948 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 03:21:25 np0005603621 podman[319166]: 2026-01-31 08:21:25.326780795 +0000 UTC m=+0.047333394 container create f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:21:25 np0005603621 systemd[1]: Started libpod-conmon-f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034.scope.
Jan 31 03:21:25 np0005603621 podman[319166]: 2026-01-31 08:21:25.304378973 +0000 UTC m=+0.024931572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:21:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:21:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83578f32bbe04fc51939a574cd95b0ab773457064dfbb4561a9361d352ff6042/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83578f32bbe04fc51939a574cd95b0ab773457064dfbb4561a9361d352ff6042/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83578f32bbe04fc51939a574cd95b0ab773457064dfbb4561a9361d352ff6042/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83578f32bbe04fc51939a574cd95b0ab773457064dfbb4561a9361d352ff6042/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:21:25 np0005603621 podman[319166]: 2026-01-31 08:21:25.493342473 +0000 UTC m=+0.213895142 container init f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:21:25 np0005603621 podman[319166]: 2026-01-31 08:21:25.499467915 +0000 UTC m=+0.220020514 container start f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:21:25 np0005603621 podman[319166]: 2026-01-31 08:21:25.560505317 +0000 UTC m=+0.281057976 container attach f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:21:25 np0005603621 nova_compute[247399]: 2026-01-31 08:21:25.932 247403 DEBUG nova.network.neutron [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updating instance_info_cache with network_info: [{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 583 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 69 KiB/s wr, 136 op/s
Jan 31 03:21:26 np0005603621 clever_tu[319182]: {
Jan 31 03:21:26 np0005603621 clever_tu[319182]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:21:26 np0005603621 clever_tu[319182]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:21:26 np0005603621 clever_tu[319182]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:21:26 np0005603621 clever_tu[319182]:        "osd_id": 0,
Jan 31 03:21:26 np0005603621 clever_tu[319182]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:21:26 np0005603621 clever_tu[319182]:        "type": "bluestore"
Jan 31 03:21:26 np0005603621 clever_tu[319182]:    }
Jan 31 03:21:26 np0005603621 clever_tu[319182]: }
Jan 31 03:21:26 np0005603621 systemd[1]: libpod-f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034.scope: Deactivated successfully.
Jan 31 03:21:26 np0005603621 nova_compute[247399]: 2026-01-31 08:21:26.372 247403 DEBUG oslo_concurrency.lockutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Releasing lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:26 np0005603621 podman[319254]: 2026-01-31 08:21:26.383935207 +0000 UTC m=+0.024837399 container died f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:21:26 np0005603621 nova_compute[247399]: 2026-01-31 08:21:26.445 247403 DEBUG nova.network.neutron [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:26.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83578f32bbe04fc51939a574cd95b0ab773457064dfbb4561a9361d352ff6042-merged.mount: Deactivated successfully.
Jan 31 03:21:26 np0005603621 podman[319254]: 2026-01-31 08:21:26.581363333 +0000 UTC m=+0.222265495 container remove f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_tu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:21:26 np0005603621 systemd[1]: libpod-conmon-f8c89859e88dc30a5105ef3e4bbca7fb04e277d8c4e2745b2786be9b2987a034.scope: Deactivated successfully.
Jan 31 03:21:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:21:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:21:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev aa3fc3b2-5c52-417c-9150-562d8c07c000 does not exist
Jan 31 03:21:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b19d9ed5-38fc-458a-8333-8ddf9e9445e2 does not exist
Jan 31 03:21:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c08bcbdb-7e87-415d-97df-6923a029ee82 does not exist
Jan 31 03:21:26 np0005603621 nova_compute[247399]: 2026-01-31 08:21:26.666 247403 DEBUG oslo_concurrency.lockutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Releasing lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:26 np0005603621 nova_compute[247399]: 2026-01-31 08:21:26.955 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:21:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:21:27 np0005603621 nova_compute[247399]: 2026-01-31 08:21:27.192 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:27 np0005603621 nova_compute[247399]: 2026-01-31 08:21:27.878 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:21:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 561 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 69 KiB/s wr, 148 op/s
Jan 31 03:21:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:28.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:29 np0005603621 nova_compute[247399]: 2026-01-31 08:21:29.126 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:29 np0005603621 nova_compute[247399]: 2026-01-31 08:21:29.952 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 267 KiB/s wr, 153 op/s
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.285 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid b6bf273c-d5a3-4f02-bddd-465a846a764d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.286 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid f02cbbe1-1133-4659-a065-630c53ee2683 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.286 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.286 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 26e4f031-8730-4987-a2df-239ce4b73191 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.286 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.287 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.287 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.288 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "f02cbbe1-1133-4659-a065-630c53ee2683" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.288 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.288 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.289 247403 INFO nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.289 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.289 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.289 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "26e4f031-8730-4987-a2df-239ce4b73191" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.290 247403 INFO nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.290 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "26e4f031-8730-4987-a2df-239ce4b73191" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:30.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:30.503 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:30.503 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:30.504 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.577 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "f02cbbe1-1133-4659-a065-630c53ee2683" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:30 np0005603621 nova_compute[247399]: 2026-01-31 08:21:30.584 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 622 KiB/s rd, 240 KiB/s wr, 56 op/s
Jan 31 03:21:32 np0005603621 nova_compute[247399]: 2026-01-31 08:21:32.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:32.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 520 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 193 op/s
Jan 31 03:21:34 np0005603621 nova_compute[247399]: 2026-01-31 08:21:34.128 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:34.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:34.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:35 np0005603621 nova_compute[247399]: 2026-01-31 08:21:35.536 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 520 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 170 op/s
Jan 31 03:21:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:36.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:36 np0005603621 nova_compute[247399]: 2026-01-31 08:21:36.992 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 03:21:37 np0005603621 nova_compute[247399]: 2026-01-31 08:21:37.197 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:37 np0005603621 nova_compute[247399]: 2026-01-31 08:21:37.914 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 03:21:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 504 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 181 op/s
Jan 31 03:21:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:38.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:38.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:21:38
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms']
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:21:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:21:39 np0005603621 nova_compute[247399]: 2026-01-31 08:21:39.130 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Jan 31 03:21:40 np0005603621 kernel: tap30e3cd2c-30 (unregistering): left promiscuous mode
Jan 31 03:21:40 np0005603621 NetworkManager[49013]: <info>  [1769847700.2088] device (tap30e3cd2c-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:21:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:40Z|00414|binding|INFO|Releasing lport 30e3cd2c-30a0-4349-817c-7651950c3869 from this chassis (sb_readonly=0)
Jan 31 03:21:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:40Z|00415|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 down in Southbound
Jan 31 03:21:40 np0005603621 nova_compute[247399]: 2026-01-31 08:21:40.214 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:40Z|00416|binding|INFO|Removing iface tap30e3cd2c-30 ovn-installed in OVS
Jan 31 03:21:40 np0005603621 nova_compute[247399]: 2026-01-31 08:21:40.216 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:40 np0005603621 nova_compute[247399]: 2026-01-31 08:21:40.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:40 np0005603621 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 31 03:21:40 np0005603621 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000006f.scope: Consumed 12.794s CPU time.
Jan 31 03:21:40 np0005603621 systemd-machined[212769]: Machine qemu-49-instance-0000006f terminated.
Jan 31 03:21:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:40.313 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:09:f8 10.100.0.3'], port_security=['fa:16:3e:54:09:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26e4f031-8730-4987-a2df-239ce4b73191', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=30e3cd2c-30a0-4349-817c-7651950c3869) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:21:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:40.314 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 30e3cd2c-30a0-4349-817c-7651950c3869 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 unbound from our chassis
Jan 31 03:21:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:40.315 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 03:21:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:40.316 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5fd620-3ddf-4520-b2d4-a3e290701a23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:21:40 np0005603621 nova_compute[247399]: 2026-01-31 08:21:40.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:40 np0005603621 nova_compute[247399]: 2026-01-31 08:21:40.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:40.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:40.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.007 247403 INFO nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance shutdown successfully after 14 seconds.
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.012 247403 INFO nova.virt.libvirt.driver [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance destroyed successfully.
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.012 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'numa_topology' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.154 247403 INFO nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Attempting rescue
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.155 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.159 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.159 247403 INFO nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Creating image(s)
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.184 247403 DEBUG nova.storage.rbd_utils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.187 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.399 247403 DEBUG nova.storage.rbd_utils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.425 247403 DEBUG nova.storage.rbd_utils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.429 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.484 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.485 247403 DEBUG oslo_concurrency.lockutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.486 247403 DEBUG oslo_concurrency.lockutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.486 247403 DEBUG oslo_concurrency.lockutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.510 247403 DEBUG nova.storage.rbd_utils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.512 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.811 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.813 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'migration_context' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.907 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.908 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Start _get_guest_xml network_info=[{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1486638082-network", "vif_mac": "fa:16:3e:54:09:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:21:41 np0005603621 nova_compute[247399]: 2026-01-31 08:21:41.909 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'resources' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:21:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 164 op/s
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.121 247403 WARNING nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.129 247403 DEBUG nova.virt.libvirt.host [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.130 247403 DEBUG nova.virt.libvirt.host [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.133 247403 DEBUG nova.virt.libvirt.host [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.133 247403 DEBUG nova.virt.libvirt.host [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.134 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.134 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.135 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.135 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.135 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.136 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.136 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.136 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.136 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.136 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.137 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.137 247403 DEBUG nova.virt.hardware [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.137 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:21:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:42.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:21:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.518 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2515079272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.950 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:42 np0005603621 nova_compute[247399]: 2026-01-31 08:21:42.951 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.031 247403 DEBUG nova.compute.manager [req-b7f4ca7c-3453-4a36-967a-ffedd6130e60 req-21147bdc-77f4-4d23-8ea9-b491855ccf09 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.032 247403 DEBUG oslo_concurrency.lockutils [req-b7f4ca7c-3453-4a36-967a-ffedd6130e60 req-21147bdc-77f4-4d23-8ea9-b491855ccf09 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.032 247403 DEBUG oslo_concurrency.lockutils [req-b7f4ca7c-3453-4a36-967a-ffedd6130e60 req-21147bdc-77f4-4d23-8ea9-b491855ccf09 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.034 247403 DEBUG oslo_concurrency.lockutils [req-b7f4ca7c-3453-4a36-967a-ffedd6130e60 req-21147bdc-77f4-4d23-8ea9-b491855ccf09 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.034 247403 DEBUG nova.compute.manager [req-b7f4ca7c-3453-4a36-967a-ffedd6130e60 req-21147bdc-77f4-4d23-8ea9-b491855ccf09 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.035 247403 WARNING nova.compute.manager [req-b7f4ca7c-3453-4a36-967a-ffedd6130e60 req-21147bdc-77f4-4d23-8ea9-b491855ccf09 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.038 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.039 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.039 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363589995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.372 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.373 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:21:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833212826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.817 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.819 247403 DEBUG nova.virt.libvirt.vif [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-716187142',display_name='tempest-ServerRescueTestJSON-server-716187142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-716187142',id=111,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-06wkdhgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:18Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=26e4f031-8730-4987-a2df-239ce4b73191,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1486638082-network", "vif_mac": "fa:16:3e:54:09:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.819 247403 DEBUG nova.network.os_vif_util [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1486638082-network", "vif_mac": "fa:16:3e:54:09:f8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.820 247403 DEBUG nova.network.os_vif_util [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.821 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.954 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <uuid>26e4f031-8730-4987-a2df-239ce4b73191</uuid>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <name>instance-0000006f</name>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerRescueTestJSON-server-716187142</nova:name>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:21:42</nova:creationTime>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:user uuid="a8897cd859ff4a79a1a16eaee71d22ed">tempest-ServerRescueTestJSON-476946386-project-member</nova:user>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:project uuid="29d7f464a8694725aa9692aac772c256">tempest-ServerRescueTestJSON-476946386</nova:project>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <nova:port uuid="30e3cd2c-30a0-4349-817c-7651950c3869">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <entry name="serial">26e4f031-8730-4987-a2df-239ce4b73191</entry>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <entry name="uuid">26e4f031-8730-4987-a2df-239ce4b73191</entry>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/26e4f031-8730-4987-a2df-239ce4b73191_disk.rescue">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/26e4f031-8730-4987-a2df-239ce4b73191_disk">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/26e4f031-8730-4987-a2df-239ce4b73191_disk.config.rescue">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:54:09:f8"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <target dev="tap30e3cd2c-30"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/console.log" append="off"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:21:43 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:21:43 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:21:43 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:21:43 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:21:43 np0005603621 nova_compute[247399]: 2026-01-31 08:21:43.961 247403 INFO nova.virt.libvirt.driver [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance destroyed successfully.
Jan 31 03:21:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.7 MiB/s wr, 223 op/s
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.211 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.211 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.211 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.211 247403 DEBUG nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] No VIF found with MAC fa:16:3e:54:09:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.212 247403 INFO nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Using config drive
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.254 247403 DEBUG nova.storage.rbd_utils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:21:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:44.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.595 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:21:44 np0005603621 nova_compute[247399]: 2026-01-31 08:21:44.727 247403 DEBUG nova.objects.instance [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'keypairs' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:21:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:44Z|00417|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 03:21:45 np0005603621 nova_compute[247399]: 2026-01-31 08:21:45.157 247403 DEBUG nova.compute.manager [req-395beb65-6f1e-4de5-a027-37d504fd90b2 req-8e7c24b8-ab1e-4a5d-ac2b-c25a358a873b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:21:45 np0005603621 nova_compute[247399]: 2026-01-31 08:21:45.157 247403 DEBUG oslo_concurrency.lockutils [req-395beb65-6f1e-4de5-a027-37d504fd90b2 req-8e7c24b8-ab1e-4a5d-ac2b-c25a358a873b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:45 np0005603621 nova_compute[247399]: 2026-01-31 08:21:45.158 247403 DEBUG oslo_concurrency.lockutils [req-395beb65-6f1e-4de5-a027-37d504fd90b2 req-8e7c24b8-ab1e-4a5d-ac2b-c25a358a873b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:45 np0005603621 nova_compute[247399]: 2026-01-31 08:21:45.158 247403 DEBUG oslo_concurrency.lockutils [req-395beb65-6f1e-4de5-a027-37d504fd90b2 req-8e7c24b8-ab1e-4a5d-ac2b-c25a358a873b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:45 np0005603621 nova_compute[247399]: 2026-01-31 08:21:45.158 247403 DEBUG nova.compute.manager [req-395beb65-6f1e-4de5-a027-37d504fd90b2 req-8e7c24b8-ab1e-4a5d-ac2b-c25a358a873b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:21:45 np0005603621 nova_compute[247399]: 2026-01-31 08:21:45.158 247403 WARNING nova.compute.manager [req-395beb65-6f1e-4de5-a027-37d504fd90b2 req-8e7c24b8-ab1e-4a5d-ac2b-c25a358a873b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state rescuing.
Jan 31 03:21:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 491 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.442 247403 INFO nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Creating config drive at /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config.rescue
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.446 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9lfr8t3l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:21:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:46.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:46.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.573 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp9lfr8t3l" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.614 247403 DEBUG nova.storage.rbd_utils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] rbd image 26e4f031-8730-4987-a2df-239ce4b73191_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.620 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config.rescue 26e4f031-8730-4987-a2df-239ce4b73191_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.989 247403 DEBUG oslo_concurrency.processutils [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config.rescue 26e4f031-8730-4987-a2df-239ce4b73191_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:21:46 np0005603621 nova_compute[247399]: 2026-01-31 08:21:46.990 247403 INFO nova.virt.libvirt.driver [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Deleting local config drive /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191/disk.config.rescue because it was imported into RBD.
Jan 31 03:21:47 np0005603621 kernel: tap30e3cd2c-30: entered promiscuous mode
Jan 31 03:21:47 np0005603621 NetworkManager[49013]: <info>  [1769847707.0460] manager: (tap30e3cd2c-30): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Jan 31 03:21:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:47Z|00418|binding|INFO|Claiming lport 30e3cd2c-30a0-4349-817c-7651950c3869 for this chassis.
Jan 31 03:21:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:47Z|00419|binding|INFO|30e3cd2c-30a0-4349-817c-7651950c3869: Claiming fa:16:3e:54:09:f8 10.100.0.3
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:47Z|00420|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 ovn-installed in OVS
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.058 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:47 np0005603621 systemd-udevd[319629]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:47 np0005603621 systemd-machined[212769]: New machine qemu-50-instance-0000006f.
Jan 31 03:21:47 np0005603621 NetworkManager[49013]: <info>  [1769847707.0817] device (tap30e3cd2c-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:21:47 np0005603621 NetworkManager[49013]: <info>  [1769847707.0824] device (tap30e3cd2c-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:21:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:47.088 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:09:f8 10.100.0.3'], port_security=['fa:16:3e:54:09:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26e4f031-8730-4987-a2df-239ce4b73191', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=30e3cd2c-30a0-4349-817c-7651950c3869) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:21:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:47Z|00421|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 up in Southbound
Jan 31 03:21:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:47.089 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 30e3cd2c-30a0-4349-817c-7651950c3869 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 bound to our chassis
Jan 31 03:21:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:47.090 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 31 03:21:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:47.091 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6a12ab79-29f4-47a9-b299-79d5a71fcbf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:21:47 np0005603621 systemd[1]: Started Virtual Machine qemu-50-instance-0000006f.
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:21:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:47Z|00422|binding|INFO|Releasing lport dad27cfe-7e8a-4f55-a945-07f9cae848c1 from this chassis (sb_readonly=0)
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.577 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 26e4f031-8730-4987-a2df-239ce4b73191 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.578 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847707.5771098, 26e4f031-8730-4987-a2df-239ce4b73191 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.578 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Resumed (Lifecycle Event)
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.583 247403 DEBUG nova.compute.manager [None req-ab6b67d8-80c9-4c01-b155-fcf78617238b a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.712 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:21:47 np0005603621 nova_compute[247399]: 2026-01-31 08:21:47.714 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:21:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 563 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.108 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updating instance_info_cache with network_info: [{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.142 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847707.5780768, 26e4f031-8730-4987-a2df-239ce4b73191 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.143 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Started (Lifecycle Event)
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.228 247403 DEBUG nova.compute.manager [req-d2e213a8-a46b-4187-b9bb-ce8ef796261a req-5e155508-42b7-4f6a-a61c-a5b8a68be02c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.230 247403 DEBUG oslo_concurrency.lockutils [req-d2e213a8-a46b-4187-b9bb-ce8ef796261a req-5e155508-42b7-4f6a-a61c-a5b8a68be02c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.230 247403 DEBUG oslo_concurrency.lockutils [req-d2e213a8-a46b-4187-b9bb-ce8ef796261a req-5e155508-42b7-4f6a-a61c-a5b8a68be02c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.230 247403 DEBUG oslo_concurrency.lockutils [req-d2e213a8-a46b-4187-b9bb-ce8ef796261a req-5e155508-42b7-4f6a-a61c-a5b8a68be02c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.230 247403 DEBUG nova.compute.manager [req-d2e213a8-a46b-4187-b9bb-ce8ef796261a req-5e155508-42b7-4f6a-a61c-a5b8a68be02c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.231 247403 WARNING nova.compute.manager [req-d2e213a8-a46b-4187-b9bb-ce8ef796261a req-5e155508-42b7-4f6a-a61c-a5b8a68be02c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state rescuing.
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.345 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.348 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.371 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.371 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.371 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.372 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.372 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.372 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.373 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.373 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.462 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.463 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.463 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.463 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.464 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:21:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621090057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.872 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:48 np0005603621 nova_compute[247399]: 2026-01-31 08:21:48.958 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.134 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010681300018611577 of space, bias 1.0, pg target 3.2043900055834733 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8589431532663316 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:21:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.381 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.382 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.385 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.387 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.387 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.390 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.391 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.393 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.394 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.394 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.535 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.537 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3789MB free_disk=20.76409912109375GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.537 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:49 np0005603621 nova_compute[247399]: 2026-01-31 08:21:49.537 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.125 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance b6bf273c-d5a3-4f02-bddd-465a846a764d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.126 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.126 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.127 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 26e4f031-8730-4987-a2df-239ce4b73191 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.127 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.127 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.288 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:21:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:50.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:50.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.681 247403 INFO nova.compute.manager [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Unrescuing#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.682 247403 DEBUG oslo_concurrency.lockutils [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.682 247403 DEBUG oslo_concurrency.lockutils [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquired lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.682 247403 DEBUG nova.network.neutron [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.778 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.779 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.815 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.838 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:21:50 np0005603621 nova_compute[247399]: 2026-01-31 08:21:50.983 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.039 247403 DEBUG nova.compute.manager [req-045ac1be-9d54-4639-a9a0-15d4e17ad96a req-c1e1e230-fb68-4f9f-a6d0-358bfb45209d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.040 247403 DEBUG oslo_concurrency.lockutils [req-045ac1be-9d54-4639-a9a0-15d4e17ad96a req-c1e1e230-fb68-4f9f-a6d0-358bfb45209d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.040 247403 DEBUG oslo_concurrency.lockutils [req-045ac1be-9d54-4639-a9a0-15d4e17ad96a req-c1e1e230-fb68-4f9f-a6d0-358bfb45209d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.040 247403 DEBUG oslo_concurrency.lockutils [req-045ac1be-9d54-4639-a9a0-15d4e17ad96a req-c1e1e230-fb68-4f9f-a6d0-358bfb45209d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.041 247403 DEBUG nova.compute.manager [req-045ac1be-9d54-4639-a9a0-15d4e17ad96a req-c1e1e230-fb68-4f9f-a6d0-358bfb45209d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.041 247403 WARNING nova.compute.manager [req-045ac1be-9d54-4639-a9a0-15d4e17ad96a req-c1e1e230-fb68-4f9f-a6d0-358bfb45209d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:21:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:21:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954807447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.398 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.403 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.503 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.649 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:21:51 np0005603621 nova_compute[247399]: 2026-01-31 08:21:51.650 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 03:21:52 np0005603621 nova_compute[247399]: 2026-01-31 08:21:52.205 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:52.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:52.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:53 np0005603621 nova_compute[247399]: 2026-01-31 08:21:53.747 247403 DEBUG nova.network.neutron [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updating instance_info_cache with network_info: [{"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:21:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.049 247403 DEBUG oslo_concurrency.lockutils [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Releasing lock "refresh_cache-26e4f031-8730-4987-a2df-239ce4b73191" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.051 247403 DEBUG nova.objects.instance [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'flavor' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:54 np0005603621 kernel: tap30e3cd2c-30 (unregistering): left promiscuous mode
Jan 31 03:21:54 np0005603621 NetworkManager[49013]: <info>  [1769847714.2898] device (tap30e3cd2c-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.295 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00423|binding|INFO|Releasing lport 30e3cd2c-30a0-4349-817c-7651950c3869 from this chassis (sb_readonly=0)
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00424|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 down in Southbound
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00425|binding|INFO|Removing iface tap30e3cd2c-30 ovn-installed in OVS
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.303 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:54 np0005603621 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 31 03:21:54 np0005603621 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006f.scope: Consumed 7.249s CPU time.
Jan 31 03:21:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:54 np0005603621 systemd-machined[212769]: Machine qemu-50-instance-0000006f terminated.
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.348 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:09:f8 10.100.0.3'], port_security=['fa:16:3e:54:09:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26e4f031-8730-4987-a2df-239ce4b73191', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=30e3cd2c-30a0-4349-817c-7651950c3869) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.350 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 30e3cd2c-30a0-4349-817c-7651950c3869 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 unbound from our chassis#033[00m
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.351 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.352 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[de213127-5e04-4c7e-92a7-88da0fb77745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.458 247403 INFO nova.virt.libvirt.driver [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance destroyed successfully.#033[00m
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.458 247403 DEBUG nova.objects.instance [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'numa_topology' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:21:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:54.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:54 np0005603621 kernel: tap30e3cd2c-30: entered promiscuous mode
Jan 31 03:21:54 np0005603621 systemd-udevd[319750]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:21:54 np0005603621 NetworkManager[49013]: <info>  [1769847714.5877] manager: (tap30e3cd2c-30): new Tun device (/org/freedesktop/NetworkManager/Devices/202)
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00426|binding|INFO|Claiming lport 30e3cd2c-30a0-4349-817c-7651950c3869 for this chassis.
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00427|binding|INFO|30e3cd2c-30a0-4349-817c-7651950c3869: Claiming fa:16:3e:54:09:f8 10.100.0.3
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.590 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:54 np0005603621 NetworkManager[49013]: <info>  [1769847714.5971] device (tap30e3cd2c-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:21:54 np0005603621 NetworkManager[49013]: <info>  [1769847714.5977] device (tap30e3cd2c-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00428|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 ovn-installed in OVS
Jan 31 03:21:54 np0005603621 nova_compute[247399]: 2026-01-31 08:21:54.600 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:54 np0005603621 systemd-machined[212769]: New machine qemu-51-instance-0000006f.
Jan 31 03:21:54 np0005603621 systemd[1]: Started Virtual Machine qemu-51-instance-0000006f.
Jan 31 03:21:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:21:54Z|00429|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 up in Southbound
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.659 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:09:f8 10.100.0.3'], port_security=['fa:16:3e:54:09:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26e4f031-8730-4987-a2df-239ce4b73191', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=30e3cd2c-30a0-4349-817c-7651950c3869) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.660 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 30e3cd2c-30a0-4349-817c-7651950c3869 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 bound to our chassis#033[00m
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.661 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:21:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:21:54.662 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[17c2a8e2-6e0a-4edd-b845-1ebfafd0d327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.038 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 26e4f031-8730-4987-a2df-239ce4b73191 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.039 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847715.0385723, 26e4f031-8730-4987-a2df-239ce4b73191 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.039 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.122 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.131 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.205 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.206 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847715.039842, 26e4f031-8730-4987-a2df-239ce4b73191 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.206 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Started (Lifecycle Event)#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.332 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.336 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:21:55 np0005603621 nova_compute[247399]: 2026-01-31 08:21:55.442 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:21:55 np0005603621 podman[319848]: 2026-01-31 08:21:55.512039122 +0000 UTC m=+0.064638067 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:21:55 np0005603621 podman[319847]: 2026-01-31 08:21:55.512062183 +0000 UTC m=+0.068850921 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 03:21:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 502 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 33 KiB/s wr, 81 op/s
Jan 31 03:21:56 np0005603621 nova_compute[247399]: 2026-01-31 08:21:56.412 247403 DEBUG nova.compute.manager [None req-03edd025-389e-46b8-a5ee-20f99eb59de2 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:21:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:56.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.208 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.400 247403 DEBUG nova.compute.manager [req-48018387-781f-4ee0-a815-fb1f9d94d862 req-f3b28f46-aa7a-451b-a866-95c22b855c04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.400 247403 DEBUG oslo_concurrency.lockutils [req-48018387-781f-4ee0-a815-fb1f9d94d862 req-f3b28f46-aa7a-451b-a866-95c22b855c04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.400 247403 DEBUG oslo_concurrency.lockutils [req-48018387-781f-4ee0-a815-fb1f9d94d862 req-f3b28f46-aa7a-451b-a866-95c22b855c04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.401 247403 DEBUG oslo_concurrency.lockutils [req-48018387-781f-4ee0-a815-fb1f9d94d862 req-f3b28f46-aa7a-451b-a866-95c22b855c04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.401 247403 DEBUG nova.compute.manager [req-48018387-781f-4ee0-a815-fb1f9d94d862 req-f3b28f46-aa7a-451b-a866-95c22b855c04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:21:57 np0005603621 nova_compute[247399]: 2026-01-31 08:21:57.401 247403 WARNING nova.compute.manager [req-48018387-781f-4ee0-a815-fb1f9d94d862 req-f3b28f46-aa7a-451b-a866-95c22b855c04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:21:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 490 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 37 KiB/s wr, 105 op/s
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.059129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847718059167, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 798, "num_deletes": 251, "total_data_size": 1117733, "memory_usage": 1148624, "flush_reason": "Manual Compaction"}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847718076465, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1105091, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45761, "largest_seqno": 46558, "table_properties": {"data_size": 1100998, "index_size": 1809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9498, "raw_average_key_size": 19, "raw_value_size": 1092719, "raw_average_value_size": 2295, "num_data_blocks": 79, "num_entries": 476, "num_filter_entries": 476, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847654, "oldest_key_time": 1769847654, "file_creation_time": 1769847718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17383 microseconds, and 3101 cpu microseconds.
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.076509) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1105091 bytes OK
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.076529) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.080349) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.080362) EVENT_LOG_v1 {"time_micros": 1769847718080357, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.080384) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1113780, prev total WAL file size 1113780, number of live WAL files 2.
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.080804) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1079KB)], [101(10007KB)]
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847718080867, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11353143, "oldest_snapshot_seqno": -1}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7183 keys, 9511883 bytes, temperature: kUnknown
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847718193879, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9511883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9466099, "index_size": 26728, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 186992, "raw_average_key_size": 26, "raw_value_size": 9340260, "raw_average_value_size": 1300, "num_data_blocks": 1047, "num_entries": 7183, "num_filter_entries": 7183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.194125) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9511883 bytes
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.200818) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.4 rd, 84.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.8 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(18.9) write-amplify(8.6) OK, records in: 7700, records dropped: 517 output_compression: NoCompression
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.200848) EVENT_LOG_v1 {"time_micros": 1769847718200834, "job": 60, "event": "compaction_finished", "compaction_time_micros": 113102, "compaction_time_cpu_micros": 17388, "output_level": 6, "num_output_files": 1, "total_output_size": 9511883, "num_input_records": 7700, "num_output_records": 7183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847718201132, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847718202301, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.080706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.202342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.202347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.202349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.202351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:21:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:21:58.202353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:21:58 np0005603621 nova_compute[247399]: 2026-01-31 08:21:58.476 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:21:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:21:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:21:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:21:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:21:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:21:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:21:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:21:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.675 247403 DEBUG nova.compute.manager [req-51d1d655-8aa8-489a-bce4-9c4d699768c9 req-876bd64c-659a-4a5b-ba1f-a4cc517790f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.676 247403 DEBUG oslo_concurrency.lockutils [req-51d1d655-8aa8-489a-bce4-9c4d699768c9 req-876bd64c-659a-4a5b-ba1f-a4cc517790f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.676 247403 DEBUG oslo_concurrency.lockutils [req-51d1d655-8aa8-489a-bce4-9c4d699768c9 req-876bd64c-659a-4a5b-ba1f-a4cc517790f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.676 247403 DEBUG oslo_concurrency.lockutils [req-51d1d655-8aa8-489a-bce4-9c4d699768c9 req-876bd64c-659a-4a5b-ba1f-a4cc517790f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.676 247403 DEBUG nova.compute.manager [req-51d1d655-8aa8-489a-bce4-9c4d699768c9 req-876bd64c-659a-4a5b-ba1f-a4cc517790f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:21:59 np0005603621 nova_compute[247399]: 2026-01-31 08:21:59.676 247403 WARNING nova.compute.manager [req-51d1d655-8aa8-489a-bce4-9c4d699768c9 req-876bd64c-659a-4a5b-ba1f-a4cc517790f0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:21:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 172 op/s
Jan 31 03:22:00 np0005603621 nova_compute[247399]: 2026-01-31 08:22:00.001 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:22:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:00.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:22:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:00.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 31K writes, 121K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 10K syncs, 2.99 writes per sync, written: 0.11 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5104 writes, 19K keys, 5104 commit groups, 1.0 writes per commit group, ingest: 22.14 MB, 0.04 MB/s#012Interval WAL: 5104 writes, 1959 syncs, 2.61 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.504 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.504 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.505 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.505 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.505 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.506 247403 INFO nova.compute.manager [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Terminating instance#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.507 247403 DEBUG nova.compute.manager [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:22:01 np0005603621 kernel: tap30e3cd2c-30 (unregistering): left promiscuous mode
Jan 31 03:22:01 np0005603621 NetworkManager[49013]: <info>  [1769847721.5990] device (tap30e3cd2c-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.605 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:01Z|00430|binding|INFO|Releasing lport 30e3cd2c-30a0-4349-817c-7651950c3869 from this chassis (sb_readonly=0)
Jan 31 03:22:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:01Z|00431|binding|INFO|Setting lport 30e3cd2c-30a0-4349-817c-7651950c3869 down in Southbound
Jan 31 03:22:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:01Z|00432|binding|INFO|Removing iface tap30e3cd2c-30 ovn-installed in OVS
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.608 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.613 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:01 np0005603621 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 31 03:22:01 np0005603621 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006f.scope: Consumed 7.067s CPU time.
Jan 31 03:22:01 np0005603621 systemd-machined[212769]: Machine qemu-51-instance-0000006f terminated.
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.728 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.737 247403 INFO nova.virt.libvirt.driver [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Instance destroyed successfully.#033[00m
Jan 31 03:22:01 np0005603621 nova_compute[247399]: 2026-01-31 08:22:01.738 247403 DEBUG nova.objects.instance [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'resources' on Instance uuid 26e4f031-8730-4987-a2df-239ce4b73191 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 11 KiB/s wr, 131 op/s
Jan 31 03:22:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:02.009 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:09:f8 10.100.0.3'], port_security=['fa:16:3e:54:09:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '26e4f031-8730-4987-a2df-239ce4b73191', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=30e3cd2c-30a0-4349-817c-7651950c3869) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:02.011 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 30e3cd2c-30a0-4349-817c-7651950c3869 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 unbound from our chassis#033[00m
Jan 31 03:22:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:02.012 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:22:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:02.013 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3ac2f334-4192-41a0-af85-5d94c0523622]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.258 247403 DEBUG nova.virt.libvirt.vif [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-716187142',display_name='tempest-ServerRescueTestJSON-server-716187142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-716187142',id=111,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-06wkdhgj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:21:56Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=26e4f031-8730-4987-a2df-239ce4b73191,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.258 247403 DEBUG nova.network.os_vif_util [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "30e3cd2c-30a0-4349-817c-7651950c3869", "address": "fa:16:3e:54:09:f8", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30e3cd2c-30", "ovs_interfaceid": "30e3cd2c-30a0-4349-817c-7651950c3869", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.259 247403 DEBUG nova.network.os_vif_util [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.259 247403 DEBUG os_vif [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.261 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30e3cd2c-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.262 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.265 247403 INFO os_vif [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:54:09:f8,bridge_name='br-int',has_traffic_filtering=True,id=30e3cd2c-30a0-4349-817c-7651950c3869,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30e3cd2c-30')#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.469 247403 DEBUG nova.compute.manager [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.470 247403 DEBUG oslo_concurrency.lockutils [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.470 247403 DEBUG oslo_concurrency.lockutils [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.470 247403 DEBUG oslo_concurrency.lockutils [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.471 247403 DEBUG nova.compute.manager [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.471 247403 WARNING nova.compute.manager [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.471 247403 DEBUG nova.compute.manager [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.471 247403 DEBUG oslo_concurrency.lockutils [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.471 247403 DEBUG oslo_concurrency.lockutils [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.472 247403 DEBUG oslo_concurrency.lockutils [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.472 247403 DEBUG nova.compute.manager [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:02 np0005603621 nova_compute[247399]: 2026-01-31 08:22:02.472 247403 WARNING nova.compute.manager [req-5c9ac274-8a07-41c9-aa1e-a33ec0359555 req-fab7ccd4-86a6-49c2-9c75-05528b90ca7f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:22:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:02.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 18 KiB/s wr, 137 op/s
Jan 31 03:22:04 np0005603621 nova_compute[247399]: 2026-01-31 08:22:04.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:04.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:04 np0005603621 nova_compute[247399]: 2026-01-31 08:22:04.621 247403 INFO nova.virt.libvirt.driver [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Deleting instance files /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191_del#033[00m
Jan 31 03:22:04 np0005603621 nova_compute[247399]: 2026-01-31 08:22:04.621 247403 INFO nova.virt.libvirt.driver [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Deletion of /var/lib/nova/instances/26e4f031-8730-4987-a2df-239ce4b73191_del complete#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.875 247403 INFO nova.compute.manager [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Took 4.37 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.876 247403 DEBUG oslo.service.loopingcall [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.877 247403 DEBUG nova.compute.manager [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.877 247403 DEBUG nova.network.neutron [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.883 247403 DEBUG nova.compute.manager [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.883 247403 DEBUG oslo_concurrency.lockutils [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.884 247403 DEBUG oslo_concurrency.lockutils [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.884 247403 DEBUG oslo_concurrency.lockutils [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.884 247403 DEBUG nova.compute.manager [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.884 247403 DEBUG nova.compute.manager [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-unplugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.884 247403 DEBUG nova.compute.manager [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.885 247403 DEBUG oslo_concurrency.lockutils [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "26e4f031-8730-4987-a2df-239ce4b73191-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.885 247403 DEBUG oslo_concurrency.lockutils [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.885 247403 DEBUG oslo_concurrency.lockutils [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.886 247403 DEBUG nova.compute.manager [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] No waiting events found dispatching network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:05 np0005603621 nova_compute[247399]: 2026-01-31 08:22:05.886 247403 WARNING nova.compute.manager [req-4509b0c5-a5e9-4407-a1f8-115682271242 req-40c20e9e-a599-4378-8c2f-1540551a5f95 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received unexpected event network-vif-plugged-30e3cd2c-30a0-4349-817c-7651950c3869 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:22:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 455 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 102 op/s
Jan 31 03:22:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:06.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:06.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:07 np0005603621 nova_compute[247399]: 2026-01-31 08:22:07.264 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 105 op/s
Jan 31 03:22:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:08.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:08.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:09 np0005603621 nova_compute[247399]: 2026-01-31 08:22:09.187 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:09 np0005603621 nova_compute[247399]: 2026-01-31 08:22:09.560 247403 DEBUG nova.network.neutron [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:09 np0005603621 nova_compute[247399]: 2026-01-31 08:22:09.640 247403 DEBUG nova.compute.manager [req-fa134fbc-d894-4956-b338-f5acab8ba6e4 req-a4ad60cd-3fc8-4c13-970e-57f655590d15 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Received event network-vif-deleted-30e3cd2c-30a0-4349-817c-7651950c3869 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:09 np0005603621 nova_compute[247399]: 2026-01-31 08:22:09.640 247403 INFO nova.compute.manager [req-fa134fbc-d894-4956-b338-f5acab8ba6e4 req-a4ad60cd-3fc8-4c13-970e-57f655590d15 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Neutron deleted interface 30e3cd2c-30a0-4349-817c-7651950c3869; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:22:09 np0005603621 nova_compute[247399]: 2026-01-31 08:22:09.640 247403 DEBUG nova.network.neutron [req-fa134fbc-d894-4956-b338-f5acab8ba6e4 req-a4ad60cd-3fc8-4c13-970e-57f655590d15 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 376 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 13 KiB/s wr, 101 op/s
Jan 31 03:22:10 np0005603621 nova_compute[247399]: 2026-01-31 08:22:10.281 247403 INFO nova.compute.manager [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Took 4.40 seconds to deallocate network for instance.#033[00m
Jan 31 03:22:10 np0005603621 nova_compute[247399]: 2026-01-31 08:22:10.288 247403 DEBUG nova.compute.manager [req-fa134fbc-d894-4956-b338-f5acab8ba6e4 req-a4ad60cd-3fc8-4c13-970e-57f655590d15 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Detach interface failed, port_id=30e3cd2c-30a0-4349-817c-7651950c3869, reason: Instance 26e4f031-8730-4987-a2df-239ce4b73191 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:22:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:10.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:10 np0005603621 nova_compute[247399]: 2026-01-31 08:22:10.786 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:10 np0005603621 nova_compute[247399]: 2026-01-31 08:22:10.787 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:11 np0005603621 nova_compute[247399]: 2026-01-31 08:22:11.105 247403 DEBUG oslo_concurrency.processutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:11 np0005603621 nova_compute[247399]: 2026-01-31 08:22:11.197 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/271207730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:11 np0005603621 nova_compute[247399]: 2026-01-31 08:22:11.568 247403 DEBUG oslo_concurrency.processutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:11 np0005603621 nova_compute[247399]: 2026-01-31 08:22:11.576 247403 DEBUG nova.compute.provider_tree [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:22:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 03:22:11 np0005603621 nova_compute[247399]: 2026-01-31 08:22:11.785 247403 DEBUG nova.scheduler.client.report [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:22:11 np0005603621 nova_compute[247399]: 2026-01-31 08:22:11.878 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 376 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 8.6 KiB/s wr, 29 op/s
Jan 31 03:22:12 np0005603621 nova_compute[247399]: 2026-01-31 08:22:12.268 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:12.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:12.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:13 np0005603621 nova_compute[247399]: 2026-01-31 08:22:13.083 247403 INFO nova.scheduler.client.report [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Deleted allocations for instance 26e4f031-8730-4987-a2df-239ce4b73191#033[00m
Jan 31 03:22:13 np0005603621 nova_compute[247399]: 2026-01-31 08:22:13.339 247403 DEBUG oslo_concurrency.lockutils [None req-fd60739f-3212-427c-8dfa-c479ecc80b4a a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "26e4f031-8730-4987-a2df-239ce4b73191" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 376 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 8.6 KiB/s wr, 29 op/s
Jan 31 03:22:14 np0005603621 nova_compute[247399]: 2026-01-31 08:22:14.189 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:14.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:14.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 376 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.9 KiB/s wr, 23 op/s
Jan 31 03:22:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:16.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:22:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:16.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:22:16 np0005603621 nova_compute[247399]: 2026-01-31 08:22:16.735 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847721.7352023, 26e4f031-8730-4987-a2df-239ce4b73191 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:16 np0005603621 nova_compute[247399]: 2026-01-31 08:22:16.736 247403 INFO nova.compute.manager [-] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.052 247403 DEBUG nova.compute.manager [None req-5556acf5-61b4-4cae-b695-5bd26fbf099a - - - - - -] [instance: 26e4f031-8730-4987-a2df-239ce4b73191] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.110 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.111 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.111 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.111 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.111 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.112 247403 INFO nova.compute.manager [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Terminating instance#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.113 247403 DEBUG nova.compute.manager [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:22:17 np0005603621 kernel: tapf45c5fd8-45 (unregistering): left promiscuous mode
Jan 31 03:22:17 np0005603621 NetworkManager[49013]: <info>  [1769847737.2554] device (tapf45c5fd8-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:17Z|00433|binding|INFO|Releasing lport f45c5fd8-45be-479f-bfb8-5305390417f3 from this chassis (sb_readonly=0)
Jan 31 03:22:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:17Z|00434|binding|INFO|Setting lport f45c5fd8-45be-479f-bfb8-5305390417f3 down in Southbound
Jan 31 03:22:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:17Z|00435|binding|INFO|Removing iface tapf45c5fd8-45 ovn-installed in OVS
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.269 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.270 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:17 np0005603621 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 31 03:22:17 np0005603621 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000069.scope: Consumed 15.809s CPU time.
Jan 31 03:22:17 np0005603621 systemd-machined[212769]: Machine qemu-47-instance-00000069 terminated.
Jan 31 03:22:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:17.340 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:55:8c 10.100.0.10'], port_security=['fa:16:3e:78:55:8c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'b6bf273c-d5a3-4f02-bddd-465a846a764d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df6e7a91-2b55-4315-a605-78d32dbfee77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '29d7f464a8694725aa9692aac772c256', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a90bfd45-70a0-49a1-8926-c539bffb0c4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b645f3cd-3282-44b6-817d-693b5aef0523, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f45c5fd8-45be-479f-bfb8-5305390417f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:17.341 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f45c5fd8-45be-479f-bfb8-5305390417f3 in datapath df6e7a91-2b55-4315-a605-78d32dbfee77 unbound from our chassis#033[00m
Jan 31 03:22:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:17.342 159734 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network df6e7a91-2b55-4315-a605-78d32dbfee77 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 31 03:22:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:17.343 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[94056923-1a1e-49b5-83f5-cddc247549ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.352 247403 INFO nova.virt.libvirt.driver [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Instance destroyed successfully.#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.352 247403 DEBUG nova.objects.instance [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lazy-loading 'resources' on Instance uuid b6bf273c-d5a3-4f02-bddd-465a846a764d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.593 247403 DEBUG nova.virt.libvirt.vif [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:19:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1598984785',display_name='tempest-ServerRescueTestJSON-server-1598984785',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1598984785',id=105,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='29d7f464a8694725aa9692aac772c256',ramdisk_id='',reservation_id='r-7pc5icog',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-476946386',owner_user_name='tempest-ServerRescueTestJSON-476946386-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:47Z,user_data=None,user_id='a8897cd859ff4a79a1a16eaee71d22ed',uuid=b6bf273c-d5a3-4f02-bddd-465a846a764d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.594 247403 DEBUG nova.network.os_vif_util [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converting VIF {"id": "f45c5fd8-45be-479f-bfb8-5305390417f3", "address": "fa:16:3e:78:55:8c", "network": {"id": "df6e7a91-2b55-4315-a605-78d32dbfee77", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1486638082-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "29d7f464a8694725aa9692aac772c256", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf45c5fd8-45", "ovs_interfaceid": "f45c5fd8-45be-479f-bfb8-5305390417f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.595 247403 DEBUG nova.network.os_vif_util [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.595 247403 DEBUG os_vif [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.597 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.597 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf45c5fd8-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.635 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:22:17 np0005603621 nova_compute[247399]: 2026-01-31 08:22:17.641 247403 INFO os_vif [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:55:8c,bridge_name='br-int',has_traffic_filtering=True,id=f45c5fd8-45be-479f-bfb8-5305390417f3,network=Network(df6e7a91-2b55-4315-a605-78d32dbfee77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf45c5fd8-45')#033[00m
Jan 31 03:22:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 376 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.9 KiB/s wr, 23 op/s
Jan 31 03:22:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:18.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:18.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.089 247403 DEBUG nova.compute.manager [req-1d0e379f-33db-4bdd-b2b4-1204149f5243 req-18846651-cd06-428e-b571-b909d4b02aec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-unplugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.089 247403 DEBUG oslo_concurrency.lockutils [req-1d0e379f-33db-4bdd-b2b4-1204149f5243 req-18846651-cd06-428e-b571-b909d4b02aec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.089 247403 DEBUG oslo_concurrency.lockutils [req-1d0e379f-33db-4bdd-b2b4-1204149f5243 req-18846651-cd06-428e-b571-b909d4b02aec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.090 247403 DEBUG oslo_concurrency.lockutils [req-1d0e379f-33db-4bdd-b2b4-1204149f5243 req-18846651-cd06-428e-b571-b909d4b02aec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.090 247403 DEBUG nova.compute.manager [req-1d0e379f-33db-4bdd-b2b4-1204149f5243 req-18846651-cd06-428e-b571-b909d4b02aec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-unplugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.090 247403 DEBUG nova.compute.manager [req-1d0e379f-33db-4bdd-b2b4-1204149f5243 req-18846651-cd06-428e-b571-b909d4b02aec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-unplugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:19.491 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:19 np0005603621 nova_compute[247399]: 2026-01-31 08:22:19.492 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:19.492 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:22:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 352 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Jan 31 03:22:20 np0005603621 nova_compute[247399]: 2026-01-31 08:22:20.416 247403 INFO nova.virt.libvirt.driver [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Deleting instance files /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d_del#033[00m
Jan 31 03:22:20 np0005603621 nova_compute[247399]: 2026-01-31 08:22:20.417 247403 INFO nova.virt.libvirt.driver [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Deletion of /var/lib/nova/instances/b6bf273c-d5a3-4f02-bddd-465a846a764d_del complete#033[00m
Jan 31 03:22:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:20.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:20.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.008 247403 INFO nova.compute.manager [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Took 3.89 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.009 247403 DEBUG oslo.service.loopingcall [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.009 247403 DEBUG nova.compute.manager [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.009 247403 DEBUG nova.network.neutron [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.226 247403 DEBUG nova.compute.manager [req-d197f17e-6739-4acd-ad2c-508314d368a3 req-ff0fce35-cf17-4551-a4ad-69c34e5de88b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.226 247403 DEBUG oslo_concurrency.lockutils [req-d197f17e-6739-4acd-ad2c-508314d368a3 req-ff0fce35-cf17-4551-a4ad-69c34e5de88b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.226 247403 DEBUG oslo_concurrency.lockutils [req-d197f17e-6739-4acd-ad2c-508314d368a3 req-ff0fce35-cf17-4551-a4ad-69c34e5de88b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.227 247403 DEBUG oslo_concurrency.lockutils [req-d197f17e-6739-4acd-ad2c-508314d368a3 req-ff0fce35-cf17-4551-a4ad-69c34e5de88b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.227 247403 DEBUG nova.compute.manager [req-d197f17e-6739-4acd-ad2c-508314d368a3 req-ff0fce35-cf17-4551-a4ad-69c34e5de88b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] No waiting events found dispatching network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:21 np0005603621 nova_compute[247399]: 2026-01-31 08:22:21.227 247403 WARNING nova.compute.manager [req-d197f17e-6739-4acd-ad2c-508314d368a3 req-ff0fce35-cf17-4551-a4ad-69c34e5de88b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received unexpected event network-vif-plugged-f45c5fd8-45be-479f-bfb8-5305390417f3 for instance with vm_state rescued and task_state deleting.#033[00m
Jan 31 03:22:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 339 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 256 KiB/s wr, 26 op/s
Jan 31 03:22:22 np0005603621 nova_compute[247399]: 2026-01-31 08:22:22.248 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:22:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:22.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:22.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:22 np0005603621 nova_compute[247399]: 2026-01-31 08:22:22.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:22 np0005603621 nova_compute[247399]: 2026-01-31 08:22:22.782 247403 DEBUG nova.network.neutron [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:23 np0005603621 nova_compute[247399]: 2026-01-31 08:22:23.218 247403 DEBUG nova.compute.manager [req-1d702b07-422b-408d-8a3d-22abd6abd82f req-96e562c0-8f6a-4d34-8d81-7249a99947c0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Received event network-vif-deleted-f45c5fd8-45be-479f-bfb8-5305390417f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:23 np0005603621 nova_compute[247399]: 2026-01-31 08:22:23.218 247403 INFO nova.compute.manager [req-1d702b07-422b-408d-8a3d-22abd6abd82f req-96e562c0-8f6a-4d34-8d81-7249a99947c0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Neutron deleted interface f45c5fd8-45be-479f-bfb8-5305390417f3; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:22:23 np0005603621 nova_compute[247399]: 2026-01-31 08:22:23.218 247403 DEBUG nova.network.neutron [req-1d702b07-422b-408d-8a3d-22abd6abd82f req-96e562c0-8f6a-4d34-8d81-7249a99947c0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:23 np0005603621 nova_compute[247399]: 2026-01-31 08:22:23.462 247403 INFO nova.compute.manager [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Took 2.45 seconds to deallocate network for instance.#033[00m
Jan 31 03:22:23 np0005603621 nova_compute[247399]: 2026-01-31 08:22:23.737 247403 DEBUG nova.compute.manager [req-1d702b07-422b-408d-8a3d-22abd6abd82f req-96e562c0-8f6a-4d34-8d81-7249a99947c0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Detach interface failed, port_id=f45c5fd8-45be-479f-bfb8-5305390417f3, reason: Instance b6bf273c-d5a3-4f02-bddd-465a846a764d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:22:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.235 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.236 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.331 247403 DEBUG oslo_concurrency.processutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:24.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649380850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.744 247403 DEBUG oslo_concurrency.processutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.749 247403 DEBUG nova.compute.provider_tree [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:22:24 np0005603621 nova_compute[247399]: 2026-01-31 08:22:24.810 247403 DEBUG nova.scheduler.client.report [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:22:25 np0005603621 nova_compute[247399]: 2026-01-31 08:22:25.094 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 03:22:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:26.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:26 np0005603621 podman[320096]: 2026-01-31 08:22:26.556757165 +0000 UTC m=+0.089648655 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 31 03:22:26 np0005603621 podman[320097]: 2026-01-31 08:22:26.579490241 +0000 UTC m=+0.112380551 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:22:26 np0005603621 nova_compute[247399]: 2026-01-31 08:22:26.745 247403 INFO nova.scheduler.client.report [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Deleted allocations for instance b6bf273c-d5a3-4f02-bddd-465a846a764d#033[00m
Jan 31 03:22:27 np0005603621 nova_compute[247399]: 2026-01-31 08:22:27.505 247403 DEBUG oslo_concurrency.lockutils [None req-096dbbd5-7147-4eb0-b194-0fb5aff97d80 a8897cd859ff4a79a1a16eaee71d22ed 29d7f464a8694725aa9692aac772c256 - - default default] Lock "b6bf273c-d5a3-4f02-bddd-465a846a764d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:27 np0005603621 nova_compute[247399]: 2026-01-31 08:22:27.671 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:22:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.272 247403 INFO nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance failed to shutdown in 60 seconds.#033[00m
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:22:28 np0005603621 kernel: tapfb9bab50-6b (unregistering): left promiscuous mode
Jan 31 03:22:28 np0005603621 NetworkManager[49013]: <info>  [1769847748.4540] device (tapfb9bab50-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:28 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:28Z|00436|binding|INFO|Releasing lport fb9bab50-6b70-4dec-9b6d-9d961083b257 from this chassis (sb_readonly=0)
Jan 31 03:22:28 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:28Z|00437|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 down in Southbound
Jan 31 03:22:28 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:28Z|00438|binding|INFO|Removing iface tapfb9bab50-6b ovn-installed in OVS
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.460 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.466 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.494 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:28 np0005603621 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 31 03:22:28 np0005603621 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000006e.scope: Consumed 1.042s CPU time.
Jan 31 03:22:28 np0005603621 systemd-machined[212769]: Machine qemu-48-instance-0000006e terminated.
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:28.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.614 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:49:09 10.100.0.8'], port_security=['fa:16:3e:69:49:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb9bab50-6b70-4dec-9b6d-9d961083b257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.615 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb9bab50-6b70-4dec-9b6d-9d961083b257 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab unbound from our chassis#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.677 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.689 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[86a8bf72-810d-4441-a2d5-2d9cc798529a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.701 247403 INFO nova.virt.libvirt.driver [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance destroyed successfully.#033[00m
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.701 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'numa_topology' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.714 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb490a4-ddcb-48f2-8934-d54d1729f614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.716 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[08f1d324-ff61-442e-a792-36887ace9b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.737 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[45a970a7-e890-475d-ac98-a53341cd9659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.749 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8e499a74-a480-4c44-b3ed-62ed7a9a522a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320437, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.757 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e54fe978-0ce1-4294-91ee-74ddde94c000]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320438, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320438, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.759 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.760 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.764 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.765 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.765 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:28.765 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:22:28 np0005603621 nova_compute[247399]: 2026-01-31 08:22:28.841 247403 INFO nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Attempting a stable device rescue#033[00m
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2b6bb9ad-d65d-41ad-8dff-eeae5013ed18 does not exist
Jan 31 03:22:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6d4fce83-8aa5-4746-a3ee-930ccd1d8ca0 does not exist
Jan 31 03:22:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bbfc83f7-7171-412e-9c76-ecfb11ea7d35 does not exist
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:22:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.175 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.179 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.179 247403 INFO nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Creating image(s)#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.205 247403 DEBUG nova.storage.rbd_utils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.210 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:29 np0005603621 podman[320597]: 2026-01-31 08:22:29.335474803 +0000 UTC m=+0.021202659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.443 247403 DEBUG nova.storage.rbd_utils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.475 247403 DEBUG nova.storage.rbd_utils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.479 247403 DEBUG oslo_concurrency.lockutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "0819b56ce6eb0189c93f5ffc619576a9d678f0cf" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.480 247403 DEBUG oslo_concurrency.lockutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "0819b56ce6eb0189c93f5ffc619576a9d678f0cf" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:29 np0005603621 podman[320597]: 2026-01-31 08:22:29.604933292 +0000 UTC m=+0.290661128 container create 306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_satoshi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:22:29 np0005603621 systemd[1]: Started libpod-conmon-306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929.scope.
Jan 31 03:22:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:22:29 np0005603621 podman[320597]: 2026-01-31 08:22:29.891228191 +0000 UTC m=+0.576956057 container init 306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_satoshi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:22:29 np0005603621 podman[320597]: 2026-01-31 08:22:29.899174582 +0000 UTC m=+0.584902418 container start 306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.898 247403 DEBUG nova.virt.libvirt.imagebackend [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/4659652d-d50c-40d4-aa17-bb68b7e73113/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/4659652d-d50c-40d4-aa17-bb68b7e73113/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 03:22:29 np0005603621 boring_satoshi[320650]: 167 167
Jan 31 03:22:29 np0005603621 systemd[1]: libpod-306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929.scope: Deactivated successfully.
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.958 247403 DEBUG nova.virt.libvirt.imagebackend [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Selected location: {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/4659652d-d50c-40d4-aa17-bb68b7e73113/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 03:22:29 np0005603621 nova_compute[247399]: 2026-01-31 08:22:29.959 247403 DEBUG nova.storage.rbd_utils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] cloning images/4659652d-d50c-40d4-aa17-bb68b7e73113@snap to None/23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 03:22:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.058 247403 DEBUG nova.compute.manager [req-00e1f7c2-65c4-4e77-8919-8249ee206a57 req-4abcb13c-92b7-4976-8b36-be4d31d43288 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.058 247403 DEBUG oslo_concurrency.lockutils [req-00e1f7c2-65c4-4e77-8919-8249ee206a57 req-4abcb13c-92b7-4976-8b36-be4d31d43288 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.059 247403 DEBUG oslo_concurrency.lockutils [req-00e1f7c2-65c4-4e77-8919-8249ee206a57 req-4abcb13c-92b7-4976-8b36-be4d31d43288 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.059 247403 DEBUG oslo_concurrency.lockutils [req-00e1f7c2-65c4-4e77-8919-8249ee206a57 req-4abcb13c-92b7-4976-8b36-be4d31d43288 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.059 247403 DEBUG nova.compute.manager [req-00e1f7c2-65c4-4e77-8919-8249ee206a57 req-4abcb13c-92b7-4976-8b36-be4d31d43288 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.059 247403 WARNING nova.compute.manager [req-00e1f7c2-65c4-4e77-8919-8249ee206a57 req-4abcb13c-92b7-4976-8b36-be4d31d43288 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state rescuing.
Jan 31 03:22:30 np0005603621 podman[320597]: 2026-01-31 08:22:30.104567622 +0000 UTC m=+0.790295478 container attach 306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_satoshi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:22:30 np0005603621 podman[320597]: 2026-01-31 08:22:30.105167671 +0000 UTC m=+0.790895507 container died 306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8e7c4f06af0326aa718e6fc509c138d14f13771f8aa2bbf7afb7fcbee5326620-merged.mount: Deactivated successfully.
Jan 31 03:22:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:30.504 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:22:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:30.504 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:22:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:30.506 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:22:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:30.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.531 247403 DEBUG oslo_concurrency.lockutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "0819b56ce6eb0189c93f5ffc619576a9d678f0cf" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:22:30 np0005603621 podman[320597]: 2026-01-31 08:22:30.570456769 +0000 UTC m=+1.256184605 container remove 306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_satoshi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.586 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'migration_context' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.638 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.641 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Start _get_guest_xml network_info=[{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "vif_mac": "fa:16:3e:69:49:09"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4659652d-d50c-40d4-aa17-bb68b7e73113', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-afd3af33-f7a2-401c-9400-47dff922d4f4', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'afd3af33-f7a2-401c-9400-47dff922d4f4', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'attached_at': '', 'detached_at': '', 'volume_id': 'afd3af33-f7a2-401c-9400-47dff922d4f4', 'serial': 'afd3af33-f7a2-401c-9400-47dff922d4f4'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': 'd62d58cb-62c8-4814-ab07-94c3742a7f65', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.641 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'resources' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:22:30 np0005603621 systemd[1]: libpod-conmon-306e93da7067ebc99687ab01a18c8e9f353e596a58ce064c6c529ff9c28fb929.scope: Deactivated successfully.
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.713 247403 WARNING nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:22:30 np0005603621 podman[320762]: 2026-01-31 08:22:30.720175055 +0000 UTC m=+0.052285698 container create 3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.720 247403 DEBUG nova.virt.libvirt.host [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.721 247403 DEBUG nova.virt.libvirt.host [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.726 247403 DEBUG nova.virt.libvirt.host [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.726 247403 DEBUG nova.virt.libvirt.host [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.728 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.728 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.728 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.729 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.729 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.729 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.729 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.729 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.729 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.730 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.730 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.730 247403 DEBUG nova.virt.hardware [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.730 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:22:30 np0005603621 systemd[1]: Started libpod-conmon-3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6.scope.
Jan 31 03:22:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:22:30 np0005603621 podman[320762]: 2026-01-31 08:22:30.689125568 +0000 UTC m=+0.021236231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:22:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a774c3b42a31160b7db4a2d8c20783fd581efc5b3308a981f9402c10fbfafd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a774c3b42a31160b7db4a2d8c20783fd581efc5b3308a981f9402c10fbfafd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a774c3b42a31160b7db4a2d8c20783fd581efc5b3308a981f9402c10fbfafd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a774c3b42a31160b7db4a2d8c20783fd581efc5b3308a981f9402c10fbfafd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91a774c3b42a31160b7db4a2d8c20783fd581efc5b3308a981f9402c10fbfafd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:30 np0005603621 podman[320762]: 2026-01-31 08:22:30.812609487 +0000 UTC m=+0.144720160 container init 3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haslett, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:22:30 np0005603621 podman[320762]: 2026-01-31 08:22:30.818157952 +0000 UTC m=+0.150268595 container start 3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:22:30 np0005603621 podman[320762]: 2026-01-31 08:22:30.824466741 +0000 UTC m=+0.156577394 container attach 3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:22:30 np0005603621 nova_compute[247399]: 2026-01-31 08:22:30.884 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:22:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071010343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.327 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.457 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:31 np0005603621 peaceful_haslett[320778]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:22:31 np0005603621 peaceful_haslett[320778]: --> relative data size: 1.0
Jan 31 03:22:31 np0005603621 peaceful_haslett[320778]: --> All data devices are unavailable
Jan 31 03:22:31 np0005603621 systemd[1]: libpod-3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6.scope: Deactivated successfully.
Jan 31 03:22:31 np0005603621 podman[320762]: 2026-01-31 08:22:31.613851689 +0000 UTC m=+0.945962332 container died 3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:22:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-91a774c3b42a31160b7db4a2d8c20783fd581efc5b3308a981f9402c10fbfafd-merged.mount: Deactivated successfully.
Jan 31 03:22:31 np0005603621 podman[320762]: 2026-01-31 08:22:31.791390532 +0000 UTC m=+1.123501185 container remove 3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_haslett, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 03:22:31 np0005603621 systemd[1]: libpod-conmon-3362313f860b17027a19bffdbe6c579625af2bd0759eab26969a76d191386ec6.scope: Deactivated successfully.
Jan 31 03:22:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:22:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3891557994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.897 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.900 247403 DEBUG nova.virt.libvirt.vif [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-668266523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-668266523',id=110,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:21:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-6l4wap3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:21:15Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=23200b4a-e522-43bf-a83e-cb2f9bb31571,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "vif_mac": "fa:16:3e:69:49:09"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.900 247403 DEBUG nova.network.os_vif_util [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "vif_mac": "fa:16:3e:69:49:09"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.901 247403 DEBUG nova.network.os_vif_util [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:22:31 np0005603621 nova_compute[247399]: 2026-01-31 08:22:31.902 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'pci_devices' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.009 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <uuid>23200b4a-e522-43bf-a83e-cb2f9bb31571</uuid>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <name>instance-0000006e</name>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-668266523</nova:name>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:22:30</nova:creationTime>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:user uuid="f4d66dd0b7ff443cbcdb6e2c9f5c4c8c">tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member</nova:user>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:project uuid="cf024d54545b4af882a87c721105742a">tempest-ServerBootFromVolumeStableRescueTest-468517745</nova:project>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <nova:port uuid="fb9bab50-6b70-4dec-9b6d-9d961083b257">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <entry name="serial">23200b4a-e522-43bf-a83e-cb2f9bb31571</entry>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <entry name="uuid">23200b4a-e522-43bf-a83e-cb2f9bb31571</entry>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-afd3af33-f7a2-401c-9400-47dff922d4f4">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <serial>afd3af33-f7a2-401c-9400-47dff922d4f4</serial>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.rescue">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <boot order="1"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:69:49:09"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <target dev="tapfb9bab50-6b"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/console.log" append="off"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:22:32 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:22:32 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:22:32 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:22:32 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.015 247403 INFO nova.virt.libvirt.driver [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance destroyed successfully.#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.248 247403 DEBUG nova.compute.manager [req-3cc7e714-4a02-4c71-ab75-6c6093b40d7c req-d3f109e1-1467-4591-aa62-9cdfd0bc8013 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.249 247403 DEBUG oslo_concurrency.lockutils [req-3cc7e714-4a02-4c71-ab75-6c6093b40d7c req-d3f109e1-1467-4591-aa62-9cdfd0bc8013 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.249 247403 DEBUG oslo_concurrency.lockutils [req-3cc7e714-4a02-4c71-ab75-6c6093b40d7c req-d3f109e1-1467-4591-aa62-9cdfd0bc8013 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.249 247403 DEBUG oslo_concurrency.lockutils [req-3cc7e714-4a02-4c71-ab75-6c6093b40d7c req-d3f109e1-1467-4591-aa62-9cdfd0bc8013 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.250 247403 DEBUG nova.compute.manager [req-3cc7e714-4a02-4c71-ab75-6c6093b40d7c req-d3f109e1-1467-4591-aa62-9cdfd0bc8013 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.250 247403 WARNING nova.compute.manager [req-3cc7e714-4a02-4c71-ab75-6c6093b40d7c req-d3f109e1-1467-4591-aa62-9cdfd0bc8013 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.270 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.270 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.270 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.271 247403 DEBUG nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No VIF found with MAC fa:16:3e:69:49:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.271 247403 INFO nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Using config drive#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.301 247403 DEBUG nova.storage.rbd_utils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.350 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769847737.349293, b6bf273c-d5a3-4f02-bddd-465a846a764d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.350 247403 INFO nova.compute.manager [-] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:22:32 np0005603621 podman[321007]: 2026-01-31 08:22:32.303431853 +0000 UTC m=+0.022584632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.440 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'ec2_ids' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:32 np0005603621 podman[321007]: 2026-01-31 08:22:32.475943597 +0000 UTC m=+0.195096356 container create adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:22:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:32.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.587 247403 DEBUG nova.compute.manager [None req-dd72771c-3281-4c75-a2d5-87711569ad2a - - - - - -] [instance: b6bf273c-d5a3-4f02-bddd-465a846a764d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:32 np0005603621 systemd[1]: Started libpod-conmon-adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f.scope.
Jan 31 03:22:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.635 247403 DEBUG nova.objects.instance [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'keypairs' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:32 np0005603621 nova_compute[247399]: 2026-01-31 08:22:32.711 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:32 np0005603621 podman[321007]: 2026-01-31 08:22:32.910700304 +0000 UTC m=+0.629853093 container init adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:22:32 np0005603621 podman[321007]: 2026-01-31 08:22:32.919377517 +0000 UTC m=+0.638530266 container start adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:22:32 np0005603621 nostalgic_swanson[321041]: 167 167
Jan 31 03:22:32 np0005603621 systemd[1]: libpod-adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f.scope: Deactivated successfully.
Jan 31 03:22:33 np0005603621 podman[321007]: 2026-01-31 08:22:33.233776612 +0000 UTC m=+0.952929371 container attach adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:22:33 np0005603621 podman[321007]: 2026-01-31 08:22:33.235208597 +0000 UTC m=+0.954361366 container died adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:22:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6c7449d52fb89d18291b892cdecbc8614cdd805f1031986e0387248b2a215f61-merged.mount: Deactivated successfully.
Jan 31 03:22:33 np0005603621 nova_compute[247399]: 2026-01-31 08:22:33.808 247403 INFO nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Creating config drive at /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config.rescue#033[00m
Jan 31 03:22:33 np0005603621 nova_compute[247399]: 2026-01-31 08:22:33.811 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpp63uqt6x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:33 np0005603621 podman[321007]: 2026-01-31 08:22:33.865363619 +0000 UTC m=+1.584516378 container remove adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:22:33 np0005603621 systemd[1]: libpod-conmon-adf4e314b954a0a26f475178adcabea3a48dcb69db4dcebf023f4b27b91fc53f.scope: Deactivated successfully.
Jan 31 03:22:33 np0005603621 nova_compute[247399]: 2026-01-31 08:22:33.931 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpp63uqt6x" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:33 np0005603621 nova_compute[247399]: 2026-01-31 08:22:33.964 247403 DEBUG nova.storage.rbd_utils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:33 np0005603621 nova_compute[247399]: 2026-01-31 08:22:33.968 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config.rescue 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.5 MiB/s wr, 73 op/s
Jan 31 03:22:34 np0005603621 podman[321083]: 2026-01-31 08:22:33.983015765 +0000 UTC m=+0.023342817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:22:34 np0005603621 podman[321083]: 2026-01-31 08:22:34.129579752 +0000 UTC m=+0.169906784 container create 27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:22:34 np0005603621 systemd[1]: Started libpod-conmon-27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8.scope.
Jan 31 03:22:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:22:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0db071ce9fe7147a9612b6e416567170ea2bfa8ccca7370aa2a7c8b02a612e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0db071ce9fe7147a9612b6e416567170ea2bfa8ccca7370aa2a7c8b02a612e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0db071ce9fe7147a9612b6e416567170ea2bfa8ccca7370aa2a7c8b02a612e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0db071ce9fe7147a9612b6e416567170ea2bfa8ccca7370aa2a7c8b02a612e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.239 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603621 podman[321083]: 2026-01-31 08:22:34.322827581 +0000 UTC m=+0.363154633 container init 27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:22:34 np0005603621 podman[321083]: 2026-01-31 08:22:34.328282973 +0000 UTC m=+0.368610005 container start 27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:22:34 np0005603621 podman[321083]: 2026-01-31 08:22:34.361195479 +0000 UTC m=+0.401522531 container attach 27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 31 03:22:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:34.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:34.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.574 247403 DEBUG oslo_concurrency.processutils [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config.rescue 23200b4a-e522-43bf-a83e-cb2f9bb31571_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.575 247403 INFO nova.virt.libvirt.driver [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Deleting local config drive /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:22:34 np0005603621 kernel: tapfb9bab50-6b: entered promiscuous mode
Jan 31 03:22:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:34Z|00439|binding|INFO|Claiming lport fb9bab50-6b70-4dec-9b6d-9d961083b257 for this chassis.
Jan 31 03:22:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:34Z|00440|binding|INFO|fb9bab50-6b70-4dec-9b6d-9d961083b257: Claiming fa:16:3e:69:49:09 10.100.0.8
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.620 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603621 NetworkManager[49013]: <info>  [1769847754.6269] manager: (tapfb9bab50-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Jan 31 03:22:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:34Z|00441|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 ovn-installed in OVS
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.634 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603621 systemd-machined[212769]: New machine qemu-52-instance-0000006e.
Jan 31 03:22:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:34Z|00442|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 up in Southbound
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.646 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:49:09 10.100.0.8'], port_security=['fa:16:3e:69:49:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb9bab50-6b70-4dec-9b6d-9d961083b257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.647 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb9bab50-6b70-4dec-9b6d-9d961083b257 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab bound to our chassis#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.649 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:22:34 np0005603621 systemd[1]: Started Virtual Machine qemu-52-instance-0000006e.
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.659 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f25dfba4-a15d-47d5-812d-41b57fa9cef6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603621 systemd-udevd[321144]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:34 np0005603621 NetworkManager[49013]: <info>  [1769847754.6872] device (tapfb9bab50-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:22:34 np0005603621 NetworkManager[49013]: <info>  [1769847754.6880] device (tapfb9bab50-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.688 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[60003b02-fe39-4424-ad8a-2ee284ab006b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.691 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[41f3ee7a-b2b9-4130-baec-cd47513460dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.713 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2036da53-4a4b-46d9-af4c-4c42d2779d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.726 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[49d796c1-8593-423f-85f2-ec766693acf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321155, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.737 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9afd61-d573-4c35-b006-dea255b0e78b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321156, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321156, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.738 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.739 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603621 nova_compute[247399]: 2026-01-31 08:22:34.740 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.741 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.741 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.741 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:34.741 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]: {
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:    "0": [
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:        {
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "devices": [
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "/dev/loop3"
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            ],
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "lv_name": "ceph_lv0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "lv_size": "7511998464",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "name": "ceph_lv0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "tags": {
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.cluster_name": "ceph",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.crush_device_class": "",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.encrypted": "0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.osd_id": "0",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.type": "block",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:                "ceph.vdo": "0"
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            },
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "type": "block",
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:            "vg_name": "ceph_vg0"
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:        }
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]:    ]
Jan 31 03:22:35 np0005603621 condescending_euclid[321123]: }
Jan 31 03:22:35 np0005603621 systemd[1]: libpod-27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8.scope: Deactivated successfully.
Jan 31 03:22:35 np0005603621 podman[321083]: 2026-01-31 08:22:35.11097334 +0000 UTC m=+1.151300382 container died 27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:22:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b0db071ce9fe7147a9612b6e416567170ea2bfa8ccca7370aa2a7c8b02a612e0-merged.mount: Deactivated successfully.
Jan 31 03:22:35 np0005603621 podman[321083]: 2026-01-31 08:22:35.331455516 +0000 UTC m=+1.371782548 container remove 27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:22:35 np0005603621 systemd[1]: libpod-conmon-27df663bf766686ef4b8c9515f93084b443b5d596774af0a7059fbd5f8ec79a8.scope: Deactivated successfully.
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.477 247403 DEBUG nova.compute.manager [req-eaf73564-a17e-4def-a6be-38f929c7d0d4 req-61c08df5-bb88-4d30-873d-2cffa0feabf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.478 247403 DEBUG oslo_concurrency.lockutils [req-eaf73564-a17e-4def-a6be-38f929c7d0d4 req-61c08df5-bb88-4d30-873d-2cffa0feabf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.478 247403 DEBUG oslo_concurrency.lockutils [req-eaf73564-a17e-4def-a6be-38f929c7d0d4 req-61c08df5-bb88-4d30-873d-2cffa0feabf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.479 247403 DEBUG oslo_concurrency.lockutils [req-eaf73564-a17e-4def-a6be-38f929c7d0d4 req-61c08df5-bb88-4d30-873d-2cffa0feabf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.479 247403 DEBUG nova.compute.manager [req-eaf73564-a17e-4def-a6be-38f929c7d0d4 req-61c08df5-bb88-4d30-873d-2cffa0feabf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.479 247403 WARNING nova.compute.manager [req-eaf73564-a17e-4def-a6be-38f929c7d0d4 req-61c08df5-bb88-4d30-873d-2cffa0feabf4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state rescuing.
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.511 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 23200b4a-e522-43bf-a83e-cb2f9bb31571 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.512 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847755.5107899, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.512 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Resumed (Lifecycle Event)
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.516 247403 DEBUG nova.compute.manager [None req-88ecf94a-adbc-48d2-9fe9-f2c5fe632065 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.699 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.702 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.828 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847755.513664, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:22:35 np0005603621 nova_compute[247399]: 2026-01-31 08:22:35.828 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Started (Lifecycle Event)
Jan 31 03:22:35 np0005603621 podman[321373]: 2026-01-31 08:22:35.834266018 +0000 UTC m=+0.042289914 container create 769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bouman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:22:35 np0005603621 systemd[1]: Started libpod-conmon-769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c.scope.
Jan 31 03:22:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:22:35 np0005603621 podman[321373]: 2026-01-31 08:22:35.811691966 +0000 UTC m=+0.019715892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:22:35 np0005603621 podman[321373]: 2026-01-31 08:22:35.916219509 +0000 UTC m=+0.124243435 container init 769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 03:22:35 np0005603621 podman[321373]: 2026-01-31 08:22:35.924608893 +0000 UTC m=+0.132632789 container start 769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:22:35 np0005603621 focused_bouman[321390]: 167 167
Jan 31 03:22:35 np0005603621 systemd[1]: libpod-769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c.scope: Deactivated successfully.
Jan 31 03:22:35 np0005603621 podman[321373]: 2026-01-31 08:22:35.933168263 +0000 UTC m=+0.141192159 container attach 769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bouman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:22:35 np0005603621 podman[321373]: 2026-01-31 08:22:35.933848744 +0000 UTC m=+0.141872640 container died 769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 03:22:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2d97c987443302db8cd8e0956cf52b19977125d54feafcd090bd29c5f15d04cc-merged.mount: Deactivated successfully.
Jan 31 03:22:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Jan 31 03:22:36 np0005603621 podman[321373]: 2026-01-31 08:22:36.014502055 +0000 UTC m=+0.222525941 container remove 769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:22:36 np0005603621 systemd[1]: libpod-conmon-769bd36a98151abf28eb430514671cf21d5c06e221e648a4749b0e8b37e44a8c.scope: Deactivated successfully.
Jan 31 03:22:36 np0005603621 nova_compute[247399]: 2026-01-31 08:22:36.041 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:22:36 np0005603621 nova_compute[247399]: 2026-01-31 08:22:36.047 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:22:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:36Z|00443|binding|INFO|Releasing lport dad27cfe-7e8a-4f55-a945-07f9cae848c1 from this chassis (sb_readonly=0)
Jan 31 03:22:36 np0005603621 nova_compute[247399]: 2026-01-31 08:22:36.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:22:36 np0005603621 podman[321413]: 2026-01-31 08:22:36.187200097 +0000 UTC m=+0.070341097 container create f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:22:36 np0005603621 nova_compute[247399]: 2026-01-31 08:22:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:22:36 np0005603621 systemd[1]: Started libpod-conmon-f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7.scope.
Jan 31 03:22:36 np0005603621 podman[321413]: 2026-01-31 08:22:36.146965839 +0000 UTC m=+0.030106859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:22:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:22:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd1c050245c518e67eda92992bd56a76de0f36bb267d91282777d9adedfe4d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd1c050245c518e67eda92992bd56a76de0f36bb267d91282777d9adedfe4d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd1c050245c518e67eda92992bd56a76de0f36bb267d91282777d9adedfe4d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dd1c050245c518e67eda92992bd56a76de0f36bb267d91282777d9adedfe4d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:22:36 np0005603621 podman[321413]: 2026-01-31 08:22:36.280494295 +0000 UTC m=+0.163635315 container init f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:22:36 np0005603621 podman[321413]: 2026-01-31 08:22:36.286505735 +0000 UTC m=+0.169646735 container start f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:22:36 np0005603621 podman[321413]: 2026-01-31 08:22:36.293377182 +0000 UTC m=+0.176518212 container attach f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:22:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]: {
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:        "osd_id": 0,
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:        "type": "bluestore"
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]:    }
Jan 31 03:22:37 np0005603621 happy_engelbart[321430]: }
Jan 31 03:22:37 np0005603621 systemd[1]: libpod-f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7.scope: Deactivated successfully.
Jan 31 03:22:37 np0005603621 podman[321413]: 2026-01-31 08:22:37.140212149 +0000 UTC m=+1.023353169 container died f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:22:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3dd1c050245c518e67eda92992bd56a76de0f36bb267d91282777d9adedfe4d5-merged.mount: Deactivated successfully.
Jan 31 03:22:37 np0005603621 podman[321413]: 2026-01-31 08:22:37.398371452 +0000 UTC m=+1.281512452 container remove f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:22:37 np0005603621 systemd[1]: libpod-conmon-f596afa4a059e5065898dbc05d787413a285e4400b94ccb7dd31933fcffcbed7.scope: Deactivated successfully.
Jan 31 03:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.713 247403 DEBUG nova.compute.manager [req-cfae0e09-ca56-4f07-8934-b4935d3fd9cb req-1eaa259a-f31b-40f4-a6bd-23f49a1bff31 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.714 247403 DEBUG oslo_concurrency.lockutils [req-cfae0e09-ca56-4f07-8934-b4935d3fd9cb req-1eaa259a-f31b-40f4-a6bd-23f49a1bff31 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.715 247403 DEBUG oslo_concurrency.lockutils [req-cfae0e09-ca56-4f07-8934-b4935d3fd9cb req-1eaa259a-f31b-40f4-a6bd-23f49a1bff31 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.715 247403 DEBUG oslo_concurrency.lockutils [req-cfae0e09-ca56-4f07-8934-b4935d3fd9cb req-1eaa259a-f31b-40f4-a6bd-23f49a1bff31 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.715 247403 DEBUG nova.compute.manager [req-cfae0e09-ca56-4f07-8934-b4935d3fd9cb req-1eaa259a-f31b-40f4-a6bd-23f49a1bff31 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.715 247403 WARNING nova.compute.manager [req-cfae0e09-ca56-4f07-8934-b4935d3fd9cb req-1eaa259a-f31b-40f4-a6bd-23f49a1bff31 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state rescued and task_state None.
Jan 31 03:22:37 np0005603621 nova_compute[247399]: 2026-01-31 08:22:37.716 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:22:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev eefd5943-1af8-49b4-8cba-5db0ac558ad5 does not exist
Jan 31 03:22:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev af8234ea-0e58-4bb8-80ea-11af4b495594 does not exist
Jan 31 03:22:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6af56833-b9cd-4cde-9710-a56f40720bd3 does not exist
Jan 31 03:22:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 295 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 470 KiB/s rd, 13 KiB/s wr, 40 op/s
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:22:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:38.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:22:38
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'vms']
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:22:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:22:39 np0005603621 nova_compute[247399]: 2026-01-31 08:22:39.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:39 np0005603621 nova_compute[247399]: 2026-01-31 08:22:39.250 247403 INFO nova.compute.manager [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Unrescuing#033[00m
Jan 31 03:22:39 np0005603621 nova_compute[247399]: 2026-01-31 08:22:39.250 247403 DEBUG oslo_concurrency.lockutils [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:22:39 np0005603621 nova_compute[247399]: 2026-01-31 08:22:39.251 247403 DEBUG oslo_concurrency.lockutils [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquired lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:22:39 np0005603621 nova_compute[247399]: 2026-01-31 08:22:39.251 247403 DEBUG nova.network.neutron [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:22:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 383 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 186 op/s
Jan 31 03:22:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:40.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:40.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 388 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 222 op/s
Jan 31 03:22:42 np0005603621 nova_compute[247399]: 2026-01-31 08:22:42.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:42 np0005603621 nova_compute[247399]: 2026-01-31 08:22:42.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:42 np0005603621 nova_compute[247399]: 2026-01-31 08:22:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:22:42 np0005603621 nova_compute[247399]: 2026-01-31 08:22:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:22:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:42.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:22:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:22:42 np0005603621 nova_compute[247399]: 2026-01-31 08:22:42.750 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.551 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.552 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.552 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.552 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f02cbbe1-1133-4659-a065-630c53ee2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.635 247403 DEBUG nova.network.neutron [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.936 247403 DEBUG oslo_concurrency.lockutils [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Releasing lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:22:43 np0005603621 nova_compute[247399]: 2026-01-31 08:22:43.937 247403 DEBUG nova.objects.instance [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'flavor' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 388 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 225 op/s
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.244 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 kernel: tapfb9bab50-6b (unregistering): left promiscuous mode
Jan 31 03:22:44 np0005603621 NetworkManager[49013]: <info>  [1769847764.3162] device (tapfb9bab50-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.321 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00444|binding|INFO|Releasing lport fb9bab50-6b70-4dec-9b6d-9d961083b257 from this chassis (sb_readonly=0)
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00445|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 down in Southbound
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00446|binding|INFO|Removing iface tapfb9bab50-6b ovn-installed in OVS
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.324 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.329 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.345 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:49:09 10.100.0.8'], port_security=['fa:16:3e:69:49:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb9bab50-6b70-4dec-9b6d-9d961083b257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.347 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb9bab50-6b70-4dec-9b6d-9d961083b257 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab unbound from our chassis#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.349 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.359 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ba7341-bc8e-43e0-bf0b-6d960b07f716]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 31 03:22:44 np0005603621 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006e.scope: Consumed 9.599s CPU time.
Jan 31 03:22:44 np0005603621 systemd-machined[212769]: Machine qemu-52-instance-0000006e terminated.
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.382 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f3779c15-33d9-4206-b6af-c8ea8904ffda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.386 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f6318ec7-0b94-4049-8ca5-4f3ff769c545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.406 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[186df645-1bce-4a8e-954f-152e9620999a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.419 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b83168ec-88fd-46a1-81ee-d16bd10319b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321532, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.430 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc60c09e-ff50-40d8-b4c3-0266620b1e88]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321533, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321533, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.431 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.435 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.435 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.436 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.436 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.514 247403 INFO nova.virt.libvirt.driver [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance destroyed successfully.#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.515 247403 DEBUG nova.objects.instance [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'numa_topology' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:44.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:44.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:44 np0005603621 systemd-udevd[321522]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:22:44 np0005603621 kernel: tapfb9bab50-6b: entered promiscuous mode
Jan 31 03:22:44 np0005603621 NetworkManager[49013]: <info>  [1769847764.6450] manager: (tapfb9bab50-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.646 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00447|binding|INFO|Claiming lport fb9bab50-6b70-4dec-9b6d-9d961083b257 for this chassis.
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00448|binding|INFO|fb9bab50-6b70-4dec-9b6d-9d961083b257: Claiming fa:16:3e:69:49:09 10.100.0.8
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00449|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 ovn-installed in OVS
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.653 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 NetworkManager[49013]: <info>  [1769847764.6541] device (tapfb9bab50-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:22:44 np0005603621 NetworkManager[49013]: <info>  [1769847764.6554] device (tapfb9bab50-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.661 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 systemd-machined[212769]: New machine qemu-53-instance-0000006e.
Jan 31 03:22:44 np0005603621 systemd[1]: Started Virtual Machine qemu-53-instance-0000006e.
Jan 31 03:22:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:22:44Z|00450|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 up in Southbound
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.749 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:49:09 10.100.0.8'], port_security=['fa:16:3e:69:49:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb9bab50-6b70-4dec-9b6d-9d961083b257) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.751 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb9bab50-6b70-4dec-9b6d-9d961083b257 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab bound to our chassis#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.753 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.767 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2a61e47a-91c3-4171-8c0e-855562cdd55a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.790 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[99fc072a-0af2-413e-b03e-28f1cca4baf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.792 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[90d04eb0-cb23-4274-a91e-ea1805f39ea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.815 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6c502f47-0b0b-4051-b390-df94db0782a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.827 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[985ac2a8-6026-4db6-a6b9-bba7501f253b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321571, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.837 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8d583f-6bba-46a8-bae7-83046c35e5c4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321572, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321572, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.839 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.841 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.842 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.842 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:22:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:22:44.842 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.932 247403 DEBUG nova.compute.manager [req-ad99795e-faf4-49f0-af63-1815ba22ec26 req-4254c915-fa13-472a-99a8-c7dde2eea8a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.933 247403 DEBUG oslo_concurrency.lockutils [req-ad99795e-faf4-49f0-af63-1815ba22ec26 req-4254c915-fa13-472a-99a8-c7dde2eea8a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.933 247403 DEBUG oslo_concurrency.lockutils [req-ad99795e-faf4-49f0-af63-1815ba22ec26 req-4254c915-fa13-472a-99a8-c7dde2eea8a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.934 247403 DEBUG oslo_concurrency.lockutils [req-ad99795e-faf4-49f0-af63-1815ba22ec26 req-4254c915-fa13-472a-99a8-c7dde2eea8a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.935 247403 DEBUG nova.compute.manager [req-ad99795e-faf4-49f0-af63-1815ba22ec26 req-4254c915-fa13-472a-99a8-c7dde2eea8a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:44 np0005603621 nova_compute[247399]: 2026-01-31 08:22:44.935 247403 WARNING nova.compute.manager [req-ad99795e-faf4-49f0-af63-1815ba22ec26 req-4254c915-fa13-472a-99a8-c7dde2eea8a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.192 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 23200b4a-e522-43bf-a83e-cb2f9bb31571 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.192 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847765.1917017, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.192 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.275 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.278 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.339 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.339 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847765.1924284, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.339 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Started (Lifecycle Event)#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.389 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.393 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:22:45 np0005603621 nova_compute[247399]: 2026-01-31 08:22:45.419 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:22:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 388 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Jan 31 03:22:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:46.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:46.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.144 247403 DEBUG nova.compute.manager [req-2f530857-d2b9-4c51-8e1d-65899f932036 req-2c811c21-e352-4d31-98bb-5439c3377c9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.144 247403 DEBUG oslo_concurrency.lockutils [req-2f530857-d2b9-4c51-8e1d-65899f932036 req-2c811c21-e352-4d31-98bb-5439c3377c9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.145 247403 DEBUG oslo_concurrency.lockutils [req-2f530857-d2b9-4c51-8e1d-65899f932036 req-2c811c21-e352-4d31-98bb-5439c3377c9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.145 247403 DEBUG oslo_concurrency.lockutils [req-2f530857-d2b9-4c51-8e1d-65899f932036 req-2c811c21-e352-4d31-98bb-5439c3377c9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.145 247403 DEBUG nova.compute.manager [req-2f530857-d2b9-4c51-8e1d-65899f932036 req-2c811c21-e352-4d31-98bb-5439c3377c9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.145 247403 WARNING nova.compute.manager [req-2f530857-d2b9-4c51-8e1d-65899f932036 req-2c811c21-e352-4d31-98bb-5439c3377c9a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.590 247403 DEBUG nova.compute.manager [None req-e99c3ed8-3f5d-43f8-9948-8669f4ca1898 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:22:47 np0005603621 nova_compute[247399]: 2026-01-31 08:22:47.753 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 388 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Jan 31 03:22:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:48.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:48.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007359024578913084 of space, bias 1.0, pg target 2.207707373673925 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8618352177554439 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.395 247403 DEBUG nova.compute.manager [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.395 247403 DEBUG oslo_concurrency.lockutils [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.396 247403 DEBUG oslo_concurrency.lockutils [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.396 247403 DEBUG oslo_concurrency.lockutils [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.396 247403 DEBUG nova.compute.manager [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.396 247403 WARNING nova.compute.manager [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.396 247403 DEBUG nova.compute.manager [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.397 247403 DEBUG oslo_concurrency.lockutils [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.397 247403 DEBUG oslo_concurrency.lockutils [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.397 247403 DEBUG oslo_concurrency.lockutils [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.397 247403 DEBUG nova.compute.manager [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.397 247403 WARNING nova.compute.manager [req-9a7533c9-1bd9-46e6-8517-1ca200aa86f0 req-cdd48469-e483-4ba0-a9c8-a3be866b8c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:22:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.891 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updating instance_info_cache with network_info: [{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.940 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.940 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.940 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.941 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.941 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.941 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.941 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:22:49 np0005603621 nova_compute[247399]: 2026-01-31 08:22:49.941 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 412 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.6 MiB/s wr, 261 op/s
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.019 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.020 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.020 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.020 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.020 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220984349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.501 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:50.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.958 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.959 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.963 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:50 np0005603621 nova_compute[247399]: 2026-01-31 08:22:50.963 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.112 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.114 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4023MB free_disk=20.811019897460938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.114 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.114 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.575 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.575 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.575 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.575 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:22:51 np0005603621 nova_compute[247399]: 2026-01-31 08:22:51.699 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 415 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 31 03:22:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831063405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:52 np0005603621 nova_compute[247399]: 2026-01-31 08:22:52.105 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:52 np0005603621 nova_compute[247399]: 2026-01-31 08:22:52.109 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:22:52 np0005603621 nova_compute[247399]: 2026-01-31 08:22:52.152 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:22:52 np0005603621 nova_compute[247399]: 2026-01-31 08:22:52.242 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:22:52 np0005603621 nova_compute[247399]: 2026-01-31 08:22:52.242 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:52.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:52 np0005603621 nova_compute[247399]: 2026-01-31 08:22:52.757 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Jan 31 03:22:54 np0005603621 nova_compute[247399]: 2026-01-31 08:22:54.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:54.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:54.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.169 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.170 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.243 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.340 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.340 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.347 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.348 247403 INFO nova.compute.claims [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:22:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:56.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:22:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:56.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.570 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:22:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723936629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.985 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:56 np0005603621 nova_compute[247399]: 2026-01-31 08:22:56.989 247403 DEBUG nova.compute.provider_tree [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.028 247403 DEBUG nova.scheduler.client.report [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.080 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.081 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.178 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.179 247403 DEBUG nova.network.neutron [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.216 247403 INFO nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.245 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.397 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.398 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.399 247403 INFO nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Creating image(s)#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.426 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.455 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.485 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.491 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.515 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:57 np0005603621 podman[321773]: 2026-01-31 08:22:57.536495701 +0000 UTC m=+0.085202105 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:22:57 np0005603621 podman[321777]: 2026-01-31 08:22:57.539558538 +0000 UTC m=+0.088262662 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.551 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.552 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.553 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.553 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.578 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.582 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.603 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.743 247403 DEBUG nova.policy [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4d66dd0b7ff443cbcdb6e2c9f5c4c8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf024d54545b4af882a87c721105742a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.758 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.848 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:22:57 np0005603621 nova_compute[247399]: 2026-01-31 08:22:57.926 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] resizing rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:22:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 421 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 213 op/s
Jan 31 03:22:58 np0005603621 nova_compute[247399]: 2026-01-31 08:22:58.106 247403 DEBUG nova.objects.instance [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'migration_context' on Instance uuid dbf4e573-8e19-4920-aab9-c290d7d8eeec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:22:58 np0005603621 nova_compute[247399]: 2026-01-31 08:22:58.144 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:22:58 np0005603621 nova_compute[247399]: 2026-01-31 08:22:58.144 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Ensure instance console log exists: /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:22:58 np0005603621 nova_compute[247399]: 2026-01-31 08:22:58.145 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:22:58 np0005603621 nova_compute[247399]: 2026-01-31 08:22:58.145 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:22:58 np0005603621 nova_compute[247399]: 2026-01-31 08:22:58.145 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:22:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:22:58.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:22:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:22:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:22:58.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:22:59 np0005603621 nova_compute[247399]: 2026-01-31 08:22:59.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:22:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:22:59 np0005603621 nova_compute[247399]: 2026-01-31 08:22:59.606 247403 DEBUG nova.network.neutron [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Successfully created port: 3765398c-c6d8-4598-98d8-447d2d17b347 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:23:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 333 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 301 op/s
Jan 31 03:23:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:00.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:00 np0005603621 nova_compute[247399]: 2026-01-31 08:23:00.843 247403 DEBUG nova.network.neutron [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Successfully updated port: 3765398c-c6d8-4598-98d8-447d2d17b347 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:23:00 np0005603621 nova_compute[247399]: 2026-01-31 08:23:00.906 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "refresh_cache-dbf4e573-8e19-4920-aab9-c290d7d8eeec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:00 np0005603621 nova_compute[247399]: 2026-01-31 08:23:00.906 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquired lock "refresh_cache-dbf4e573-8e19-4920-aab9-c290d7d8eeec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:00 np0005603621 nova_compute[247399]: 2026-01-31 08:23:00.906 247403 DEBUG nova.network.neutron [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:23:01 np0005603621 nova_compute[247399]: 2026-01-31 08:23:01.375 247403 DEBUG nova.network.neutron [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:23:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 280 op/s
Jan 31 03:23:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:02.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:02 np0005603621 nova_compute[247399]: 2026-01-31 08:23:02.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:02 np0005603621 nova_compute[247399]: 2026-01-31 08:23:02.994 247403 DEBUG nova.network.neutron [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Updating instance_info_cache with network_info: [{"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.082 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Releasing lock "refresh_cache-dbf4e573-8e19-4920-aab9-c290d7d8eeec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.082 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Instance network_info: |[{"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.085 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Start _get_guest_xml network_info=[{"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.090 247403 WARNING nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.095 247403 DEBUG nova.virt.libvirt.host [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.096 247403 DEBUG nova.virt.libvirt.host [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.099 247403 DEBUG nova.virt.libvirt.host [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.099 247403 DEBUG nova.virt.libvirt.host [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.100 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.101 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.101 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.101 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.102 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.102 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.102 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.102 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.103 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.103 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.103 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.103 247403 DEBUG nova.virt.hardware [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.106 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.185 247403 DEBUG nova.compute.manager [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-changed-3765398c-c6d8-4598-98d8-447d2d17b347 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.186 247403 DEBUG nova.compute.manager [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Refreshing instance network info cache due to event network-changed-3765398c-c6d8-4598-98d8-447d2d17b347. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.186 247403 DEBUG oslo_concurrency.lockutils [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-dbf4e573-8e19-4920-aab9-c290d7d8eeec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.186 247403 DEBUG oslo_concurrency.lockutils [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-dbf4e573-8e19-4920-aab9-c290d7d8eeec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:03 np0005603621 nova_compute[247399]: 2026-01-31 08:23:03.186 247403 DEBUG nova.network.neutron [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Refreshing network info cache for port 3765398c-c6d8-4598-98d8-447d2d17b347 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:23:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 341 MiB data, 1007 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.7 MiB/s wr, 290 op/s
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.232 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.259 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.264 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:04.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:23:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2515438539' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.745 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.746 247403 DEBUG nova.virt.libvirt.vif [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:22:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1917640469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1917640469',id=115,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-k61p3csp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:22:57Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=dbf4e573-8e19-4920-aab9-c290d7d8eeec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.747 247403 DEBUG nova.network.os_vif_util [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.748 247403 DEBUG nova.network.os_vif_util [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.749 247403 DEBUG nova.objects.instance [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'pci_devices' on Instance uuid dbf4e573-8e19-4920-aab9-c290d7d8eeec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.777 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <uuid>dbf4e573-8e19-4920-aab9-c290d7d8eeec</uuid>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <name>instance-00000073</name>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1917640469</nova:name>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:23:03</nova:creationTime>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:user uuid="f4d66dd0b7ff443cbcdb6e2c9f5c4c8c">tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member</nova:user>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:project uuid="cf024d54545b4af882a87c721105742a">tempest-ServerBootFromVolumeStableRescueTest-468517745</nova:project>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <nova:port uuid="3765398c-c6d8-4598-98d8-447d2d17b347">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <entry name="serial">dbf4e573-8e19-4920-aab9-c290d7d8eeec</entry>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <entry name="uuid">dbf4e573-8e19-4920-aab9-c290d7d8eeec</entry>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk.config">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:6b:1d:95"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <target dev="tap3765398c-c6"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/console.log" append="off"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:23:04 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:23:04 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:23:04 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:23:04 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.778 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Preparing to wait for external event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.778 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.779 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.779 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.779 247403 DEBUG nova.virt.libvirt.vif [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:22:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1917640469',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1917640469',id=115,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-k61p3csp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:22:57Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=dbf4e573-8e19-4920-aab9-c290d7d8eeec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.780 247403 DEBUG nova.network.os_vif_util [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.780 247403 DEBUG nova.network.os_vif_util [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.781 247403 DEBUG os_vif [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.781 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.782 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.782 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.785 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3765398c-c6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.786 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3765398c-c6, col_values=(('external_ids', {'iface-id': '3765398c-c6d8-4598-98d8-447d2d17b347', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6b:1d:95', 'vm-uuid': 'dbf4e573-8e19-4920-aab9-c290d7d8eeec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:04 np0005603621 NetworkManager[49013]: <info>  [1769847784.7891] manager: (tap3765398c-c6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.790 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.797 247403 INFO os_vif [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6')#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.955 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.955 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.956 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No VIF found with MAC fa:16:3e:6b:1d:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:23:04 np0005603621 nova_compute[247399]: 2026-01-31 08:23:04.956 247403 INFO nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Using config drive#033[00m
Jan 31 03:23:05 np0005603621 nova_compute[247399]: 2026-01-31 08:23:05.624 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:23:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 341 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Jan 31 03:23:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:06.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:07 np0005603621 nova_compute[247399]: 2026-01-31 08:23:07.562 247403 INFO nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Creating config drive at /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/disk.config#033[00m
Jan 31 03:23:07 np0005603621 nova_compute[247399]: 2026-01-31 08:23:07.566 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8iqbxa58 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:07 np0005603621 nova_compute[247399]: 2026-01-31 08:23:07.689 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8iqbxa58" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:07 np0005603621 nova_compute[247399]: 2026-01-31 08:23:07.715 247403 DEBUG nova.storage.rbd_utils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] rbd image dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:23:07 np0005603621 nova_compute[247399]: 2026-01-31 08:23:07.718 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/disk.config dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 341 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:08.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:08.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:08 np0005603621 nova_compute[247399]: 2026-01-31 08:23:08.937 247403 DEBUG nova.network.neutron [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Updated VIF entry in instance network info cache for port 3765398c-c6d8-4598-98d8-447d2d17b347. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:23:08 np0005603621 nova_compute[247399]: 2026-01-31 08:23:08.937 247403 DEBUG nova.network.neutron [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Updating instance_info_cache with network_info: [{"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.343 247403 DEBUG oslo_concurrency.processutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/disk.config dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.343 247403 INFO nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Deleting local config drive /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec/disk.config because it was imported into RBD.#033[00m
Jan 31 03:23:09 np0005603621 kernel: tap3765398c-c6: entered promiscuous mode
Jan 31 03:23:09 np0005603621 NetworkManager[49013]: <info>  [1769847789.3855] manager: (tap3765398c-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Jan 31 03:23:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:23:09Z|00451|binding|INFO|Claiming lport 3765398c-c6d8-4598-98d8-447d2d17b347 for this chassis.
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:23:09Z|00452|binding|INFO|3765398c-c6d8-4598-98d8-447d2d17b347: Claiming fa:16:3e:6b:1d:95 10.100.0.6
Jan 31 03:23:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:23:09Z|00453|binding|INFO|Setting lport 3765398c-c6d8-4598-98d8-447d2d17b347 ovn-installed in OVS
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.394 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603621 systemd-udevd[322155]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:23:09 np0005603621 NetworkManager[49013]: <info>  [1769847789.4208] device (tap3765398c-c6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:23:09 np0005603621 NetworkManager[49013]: <info>  [1769847789.4217] device (tap3765398c-c6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:23:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:23:09Z|00454|binding|INFO|Setting lport 3765398c-c6d8-4598-98d8-447d2d17b347 up in Southbound
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.832 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.837 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:1d:95 10.100.0.6'], port_security=['fa:16:3e:6b:1d:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dbf4e573-8e19-4920-aab9-c290d7d8eeec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=3765398c-c6d8-4598-98d8-447d2d17b347) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.838 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 3765398c-c6d8-4598-98d8-447d2d17b347 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab bound to our chassis#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.840 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:23:09 np0005603621 systemd-machined[212769]: New machine qemu-54-instance-00000073.
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.850 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37730dd4-65d1-4228-9b9a-d7a1a8b306a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.852 247403 DEBUG oslo_concurrency.lockutils [req-7210b76a-b8ac-49f7-a932-b7d87fb33f9c req-118ddd99-90b2-47b3-8094-1a3a50e1b171 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-dbf4e573-8e19-4920-aab9-c290d7d8eeec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:23:09 np0005603621 systemd[1]: Started Virtual Machine qemu-54-instance-00000073.
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.868 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a43a35-b848-4d76-b796-268cbcf9641a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.871 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3e150bf0-5760-48a5-876c-358ae6eafa83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.895 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[88624c20-02af-4e94-8623-593ae43583d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.910 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[63c88192-fee3-4a51-badd-589e9c0d1e95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322172, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.922 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6bea25b7-67be-410b-adba-79a3860363a9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322173, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322173, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.923 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:09 np0005603621 nova_compute[247399]: 2026-01-31 08:23:09.925 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.926 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.926 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.927 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:09.927 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:23:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 341 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 236 op/s
Jan 31 03:23:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:10.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:10.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:10 np0005603621 nova_compute[247399]: 2026-01-31 08:23:10.922 247403 DEBUG nova.compute.manager [req-b7e6e314-9780-4165-b3be-682c51303969 req-626b4951-77c8-4cfb-98b9-8040a002e236 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:10 np0005603621 nova_compute[247399]: 2026-01-31 08:23:10.922 247403 DEBUG oslo_concurrency.lockutils [req-b7e6e314-9780-4165-b3be-682c51303969 req-626b4951-77c8-4cfb-98b9-8040a002e236 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:10 np0005603621 nova_compute[247399]: 2026-01-31 08:23:10.923 247403 DEBUG oslo_concurrency.lockutils [req-b7e6e314-9780-4165-b3be-682c51303969 req-626b4951-77c8-4cfb-98b9-8040a002e236 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:10 np0005603621 nova_compute[247399]: 2026-01-31 08:23:10.923 247403 DEBUG oslo_concurrency.lockutils [req-b7e6e314-9780-4165-b3be-682c51303969 req-626b4951-77c8-4cfb-98b9-8040a002e236 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:10 np0005603621 nova_compute[247399]: 2026-01-31 08:23:10.923 247403 DEBUG nova.compute.manager [req-b7e6e314-9780-4165-b3be-682c51303969 req-626b4951-77c8-4cfb-98b9-8040a002e236 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Processing event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.108 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.109 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847791.1072435, dbf4e573-8e19-4920-aab9-c290d7d8eeec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.109 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] VM Started (Lifecycle Event)#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.112 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.116 247403 INFO nova.virt.libvirt.driver [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Instance spawned successfully.#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.116 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.148 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.148 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.149 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.149 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.150 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.150 247403 DEBUG nova.virt.libvirt.driver [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.154 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.156 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.192 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.193 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847791.10754, dbf4e573-8e19-4920-aab9-c290d7d8eeec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.193 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.239 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.242 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847791.1122124, dbf4e573-8e19-4920-aab9-c290d7d8eeec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.242 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.266 247403 INFO nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Took 13.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.267 247403 DEBUG nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.293 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.296 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.365 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.433 247403 INFO nova.compute.manager [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Took 15.12 seconds to build instance.#033[00m
Jan 31 03:23:11 np0005603621 nova_compute[247399]: 2026-01-31 08:23:11.457 247403 DEBUG oslo_concurrency.lockutils [None req-b4fa39e0-a95c-4d6b-9b55-1bd8cf240c0f f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 341 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 149 op/s
Jan 31 03:23:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:12.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:13 np0005603621 nova_compute[247399]: 2026-01-31 08:23:13.245 247403 DEBUG nova.compute.manager [req-aaeebdbc-03a0-404d-b2fb-931abf3872bc req-7b8c75fe-d3a6-4ab2-85b0-6164bb31de59 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:23:13 np0005603621 nova_compute[247399]: 2026-01-31 08:23:13.245 247403 DEBUG oslo_concurrency.lockutils [req-aaeebdbc-03a0-404d-b2fb-931abf3872bc req-7b8c75fe-d3a6-4ab2-85b0-6164bb31de59 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:13 np0005603621 nova_compute[247399]: 2026-01-31 08:23:13.246 247403 DEBUG oslo_concurrency.lockutils [req-aaeebdbc-03a0-404d-b2fb-931abf3872bc req-7b8c75fe-d3a6-4ab2-85b0-6164bb31de59 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:13 np0005603621 nova_compute[247399]: 2026-01-31 08:23:13.246 247403 DEBUG oslo_concurrency.lockutils [req-aaeebdbc-03a0-404d-b2fb-931abf3872bc req-7b8c75fe-d3a6-4ab2-85b0-6164bb31de59 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:13 np0005603621 nova_compute[247399]: 2026-01-31 08:23:13.246 247403 DEBUG nova.compute.manager [req-aaeebdbc-03a0-404d-b2fb-931abf3872bc req-7b8c75fe-d3a6-4ab2-85b0-6164bb31de59 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] No waiting events found dispatching network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:23:13 np0005603621 nova_compute[247399]: 2026-01-31 08:23:13.246 247403 WARNING nova.compute.manager [req-aaeebdbc-03a0-404d-b2fb-931abf3872bc req-7b8c75fe-d3a6-4ab2-85b0-6164bb31de59 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received unexpected event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:23:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 308 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 828 KiB/s wr, 179 op/s
Jan 31 03:23:14 np0005603621 nova_compute[247399]: 2026-01-31 08:23:14.270 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:14 np0005603621 nova_compute[247399]: 2026-01-31 08:23:14.372 247403 DEBUG nova.compute.manager [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:23:14 np0005603621 nova_compute[247399]: 2026-01-31 08:23:14.486 247403 INFO nova.compute.manager [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] instance snapshotting#033[00m
Jan 31 03:23:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:14.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:14.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:14 np0005603621 nova_compute[247399]: 2026-01-31 08:23:14.834 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:15 np0005603621 nova_compute[247399]: 2026-01-31 08:23:15.135 247403 INFO nova.virt.libvirt.driver [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Beginning live snapshot process#033[00m
Jan 31 03:23:15 np0005603621 nova_compute[247399]: 2026-01-31 08:23:15.425 247403 DEBUG nova.virt.libvirt.imagebackend [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:23:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 308 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 151 op/s
Jan 31 03:23:16 np0005603621 nova_compute[247399]: 2026-01-31 08:23:16.030 247403 DEBUG nova.storage.rbd_utils [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] creating snapshot(3ac2f9ed2c3840f8b58402f51be8e510) on rbd image(dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:23:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:16.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 310 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 645 KiB/s wr, 144 op/s
Jan 31 03:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 31 03:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 31 03:23:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 31 03:23:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:18.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:18.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:19 np0005603621 nova_compute[247399]: 2026-01-31 08:23:19.270 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:19 np0005603621 nova_compute[247399]: 2026-01-31 08:23:19.860 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 344 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 161 op/s
Jan 31 03:23:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:20.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:21 np0005603621 nova_compute[247399]: 2026-01-31 08:23:21.769 247403 DEBUG nova.storage.rbd_utils [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] cloning vms/dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk@3ac2f9ed2c3840f8b58402f51be8e510 to images/27d563e8-5fbe-4653-b526-f8048559e6cd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:23:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 344 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 145 op/s
Jan 31 03:23:22 np0005603621 nova_compute[247399]: 2026-01-31 08:23:22.288 247403 DEBUG nova.storage.rbd_utils [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] flattening images/27d563e8-5fbe-4653-b526-f8048559e6cd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:23:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:22.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:22.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 363 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 892 KiB/s rd, 3.9 MiB/s wr, 76 op/s
Jan 31 03:23:24 np0005603621 nova_compute[247399]: 2026-01-31 08:23:24.357 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:24.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:24.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:24 np0005603621 nova_compute[247399]: 2026-01-31 08:23:24.821 247403 DEBUG nova.storage.rbd_utils [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] removing snapshot(3ac2f9ed2c3840f8b58402f51be8e510) on rbd image(dbf4e573-8e19-4920-aab9-c290d7d8eeec_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:23:24 np0005603621 nova_compute[247399]: 2026-01-31 08:23:24.862 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 31 03:23:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 31 03:23:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 31 03:23:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 426 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.3 MiB/s wr, 146 op/s
Jan 31 03:23:26 np0005603621 nova_compute[247399]: 2026-01-31 08:23:26.118 247403 DEBUG nova.storage.rbd_utils [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] creating snapshot(snap) on rbd image(27d563e8-5fbe-4653-b526-f8048559e6cd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:23:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:26.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:26.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:26.763 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:23:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:26.764 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:23:26 np0005603621 nova_compute[247399]: 2026-01-31 08:23:26.765 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 31 03:23:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 31 03:23:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 31 03:23:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 444 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.5 MiB/s wr, 155 op/s
Jan 31 03:23:28 np0005603621 ovn_controller[149152]: 2026-01-31T08:23:28Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:1d:95 10.100.0.6
Jan 31 03:23:28 np0005603621 ovn_controller[149152]: 2026-01-31T08:23:28Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:1d:95 10.100.0.6
Jan 31 03:23:28 np0005603621 podman[322417]: 2026-01-31 08:23:28.503529587 +0000 UTC m=+0.050531933 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:23:28 np0005603621 podman[322418]: 2026-01-31 08:23:28.521159023 +0000 UTC m=+0.067612241 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:23:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:28.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:28.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:29 np0005603621 nova_compute[247399]: 2026-01-31 08:23:29.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:29.768 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:23:29 np0005603621 nova_compute[247399]: 2026-01-31 08:23:29.864 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 460 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 8.3 MiB/s wr, 341 op/s
Jan 31 03:23:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:30.505 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:30.505 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:23:30.506 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:30.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:30.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 460 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.3 MiB/s wr, 319 op/s
Jan 31 03:23:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:32.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:32.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:33 np0005603621 nova_compute[247399]: 2026-01-31 08:23:33.453 247403 INFO nova.virt.libvirt.driver [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Snapshot image upload complete#033[00m
Jan 31 03:23:33 np0005603621 nova_compute[247399]: 2026-01-31 08:23:33.453 247403 INFO nova.compute.manager [None req-9e361c03-c1bf-4b5f-9303-19c0d895daea f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Took 18.97 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:23:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 460 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.9 MiB/s wr, 362 op/s
Jan 31 03:23:34 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 31 03:23:34 np0005603621 nova_compute[247399]: 2026-01-31 08:23:34.312 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 31 03:23:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:23:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:34.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:23:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:34.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:34 np0005603621 nova_compute[247399]: 2026-01-31 08:23:34.866 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 31 03:23:35 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 31 03:23:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 465 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.7 MiB/s wr, 317 op/s
Jan 31 03:23:36 np0005603621 nova_compute[247399]: 2026-01-31 08:23:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:36.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:36.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:37 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 301 op/s
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:23:38
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.mgr', 'backups', 'default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:23:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:38.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:38.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:23:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:23:39 np0005603621 nova_compute[247399]: 2026-01-31 08:23:39.313 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:23:39 np0005603621 podman[322735]: 2026-01-31 08:23:39.396343278 +0000 UTC m=+0.019625119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:23:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:39 np0005603621 podman[322735]: 2026-01-31 08:23:39.738992622 +0000 UTC m=+0.362274443 container create 4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:23:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:39 np0005603621 nova_compute[247399]: 2026-01-31 08:23:39.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 443 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 95 KiB/s wr, 181 op/s
Jan 31 03:23:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:23:40 np0005603621 systemd[1]: Started libpod-conmon-4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65.scope.
Jan 31 03:23:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:23:40 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 31 03:23:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:40.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:23:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:40.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:23:40 np0005603621 podman[322735]: 2026-01-31 08:23:40.699436519 +0000 UTC m=+1.322718370 container init 4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:23:40 np0005603621 podman[322735]: 2026-01-31 08:23:40.70582359 +0000 UTC m=+1.329105421 container start 4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:40 np0005603621 optimistic_chaum[322752]: 167 167
Jan 31 03:23:40 np0005603621 systemd[1]: libpod-4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65.scope: Deactivated successfully.
Jan 31 03:23:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:40 np0005603621 podman[322735]: 2026-01-31 08:23:40.995709282 +0000 UTC m=+1.618991403 container attach 4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaum, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:23:40 np0005603621 podman[322735]: 2026-01-31 08:23:40.997167709 +0000 UTC m=+1.620449530 container died 4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaum, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:23:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7be4a7b5c47bd5f10cbe63b68158a810abba13e4f0fda5d83aacdc7c85086347-merged.mount: Deactivated successfully.
Jan 31 03:23:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:23:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 443 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 95 KiB/s wr, 181 op/s
Jan 31 03:23:42 np0005603621 nova_compute[247399]: 2026-01-31 08:23:42.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:42 np0005603621 nova_compute[247399]: 2026-01-31 08:23:42.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:42 np0005603621 nova_compute[247399]: 2026-01-31 08:23:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:23:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:42 np0005603621 podman[322735]: 2026-01-31 08:23:42.551707421 +0000 UTC m=+3.174989242 container remove 4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chaum, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:23:42 np0005603621 systemd[1]: libpod-conmon-4f7855a3d6e5679d13802eab8d19a814f8100b981ffc6c9987ea2fe825c68d65.scope: Deactivated successfully.
Jan 31 03:23:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:42.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:42 np0005603621 podman[322780]: 2026-01-31 08:23:42.669337726 +0000 UTC m=+0.018230865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:23:43 np0005603621 podman[322780]: 2026-01-31 08:23:43.01289405 +0000 UTC m=+0.361787169 container create fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:23:43 np0005603621 systemd[1]: Started libpod-conmon-fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166.scope.
Jan 31 03:23:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:23:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d510700a96e3faf41f74925b9fd92b4b643de50696961dd4f5b16750d641e686/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d510700a96e3faf41f74925b9fd92b4b643de50696961dd4f5b16750d641e686/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d510700a96e3faf41f74925b9fd92b4b643de50696961dd4f5b16750d641e686/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d510700a96e3faf41f74925b9fd92b4b643de50696961dd4f5b16750d641e686/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:43 np0005603621 podman[322780]: 2026-01-31 08:23:43.717096055 +0000 UTC m=+1.065989274 container init fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:43 np0005603621 podman[322780]: 2026-01-31 08:23:43.724265581 +0000 UTC m=+1.073158740 container start fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:23:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 414 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 36 KiB/s wr, 115 op/s
Jan 31 03:23:44 np0005603621 nova_compute[247399]: 2026-01-31 08:23:44.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:44 np0005603621 podman[322780]: 2026-01-31 08:23:44.407570967 +0000 UTC m=+1.756464116 container attach fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:23:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:23:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:44.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:23:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:44.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:44 np0005603621 nova_compute[247399]: 2026-01-31 08:23:44.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:44 np0005603621 keen_haibt[322796]: [
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:    {
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "available": false,
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "ceph_device": false,
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "lsm_data": {},
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "lvs": [],
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "path": "/dev/sr0",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "rejected_reasons": [
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "Insufficient space (<5GB)",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "Has a FileSystem"
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        ],
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        "sys_api": {
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "actuators": null,
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "device_nodes": "sr0",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "devname": "sr0",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "human_readable_size": "482.00 KB",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "id_bus": "ata",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "model": "QEMU DVD-ROM",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "nr_requests": "2",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "parent": "/dev/sr0",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "partitions": {},
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "path": "/dev/sr0",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "removable": "1",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "rev": "2.5+",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "ro": "0",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "rotational": "1",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "sas_address": "",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "sas_device_handle": "",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "scheduler_mode": "mq-deadline",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "sectors": 0,
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "sectorsize": "2048",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "size": 493568.0,
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "support_discard": "2048",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "type": "disk",
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:            "vendor": "QEMU"
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:        }
Jan 31 03:23:44 np0005603621 keen_haibt[322796]:    }
Jan 31 03:23:44 np0005603621 keen_haibt[322796]: ]
Jan 31 03:23:44 np0005603621 nova_compute[247399]: 2026-01-31 08:23:44.894 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:23:44 np0005603621 nova_compute[247399]: 2026-01-31 08:23:44.895 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:23:44 np0005603621 nova_compute[247399]: 2026-01-31 08:23:44.895 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:23:44 np0005603621 systemd[1]: libpod-fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166.scope: Deactivated successfully.
Jan 31 03:23:44 np0005603621 systemd[1]: libpod-fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166.scope: Consumed 1.096s CPU time.
Jan 31 03:23:44 np0005603621 podman[322780]: 2026-01-31 08:23:44.932293577 +0000 UTC m=+2.281186716 container died fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d510700a96e3faf41f74925b9fd92b4b643de50696961dd4f5b16750d641e686-merged.mount: Deactivated successfully.
Jan 31 03:23:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 374 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 27 KiB/s wr, 87 op/s
Jan 31 03:23:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:46.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:46.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:47 np0005603621 podman[322780]: 2026-01-31 08:23:47.256500678 +0000 UTC m=+4.605393827 container remove fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_haibt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:23:47 np0005603621 systemd[1]: libpod-conmon-fc6e867e5222b40d2065054479e9890ac665074979457202f57d4ab28a8d1166.scope: Deactivated successfully.
Jan 31 03:23:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:23:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 374 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 24 KiB/s wr, 81 op/s
Jan 31 03:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3713642323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:23:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:48.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.689 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:23:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.784 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.784 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.784 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.785 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.785 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.785 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.785 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.861 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.861 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.862 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.862 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:23:48 np0005603621 nova_compute[247399]: 2026-01-31 08:23:48.862 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:23:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310969183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006524775207053787 of space, bias 1.0, pg target 1.9574325621161361 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.274 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.386 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.386 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.390 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.390 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.393 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.394 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:23:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.524 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.525 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3887MB free_disk=20.851348876953125GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.525 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.525 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:23:49 np0005603621 nova_compute[247399]: 2026-01-31 08:23:49.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 409 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 1.1 MiB/s wr, 88 op/s
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:23:50 np0005603621 nova_compute[247399]: 2026-01-31 08:23:50.291 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:23:50 np0005603621 nova_compute[247399]: 2026-01-31 08:23:50.292 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:23:50 np0005603621 nova_compute[247399]: 2026-01-31 08:23:50.292 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance dbf4e573-8e19-4920-aab9-c290d7d8eeec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:23:50 np0005603621 nova_compute[247399]: 2026-01-31 08:23:50.292 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:23:50 np0005603621 nova_compute[247399]: 2026-01-31 08:23:50.292 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:23:50 np0005603621 nova_compute[247399]: 2026-01-31 08:23:50.372 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7362df32-6469-41a2-aa15-0d177fc4e1f8 does not exist
Jan 31 03:23:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9e1e60f6-0a62-407f-82a4-c763689923a9 does not exist
Jan 31 03:23:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b50ba004-4c31-4fdf-91ef-b9abe34e3710 does not exist
Jan 31 03:23:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:50.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:50.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:23:51 np0005603621 nova_compute[247399]: 2026-01-31 08:23:51.054 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:23:51 np0005603621 nova_compute[247399]: 2026-01-31 08:23:51.061 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:23:51 np0005603621 podman[324258]: 2026-01-31 08:23:51.213759615 +0000 UTC m=+0.021813208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:23:51 np0005603621 podman[324258]: 2026-01-31 08:23:51.417877825 +0000 UTC m=+0.225931398 container create 861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:23:51 np0005603621 nova_compute[247399]: 2026-01-31 08:23:51.486 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:23:51 np0005603621 systemd[1]: Started libpod-conmon-861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523.scope.
Jan 31 03:23:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:23:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 409 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.1 MiB/s wr, 64 op/s
Jan 31 03:23:52 np0005603621 podman[324258]: 2026-01-31 08:23:52.120701417 +0000 UTC m=+0.928755010 container init 861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:52 np0005603621 podman[324258]: 2026-01-31 08:23:52.128875344 +0000 UTC m=+0.936928917 container start 861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:23:52 np0005603621 naughty_hopper[324275]: 167 167
Jan 31 03:23:52 np0005603621 systemd[1]: libpod-861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523.scope: Deactivated successfully.
Jan 31 03:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:23:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:23:52 np0005603621 podman[324258]: 2026-01-31 08:23:52.282487654 +0000 UTC m=+1.090541247 container attach 861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:23:52 np0005603621 podman[324258]: 2026-01-31 08:23:52.283359331 +0000 UTC m=+1.091412894 container died 861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:23:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:52.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:52.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-755522ee126e92e838559a3387f54126e0adc83d31c55fd26ab59250c3a34edb-merged.mount: Deactivated successfully.
Jan 31 03:23:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 03:23:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 31 03:23:54 np0005603621 podman[324258]: 2026-01-31 08:23:54.175177921 +0000 UTC m=+2.983231484 container remove 861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:23:54 np0005603621 systemd[1]: libpod-conmon-861f44e9c0f3348e4a4e1d22297ff2bebbccb112895c12702afa72fba4cbb523.scope: Deactivated successfully.
Jan 31 03:23:54 np0005603621 nova_compute[247399]: 2026-01-31 08:23:54.318 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 31 03:23:54 np0005603621 podman[324301]: 2026-01-31 08:23:54.296674787 +0000 UTC m=+0.017403720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:23:54 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 31 03:23:54 np0005603621 podman[324301]: 2026-01-31 08:23:54.578547467 +0000 UTC m=+0.299276379 container create c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dirac, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:23:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:23:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:54.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:23:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:54.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:54 np0005603621 systemd[1]: Started libpod-conmon-c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793.scope.
Jan 31 03:23:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb598b2d68142db0cedabf71f6b40c4cead9091657664f9b9c945369cd5dd13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb598b2d68142db0cedabf71f6b40c4cead9091657664f9b9c945369cd5dd13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb598b2d68142db0cedabf71f6b40c4cead9091657664f9b9c945369cd5dd13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb598b2d68142db0cedabf71f6b40c4cead9091657664f9b9c945369cd5dd13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb598b2d68142db0cedabf71f6b40c4cead9091657664f9b9c945369cd5dd13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:23:54 np0005603621 nova_compute[247399]: 2026-01-31 08:23:54.875 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:55 np0005603621 podman[324301]: 2026-01-31 08:23:55.076402212 +0000 UTC m=+0.797131144 container init c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dirac, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 03:23:55 np0005603621 podman[324301]: 2026-01-31 08:23:55.08746298 +0000 UTC m=+0.808191892 container start c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dirac, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:23:55 np0005603621 podman[324301]: 2026-01-31 08:23:55.25124273 +0000 UTC m=+0.971971642 container attach c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:23:55 np0005603621 heuristic_dirac[324317]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:23:55 np0005603621 heuristic_dirac[324317]: --> relative data size: 1.0
Jan 31 03:23:55 np0005603621 heuristic_dirac[324317]: --> All data devices are unavailable
Jan 31 03:23:55 np0005603621 systemd[1]: libpod-c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793.scope: Deactivated successfully.
Jan 31 03:23:55 np0005603621 podman[324301]: 2026-01-31 08:23:55.86388652 +0000 UTC m=+1.584615432 container died c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dirac, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:23:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 31 03:23:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:56.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:56 np0005603621 nova_compute[247399]: 2026-01-31 08:23:56.759 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:23:56 np0005603621 nova_compute[247399]: 2026-01-31 08:23:56.762 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:23:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8cb598b2d68142db0cedabf71f6b40c4cead9091657664f9b9c945369cd5dd13-merged.mount: Deactivated successfully.
Jan 31 03:23:57 np0005603621 nova_compute[247399]: 2026-01-31 08:23:57.176 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:57 np0005603621 nova_compute[247399]: 2026-01-31 08:23:57.176 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:23:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 31 03:23:58 np0005603621 podman[324301]: 2026-01-31 08:23:58.60103066 +0000 UTC m=+4.321759572 container remove c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_dirac, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:23:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:23:58 np0005603621 systemd[1]: libpod-conmon-c1713354beb7b295db2fb5d62ad0c806379f58bde802508e78320bd115812793.scope: Deactivated successfully.
Jan 31 03:23:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:23:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:23:58.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:23:58 np0005603621 podman[324347]: 2026-01-31 08:23:58.774067861 +0000 UTC m=+0.111122181 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:23:58 np0005603621 podman[324349]: 2026-01-31 08:23:58.806878044 +0000 UTC m=+0.143398918 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:23:59 np0005603621 podman[324533]: 2026-01-31 08:23:59.188362783 +0000 UTC m=+0.024560405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:23:59 np0005603621 podman[324533]: 2026-01-31 08:23:59.291251264 +0000 UTC m=+0.127448896 container create 35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:23:59 np0005603621 nova_compute[247399]: 2026-01-31 08:23:59.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:23:59 np0005603621 systemd[1]: Started libpod-conmon-35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c.scope.
Jan 31 03:23:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:23:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:23:59 np0005603621 podman[324533]: 2026-01-31 08:23:59.733647381 +0000 UTC m=+0.569845023 container init 35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:23:59 np0005603621 podman[324533]: 2026-01-31 08:23:59.742048136 +0000 UTC m=+0.578245758 container start 35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:23:59 np0005603621 systemd[1]: libpod-35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c.scope: Deactivated successfully.
Jan 31 03:23:59 np0005603621 hungry_rhodes[324549]: 167 167
Jan 31 03:23:59 np0005603621 conmon[324549]: conmon 35104557eb2e21b724f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c.scope/container/memory.events
Jan 31 03:23:59 np0005603621 podman[324533]: 2026-01-31 08:23:59.848052565 +0000 UTC m=+0.684250187 container attach 35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 31 03:23:59 np0005603621 podman[324533]: 2026-01-31 08:23:59.849163121 +0000 UTC m=+0.685360743 container died 35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:23:59 np0005603621 nova_compute[247399]: 2026-01-31 08:23:59.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 811 KiB/s wr, 79 op/s
Jan 31 03:24:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6a645b353f71900f0afe592af7a25358a1681e343acf0e60d075aa4d9760818c-merged.mount: Deactivated successfully.
Jan 31 03:24:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:00.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:01 np0005603621 podman[324533]: 2026-01-31 08:24:01.372855532 +0000 UTC m=+2.209053154 container remove 35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_rhodes, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:24:01 np0005603621 systemd[1]: libpod-conmon-35104557eb2e21b724f5ee0ee94d83df69fb3f9c72bcabc757b9b8d84ab5df6c.scope: Deactivated successfully.
Jan 31 03:24:01 np0005603621 podman[324576]: 2026-01-31 08:24:01.48421173 +0000 UTC m=+0.020179947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:24:01 np0005603621 podman[324576]: 2026-01-31 08:24:01.68165734 +0000 UTC m=+0.217625547 container create f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:24:01 np0005603621 systemd[1]: Started libpod-conmon-f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc.scope.
Jan 31 03:24:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:24:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76487ca279341f36dcaeaf0355586428a664fae523aac6b514a855de55b79d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76487ca279341f36dcaeaf0355586428a664fae523aac6b514a855de55b79d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76487ca279341f36dcaeaf0355586428a664fae523aac6b514a855de55b79d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76487ca279341f36dcaeaf0355586428a664fae523aac6b514a855de55b79d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 811 KiB/s wr, 79 op/s
Jan 31 03:24:02 np0005603621 podman[324576]: 2026-01-31 08:24:02.200500456 +0000 UTC m=+0.736468673 container init f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:24:02 np0005603621 podman[324576]: 2026-01-31 08:24:02.207043482 +0000 UTC m=+0.743011679 container start f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:24:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:02 np0005603621 podman[324576]: 2026-01-31 08:24:02.619949809 +0000 UTC m=+1.155918026 container attach f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_carson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:24:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:02.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:02 np0005603621 jovial_carson[324593]: {
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:    "0": [
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:        {
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "devices": [
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "/dev/loop3"
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            ],
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "lv_name": "ceph_lv0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "lv_size": "7511998464",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "name": "ceph_lv0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "tags": {
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.cluster_name": "ceph",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.crush_device_class": "",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.encrypted": "0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.osd_id": "0",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.type": "block",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:                "ceph.vdo": "0"
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            },
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "type": "block",
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:            "vg_name": "ceph_vg0"
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:        }
Jan 31 03:24:02 np0005603621 jovial_carson[324593]:    ]
Jan 31 03:24:02 np0005603621 jovial_carson[324593]: }
Jan 31 03:24:02 np0005603621 systemd[1]: libpod-f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc.scope: Deactivated successfully.
Jan 31 03:24:02 np0005603621 podman[324576]: 2026-01-31 08:24:02.967626662 +0000 UTC m=+1.503594859 container died f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_carson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:24:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-76487ca279341f36dcaeaf0355586428a664fae523aac6b514a855de55b79d81-merged.mount: Deactivated successfully.
Jan 31 03:24:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 KiB/s wr, 120 op/s
Jan 31 03:24:04 np0005603621 nova_compute[247399]: 2026-01-31 08:24:04.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:04.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:04.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:04 np0005603621 podman[324576]: 2026-01-31 08:24:04.866952057 +0000 UTC m=+3.402920254 container remove f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:24:04 np0005603621 nova_compute[247399]: 2026-01-31 08:24:04.880 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:04 np0005603621 systemd[1]: libpod-conmon-f2c5135622281347b6849bb0f84ae4d913eb0424426bdd3444f7fc01e6f66efc.scope: Deactivated successfully.
Jan 31 03:24:05 np0005603621 podman[324755]: 2026-01-31 08:24:05.356280543 +0000 UTC m=+0.022026675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:24:05 np0005603621 podman[324755]: 2026-01-31 08:24:05.617483911 +0000 UTC m=+0.283230023 container create d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:24:05 np0005603621 systemd[1]: Started libpod-conmon-d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81.scope.
Jan 31 03:24:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:24:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 KiB/s wr, 113 op/s
Jan 31 03:24:06 np0005603621 podman[324755]: 2026-01-31 08:24:06.192222588 +0000 UTC m=+0.857968710 container init d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:24:06 np0005603621 podman[324755]: 2026-01-31 08:24:06.198181716 +0000 UTC m=+0.863927828 container start d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:24:06 np0005603621 systemd[1]: libpod-d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81.scope: Deactivated successfully.
Jan 31 03:24:06 np0005603621 inspiring_dubinsky[324772]: 167 167
Jan 31 03:24:06 np0005603621 conmon[324772]: conmon d90f97ac73f079c0e217 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81.scope/container/memory.events
Jan 31 03:24:06 np0005603621 podman[324755]: 2026-01-31 08:24:06.24242883 +0000 UTC m=+0.908174942 container attach d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:24:06 np0005603621 podman[324755]: 2026-01-31 08:24:06.243244406 +0000 UTC m=+0.908990518 container died d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:24:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4fa947bb49dadc60674d2312bcc5fd618f56dde50ceb45b7dd035d383f2c4427-merged.mount: Deactivated successfully.
Jan 31 03:24:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:06.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:07 np0005603621 podman[324755]: 2026-01-31 08:24:07.062963849 +0000 UTC m=+1.728709961 container remove d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:24:07 np0005603621 systemd[1]: libpod-conmon-d90f97ac73f079c0e217148605d8696610a4eda2feb52e4d699152153f372b81.scope: Deactivated successfully.
Jan 31 03:24:07 np0005603621 podman[324848]: 2026-01-31 08:24:07.18840426 +0000 UTC m=+0.025473973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:24:07 np0005603621 podman[324848]: 2026-01-31 08:24:07.436158735 +0000 UTC m=+0.273228408 container create 96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:24:07 np0005603621 systemd[1]: Started libpod-conmon-96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656.scope.
Jan 31 03:24:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:24:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaffdee97dc22d512d13c19560e5530f5de8488a8345d5d99f833ddb776bd36c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaffdee97dc22d512d13c19560e5530f5de8488a8345d5d99f833ddb776bd36c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaffdee97dc22d512d13c19560e5530f5de8488a8345d5d99f833ddb776bd36c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaffdee97dc22d512d13c19560e5530f5de8488a8345d5d99f833ddb776bd36c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 KiB/s wr, 121 op/s
Jan 31 03:24:08 np0005603621 podman[324848]: 2026-01-31 08:24:08.043135948 +0000 UTC m=+0.880205651 container init 96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:24:08 np0005603621 podman[324848]: 2026-01-31 08:24:08.049816898 +0000 UTC m=+0.886886581 container start 96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:24:08 np0005603621 podman[324848]: 2026-01-31 08:24:08.383921563 +0000 UTC m=+1.220991276 container attach 96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:08.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:08.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]: {
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:        "osd_id": 0,
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:        "type": "bluestore"
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]:    }
Jan 31 03:24:08 np0005603621 adoring_meitner[324866]: }
Jan 31 03:24:08 np0005603621 systemd[1]: libpod-96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656.scope: Deactivated successfully.
Jan 31 03:24:08 np0005603621 podman[324848]: 2026-01-31 08:24:08.859188795 +0000 UTC m=+1.696258478 container died 96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:24:09 np0005603621 nova_compute[247399]: 2026-01-31 08:24:09.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aaffdee97dc22d512d13c19560e5530f5de8488a8345d5d99f833ddb776bd36c-merged.mount: Deactivated successfully.
Jan 31 03:24:09 np0005603621 nova_compute[247399]: 2026-01-31 08:24:09.882 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 18 KiB/s wr, 163 op/s
Jan 31 03:24:10 np0005603621 podman[324848]: 2026-01-31 08:24:10.481134504 +0000 UTC m=+3.318204187 container remove 96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:24:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:24:10 np0005603621 systemd[1]: libpod-conmon-96b35e4e6fd9752ba820ce7d9e1c01b3f6d406ef53c3ad82037481cc0a48c656.scope: Deactivated successfully.
Jan 31 03:24:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:10.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:10.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 31 03:24:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:24:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:24:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 31 03:24:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 31 03:24:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:24:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c1a3a841-69f4-4f98-aa62-64ddd23dbe0c does not exist
Jan 31 03:24:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7b0bb6cf-3526-43bd-8e64-bf5ca33d7312 does not exist
Jan 31 03:24:11 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b8c2f687-6c55-4ddf-9503-6a15365a1ffc does not exist
Jan 31 03:24:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 21 KiB/s wr, 177 op/s
Jan 31 03:24:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:24:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:24:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:12.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:12.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 30 KiB/s wr, 172 op/s
Jan 31 03:24:14 np0005603621 nova_compute[247399]: 2026-01-31 08:24:14.328 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:14.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:14.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:24:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/434238851' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:24:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:24:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/434238851' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:24:14 np0005603621 nova_compute[247399]: 2026-01-31 08:24:14.885 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 32 KiB/s wr, 189 op/s
Jan 31 03:24:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:16.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 49 KiB/s wr, 199 op/s
Jan 31 03:24:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:24:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:24:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:18.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:19 np0005603621 nova_compute[247399]: 2026-01-31 08:24:19.330 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 31 03:24:19 np0005603621 nova_compute[247399]: 2026-01-31 08:24:19.947 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 459 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 169 op/s
Jan 31 03:24:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:24:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:20.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:24:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:20.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 31 03:24:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 31 03:24:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 459 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 169 op/s
Jan 31 03:24:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:22.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:24:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:22.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:24:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 885 KiB/s rd, 2.6 MiB/s wr, 113 op/s
Jan 31 03:24:24 np0005603621 nova_compute[247399]: 2026-01-31 08:24:24.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:24.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:24.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:24 np0005603621 nova_compute[247399]: 2026-01-31 08:24:24.948 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 475 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 219 KiB/s rd, 3.6 MiB/s wr, 90 op/s
Jan 31 03:24:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:26.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:26.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Jan 31 03:24:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:28.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:28.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:29 np0005603621 nova_compute[247399]: 2026-01-31 08:24:29.333 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:29 np0005603621 podman[325010]: 2026-01-31 08:24:29.497496683 +0000 UTC m=+0.054219710 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:24:29 np0005603621 podman[325011]: 2026-01-31 08:24:29.545871726 +0000 UTC m=+0.100182647 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 31 03:24:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:29 np0005603621 nova_compute[247399]: 2026-01-31 08:24:29.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 434 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 250 KiB/s rd, 2.6 MiB/s wr, 75 op/s
Jan 31 03:24:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:24:30.506 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:24:30.506 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:24:30.507 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:30.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:30.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:24:31.521 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:24:31 np0005603621 nova_compute[247399]: 2026-01-31 08:24:31.521 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:24:31.522 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:24:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:24:31.523 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:24:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 434 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 231 KiB/s rd, 2.4 MiB/s wr, 69 op/s
Jan 31 03:24:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:32.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:32.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 407 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 286 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 31 03:24:34 np0005603621 nova_compute[247399]: 2026-01-31 08:24:34.334 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:34.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:34.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:34 np0005603621 nova_compute[247399]: 2026-01-31 08:24:34.998 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 1.6 MiB/s wr, 81 op/s
Jan 31 03:24:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:36.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:36.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:37 np0005603621 nova_compute[247399]: 2026-01-31 08:24:37.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 414 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 888 KiB/s wr, 79 op/s
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:24:38
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'vms']
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:24:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:38.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:38.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:24:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:24:39 np0005603621 nova_compute[247399]: 2026-01-31 08:24:39.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:40 np0005603621 nova_compute[247399]: 2026-01-31 08:24:39.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 725 KiB/s wr, 112 op/s
Jan 31 03:24:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:24:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:24:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:40.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 414 KiB/s rd, 113 KiB/s wr, 85 op/s
Jan 31 03:24:42 np0005603621 nova_compute[247399]: 2026-01-31 08:24:42.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:42.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:42.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 113 KiB/s wr, 129 op/s
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.652 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.652 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.652 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:24:44 np0005603621 nova_compute[247399]: 2026-01-31 08:24:44.653 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f02cbbe1-1133-4659-a065-630c53ee2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:24:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:44.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:45 np0005603621 nova_compute[247399]: 2026-01-31 08:24:45.000 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 120 KiB/s wr, 122 op/s
Jan 31 03:24:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:46.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:46.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 108 KiB/s wr, 105 op/s
Jan 31 03:24:48 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:24:48 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:24:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:48.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:48.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006523321177647497 of space, bias 1.0, pg target 1.9569963532942491 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:24:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:24:49 np0005603621 nova_compute[247399]: 2026-01-31 08:24:49.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:50 np0005603621 nova_compute[247399]: 2026-01-31 08:24:50.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:24:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 104 KiB/s wr, 93 op/s
Jan 31 03:24:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:50.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:50.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 418 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 59 op/s
Jan 31 03:24:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:52.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.732 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updating instance_info_cache with network_info: [{"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.903 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-f02cbbe1-1133-4659-a065-630c53ee2683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.904 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.904 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.905 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.905 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.905 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.905 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:24:52 np0005603621 nova_compute[247399]: 2026-01-31 08:24:52.905 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.001 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.002 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.002 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.002 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.002 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:24:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349791374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.467 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.820 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.821 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.824 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.825 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.828 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:53 np0005603621 nova_compute[247399]: 2026-01-31 08:24:53.829 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.007 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.009 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3926MB free_disk=20.851356506347656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.009 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.009 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:24:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 429 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 82 op/s
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.344 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.408 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.409 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.409 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance dbf4e573-8e19-4920-aab9-c290d7d8eeec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.409 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.409 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:24:54 np0005603621 nova_compute[247399]: 2026-01-31 08:24:54.542 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:24:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:54.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:24:55 np0005603621 nova_compute[247399]: 2026-01-31 08:24:55.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:24:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180935132' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:24:55 np0005603621 nova_compute[247399]: 2026-01-31 08:24:55.102 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:24:55 np0005603621 nova_compute[247399]: 2026-01-31 08:24:55.106 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:24:55 np0005603621 nova_compute[247399]: 2026-01-31 08:24:55.333 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:24:55 np0005603621 nova_compute[247399]: 2026-01-31 08:24:55.335 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:24:55 np0005603621 nova_compute[247399]: 2026-01-31 08:24:55.336 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:24:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 444 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 655 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 03:24:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:24:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:56.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:24:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 31 03:24:58 np0005603621 nova_compute[247399]: 2026-01-31 08:24:58.629 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:24:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:24:58.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:24:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:24:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:24:58.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:24:58 np0005603621 nova_compute[247399]: 2026-01-31 08:24:58.944 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:24:59 np0005603621 nova_compute[247399]: 2026-01-31 08:24:59.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:24:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:00 np0005603621 nova_compute[247399]: 2026-01-31 08:25:00.049 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 31 03:25:00 np0005603621 podman[325165]: 2026-01-31 08:25:00.488781514 +0000 UTC m=+0.050636036 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:25:00 np0005603621 podman[325166]: 2026-01-31 08:25:00.509647061 +0000 UTC m=+0.069664866 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 03:25:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:00.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 69 op/s
Jan 31 03:25:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:02.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:02.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 453 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 332 KiB/s rd, 2.3 MiB/s wr, 78 op/s
Jan 31 03:25:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 31 03:25:04 np0005603621 nova_compute[247399]: 2026-01-31 08:25:04.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 31 03:25:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:04.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:04.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 31 03:25:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:05 np0005603621 nova_compute[247399]: 2026-01-31 08:25:05.051 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 475 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 138 KiB/s rd, 765 KiB/s wr, 51 op/s
Jan 31 03:25:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:06.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:06.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 95 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:08.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:08.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:09 np0005603621 nova_compute[247399]: 2026-01-31 08:25:09.349 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:10 np0005603621 nova_compute[247399]: 2026-01-31 08:25:10.052 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 03:25:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:10.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:10.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 03:25:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:25:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:12.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:12.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:25:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:25:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:25:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 506 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 915 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:25:14 np0005603621 nova_compute[247399]: 2026-01-31 08:25:14.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188503688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188503688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:25:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:14.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:25:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:25:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:15 np0005603621 nova_compute[247399]: 2026-01-31 08:25:15.053 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 543 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.8 MiB/s wr, 81 op/s
Jan 31 03:25:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 03:25:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:25:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 03:25:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:25:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:16.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:25:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:25:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 567 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 84 op/s
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 48da66b5-c78d-47e4-8f08-0cef964bce13 does not exist
Jan 31 03:25:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ffb01dfe-6bad-4148-ada3-f083711ef212 does not exist
Jan 31 03:25:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5a59e920-317c-469f-8945-53a23bcda8b6 does not exist
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:25:18 np0005603621 nova_compute[247399]: 2026-01-31 08:25:18.279 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:18 np0005603621 nova_compute[247399]: 2026-01-31 08:25:18.280 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:25:18 np0005603621 podman[325660]: 2026-01-31 08:25:18.537923521 +0000 UTC m=+0.019823695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:25:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:18.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:18 np0005603621 podman[325660]: 2026-01-31 08:25:18.720063669 +0000 UTC m=+0.201963813 container create f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:18 np0005603621 systemd[1]: Started libpod-conmon-f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51.scope.
Jan 31 03:25:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:18 np0005603621 nova_compute[247399]: 2026-01-31 08:25:18.873 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:25:18 np0005603621 podman[325660]: 2026-01-31 08:25:18.943989984 +0000 UTC m=+0.425890178 container init f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:18 np0005603621 podman[325660]: 2026-01-31 08:25:18.953223665 +0000 UTC m=+0.435123809 container start f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:25:18 np0005603621 confident_ellis[325677]: 167 167
Jan 31 03:25:18 np0005603621 systemd[1]: libpod-f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51.scope: Deactivated successfully.
Jan 31 03:25:18 np0005603621 conmon[325677]: conmon f81663b1a2c0605d01f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51.scope/container/memory.events
Jan 31 03:25:19 np0005603621 podman[325660]: 2026-01-31 08:25:19.045412979 +0000 UTC m=+0.527313153 container attach f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:25:19 np0005603621 podman[325660]: 2026-01-31 08:25:19.045908295 +0000 UTC m=+0.527808439 container died f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.051 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.052 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.058 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.058 247403 INFO nova.compute.claims [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.380 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 31 03:25:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0eb2a52a9e56cff99dd6b6ba381be9f3927c7a1f98b6c7636b7fab8946fb2240-merged.mount: Deactivated successfully.
Jan 31 03:25:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1649128750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.827 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:19 np0005603621 nova_compute[247399]: 2026-01-31 08:25:19.833 247403 DEBUG nova.compute.provider_tree [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:25:20 np0005603621 nova_compute[247399]: 2026-01-31 08:25:20.054 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 31 03:25:20 np0005603621 nova_compute[247399]: 2026-01-31 08:25:20.119 247403 DEBUG nova.scheduler.client.report [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:25:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 31 03:25:20 np0005603621 nova_compute[247399]: 2026-01-31 08:25:20.619 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:20 np0005603621 nova_compute[247399]: 2026-01-31 08:25:20.620 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:25:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:20.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 31 03:25:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:20.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:20 np0005603621 nova_compute[247399]: 2026-01-31 08:25:20.948 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:25:20 np0005603621 nova_compute[247399]: 2026-01-31 08:25:20.948 247403 DEBUG nova.network.neutron [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:25:21 np0005603621 nova_compute[247399]: 2026-01-31 08:25:21.112 247403 INFO nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:25:21 np0005603621 nova_compute[247399]: 2026-01-31 08:25:21.252 247403 DEBUG nova.policy [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef51681d234a4abc88ff433d0640b6e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '953a213fa5cb435ab3c04ad96152685f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:25:21 np0005603621 nova_compute[247399]: 2026-01-31 08:25:21.530 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:25:21 np0005603621 podman[325660]: 2026-01-31 08:25:21.808490986 +0000 UTC m=+3.290391130 container remove f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:25:21 np0005603621 systemd[1]: libpod-conmon-f81663b1a2c0605d01f1ac42a068d518521a569370015d1f94301a09b08fce51.scope: Deactivated successfully.
Jan 31 03:25:22 np0005603621 podman[325724]: 2026-01-31 08:25:21.916730156 +0000 UTC m=+0.027493767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:25:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 87 op/s
Jan 31 03:25:22 np0005603621 podman[325724]: 2026-01-31 08:25:22.345511184 +0000 UTC m=+0.456274775 container create 5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:25:22 np0005603621 systemd[1]: Started libpod-conmon-5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97.scope.
Jan 31 03:25:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fa9fc152810c9b02b30f0351a7232d66d2a86422ab200e5ba7dedf8173195a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fa9fc152810c9b02b30f0351a7232d66d2a86422ab200e5ba7dedf8173195a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fa9fc152810c9b02b30f0351a7232d66d2a86422ab200e5ba7dedf8173195a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fa9fc152810c9b02b30f0351a7232d66d2a86422ab200e5ba7dedf8173195a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59fa9fc152810c9b02b30f0351a7232d66d2a86422ab200e5ba7dedf8173195a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:22.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:22 np0005603621 nova_compute[247399]: 2026-01-31 08:25:22.950 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:25:22 np0005603621 nova_compute[247399]: 2026-01-31 08:25:22.952 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:25:22 np0005603621 nova_compute[247399]: 2026-01-31 08:25:22.952 247403 INFO nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Creating image(s)#033[00m
Jan 31 03:25:23 np0005603621 podman[325724]: 2026-01-31 08:25:23.039057903 +0000 UTC m=+1.149821524 container init 5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 03:25:23 np0005603621 podman[325724]: 2026-01-31 08:25:23.047675984 +0000 UTC m=+1.158439575 container start 5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.068 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.091 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.113 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.116 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.137 247403 DEBUG nova.network.neutron [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Successfully created port: 109b6929-6b88-494a-b397-b36c434ed7a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.171 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.172 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.173 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.174 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:23 np0005603621 podman[325724]: 2026-01-31 08:25:23.184609698 +0000 UTC m=+1.295373319 container attach 5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hodgkin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.355 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:23 np0005603621 nova_compute[247399]: 2026-01-31 08:25:23.362 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:23 np0005603621 hungry_hodgkin[325743]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:25:23 np0005603621 hungry_hodgkin[325743]: --> relative data size: 1.0
Jan 31 03:25:23 np0005603621 hungry_hodgkin[325743]: --> All data devices are unavailable
Jan 31 03:25:23 np0005603621 systemd[1]: libpod-5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97.scope: Deactivated successfully.
Jan 31 03:25:23 np0005603621 conmon[325743]: conmon 5d83d6bf38abef75808b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97.scope/container/memory.events
Jan 31 03:25:23 np0005603621 podman[325724]: 2026-01-31 08:25:23.829644788 +0000 UTC m=+1.940408379 container died 5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hodgkin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.3 MiB/s wr, 166 op/s
Jan 31 03:25:24 np0005603621 nova_compute[247399]: 2026-01-31 08:25:24.354 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 31 03:25:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:24.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:24.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:24 np0005603621 nova_compute[247399]: 2026-01-31 08:25:24.769 247403 DEBUG nova.network.neutron [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Successfully updated port: 109b6929-6b88-494a-b397-b36c434ed7a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:25:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-59fa9fc152810c9b02b30f0351a7232d66d2a86422ab200e5ba7dedf8173195a-merged.mount: Deactivated successfully.
Jan 31 03:25:24 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 31 03:25:24 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:24.941975) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:25:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 31 03:25:24 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847924942011, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2231, "num_deletes": 253, "total_data_size": 4100653, "memory_usage": 4160080, "flush_reason": "Manual Compaction"}
Jan 31 03:25:24 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.033 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.033 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.033 247403 DEBUG nova.network.neutron [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.070 247403 DEBUG nova.compute.manager [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-changed-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.070 247403 DEBUG nova.compute.manager [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Refreshing instance network info cache due to event network-changed-109b6929-6b88-494a-b397-b36c434ed7a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.071 247403 DEBUG oslo_concurrency.lockutils [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:25 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847925408527, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3969027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46559, "largest_seqno": 48789, "table_properties": {"data_size": 3958817, "index_size": 6512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21416, "raw_average_key_size": 20, "raw_value_size": 3938382, "raw_average_value_size": 3812, "num_data_blocks": 282, "num_entries": 1033, "num_filter_entries": 1033, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847719, "oldest_key_time": 1769847719, "file_creation_time": 1769847924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:25:25 np0005603621 nova_compute[247399]: 2026-01-31 08:25:25.424 247403 DEBUG nova.network.neutron [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:25:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 577 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.9 MiB/s wr, 210 op/s
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 1163696 microseconds, and 5843 cpu microseconds.
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 31 03:25:26 np0005603621 podman[325724]: 2026-01-31 08:25:26.255644966 +0000 UTC m=+4.366408557 container remove 5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:25:26 np0005603621 systemd[1]: libpod-conmon-5d83d6bf38abef75808bab4f41d852987458d9b1ce91c3f58f70699d4f123d97.scope: Deactivated successfully.
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:25.408580) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3969027 bytes OK
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:26.105868) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:26.306636) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:26.306688) EVENT_LOG_v1 {"time_micros": 1769847926306678, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:26.306714) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 4091395, prev total WAL file size 4137500, number of live WAL files 2.
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:26.308062) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3876KB)], [104(9288KB)]
Jan 31 03:25:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847926308094, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 13480910, "oldest_snapshot_seqno": -1}
Jan 31 03:25:26 np0005603621 nova_compute[247399]: 2026-01-31 08:25:26.470 247403 DEBUG nova.network.neutron [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:26.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:26.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:26 np0005603621 nova_compute[247399]: 2026-01-31 08:25:26.767 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:26 np0005603621 nova_compute[247399]: 2026-01-31 08:25:26.767 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance network_info: |[{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:25:26 np0005603621 nova_compute[247399]: 2026-01-31 08:25:26.767 247403 DEBUG oslo_concurrency.lockutils [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:26 np0005603621 nova_compute[247399]: 2026-01-31 08:25:26.767 247403 DEBUG nova.network.neutron [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Refreshing network info cache for port 109b6929-6b88-494a-b397-b36c434ed7a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:25:26 np0005603621 podman[326007]: 2026-01-31 08:25:26.763009839 +0000 UTC m=+0.015729126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7690 keys, 11540981 bytes, temperature: kUnknown
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847927483293, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11540981, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11490013, "index_size": 30632, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 198481, "raw_average_key_size": 25, "raw_value_size": 11353683, "raw_average_value_size": 1476, "num_data_blocks": 1207, "num_entries": 7690, "num_filter_entries": 7690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769847926, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.483674) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11540981 bytes
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.782171) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 11.5 rd, 9.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 8216, records dropped: 526 output_compression: NoCompression
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.782250) EVENT_LOG_v1 {"time_micros": 1769847927782204, "job": 62, "event": "compaction_finished", "compaction_time_micros": 1175443, "compaction_time_cpu_micros": 20532, "output_level": 6, "num_output_files": 1, "total_output_size": 11540981, "num_input_records": 8216, "num_output_records": 7690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847927782808, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 31 03:25:27 np0005603621 podman[326007]: 2026-01-31 08:25:27.782323502 +0000 UTC m=+1.035042769 container create f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769847927783982, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:26.307972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.784062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.784069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.784071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.784073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:27 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:25:27.784075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:25:27 np0005603621 nova_compute[247399]: 2026-01-31 08:25:27.888 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 581 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 485 KiB/s wr, 208 op/s
Jan 31 03:25:28 np0005603621 systemd[1]: Started libpod-conmon-f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67.scope.
Jan 31 03:25:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:28 np0005603621 podman[326007]: 2026-01-31 08:25:28.345154873 +0000 UTC m=+1.597874170 container init f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:25:28 np0005603621 podman[326007]: 2026-01-31 08:25:28.353135274 +0000 UTC m=+1.605854541 container start f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:25:28 np0005603621 optimistic_williamson[326081]: 167 167
Jan 31 03:25:28 np0005603621 systemd[1]: libpod-f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67.scope: Deactivated successfully.
Jan 31 03:25:28 np0005603621 podman[326007]: 2026-01-31 08:25:28.378758141 +0000 UTC m=+1.631477408 container attach f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:25:28 np0005603621 podman[326007]: 2026-01-31 08:25:28.381177127 +0000 UTC m=+1.633896394 container died f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.424 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] resizing rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.613 247403 DEBUG nova.objects.instance [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'migration_context' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:25:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:28.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:25:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a5a2e1e77e6021d453fea41aa34735289e6692813253ed0ec98466cb96b24fe8-merged.mount: Deactivated successfully.
Jan 31 03:25:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:28.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.833 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.834 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Ensure instance console log exists: /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.835 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.835 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.835 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.838 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Start _get_guest_xml network_info=[{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.843 247403 WARNING nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.850 247403 DEBUG nova.virt.libvirt.host [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.851 247403 DEBUG nova.virt.libvirt.host [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.854 247403 DEBUG nova.virt.libvirt.host [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.854 247403 DEBUG nova.virt.libvirt.host [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.856 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.856 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.856 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.857 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.857 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.857 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.857 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.857 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.858 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.858 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.858 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.858 247403 DEBUG nova.virt.hardware [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:25:28 np0005603621 nova_compute[247399]: 2026-01-31 08:25:28.861 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:29 np0005603621 nova_compute[247399]: 2026-01-31 08:25:29.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:25:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2777367651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:25:29 np0005603621 nova_compute[247399]: 2026-01-31 08:25:29.890 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:29 np0005603621 podman[326007]: 2026-01-31 08:25:29.965237181 +0000 UTC m=+3.217956448 container remove f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.013 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.017 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:30 np0005603621 systemd[1]: libpod-conmon-f672257ee5c5872db01b8f9832e559ef1b8a5febc2fa8549bc7778bde0163a67.scope: Deactivated successfully.
Jan 31 03:25:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 620 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.3 MiB/s wr, 216 op/s
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603621 podman[326211]: 2026-01-31 08:25:30.100027817 +0000 UTC m=+0.030537333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.232 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:25:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 31 03:25:30 np0005603621 podman[326211]: 2026-01-31 08:25:30.38357594 +0000 UTC m=+0.314085436 container create 9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:25:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1999605162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.445 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.447 247403 DEBUG nova.virt.libvirt.vif [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:25:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-899650284',display_name='tempest-ServerActionsTestOtherB-server-899650284',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-899650284',id=121,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-tuc10ywh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=cca881fe-18fa-40c1-b9ef-2b1f28855b53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.447 247403 DEBUG nova.network.os_vif_util [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.448 247403 DEBUG nova.network.os_vif_util [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.450 247403 DEBUG nova.objects.instance [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'pci_devices' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.471 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <uuid>cca881fe-18fa-40c1-b9ef-2b1f28855b53</uuid>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <name>instance-00000079</name>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerActionsTestOtherB-server-899650284</nova:name>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:25:28</nova:creationTime>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:user uuid="ef51681d234a4abc88ff433d0640b6e7">tempest-ServerActionsTestOtherB-1048458052-project-member</nova:user>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:project uuid="953a213fa5cb435ab3c04ad96152685f">tempest-ServerActionsTestOtherB-1048458052</nova:project>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <nova:port uuid="109b6929-6b88-494a-b397-b36c434ed7a7">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <entry name="serial">cca881fe-18fa-40c1-b9ef-2b1f28855b53</entry>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <entry name="uuid">cca881fe-18fa-40c1-b9ef-2b1f28855b53</entry>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk.config">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:06:b8:22"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <target dev="tap109b6929-6b"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/console.log" append="off"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:25:30 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:25:30 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:25:30 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:25:30 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.473 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Preparing to wait for external event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.473 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.474 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.474 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.475 247403 DEBUG nova.virt.libvirt.vif [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:25:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-899650284',display_name='tempest-ServerActionsTestOtherB-server-899650284',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-899650284',id=121,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-tuc10ywh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:25:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=cca881fe-18fa-40c1-b9ef-2b1f28855b53,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.475 247403 DEBUG nova.network.os_vif_util [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.476 247403 DEBUG nova.network.os_vif_util [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.477 247403 DEBUG os_vif [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.477 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.478 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.478 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.483 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.484 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap109b6929-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.485 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap109b6929-6b, col_values=(('external_ids', {'iface-id': '109b6929-6b88-494a-b397-b36c434ed7a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:b8:22', 'vm-uuid': 'cca881fe-18fa-40c1-b9ef-2b1f28855b53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.486 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603621 NetworkManager[49013]: <info>  [1769847930.4876] manager: (tap109b6929-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.488 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.494 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.497 247403 INFO os_vif [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b')#033[00m
Jan 31 03:25:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:30.507 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:30.508 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:30.510 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:30.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:30 np0005603621 systemd[1]: Started libpod-conmon-9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103.scope.
Jan 31 03:25:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.723 247403 DEBUG nova.network.neutron [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updated VIF entry in instance network info cache for port 109b6929-6b88-494a-b397-b36c434ed7a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.724 247403 DEBUG nova.network.neutron [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba175629445f9b2f657cc13a50c6bd8941a3adc1b835e38417c2f65041b7a10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba175629445f9b2f657cc13a50c6bd8941a3adc1b835e38417c2f65041b7a10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba175629445f9b2f657cc13a50c6bd8941a3adc1b835e38417c2f65041b7a10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba175629445f9b2f657cc13a50c6bd8941a3adc1b835e38417c2f65041b7a10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:30.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.755 247403 DEBUG oslo_concurrency.lockutils [req-f3823e6b-267f-4e68-ad05-02d58e699f50 req-9b33dadb-526a-4e3b-ad10-f11d241881be fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 31 03:25:30 np0005603621 podman[326254]: 2026-01-31 08:25:30.914104513 +0000 UTC m=+0.194926522 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:25:30 np0005603621 podman[326211]: 2026-01-31 08:25:30.91336739 +0000 UTC m=+0.843876916 container init 9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 31 03:25:30 np0005603621 podman[326211]: 2026-01-31 08:25:30.921675962 +0000 UTC m=+0.852185478 container start 9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.940 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.941 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.941 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No VIF found with MAC fa:16:3e:06:b8:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:25:30 np0005603621 nova_compute[247399]: 2026-01-31 08:25:30.941 247403 INFO nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Using config drive#033[00m
Jan 31 03:25:31 np0005603621 podman[326211]: 2026-01-31 08:25:31.25967922 +0000 UTC m=+1.190188716 container attach 9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:25:31 np0005603621 podman[326252]: 2026-01-31 08:25:31.302502979 +0000 UTC m=+0.583471103 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]: {
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:    "0": [
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:        {
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "devices": [
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "/dev/loop3"
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            ],
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "lv_name": "ceph_lv0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "lv_size": "7511998464",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "name": "ceph_lv0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "tags": {
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.cluster_name": "ceph",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.crush_device_class": "",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.encrypted": "0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.osd_id": "0",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.type": "block",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:                "ceph.vdo": "0"
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            },
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "type": "block",
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:            "vg_name": "ceph_vg0"
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:        }
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]:    ]
Jan 31 03:25:31 np0005603621 objective_wescoff[326251]: }
Jan 31 03:25:31 np0005603621 systemd[1]: libpod-9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103.scope: Deactivated successfully.
Jan 31 03:25:31 np0005603621 nova_compute[247399]: 2026-01-31 08:25:31.701 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:25:31 np0005603621 podman[326322]: 2026-01-31 08:25:31.711813424 +0000 UTC m=+0.025536456 container died 9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 175 op/s
Jan 31 03:25:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cba175629445f9b2f657cc13a50c6bd8941a3adc1b835e38417c2f65041b7a10-merged.mount: Deactivated successfully.
Jan 31 03:25:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:32.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:32.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:32 np0005603621 nova_compute[247399]: 2026-01-31 08:25:32.947 247403 INFO nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Creating config drive at /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/disk.config
Jan 31 03:25:32 np0005603621 nova_compute[247399]: 2026-01-31 08:25:32.951 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn03lu9gr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:25:33 np0005603621 nova_compute[247399]: 2026-01-31 08:25:33.075 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpn03lu9gr" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:25:33 np0005603621 podman[326322]: 2026-01-31 08:25:33.208157183 +0000 UTC m=+1.521880185 container remove 9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:25:33 np0005603621 systemd[1]: libpod-conmon-9afee12ceb30320a22297838b1bc7061b0e3c6fdc384e706ac5fb970956e8103.scope: Deactivated successfully.
Jan 31 03:25:33 np0005603621 nova_compute[247399]: 2026-01-31 08:25:33.229 247403 DEBUG nova.storage.rbd_utils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:25:33 np0005603621 nova_compute[247399]: 2026-01-31 08:25:33.233 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/disk.config cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:25:33 np0005603621 podman[326517]: 2026-01-31 08:25:33.70895203 +0000 UTC m=+0.017943936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:25:34 np0005603621 podman[326517]: 2026-01-31 08:25:34.045005327 +0000 UTC m=+0.353997203 container create c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:25:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 623 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 287 KiB/s rd, 2.6 MiB/s wr, 70 op/s
Jan 31 03:25:34 np0005603621 nova_compute[247399]: 2026-01-31 08:25:34.358 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:34 np0005603621 systemd[1]: Started libpod-conmon-c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973.scope.
Jan 31 03:25:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:34.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:35 np0005603621 podman[326517]: 2026-01-31 08:25:35.009851934 +0000 UTC m=+1.318843830 container init c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:25:35 np0005603621 podman[326517]: 2026-01-31 08:25:35.017241116 +0000 UTC m=+1.326232992 container start c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:25:35 np0005603621 silly_elion[326538]: 167 167
Jan 31 03:25:35 np0005603621 systemd[1]: libpod-c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973.scope: Deactivated successfully.
Jan 31 03:25:35 np0005603621 podman[326517]: 2026-01-31 08:25:35.162388568 +0000 UTC m=+1.471380444 container attach c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:25:35 np0005603621 podman[326517]: 2026-01-31 08:25:35.16338623 +0000 UTC m=+1.472378106 container died c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:25:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:35 np0005603621 nova_compute[247399]: 2026-01-31 08:25:35.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-801e94db28a0642db174ecf2dfe428cfabafa6c5cb50367ddc2c4bfe05426d08-merged.mount: Deactivated successfully.
Jan 31 03:25:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 637 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 180 op/s
Jan 31 03:25:36 np0005603621 podman[326517]: 2026-01-31 08:25:36.662962842 +0000 UTC m=+2.971954718 container remove c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elion, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:25:36 np0005603621 systemd[1]: libpod-conmon-c16f3b3cd92ef0ada19cb64527d175888edaad39abb84951e979a7506d96d973.scope: Deactivated successfully.
Jan 31 03:25:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:36.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:36 np0005603621 podman[326563]: 2026-01-31 08:25:36.777097118 +0000 UTC m=+0.021187918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:25:37 np0005603621 podman[326563]: 2026-01-31 08:25:37.008057404 +0000 UTC m=+0.252148184 container create b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 03:25:37 np0005603621 systemd[1]: Started libpod-conmon-b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76.scope.
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.282 247403 DEBUG oslo_concurrency.processutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/disk.config cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.283 247403 INFO nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Deleting local config drive /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/disk.config because it was imported into RBD.
Jan 31 03:25:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ee0661a194e6439f400a63170b70a6075c028bc9e36e6d5a310a11fa4a1ff03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ee0661a194e6439f400a63170b70a6075c028bc9e36e6d5a310a11fa4a1ff03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ee0661a194e6439f400a63170b70a6075c028bc9e36e6d5a310a11fa4a1ff03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ee0661a194e6439f400a63170b70a6075c028bc9e36e6d5a310a11fa4a1ff03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:37 np0005603621 kernel: tap109b6929-6b: entered promiscuous mode
Jan 31 03:25:37 np0005603621 NetworkManager[49013]: <info>  [1769847937.3271] manager: (tap109b6929-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.328 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:37Z|00455|binding|INFO|Claiming lport 109b6929-6b88-494a-b397-b36c434ed7a7 for this chassis.
Jan 31 03:25:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:37Z|00456|binding|INFO|109b6929-6b88-494a-b397-b36c434ed7a7: Claiming fa:16:3e:06:b8:22 10.100.0.13
Jan 31 03:25:37 np0005603621 systemd-udevd[326595]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:25:37 np0005603621 systemd-machined[212769]: New machine qemu-55-instance-00000079.
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:37 np0005603621 NetworkManager[49013]: <info>  [1769847937.3629] device (tap109b6929-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:25:37 np0005603621 NetworkManager[49013]: <info>  [1769847937.3638] device (tap109b6929-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:25:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:37Z|00457|binding|INFO|Setting lport 109b6929-6b88-494a-b397-b36c434ed7a7 ovn-installed in OVS
Jan 31 03:25:37 np0005603621 systemd[1]: Started Virtual Machine qemu-55-instance-00000079.
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:25:37 np0005603621 podman[326563]: 2026-01-31 08:25:37.394521929 +0000 UTC m=+0.638612729 container init b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 03:25:37 np0005603621 podman[326563]: 2026-01-31 08:25:37.401693785 +0000 UTC m=+0.645784565 container start b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:25:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:37Z|00458|binding|INFO|Setting lport 109b6929-6b88-494a-b397-b36c434ed7a7 up in Southbound
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.434 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:b8:22 10.100.0.13'], port_security=['fa:16:3e:06:b8:22 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cca881fe-18fa-40c1-b9ef-2b1f28855b53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5b1bd8ad-0d2a-4d57-a00a-9a6b59df86e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=109b6929-6b88-494a-b397-b36c434ed7a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.435 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 109b6929-6b88-494a-b397-b36c434ed7a7 in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 bound to our chassis
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.437 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.446 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbd9e52-856b-4f6f-9095-fd193960f138]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.446 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44469d8b-a1 in ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.448 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44469d8b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.449 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[524e1049-c12e-439c-9745-d2ca2267b247]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.450 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6b00a7-5709-4981-a276-f2c4c408a3ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.463 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8f21f9-35d2-405a-a731-21ca74ece309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.475 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5a29f39d-f4b5-4226-aac4-1c116b410737]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.504 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[67553997-2f77-438e-b86f-c49671dad77f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:25:37 np0005603621 systemd-udevd[326598]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:25:37 np0005603621 NetworkManager[49013]: <info>  [1769847937.5114] manager: (tap44469d8b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/209)
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.510 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d647d80-9bcb-4a51-bf60-665b4ee65b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.540 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[182b29d4-3a6d-45a0-9cc2-f9bf05111c92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.544 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[642c4ad0-01a9-4f19-b4f6-c0322bf0c8ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 NetworkManager[49013]: <info>  [1769847937.5618] device (tap44469d8b-a0): carrier: link connected
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.566 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[03b493b7-686e-42b0-a690-3fc912831977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 podman[326563]: 2026-01-31 08:25:37.572411033 +0000 UTC m=+0.816501813 container attach b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.581 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[86e87b99-d549-4441-9ae7-99256f61451f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724307, 'reachable_time': 44569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326631, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.594 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbf642a-a2e0-4893-b096-013595d1720b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:9820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 724307, 'tstamp': 724307}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326632, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.607 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1a18d8-ee2e-4851-b87b-88c19c94dde8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724307, 'reachable_time': 44569, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326633, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.627 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2f2e28-5f42-4d14-ba22-986ff44916ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.666 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[97f0bd13-1021-4b08-bbfb-fb8bd3d81b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.668 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.668 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.668 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44469d8b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:37 np0005603621 kernel: tap44469d8b-a0: entered promiscuous mode
Jan 31 03:25:37 np0005603621 NetworkManager[49013]: <info>  [1769847937.6708] manager: (tap44469d8b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.671 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.706 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44469d8b-a0, col_values=(('external_ids', {'iface-id': '7e288124-e200-4c03-8a4a-baab3e3f3d7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.707 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:37Z|00459|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.708 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.709 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9f7dd9-0ac5-4e68-8c98-beebdf105fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.710 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:25:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:25:37.711 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'env', 'PROCESS_TAG=haproxy-44469d8b-ad30-4270-88fa-e67c568f3150', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44469d8b-ad30-4270-88fa-e67c568f3150.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:25:37 np0005603621 nova_compute[247399]: 2026-01-31 08:25:37.713 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:38 np0005603621 podman[326701]: 2026-01-31 08:25:37.98657532 +0000 UTC m=+0.018114581 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 638 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 168 op/s
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.104 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847938.1035922, cca881fe-18fa-40c1-b9ef-2b1f28855b53 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.104 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] VM Started (Lifecycle Event)#033[00m
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.148 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.153 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847938.1037695, cca881fe-18fa-40c1-b9ef-2b1f28855b53 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.153 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.221 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.224 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]: {
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:        "osd_id": 0,
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:        "type": "bluestore"
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]:    }
Jan 31 03:25:38 np0005603621 hungry_shirley[326580]: }
Jan 31 03:25:38 np0005603621 nova_compute[247399]: 2026-01-31 08:25:38.270 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:25:38 np0005603621 systemd[1]: libpod-b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76.scope: Deactivated successfully.
Jan 31 03:25:38 np0005603621 podman[326563]: 2026-01-31 08:25:38.298117405 +0000 UTC m=+1.542208225 container died b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:25:38
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'volumes', 'images', '.rgw.root', 'vms', 'default.rgw.log']
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:25:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:38.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3ee0661a194e6439f400a63170b70a6075c028bc9e36e6d5a310a11fa4a1ff03-merged.mount: Deactivated successfully.
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:25:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:25:39 np0005603621 podman[326701]: 2026-01-31 08:25:39.035693442 +0000 UTC m=+1.067232653 container create bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.232 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.359 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:39 np0005603621 systemd[1]: Started libpod-conmon-bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818.scope.
Jan 31 03:25:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:25:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85b788af7580b439956c915e61dfa68818561c33bd0098155ca99043715be413/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.551 247403 DEBUG nova.compute.manager [req-6ff81b46-518b-4d2e-833e-a3d497ec1bd6 req-a126c63a-b19e-4595-82e6-4afbe4c97d47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.551 247403 DEBUG oslo_concurrency.lockutils [req-6ff81b46-518b-4d2e-833e-a3d497ec1bd6 req-a126c63a-b19e-4595-82e6-4afbe4c97d47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.552 247403 DEBUG oslo_concurrency.lockutils [req-6ff81b46-518b-4d2e-833e-a3d497ec1bd6 req-a126c63a-b19e-4595-82e6-4afbe4c97d47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.552 247403 DEBUG oslo_concurrency.lockutils [req-6ff81b46-518b-4d2e-833e-a3d497ec1bd6 req-a126c63a-b19e-4595-82e6-4afbe4c97d47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.552 247403 DEBUG nova.compute.manager [req-6ff81b46-518b-4d2e-833e-a3d497ec1bd6 req-a126c63a-b19e-4595-82e6-4afbe4c97d47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Processing event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.553 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.555 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769847939.5556633, cca881fe-18fa-40c1-b9ef-2b1f28855b53 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.556 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.557 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.560 247403 INFO nova.virt.libvirt.driver [-] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance spawned successfully.#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.560 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.582 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.584 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.605 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.606 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.606 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.607 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.607 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.607 247403 DEBUG nova.virt.libvirt.driver [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.631 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.705 247403 INFO nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Took 16.75 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.706 247403 DEBUG nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.781 247403 INFO nova.compute.manager [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Took 20.76 seconds to build instance.#033[00m
Jan 31 03:25:39 np0005603621 nova_compute[247399]: 2026-01-31 08:25:39.822 247403 DEBUG oslo_concurrency.lockutils [None req-f2894478-7a11-486e-ae71-66811d2326a1 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 170 op/s
Jan 31 03:25:40 np0005603621 podman[326563]: 2026-01-31 08:25:40.140964662 +0000 UTC m=+3.385055442 container remove b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:25:40 np0005603621 systemd[1]: libpod-conmon-b8c7ef8584e6b7da17282685dbe8357a403cbcb3c7720754f810117bf22ced76.scope: Deactivated successfully.
Jan 31 03:25:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:25:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:40 np0005603621 podman[326701]: 2026-01-31 08:25:40.496967417 +0000 UTC m=+2.528506658 container init bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:25:40 np0005603621 podman[326701]: 2026-01-31 08:25:40.501968824 +0000 UTC m=+2.533508045 container start bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:25:40 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [NOTICE]   (326758) : New worker (326760) forked
Jan 31 03:25:40 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [NOTICE]   (326758) : Loading success.
Jan 31 03:25:40 np0005603621 nova_compute[247399]: 2026-01-31 08:25:40.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:40.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:25:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 07cfd2d5-14f0-4875-b8af-7cdd15e18610 does not exist
Jan 31 03:25:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 174f13fb-9646-4530-a281-991f2274addf does not exist
Jan 31 03:25:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1126ed01-b26d-4d0c-b922-34ab6ad8d94b does not exist
Jan 31 03:25:41 np0005603621 nova_compute[247399]: 2026-01-31 08:25:41.892 247403 DEBUG nova.compute.manager [req-8e2a1cc9-b1e2-4a0b-a61f-0a8ee7f9b43a req-60ba1b37-7791-4b6f-b176-7bda5966c52b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:41 np0005603621 nova_compute[247399]: 2026-01-31 08:25:41.893 247403 DEBUG oslo_concurrency.lockutils [req-8e2a1cc9-b1e2-4a0b-a61f-0a8ee7f9b43a req-60ba1b37-7791-4b6f-b176-7bda5966c52b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:41 np0005603621 nova_compute[247399]: 2026-01-31 08:25:41.893 247403 DEBUG oslo_concurrency.lockutils [req-8e2a1cc9-b1e2-4a0b-a61f-0a8ee7f9b43a req-60ba1b37-7791-4b6f-b176-7bda5966c52b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:41 np0005603621 nova_compute[247399]: 2026-01-31 08:25:41.893 247403 DEBUG oslo_concurrency.lockutils [req-8e2a1cc9-b1e2-4a0b-a61f-0a8ee7f9b43a req-60ba1b37-7791-4b6f-b176-7bda5966c52b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:41 np0005603621 nova_compute[247399]: 2026-01-31 08:25:41.893 247403 DEBUG nova.compute.manager [req-8e2a1cc9-b1e2-4a0b-a61f-0a8ee7f9b43a req-60ba1b37-7791-4b6f-b176-7bda5966c52b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] No waiting events found dispatching network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:25:41 np0005603621 nova_compute[247399]: 2026-01-31 08:25:41.893 247403 WARNING nova.compute.manager [req-8e2a1cc9-b1e2-4a0b-a61f-0a8ee7f9b43a req-60ba1b37-7791-4b6f-b176-7bda5966c52b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received unexpected event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:25:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 649 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 31 03:25:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:25:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:42.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:42.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:43 np0005603621 nova_compute[247399]: 2026-01-31 08:25:43.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:43 np0005603621 NetworkManager[49013]: <info>  [1769847943.8529] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Jan 31 03:25:43 np0005603621 nova_compute[247399]: 2026-01-31 08:25:43.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:43 np0005603621 NetworkManager[49013]: <info>  [1769847943.8543] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Jan 31 03:25:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:43Z|00460|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:25:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:25:43Z|00461|binding|INFO|Releasing lport dad27cfe-7e8a-4f55-a945-07f9cae848c1 from this chassis (sb_readonly=0)
Jan 31 03:25:43 np0005603621 nova_compute[247399]: 2026-01-31 08:25:43.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:44 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 31 03:25:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 649 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Jan 31 03:25:44 np0005603621 nova_compute[247399]: 2026-01-31 08:25:44.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:44.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:45 np0005603621 nova_compute[247399]: 2026-01-31 08:25:45.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:45 np0005603621 nova_compute[247399]: 2026-01-31 08:25:45.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:25:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:45 np0005603621 nova_compute[247399]: 2026-01-31 08:25:45.536 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 649 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 227 op/s
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.130 247403 DEBUG nova.compute.manager [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-changed-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.130 247403 DEBUG nova.compute.manager [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Refreshing instance network info cache due to event network-changed-109b6929-6b88-494a-b397-b36c434ed7a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.131 247403 DEBUG oslo_concurrency.lockutils [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.132 247403 DEBUG oslo_concurrency.lockutils [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.132 247403 DEBUG nova.network.neutron [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Refreshing network info cache for port 109b6929-6b88-494a-b397-b36c434ed7a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:25:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:46.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.723 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.724 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:25:46 np0005603621 nova_compute[247399]: 2026-01-31 08:25:46.724 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:25:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:46.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:47 np0005603621 nova_compute[247399]: 2026-01-31 08:25:47.705 247403 DEBUG nova.network.neutron [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updated VIF entry in instance network info cache for port 109b6929-6b88-494a-b397-b36c434ed7a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:25:47 np0005603621 nova_compute[247399]: 2026-01-31 08:25:47.705 247403 DEBUG nova.network.neutron [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:47 np0005603621 nova_compute[247399]: 2026-01-31 08:25:47.837 247403 DEBUG oslo_concurrency.lockutils [req-8f85ee39-c0d5-42b0-8450-9e0ab7b6fbbf req-c2dc09c2-6526-4666-a255-a6169d12c422 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 632 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 745 KiB/s wr, 160 op/s
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.196 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [{"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.420 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-23200b4a-e522-43bf-a83e-cb2f9bb31571" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.420 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.421 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.468 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.469 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.469 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.469 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:25:48 np0005603621 nova_compute[247399]: 2026-01-31 08:25:48.470 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:48.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512562875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.042 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.142 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.143 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.145 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.146 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.148 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.149 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.151 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.152 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009179651149264842 of space, bias 1.0, pg target 2.7538953447794525 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.6448598606458217 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006047490054438861 of space, bias 1.0, pg target 1.8021520362227808 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:25:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.362 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.366 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.368 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3730MB free_disk=20.785552978515625GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.369 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.369 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.469 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.470 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.470 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance dbf4e573-8e19-4920-aab9-c290d7d8eeec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.470 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance cca881fe-18fa-40c1-b9ef-2b1f28855b53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.470 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.471 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:25:49 np0005603621 nova_compute[247399]: 2026-01-31 08:25:49.571 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:25:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 590 KiB/s wr, 209 op/s
Jan 31 03:25:50 np0005603621 nova_compute[247399]: 2026-01-31 08:25:50.563 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:25:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2756238669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:25:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:50 np0005603621 nova_compute[247399]: 2026-01-31 08:25:50.859 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:25:50 np0005603621 nova_compute[247399]: 2026-01-31 08:25:50.863 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:25:50 np0005603621 nova_compute[247399]: 2026-01-31 08:25:50.880 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:25:50 np0005603621 nova_compute[247399]: 2026-01-31 08:25:50.912 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:25:50 np0005603621 nova_compute[247399]: 2026-01-31 08:25:50.912 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:25:51 np0005603621 nova_compute[247399]: 2026-01-31 08:25:51.689 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:51 np0005603621 nova_compute[247399]: 2026-01-31 08:25:51.690 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:51 np0005603621 nova_compute[247399]: 2026-01-31 08:25:51.690 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 156 KiB/s wr, 178 op/s
Jan 31 03:25:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:52.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:52.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 104 KiB/s wr, 152 op/s
Jan 31 03:25:54 np0005603621 nova_compute[247399]: 2026-01-31 08:25:54.364 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:25:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:25:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:55 np0005603621 nova_compute[247399]: 2026-01-31 08:25:55.566 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:25:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:25:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 589 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 173 op/s
Jan 31 03:25:56 np0005603621 nova_compute[247399]: 2026-01-31 08:25:56.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:56.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:56.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 589 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 524 KiB/s rd, 1.4 MiB/s wr, 98 op/s
Jan 31 03:25:58 np0005603621 nova_compute[247399]: 2026-01-31 08:25:58.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:25:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:25:58.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:25:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:25:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:25:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:25:58.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:25:59 np0005603621 nova_compute[247399]: 2026-01-31 08:25:59.272 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:25:59 np0005603621 nova_compute[247399]: 2026-01-31 08:25:59.273 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:25:59 np0005603621 nova_compute[247399]: 2026-01-31 08:25:59.366 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 598 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 323 KiB/s rd, 2.0 MiB/s wr, 89 op/s
Jan 31 03:26:00 np0005603621 nova_compute[247399]: 2026-01-31 08:26:00.568 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:00.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:01 np0005603621 podman[326926]: 2026-01-31 08:26:01.509774104 +0000 UTC m=+0.055895232 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:26:01 np0005603621 podman[326927]: 2026-01-31 08:26:01.533666996 +0000 UTC m=+0.079896927 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 03:26:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 598 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Jan 31 03:26:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:26:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3884603639' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:26:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:02Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:b8:22 10.100.0.13
Jan 31 03:26:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:02Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:b8:22 10.100.0.13
Jan 31 03:26:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:02.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:02.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 598 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Jan 31 03:26:04 np0005603621 nova_compute[247399]: 2026-01-31 08:26:04.389 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:04.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:04.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:05 np0005603621 nova_compute[247399]: 2026-01-31 08:26:05.570 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:05.934 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:26:05 np0005603621 nova_compute[247399]: 2026-01-31 08:26:05.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:05.936 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:26:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 603 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 03:26:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:06.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:06.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 603 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 195 KiB/s rd, 853 KiB/s wr, 37 op/s
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:08.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:08.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:09 np0005603621 nova_compute[247399]: 2026-01-31 08:26:09.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 608 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 234 KiB/s rd, 824 KiB/s wr, 48 op/s
Jan 31 03:26:10 np0005603621 nova_compute[247399]: 2026-01-31 08:26:10.615 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:10.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:10.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 177 KiB/s wr, 52 op/s
Jan 31 03:26:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:12.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 224 KiB/s rd, 175 KiB/s wr, 45 op/s
Jan 31 03:26:14 np0005603621 nova_compute[247399]: 2026-01-31 08:26:14.394 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1207290520' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1207290520' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 31 03:26:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 31 03:26:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:14.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:15 np0005603621 nova_compute[247399]: 2026-01-31 08:26:15.619 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:15.937 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 610 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 86 KiB/s rd, 72 KiB/s wr, 57 op/s
Jan 31 03:26:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:16.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 624 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 80 op/s
Jan 31 03:26:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:18.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:18.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:19 np0005603621 nova_compute[247399]: 2026-01-31 08:26:19.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:19 np0005603621 nova_compute[247399]: 2026-01-31 08:26:19.641 247403 DEBUG nova.compute.manager [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:19 np0005603621 nova_compute[247399]: 2026-01-31 08:26:19.698 247403 INFO nova.compute.manager [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] instance snapshotting#033[00m
Jan 31 03:26:19 np0005603621 nova_compute[247399]: 2026-01-31 08:26:19.699 247403 DEBUG nova.objects.instance [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'flavor' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 678 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 106 op/s
Jan 31 03:26:20 np0005603621 nova_compute[247399]: 2026-01-31 08:26:20.151 247403 INFO nova.virt.libvirt.driver [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Beginning live snapshot process#033[00m
Jan 31 03:26:20 np0005603621 nova_compute[247399]: 2026-01-31 08:26:20.315 247403 DEBUG nova.virt.libvirt.imagebackend [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:26:20 np0005603621 nova_compute[247399]: 2026-01-31 08:26:20.524 247403 DEBUG nova.storage.rbd_utils [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(f8bddb255b334de28f9bd124c97f6791) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:26:20 np0005603621 nova_compute[247399]: 2026-01-31 08:26:20.622 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 31 03:26:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:20.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 31 03:26:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 31 03:26:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 31 03:26:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 678 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 111 op/s
Jan 31 03:26:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:22.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:22.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 31 03:26:23 np0005603621 nova_compute[247399]: 2026-01-31 08:26:23.515 247403 DEBUG nova.storage.rbd_utils [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] cloning vms/cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk@f8bddb255b334de28f9bd124c97f6791 to images/913987a4-398c-4df5-8903-96535743e328 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:26:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 31 03:26:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 92 op/s
Jan 31 03:26:24 np0005603621 nova_compute[247399]: 2026-01-31 08:26:24.398 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 31 03:26:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:26:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:26:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:25 np0005603621 nova_compute[247399]: 2026-01-31 08:26:25.625 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 31 03:26:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.6 MiB/s wr, 86 op/s
Jan 31 03:26:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 31 03:26:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 689 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 687 KiB/s wr, 54 op/s
Jan 31 03:26:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:28 np0005603621 nova_compute[247399]: 2026-01-31 08:26:28.233 247403 DEBUG nova.storage.rbd_utils [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] flattening images/913987a4-398c-4df5-8903-96535743e328 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:26:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:29 np0005603621 nova_compute[247399]: 2026-01-31 08:26:29.399 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 701 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.9 MiB/s wr, 81 op/s
Jan 31 03:26:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:30.508 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:30.508 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:30.509 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:30 np0005603621 nova_compute[247399]: 2026-01-31 08:26:30.628 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 701 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.3 MiB/s wr, 65 op/s
Jan 31 03:26:32 np0005603621 podman[327188]: 2026-01-31 08:26:32.528643502 +0000 UTC m=+0.078793203 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:26:32 np0005603621 podman[327189]: 2026-01-31 08:26:32.528862819 +0000 UTC m=+0.076925255 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 03:26:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:32.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 726 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 63 op/s
Jan 31 03:26:34 np0005603621 nova_compute[247399]: 2026-01-31 08:26:34.401 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:34.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:35 np0005603621 nova_compute[247399]: 2026-01-31 08:26:35.210 247403 DEBUG nova.storage.rbd_utils [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] removing snapshot(f8bddb255b334de28f9bd124c97f6791) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:26:35 np0005603621 nova_compute[247399]: 2026-01-31 08:26:35.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:35Z|00462|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 31 03:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 31 03:26:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 762 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.4 MiB/s wr, 76 op/s
Jan 31 03:26:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:36.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 31 03:26:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 31 03:26:37 np0005603621 nova_compute[247399]: 2026-01-31 08:26:37.331 247403 DEBUG nova.storage.rbd_utils [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(snap) on rbd image(913987a4-398c-4df5-8903-96535743e328) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:26:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 750 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 83 op/s
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:26:38
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.meta']
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:26:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:38.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:26:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:26:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 31 03:26:39 np0005603621 nova_compute[247399]: 2026-01-31 08:26:39.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 31 03:26:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.4 MiB/s wr, 136 op/s
Jan 31 03:26:40 np0005603621 nova_compute[247399]: 2026-01-31 08:26:40.332 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:40 np0005603621 nova_compute[247399]: 2026-01-31 08:26:40.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:40.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:26:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:26:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.222 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.222 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.223 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.223 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.223 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.224 247403 INFO nova.compute.manager [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Terminating instance#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.225 247403 DEBUG nova.compute.manager [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:26:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:42.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:42.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.892 247403 INFO nova.virt.libvirt.driver [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Snapshot image upload complete#033[00m
Jan 31 03:26:42 np0005603621 nova_compute[247399]: 2026-01-31 08:26:42.893 247403 INFO nova.compute.manager [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Took 23.16 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:26:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:26:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:26:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:26:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:26:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.337 247403 DEBUG nova.compute.manager [None req-82573a2d-9d32-4153-942a-c62ec53f93c5 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:26:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9ce6517f-788f-4398-b6a9-8d64970667df does not exist
Jan 31 03:26:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fdbfe5fa-9c03-433f-ab53-e15a7dc2caa0 does not exist
Jan 31 03:26:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8f96fdc2-5027-4d46-961c-6d797719b079 does not exist
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:26:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:26:43 np0005603621 kernel: tap3765398c-c6 (unregistering): left promiscuous mode
Jan 31 03:26:43 np0005603621 NetworkManager[49013]: <info>  [1769848003.8433] device (tap3765398c-c6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:26:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:43Z|00463|binding|INFO|Releasing lport 3765398c-c6d8-4598-98d8-447d2d17b347 from this chassis (sb_readonly=0)
Jan 31 03:26:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:43Z|00464|binding|INFO|Setting lport 3765398c-c6d8-4598-98d8-447d2d17b347 down in Southbound
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.849 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:43Z|00465|binding|INFO|Removing iface tap3765398c-c6 ovn-installed in OVS
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.852 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.857 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.873 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:1d:95 10.100.0.6'], port_security=['fa:16:3e:6b:1d:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dbf4e573-8e19-4920-aab9-c290d7d8eeec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=3765398c-c6d8-4598-98d8-447d2d17b347) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.874 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 3765398c-c6d8-4598-98d8-447d2d17b347 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab unbound from our chassis#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.876 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.885 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6681a4f4-7b1c-4cae-b2c6-635615e4ca65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:43 np0005603621 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 31 03:26:43 np0005603621 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000073.scope: Consumed 21.261s CPU time.
Jan 31 03:26:43 np0005603621 systemd-machined[212769]: Machine qemu-54-instance-00000073 terminated.
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.910 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[38500c44-9bb4-4fe6-86bc-b606373efe56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.912 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1e2367-c5cb-481e-ac12-6d3463468607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.932 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[78959332-33f7-48c3-88cc-58503121ea17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.947 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[08c0113e-70be-4a90-9ec1-36efebaf1c08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327515, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.964 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d39e4f2-aa39-4334-a5f9-ab19f88c4679]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327518, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327518, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.966 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.967 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:43 np0005603621 nova_compute[247399]: 2026-01-31 08:26:43.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.972 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.972 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.973 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:43.973 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:26:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:26:44 np0005603621 kernel: tap3765398c-c6: entered promiscuous mode
Jan 31 03:26:44 np0005603621 NetworkManager[49013]: <info>  [1769848004.0496] manager: (tap3765398c-c6): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.050 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:44Z|00466|binding|INFO|Claiming lport 3765398c-c6d8-4598-98d8-447d2d17b347 for this chassis.
Jan 31 03:26:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:44Z|00467|binding|INFO|3765398c-c6d8-4598-98d8-447d2d17b347: Claiming fa:16:3e:6b:1d:95 10.100.0.6
Jan 31 03:26:44 np0005603621 kernel: tap3765398c-c6 (unregistering): left promiscuous mode
Jan 31 03:26:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:44Z|00468|binding|INFO|Setting lport 3765398c-c6d8-4598-98d8-447d2d17b347 ovn-installed in OVS
Jan 31 03:26:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:44Z|00469|if_status|INFO|Dropped 109 log messages in last 631 seconds (most recently, 630 seconds ago) due to excessive rate
Jan 31 03:26:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:44Z|00470|if_status|INFO|Not setting lport 3765398c-c6d8-4598-98d8-447d2d17b347 down as sb is readonly
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.071 247403 INFO nova.virt.libvirt.driver [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Instance destroyed successfully.#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.072 247403 DEBUG nova.objects.instance [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'resources' on Instance uuid dbf4e573-8e19-4920-aab9-c290d7d8eeec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:26:44Z|00471|binding|INFO|Releasing lport 3765398c-c6d8-4598-98d8-447d2d17b347 from this chassis (sb_readonly=0)
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.093 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:1d:95 10.100.0.6'], port_security=['fa:16:3e:6b:1d:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dbf4e573-8e19-4920-aab9-c290d7d8eeec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=3765398c-c6d8-4598-98d8-447d2d17b347) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.095 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 3765398c-c6d8-4598-98d8-447d2d17b347 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab bound to our chassis#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.096 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.099 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.107 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:1d:95 10.100.0.6'], port_security=['fa:16:3e:6b:1d:95 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dbf4e573-8e19-4920-aab9-c290d7d8eeec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=3765398c-c6d8-4598-98d8-447d2d17b347) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.111 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[420dce39-edbb-48ec-9898-55cccee74d8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 377 KiB/s wr, 179 op/s
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.131 247403 DEBUG nova.virt.libvirt.vif [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:22:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1917640469',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1917640469',id=115,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:23:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-k61p3csp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:23:33Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=dbf4e573-8e19-4920-aab9-c290d7d8eeec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.131 247403 DEBUG nova.network.os_vif_util [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "3765398c-c6d8-4598-98d8-447d2d17b347", "address": "fa:16:3e:6b:1d:95", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3765398c-c6", "ovs_interfaceid": "3765398c-c6d8-4598-98d8-447d2d17b347", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.134 247403 DEBUG nova.network.os_vif_util [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.134 247403 DEBUG os_vif [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.135 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c35e4427-7472-4105-ab29-6960dbf6e208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.136 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.136 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3765398c-c6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.143 247403 INFO os_vif [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:1d:95,bridge_name='br-int',has_traffic_filtering=True,id=3765398c-c6d8-4598-98d8-447d2d17b347,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3765398c-c6')#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.143 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f02c6870-2619-4b43-9133-1e53eab80403]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.163 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[80ad86cb-6457-4c3c-9d47-a1f5648252d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.175 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[21455067-4e31-4e96-94ac-d61d85491a82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327578, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.188 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0c80242b-4810-4d7d-a1ee-e5824dbf8489]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327582, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327582, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.189 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.191 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.191 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.193 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.193 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.194 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.194 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.195 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 3765398c-c6d8-4598-98d8-447d2d17b347 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab unbound from our chassis#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.196 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.202 247403 DEBUG nova.compute.manager [req-141fc6f2-d728-4e09-ab30-650ce8a0666a req-af57aa8e-5589-4263-a4ff-49ff6e8526af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-vif-unplugged-3765398c-c6d8-4598-98d8-447d2d17b347 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.202 247403 DEBUG oslo_concurrency.lockutils [req-141fc6f2-d728-4e09-ab30-650ce8a0666a req-af57aa8e-5589-4263-a4ff-49ff6e8526af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.202 247403 DEBUG oslo_concurrency.lockutils [req-141fc6f2-d728-4e09-ab30-650ce8a0666a req-af57aa8e-5589-4263-a4ff-49ff6e8526af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.202 247403 DEBUG oslo_concurrency.lockutils [req-141fc6f2-d728-4e09-ab30-650ce8a0666a req-af57aa8e-5589-4263-a4ff-49ff6e8526af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.202 247403 DEBUG nova.compute.manager [req-141fc6f2-d728-4e09-ab30-650ce8a0666a req-af57aa8e-5589-4263-a4ff-49ff6e8526af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] No waiting events found dispatching network-vif-unplugged-3765398c-c6d8-4598-98d8-447d2d17b347 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.203 247403 DEBUG nova.compute.manager [req-141fc6f2-d728-4e09-ab30-650ce8a0666a req-af57aa8e-5589-4263-a4ff-49ff6e8526af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-vif-unplugged-3765398c-c6d8-4598-98d8-447d2d17b347 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.207 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfabc4d-f1d2-4018-a7bc-b38f0e9a8baf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.230 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2f1b0520-e33b-418a-ad66-b1596da36b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.233 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a144dcf1-7097-42c9-9995-428d8359e667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.263 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8c969c0e-347b-4a56-97ea-01b9736dbac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.279 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ada789e7-b922-41a2-9250-b2e1edfeead9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 18378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327615, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.296 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b1707151-34cb-4134-995a-5866bd32fd6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327616, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327616, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.297 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.302 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.303 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.304 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:44.304 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:26:44 np0005603621 podman[327599]: 2026-01-31 08:26:44.265945456 +0000 UTC m=+0.024776052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:26:44 np0005603621 nova_compute[247399]: 2026-01-31 08:26:44.404 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:44 np0005603621 podman[327599]: 2026-01-31 08:26:44.638604845 +0000 UTC m=+0.397435421 container create 5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:26:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:45 np0005603621 nova_compute[247399]: 2026-01-31 08:26:45.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:45 np0005603621 nova_compute[247399]: 2026-01-31 08:26:45.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:26:45 np0005603621 systemd[1]: Started libpod-conmon-5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195.scope.
Jan 31 03:26:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:26:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 31 03:26:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:26:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:26:45 np0005603621 podman[327599]: 2026-01-31 08:26:45.783432092 +0000 UTC m=+1.542262758 container init 5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:26:45 np0005603621 podman[327599]: 2026-01-31 08:26:45.792233819 +0000 UTC m=+1.551064405 container start 5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:26:45 np0005603621 relaxed_volhard[327620]: 167 167
Jan 31 03:26:45 np0005603621 systemd[1]: libpod-5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195.scope: Deactivated successfully.
Jan 31 03:26:46 np0005603621 podman[327599]: 2026-01-31 08:26:46.070396022 +0000 UTC m=+1.829226608 container attach 5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 03:26:46 np0005603621 podman[327599]: 2026-01-31 08:26:46.07226355 +0000 UTC m=+1.831094136 container died 5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:26:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 332 KiB/s wr, 248 op/s
Jan 31 03:26:46 np0005603621 nova_compute[247399]: 2026-01-31 08:26:46.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:46 np0005603621 nova_compute[247399]: 2026-01-31 08:26:46.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:26:46 np0005603621 nova_compute[247399]: 2026-01-31 08:26:46.772 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Jan 31 03:26:46 np0005603621 nova_compute[247399]: 2026-01-31 08:26:46.772 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:26:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:46.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:46 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:46.987 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:26:46 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:46.988 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:26:46 np0005603621 nova_compute[247399]: 2026-01-31 08:26:46.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.261 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.261 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.261 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.261 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.262 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.284 247403 DEBUG nova.compute.manager [req-7e605cde-b1bb-4392-b975-780fc75e8737 req-606b15ff-3225-4507-864a-c0b227a71de6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.285 247403 DEBUG oslo_concurrency.lockutils [req-7e605cde-b1bb-4392-b975-780fc75e8737 req-606b15ff-3225-4507-864a-c0b227a71de6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.285 247403 DEBUG oslo_concurrency.lockutils [req-7e605cde-b1bb-4392-b975-780fc75e8737 req-606b15ff-3225-4507-864a-c0b227a71de6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.285 247403 DEBUG oslo_concurrency.lockutils [req-7e605cde-b1bb-4392-b975-780fc75e8737 req-606b15ff-3225-4507-864a-c0b227a71de6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.286 247403 DEBUG nova.compute.manager [req-7e605cde-b1bb-4392-b975-780fc75e8737 req-606b15ff-3225-4507-864a-c0b227a71de6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] No waiting events found dispatching network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.286 247403 WARNING nova.compute.manager [req-7e605cde-b1bb-4392-b975-780fc75e8737 req-606b15ff-3225-4507-864a-c0b227a71de6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received unexpected event network-vif-plugged-3765398c-c6d8-4598-98d8-447d2d17b347 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.634 247403 DEBUG nova.compute.manager [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.675 247403 INFO nova.compute.manager [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] instance snapshotting#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.676 247403 DEBUG nova.objects.instance [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'flavor' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:26:47 np0005603621 nova_compute[247399]: 2026-01-31 08:26:47.993 247403 INFO nova.virt.libvirt.driver [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Beginning live snapshot process#033[00m
Jan 31 03:26:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 22 KiB/s wr, 222 op/s
Jan 31 03:26:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:26:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758269115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:26:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 31 03:26:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 31 03:26:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c72f3630c1316eeb7359c8b5b2af3ae37ee2c6d58735b3ee21ffedcc1706315d-merged.mount: Deactivated successfully.
Jan 31 03:26:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.765 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.772 247403 DEBUG nova.virt.libvirt.imagebackend [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:26:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:48.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.921 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.922 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.925 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.925 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.929 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.929 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.934 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:48 np0005603621 nova_compute[247399]: 2026-01-31 08:26:48.934 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.080 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.081 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3795MB free_disk=20.760211944580078GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.081 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.081 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.148 247403 DEBUG nova.storage.rbd_utils [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(701f2c95acff4c928a142862c13cb567) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01086632526056207 of space, bias 1.0, pg target 3.259897578168621 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.6426959013819096 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007221255292667039 of space, bias 1.0, pg target 2.1447128219221105 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:26:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 03:26:49 np0005603621 nova_compute[247399]: 2026-01-31 08:26:49.681 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:49 np0005603621 podman[327599]: 2026-01-31 08:26:49.887476135 +0000 UTC m=+5.646306711 container remove 5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_volhard, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 03:26:49 np0005603621 systemd[1]: libpod-conmon-5fc5c5476b64ca80947bd4458fd589e955e130305395984c7744f2100927d195.scope: Deactivated successfully.
Jan 31 03:26:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 31 03:26:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.007 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f02cbbe1-1133-4659-a065-630c53ee2683 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.008 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.008 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance dbf4e573-8e19-4920-aab9-c290d7d8eeec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.008 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance cca881fe-18fa-40c1-b9ef-2b1f28855b53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.008 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.008 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:26:50 np0005603621 podman[327772]: 2026-01-31 08:26:50.000896457 +0000 UTC m=+0.018193663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:26:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 KiB/s wr, 213 op/s
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.317 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:26:50 np0005603621 podman[327772]: 2026-01-31 08:26:50.335660263 +0000 UTC m=+0.352957449 container create 27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:26:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:26:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927933421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.709 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.714 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:26:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:26:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:50.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:26:50 np0005603621 nova_compute[247399]: 2026-01-31 08:26:50.863 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:26:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 31 03:26:51 np0005603621 systemd[1]: Started libpod-conmon-27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f.scope.
Jan 31 03:26:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:26:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75186f784b60f5a25002c03a56cb79507527004001ee0532e1b42a45baffc6ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75186f784b60f5a25002c03a56cb79507527004001ee0532e1b42a45baffc6ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75186f784b60f5a25002c03a56cb79507527004001ee0532e1b42a45baffc6ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75186f784b60f5a25002c03a56cb79507527004001ee0532e1b42a45baffc6ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75186f784b60f5a25002c03a56cb79507527004001ee0532e1b42a45baffc6ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:26:51 np0005603621 podman[327772]: 2026-01-31 08:26:51.416171094 +0000 UTC m=+1.433468300 container init 27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:26:51 np0005603621 podman[327772]: 2026-01-31 08:26:51.422610536 +0000 UTC m=+1.439907732 container start 27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_booth, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:26:51 np0005603621 nova_compute[247399]: 2026-01-31 08:26:51.445 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:26:51 np0005603621 nova_compute[247399]: 2026-01-31 08:26:51.446 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:26:51 np0005603621 podman[327772]: 2026-01-31 08:26:51.666674235 +0000 UTC m=+1.683971431 container attach 27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:26:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:26:51.990 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:26:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 KiB/s wr, 117 op/s
Jan 31 03:26:52 np0005603621 mystifying_booth[327812]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:26:52 np0005603621 mystifying_booth[327812]: --> relative data size: 1.0
Jan 31 03:26:52 np0005603621 mystifying_booth[327812]: --> All data devices are unavailable
Jan 31 03:26:52 np0005603621 systemd[1]: libpod-27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f.scope: Deactivated successfully.
Jan 31 03:26:52 np0005603621 podman[327772]: 2026-01-31 08:26:52.498938345 +0000 UTC m=+2.516235541 container died 27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_booth, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:26:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 31 03:26:52 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 31 03:26:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:52.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:53 np0005603621 nova_compute[247399]: 2026-01-31 08:26:53.173 247403 DEBUG nova.storage.rbd_utils [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] cloning vms/cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk@701f2c95acff4c928a142862c13cb567 to images/9dcdf407-c698-4570-bce4-edde88d1813b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:26:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-75186f784b60f5a25002c03a56cb79507527004001ee0532e1b42a45baffc6ee-merged.mount: Deactivated successfully.
Jan 31 03:26:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 722 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 170 B/s wr, 15 op/s
Jan 31 03:26:54 np0005603621 nova_compute[247399]: 2026-01-31 08:26:54.142 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:54 np0005603621 nova_compute[247399]: 2026-01-31 08:26:54.409 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:54 np0005603621 nova_compute[247399]: 2026-01-31 08:26:54.447 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:54 np0005603621 nova_compute[247399]: 2026-01-31 08:26:54.495 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:54 np0005603621 nova_compute[247399]: 2026-01-31 08:26:54.496 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:54 np0005603621 nova_compute[247399]: 2026-01-31 08:26:54.497 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:54 np0005603621 podman[327772]: 2026-01-31 08:26:54.809059161 +0000 UTC m=+4.826356357 container remove 27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:26:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:26:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:26:54 np0005603621 systemd[1]: libpod-conmon-27cdb6840241d10bd6cd264870e560dcdb96710077b6e4629eec111c6de3398f.scope: Deactivated successfully.
Jan 31 03:26:55 np0005603621 podman[328014]: 2026-01-31 08:26:55.265672935 +0000 UTC m=+0.018877345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:26:55 np0005603621 podman[328014]: 2026-01-31 08:26:55.749585161 +0000 UTC m=+0.502789541 container create 06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:26:55 np0005603621 systemd[1]: Started libpod-conmon-06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb.scope.
Jan 31 03:26:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:26:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 139 KiB/s rd, 15 KiB/s wr, 61 op/s
Jan 31 03:26:56 np0005603621 podman[328014]: 2026-01-31 08:26:56.484141621 +0000 UTC m=+1.237346011 container init 06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:26:56 np0005603621 podman[328014]: 2026-01-31 08:26:56.489620234 +0000 UTC m=+1.242824624 container start 06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tharp, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:26:56 np0005603621 adoring_tharp[328033]: 167 167
Jan 31 03:26:56 np0005603621 systemd[1]: libpod-06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb.scope: Deactivated successfully.
Jan 31 03:26:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:26:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:26:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:56.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:57 np0005603621 podman[328014]: 2026-01-31 08:26:57.336027548 +0000 UTC m=+2.089231948 container attach 06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tharp, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:26:57 np0005603621 podman[328014]: 2026-01-31 08:26:57.336657699 +0000 UTC m=+2.089862089 container died 06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 31 03:26:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 647 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 14 KiB/s wr, 74 op/s
Jan 31 03:26:58 np0005603621 nova_compute[247399]: 2026-01-31 08:26:58.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:26:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:26:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 31 03:26:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:26:58.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:26:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:26:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:26:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.071 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848004.0698342, dbf4e573-8e19-4920-aab9-c290d7d8eeec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.072 247403 INFO nova.compute.manager [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.168 247403 DEBUG nova.compute.manager [None req-6e7f667b-05fa-4f5d-b24e-e5a6ec36be56 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.172 247403 DEBUG nova.compute.manager [None req-6e7f667b-05fa-4f5d-b24e-e5a6ec36be56 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.386 247403 INFO nova.compute.manager [None req-6e7f667b-05fa-4f5d-b24e-e5a6ec36be56 - - - - - -] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 31 03:26:59 np0005603621 nova_compute[247399]: 2026-01-31 08:26:59.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 31 03:27:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 668 KiB/s rd, 15 KiB/s wr, 94 op/s
Jan 31 03:27:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-42001227af2b058cfe3100e7fe6d798b0f01e622961353d2ebbbe32d17925811-merged.mount: Deactivated successfully.
Jan 31 03:27:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:00.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 31 03:27:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 701 KiB/s rd, 16 KiB/s wr, 95 op/s
Jan 31 03:27:02 np0005603621 podman[328014]: 2026-01-31 08:27:02.225549274 +0000 UTC m=+6.978753694 container remove 06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:27:02 np0005603621 systemd[1]: libpod-conmon-06fcc483320a9af358c3f269bfeed9de14c9332efb6551baf4f9601887e641eb.scope: Deactivated successfully.
Jan 31 03:27:02 np0005603621 podman[328062]: 2026-01-31 08:27:02.356312215 +0000 UTC m=+0.023333937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:27:02 np0005603621 nova_compute[247399]: 2026-01-31 08:27:02.636 247403 DEBUG nova.storage.rbd_utils [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] flattening images/9dcdf407-c698-4570-bce4-edde88d1813b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:27:02 np0005603621 podman[328062]: 2026-01-31 08:27:02.668495419 +0000 UTC m=+0.335517131 container create deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:27:02 np0005603621 systemd[1]: Started libpod-conmon-deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879.scope.
Jan 31 03:27:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:27:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d9b8e73effb36203ed4fd029d041ac4ec19f5518a42efb0c5098dbab2a3b12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d9b8e73effb36203ed4fd029d041ac4ec19f5518a42efb0c5098dbab2a3b12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d9b8e73effb36203ed4fd029d041ac4ec19f5518a42efb0c5098dbab2a3b12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d9b8e73effb36203ed4fd029d041ac4ec19f5518a42efb0c5098dbab2a3b12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:02 np0005603621 podman[328094]: 2026-01-31 08:27:02.824487573 +0000 UTC m=+0.120463565 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:27:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:02.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:03 np0005603621 podman[328062]: 2026-01-31 08:27:03.098547878 +0000 UTC m=+0.765569630 container init deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:27:03 np0005603621 podman[328062]: 2026-01-31 08:27:03.104571307 +0000 UTC m=+0.771593009 container start deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:27:03 np0005603621 podman[328062]: 2026-01-31 08:27:03.295425519 +0000 UTC m=+0.962447241 container attach deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:27:03 np0005603621 podman[328095]: 2026-01-31 08:27:03.386822109 +0000 UTC m=+0.681331645 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:27:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]: {
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:    "0": [
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:        {
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "devices": [
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "/dev/loop3"
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            ],
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "lv_name": "ceph_lv0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "lv_size": "7511998464",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "name": "ceph_lv0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "tags": {
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.cluster_name": "ceph",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.crush_device_class": "",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.encrypted": "0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.osd_id": "0",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.type": "block",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:                "ceph.vdo": "0"
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            },
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "type": "block",
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:            "vg_name": "ceph_vg0"
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:        }
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]:    ]
Jan 31 03:27:03 np0005603621 wonderful_mirzakhani[328130]: }
Jan 31 03:27:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 31 03:27:03 np0005603621 systemd[1]: libpod-deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879.scope: Deactivated successfully.
Jan 31 03:27:03 np0005603621 podman[328062]: 2026-01-31 08:27:03.843256518 +0000 UTC m=+1.510278220 container died deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:27:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 643 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 662 KiB/s rd, 15 KiB/s wr, 91 op/s
Jan 31 03:27:04 np0005603621 nova_compute[247399]: 2026-01-31 08:27:04.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c2d9b8e73effb36203ed4fd029d041ac4ec19f5518a42efb0c5098dbab2a3b12-merged.mount: Deactivated successfully.
Jan 31 03:27:04 np0005603621 nova_compute[247399]: 2026-01-31 08:27:04.412 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 31 03:27:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 31 03:27:04 np0005603621 podman[328062]: 2026-01-31 08:27:04.731322915 +0000 UTC m=+2.398344617 container remove deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:27:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:04 np0005603621 systemd[1]: libpod-conmon-deea61baf0c46db5fefc0159b67bf01e440d851537fc22c6d109453980742879.scope: Deactivated successfully.
Jan 31 03:27:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:27:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:04.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.251970677 +0000 UTC m=+0.049126358 container create 53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.223276433 +0000 UTC m=+0.020432144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:27:05 np0005603621 systemd[1]: Started libpod-conmon-53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610.scope.
Jan 31 03:27:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.409301383 +0000 UTC m=+0.206457064 container init 53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.41743688 +0000 UTC m=+0.214592551 container start 53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:27:05 np0005603621 distracted_williamson[328320]: 167 167
Jan 31 03:27:05 np0005603621 systemd[1]: libpod-53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610.scope: Deactivated successfully.
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.452689501 +0000 UTC m=+0.249845202 container attach 53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.453488605 +0000 UTC m=+0.250644286 container died 53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:27:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0cf048a223b8b99e5d15b19fcd4f1ba455ddaeb3b530ad72119c71019887cdee-merged.mount: Deactivated successfully.
Jan 31 03:27:05 np0005603621 podman[328304]: 2026-01-31 08:27:05.823082728 +0000 UTC m=+0.620238409 container remove 53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:27:05 np0005603621 systemd[1]: libpod-conmon-53a60c5074b6527eef9af4e2303fbc77fd8fa894137b8bdadfdf6201df591610.scope: Deactivated successfully.
Jan 31 03:27:06 np0005603621 podman[328347]: 2026-01-31 08:27:06.022543852 +0000 UTC m=+0.073567169 container create 806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_carver, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:27:06 np0005603621 podman[328347]: 2026-01-31 08:27:05.972423213 +0000 UTC m=+0.023446550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:27:06 np0005603621 systemd[1]: Started libpod-conmon-806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96.scope.
Jan 31 03:27:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:27:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a191887a22e75b41ef7e001ae3801f748acfa79b7f7462f0148a2a0b2d31c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a191887a22e75b41ef7e001ae3801f748acfa79b7f7462f0148a2a0b2d31c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a191887a22e75b41ef7e001ae3801f748acfa79b7f7462f0148a2a0b2d31c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a191887a22e75b41ef7e001ae3801f748acfa79b7f7462f0148a2a0b2d31c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:27:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 621 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.6 MiB/s wr, 83 op/s
Jan 31 03:27:06 np0005603621 podman[328347]: 2026-01-31 08:27:06.172369523 +0000 UTC m=+0.223392870 container init 806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:06 np0005603621 podman[328347]: 2026-01-31 08:27:06.179529798 +0000 UTC m=+0.230553115 container start 806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:27:06 np0005603621 podman[328347]: 2026-01-31 08:27:06.208000615 +0000 UTC m=+0.259023962 container attach 806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_carver, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:27:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:06.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:06.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:07 np0005603621 elated_carver[328364]: {
Jan 31 03:27:07 np0005603621 elated_carver[328364]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:27:07 np0005603621 elated_carver[328364]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:27:07 np0005603621 elated_carver[328364]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:27:07 np0005603621 elated_carver[328364]:        "osd_id": 0,
Jan 31 03:27:07 np0005603621 elated_carver[328364]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:27:07 np0005603621 elated_carver[328364]:        "type": "bluestore"
Jan 31 03:27:07 np0005603621 elated_carver[328364]:    }
Jan 31 03:27:07 np0005603621 elated_carver[328364]: }
Jan 31 03:27:07 np0005603621 systemd[1]: libpod-806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96.scope: Deactivated successfully.
Jan 31 03:27:07 np0005603621 podman[328347]: 2026-01-31 08:27:07.079155649 +0000 UTC m=+1.130178966 container died 806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:27:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-48a191887a22e75b41ef7e001ae3801f748acfa79b7f7462f0148a2a0b2d31c1-merged.mount: Deactivated successfully.
Jan 31 03:27:07 np0005603621 podman[328347]: 2026-01-31 08:27:07.596624151 +0000 UTC m=+1.647647468 container remove 806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:27:07 np0005603621 systemd[1]: libpod-conmon-806f6540e7789b7b9025844e3858c6a03e9809fc856de46cb20549dd3cbf6b96.scope: Deactivated successfully.
Jan 31 03:27:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:27:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:27:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:27:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5b1d671e-eda6-4ce3-a734-4150abc2e0d9 does not exist
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e659f2a4-32fa-4d24-a8c1-491ec6d4f875 does not exist
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b242456f-1f45-4bc5-9ea1-a00c0f5d4c20 does not exist
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 613 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.4 MiB/s wr, 68 op/s
Jan 31 03:27:08 np0005603621 nova_compute[247399]: 2026-01-31 08:27:08.254 247403 INFO nova.virt.libvirt.driver [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Deleting instance files /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec_del#033[00m
Jan 31 03:27:08 np0005603621 nova_compute[247399]: 2026-01-31 08:27:08.255 247403 INFO nova.virt.libvirt.driver [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Deletion of /var/lib/nova/instances/dbf4e573-8e19-4920-aab9-c290d7d8eeec_del complete#033[00m
Jan 31 03:27:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:27:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:27:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:08 np0005603621 nova_compute[247399]: 2026-01-31 08:27:08.540 247403 DEBUG nova.storage.rbd_utils [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] removing snapshot(701f2c95acff4c928a142862c13cb567) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:27:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:08.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:09 np0005603621 nova_compute[247399]: 2026-01-31 08:27:09.097 247403 INFO nova.compute.manager [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Took 26.87 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:27:09 np0005603621 nova_compute[247399]: 2026-01-31 08:27:09.098 247403 DEBUG oslo.service.loopingcall [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:27:09 np0005603621 nova_compute[247399]: 2026-01-31 08:27:09.098 247403 DEBUG nova.compute.manager [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:27:09 np0005603621 nova_compute[247399]: 2026-01-31 08:27:09.098 247403 DEBUG nova.network.neutron [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:27:09 np0005603621 nova_compute[247399]: 2026-01-31 08:27:09.203 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:09 np0005603621 nova_compute[247399]: 2026-01-31 08:27:09.415 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 115 op/s
Jan 31 03:27:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:10 np0005603621 nova_compute[247399]: 2026-01-31 08:27:10.829 247403 DEBUG nova.network.neutron [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:10.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:10 np0005603621 nova_compute[247399]: 2026-01-31 08:27:10.873 247403 INFO nova.compute.manager [-] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Took 1.77 seconds to deallocate network for instance.#033[00m
Jan 31 03:27:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 31 03:27:10 np0005603621 nova_compute[247399]: 2026-01-31 08:27:10.935 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:10 np0005603621 nova_compute[247399]: 2026-01-31 08:27:10.936 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.016 247403 DEBUG nova.scheduler.client.report [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.061 247403 DEBUG nova.scheduler.client.report [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.061 247403 DEBUG nova.compute.provider_tree [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.105 247403 DEBUG nova.scheduler.client.report [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.138 247403 DEBUG nova.scheduler.client.report [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.234 247403 DEBUG oslo_concurrency.processutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 31 03:27:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 31 03:27:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54084041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.952 247403 DEBUG oslo_concurrency.processutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.718s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:11 np0005603621 nova_compute[247399]: 2026-01-31 08:27:11.958 247403 DEBUG nova.compute.provider_tree [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:27:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 138 op/s
Jan 31 03:27:12 np0005603621 nova_compute[247399]: 2026-01-31 08:27:12.245 247403 DEBUG nova.storage.rbd_utils [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(snap) on rbd image(9dcdf407-c698-4570-bce4-edde88d1813b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:27:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:12.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:12 np0005603621 nova_compute[247399]: 2026-01-31 08:27:12.872 247403 DEBUG nova.scheduler.client.report [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:27:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 31 03:27:12 np0005603621 nova_compute[247399]: 2026-01-31 08:27:12.924 247403 DEBUG nova.compute.manager [req-e181d6c9-2c8d-4e84-bd35-83f254b379a9 req-320bfb02-3b97-4562-bbf7-2a1916a14810 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dbf4e573-8e19-4920-aab9-c290d7d8eeec] Received event network-vif-deleted-3765398c-c6d8-4598-98d8-447d2d17b347 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:12 np0005603621 nova_compute[247399]: 2026-01-31 08:27:12.952 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:13 np0005603621 nova_compute[247399]: 2026-01-31 08:27:13.033 247403 INFO nova.scheduler.client.report [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Deleted allocations for instance dbf4e573-8e19-4920-aab9-c290d7d8eeec#033[00m
Jan 31 03:27:13 np0005603621 nova_compute[247399]: 2026-01-31 08:27:13.212 247403 DEBUG oslo_concurrency.lockutils [None req-380bf620-0067-4173-8e04-0e0eb9d9415a f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "dbf4e573-8e19-4920-aab9-c290d7d8eeec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 30.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 31 03:27:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.121 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.121 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.122 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.122 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.122 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.123 247403 INFO nova.compute.manager [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Terminating instance#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.124 247403 DEBUG nova.compute.manager [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:27:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.2 MiB/s wr, 172 op/s
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.207 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.417 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 kernel: tapfb9bab50-6b (unregistering): left promiscuous mode
Jan 31 03:27:14 np0005603621 NetworkManager[49013]: <info>  [1769848034.5360] device (tapfb9bab50-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.541 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:27:14Z|00472|binding|INFO|Releasing lport fb9bab50-6b70-4dec-9b6d-9d961083b257 from this chassis (sb_readonly=0)
Jan 31 03:27:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:27:14Z|00473|binding|INFO|Setting lport fb9bab50-6b70-4dec-9b6d-9d961083b257 down in Southbound
Jan 31 03:27:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:27:14Z|00474|binding|INFO|Removing iface tapfb9bab50-6b ovn-installed in OVS
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.544 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 31 03:27:14 np0005603621 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006e.scope: Consumed 2.028s CPU time.
Jan 31 03:27:14 np0005603621 systemd-machined[212769]: Machine qemu-53-instance-0000006e terminated.
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.626 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:49:09 10.100.0.8'], port_security=['fa:16:3e:69:49:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '23200b4a-e522-43bf-a83e-cb2f9bb31571', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '8', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb9bab50-6b70-4dec-9b6d-9d961083b257) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.628 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb9bab50-6b70-4dec-9b6d-9d961083b257 in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab unbound from our chassis#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.630 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 98be5db6-5633-4d23-b9a9-16382d8e99ab#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.641 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[430f7ca7-71f8-4f09-89ff-7f418a4f1640]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.661 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[90c1fc06-601a-40de-b436-2507fe59b216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.663 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5b556ffa-b0f0-4ac2-827d-b307d928d436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.684 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e2feb7fb-9f71-485e-b2e8-5f2e8895d825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.695 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[78e5db19-0672-462f-9a9f-effb6a6b3612]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap98be5db6-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:3a:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 120], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693251, 'reachable_time': 23770, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328574, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.707 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8c317999-51df-450f-9aa8-1e838fd43240]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693259, 'tstamp': 693259}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328575, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap98be5db6-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 693261, 'tstamp': 693261}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328575, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.708 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.759 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.765 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.766 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98be5db6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.766 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.767 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap98be5db6-50, col_values=(('external_ids', {'iface-id': 'dad27cfe-7e8a-4f55-a945-07f9cae848c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:14.767 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.789 247403 INFO nova.virt.libvirt.driver [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Instance destroyed successfully.#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.790 247403 DEBUG nova.objects.instance [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'resources' on Instance uuid 23200b4a-e522-43bf-a83e-cb2f9bb31571 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:27:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.818 247403 DEBUG nova.virt.libvirt.vif [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-668266523',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-668266523',id=110,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:22:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-6l4wap3n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:22:47Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=23200b4a-e522-43bf-a83e-cb2f9bb31571,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.819 247403 DEBUG nova.network.os_vif_util [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "address": "fa:16:3e:69:49:09", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb9bab50-6b", "ovs_interfaceid": "fb9bab50-6b70-4dec-9b6d-9d961083b257", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.819 247403 DEBUG nova.network.os_vif_util [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.820 247403 DEBUG os_vif [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.821 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.822 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb9bab50-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.825 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.826 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:27:14 np0005603621 nova_compute[247399]: 2026-01-31 08:27:14.828 247403 INFO os_vif [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:49:09,bridge_name='br-int',has_traffic_filtering=True,id=fb9bab50-6b70-4dec-9b6d-9d961083b257,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb9bab50-6b')#033[00m
Jan 31 03:27:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.4 MiB/s wr, 180 op/s
Jan 31 03:27:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:16.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:16.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 645 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.9 KiB/s wr, 126 op/s
Jan 31 03:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 31 03:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 31 03:27:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 31 03:27:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:18.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.158 247403 INFO nova.virt.libvirt.driver [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Snapshot image upload complete#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.158 247403 INFO nova.compute.manager [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Took 31.45 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.419 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.824 247403 DEBUG nova.compute.manager [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.824 247403 DEBUG oslo_concurrency.lockutils [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.824 247403 DEBUG oslo_concurrency.lockutils [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.825 247403 DEBUG oslo_concurrency.lockutils [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.825 247403 DEBUG nova.compute.manager [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.825 247403 DEBUG nova.compute.manager [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-unplugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.825 247403 DEBUG nova.compute.manager [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.825 247403 DEBUG oslo_concurrency.lockutils [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.825 247403 DEBUG oslo_concurrency.lockutils [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.826 247403 DEBUG oslo_concurrency.lockutils [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.826 247403 DEBUG nova.compute.manager [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] No waiting events found dispatching network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.826 247403 WARNING nova.compute.manager [req-055d2f75-4651-471a-bb8e-eaacec0bc2ed req-5e4291f8-dadd-4e57-8c20-573faeb5ffbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received unexpected event network-vif-plugged-fb9bab50-6b70-4dec-9b6d-9d961083b257 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:27:19 np0005603621 nova_compute[247399]: 2026-01-31 08:27:19.826 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 551 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.0 KiB/s wr, 166 op/s
Jan 31 03:27:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:20.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:20 np0005603621 nova_compute[247399]: 2026-01-31 08:27:20.815 247403 DEBUG nova.compute.manager [None req-7f0aa5ea-6310-4329-b880-0da55c36f2ad ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 31 03:27:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:20.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.163 247403 INFO nova.virt.libvirt.driver [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Deleting instance files /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571_del#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.164 247403 INFO nova.virt.libvirt.driver [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Deletion of /var/lib/nova/instances/23200b4a-e522-43bf-a83e-cb2f9bb31571_del complete#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.268 247403 INFO nova.compute.manager [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Took 7.14 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.269 247403 DEBUG oslo.service.loopingcall [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.270 247403 DEBUG nova.compute.manager [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.270 247403 DEBUG nova.network.neutron [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.692 247403 DEBUG nova.compute.manager [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.735 247403 INFO nova.compute.manager [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] instance snapshotting#033[00m
Jan 31 03:27:21 np0005603621 nova_compute[247399]: 2026-01-31 08:27:21.736 247403 DEBUG nova.objects.instance [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'flavor' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.004 247403 INFO nova.virt.libvirt.driver [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Beginning live snapshot process#033[00m
Jan 31 03:27:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 551 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 864 KiB/s rd, 6.3 KiB/s wr, 90 op/s
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.168 247403 DEBUG nova.virt.libvirt.imagebackend [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.409 247403 DEBUG nova.storage.rbd_utils [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(237a1fd44f854ce58c2bb6d2f4776344) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.562 247403 DEBUG nova.network.neutron [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.613 247403 DEBUG nova.compute.manager [req-13387e77-d5cb-465d-8197-d4f6dcc76016 req-bb3cf4fa-ec8e-4a3f-817d-8157742c9635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Received event network-vif-deleted-fb9bab50-6b70-4dec-9b6d-9d961083b257 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.613 247403 INFO nova.compute.manager [req-13387e77-d5cb-465d-8197-d4f6dcc76016 req-bb3cf4fa-ec8e-4a3f-817d-8157742c9635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Neutron deleted interface fb9bab50-6b70-4dec-9b6d-9d961083b257; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.614 247403 DEBUG nova.network.neutron [req-13387e77-d5cb-465d-8197-d4f6dcc76016 req-bb3cf4fa-ec8e-4a3f-817d-8157742c9635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.751 247403 INFO nova.compute.manager [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Took 1.48 seconds to deallocate network for instance.#033[00m
Jan 31 03:27:22 np0005603621 nova_compute[247399]: 2026-01-31 08:27:22.766 247403 DEBUG nova.compute.manager [req-13387e77-d5cb-465d-8197-d4f6dcc76016 req-bb3cf4fa-ec8e-4a3f-817d-8157742c9635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Detach interface failed, port_id=fb9bab50-6b70-4dec-9b6d-9d961083b257, reason: Instance 23200b4a-e522-43bf-a83e-cb2f9bb31571 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:27:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:22.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.018 247403 INFO nova.compute.manager [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.073 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.073 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.165 247403 DEBUG oslo_concurrency.processutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 31 03:27:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1105254108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.610 247403 DEBUG oslo_concurrency.processutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.616 247403 DEBUG nova.compute.provider_tree [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.647 247403 DEBUG nova.scheduler.client.report [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.678 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.707 247403 INFO nova.scheduler.client.report [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Deleted allocations for instance 23200b4a-e522-43bf-a83e-cb2f9bb31571#033[00m
Jan 31 03:27:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 31 03:27:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 31 03:27:23 np0005603621 nova_compute[247399]: 2026-01-31 08:27:23.819 247403 DEBUG oslo_concurrency.lockutils [None req-fa4053b8-2019-4639-935b-a1712a1a2607 f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "23200b4a-e522-43bf-a83e-cb2f9bb31571" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 495 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.6 KiB/s wr, 74 op/s
Jan 31 03:27:24 np0005603621 nova_compute[247399]: 2026-01-31 08:27:24.227 247403 DEBUG nova.storage.rbd_utils [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] cloning vms/cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk@237a1fd44f854ce58c2bb6d2f4776344 to images/3d658df0-f3c7-45a7-bf8b-c899f015d20b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:27:24 np0005603621 nova_compute[247399]: 2026-01-31 08:27:24.421 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:24.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:24 np0005603621 nova_compute[247399]: 2026-01-31 08:27:24.827 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:27:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:24.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:27:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 31 03:27:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 31 03:27:25 np0005603621 nova_compute[247399]: 2026-01-31 08:27:25.511 247403 DEBUG nova.storage.rbd_utils [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] flattening images/3d658df0-f3c7-45a7-bf8b-c899f015d20b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:27:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 31 03:27:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 5.4 KiB/s wr, 133 op/s
Jan 31 03:27:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:26.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 503 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 1.4 MiB/s wr, 93 op/s
Jan 31 03:27:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:28.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:29 np0005603621 nova_compute[247399]: 2026-01-31 08:27:29.423 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:29 np0005603621 nova_compute[247399]: 2026-01-31 08:27:29.788 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848034.7868795, 23200b4a-e522-43bf-a83e-cb2f9bb31571 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:27:29 np0005603621 nova_compute[247399]: 2026-01-31 08:27:29.788 247403 INFO nova.compute.manager [-] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:27:29 np0005603621 nova_compute[247399]: 2026-01-31 08:27:29.828 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:29 np0005603621 nova_compute[247399]: 2026-01-31 08:27:29.840 247403 DEBUG nova.compute.manager [None req-2084c4f4-ed90-49b5-b34a-4de13da4d4c7 - - - - - -] [instance: 23200b4a-e522-43bf-a83e-cb2f9bb31571] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 556 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 165 op/s
Jan 31 03:27:30 np0005603621 nova_compute[247399]: 2026-01-31 08:27:30.173 247403 DEBUG nova.storage.rbd_utils [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] removing snapshot(237a1fd44f854ce58c2bb6d2f4776344) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:27:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:30.509 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:30.510 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:30.510 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:30.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:30.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 31 03:27:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 31 03:27:31 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 31 03:27:31 np0005603621 nova_compute[247399]: 2026-01-31 08:27:31.968 247403 DEBUG nova.storage.rbd_utils [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(snap) on rbd image(3d658df0-f3c7-45a7-bf8b-c899f015d20b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:27:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 556 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.4 MiB/s wr, 149 op/s
Jan 31 03:27:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 31 03:27:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:32.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 31 03:27:33 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 31 03:27:33 np0005603621 podman[328829]: 2026-01-31 08:27:33.498859999 +0000 UTC m=+0.052266588 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:27:33 np0005603621 podman[328830]: 2026-01-31 08:27:33.523550187 +0000 UTC m=+0.076947696 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:27:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 31 03:27:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 31 03:27:33 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 31 03:27:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 6.0 MiB/s wr, 102 op/s
Jan 31 03:27:34 np0005603621 nova_compute[247399]: 2026-01-31 08:27:34.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:34.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:34 np0005603621 nova_compute[247399]: 2026-01-31 08:27:34.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:34.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 569 KiB/s wr, 56 op/s
Jan 31 03:27:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:36.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:36.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.933 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.934 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.934 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.934 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.934 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.935 247403 INFO nova.compute.manager [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Terminating instance#033[00m
Jan 31 03:27:36 np0005603621 nova_compute[247399]: 2026-01-31 08:27:36.936 247403 DEBUG nova.compute.manager [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:27:37 np0005603621 kernel: tapcc10268d-b3 (unregistering): left promiscuous mode
Jan 31 03:27:37 np0005603621 NetworkManager[49013]: <info>  [1769848057.5964] device (tapcc10268d-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:27:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:27:37Z|00475|binding|INFO|Releasing lport cc10268d-b3b3-404e-ba33-00ef9ef3ce4f from this chassis (sb_readonly=0)
Jan 31 03:27:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:27:37Z|00476|binding|INFO|Setting lport cc10268d-b3b3-404e-ba33-00ef9ef3ce4f down in Southbound
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.606 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:27:37Z|00477|binding|INFO|Removing iface tapcc10268d-b3 ovn-installed in OVS
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.608 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:37.623 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:99:c6 10.100.0.4'], port_security=['fa:16:3e:fb:99:c6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f02cbbe1-1133-4659-a065-630c53ee2683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf024d54545b4af882a87c721105742a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '634aba40-50e3-4365-94d0-0773f42bafa5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed2028d9-0505-431d-85ea-94f27c9f5ff6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:27:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:37.625 159734 INFO neutron.agent.ovn.metadata.agent [-] Port cc10268d-b3b3-404e-ba33-00ef9ef3ce4f in datapath 98be5db6-5633-4d23-b9a9-16382d8e99ab unbound from our chassis#033[00m
Jan 31 03:27:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:37.627 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 98be5db6-5633-4d23-b9a9-16382d8e99ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:27:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:37.628 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6b303cb6-a83b-43d0-844f-771d74f86dbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:37.628 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab namespace which is not needed anymore#033[00m
Jan 31 03:27:37 np0005603621 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 31 03:27:37 np0005603621 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000006d.scope: Consumed 27.765s CPU time.
Jan 31 03:27:37 np0005603621 systemd-machined[212769]: Machine qemu-46-instance-0000006d terminated.
Jan 31 03:27:37 np0005603621 kernel: tapcc10268d-b3: entered promiscuous mode
Jan 31 03:27:37 np0005603621 kernel: tapcc10268d-b3 (unregistering): left promiscuous mode
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.759 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.772 247403 INFO nova.virt.libvirt.driver [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Instance destroyed successfully.#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.772 247403 DEBUG nova.objects.instance [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lazy-loading 'resources' on Instance uuid f02cbbe1-1133-4659-a065-630c53ee2683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.793 247403 DEBUG nova.virt.libvirt.vif [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:20:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1916861428',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1916861428',id=109,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:20:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf024d54545b4af882a87c721105742a',ramdisk_id='',reservation_id='r-7u9orvmp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-468517745',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-468517745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:20:49Z,user_data=None,user_id='f4d66dd0b7ff443cbcdb6e2c9f5c4c8c',uuid=f02cbbe1-1133-4659-a065-630c53ee2683,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.794 247403 DEBUG nova.network.os_vif_util [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converting VIF {"id": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "address": "fa:16:3e:fb:99:c6", "network": {"id": "98be5db6-5633-4d23-b9a9-16382d8e99ab", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-2138383352-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf024d54545b4af882a87c721105742a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcc10268d-b3", "ovs_interfaceid": "cc10268d-b3b3-404e-ba33-00ef9ef3ce4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.794 247403 DEBUG nova.network.os_vif_util [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.795 247403 DEBUG os_vif [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.796 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcc10268d-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.798 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.799 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:37 np0005603621 nova_compute[247399]: 2026-01-31 08:27:37.802 247403 INFO os_vif [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fb:99:c6,bridge_name='br-int',has_traffic_filtering=True,id=cc10268d-b3b3-404e-ba33-00ef9ef3ce4f,network=Network(98be5db6-5633-4d23-b9a9-16382d8e99ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcc10268d-b3')#033[00m
Jan 31 03:27:37 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [NOTICE]   (317095) : haproxy version is 2.8.14-c23fe91
Jan 31 03:27:37 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [NOTICE]   (317095) : path to executable is /usr/sbin/haproxy
Jan 31 03:27:37 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [WARNING]  (317095) : Exiting Master process...
Jan 31 03:27:37 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [WARNING]  (317095) : Exiting Master process...
Jan 31 03:27:37 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [ALERT]    (317095) : Current worker (317097) exited with code 143 (Terminated)
Jan 31 03:27:37 np0005603621 neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab[317091]: [WARNING]  (317095) : All workers exited. Exiting... (0)
Jan 31 03:27:37 np0005603621 systemd[1]: libpod-a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724.scope: Deactivated successfully.
Jan 31 03:27:37 np0005603621 podman[328896]: 2026-01-31 08:27:37.966010448 +0000 UTC m=+0.263055018 container died a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:27:38 np0005603621 nova_compute[247399]: 2026-01-31 08:27:38.137 247403 DEBUG nova.compute.manager [req-d3ebdaa0-3ca1-4427-843d-92058467ef17 req-924cf264-2955-4b75-b2d8-bc27b9c94b33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-vif-unplugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:38 np0005603621 nova_compute[247399]: 2026-01-31 08:27:38.137 247403 DEBUG oslo_concurrency.lockutils [req-d3ebdaa0-3ca1-4427-843d-92058467ef17 req-924cf264-2955-4b75-b2d8-bc27b9c94b33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:38 np0005603621 nova_compute[247399]: 2026-01-31 08:27:38.138 247403 DEBUG oslo_concurrency.lockutils [req-d3ebdaa0-3ca1-4427-843d-92058467ef17 req-924cf264-2955-4b75-b2d8-bc27b9c94b33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:38 np0005603621 nova_compute[247399]: 2026-01-31 08:27:38.138 247403 DEBUG oslo_concurrency.lockutils [req-d3ebdaa0-3ca1-4427-843d-92058467ef17 req-924cf264-2955-4b75-b2d8-bc27b9c94b33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:38 np0005603621 nova_compute[247399]: 2026-01-31 08:27:38.138 247403 DEBUG nova.compute.manager [req-d3ebdaa0-3ca1-4427-843d-92058467ef17 req-924cf264-2955-4b75-b2d8-bc27b9c94b33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] No waiting events found dispatching network-vif-unplugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:27:38 np0005603621 nova_compute[247399]: 2026-01-31 08:27:38.138 247403 DEBUG nova.compute.manager [req-d3ebdaa0-3ca1-4427-843d-92058467ef17 req-924cf264-2955-4b75-b2d8-bc27b9c94b33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-vif-unplugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 517 KiB/s wr, 53 op/s
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:27:38
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes']
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:27:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:38.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:27:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:27:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724-userdata-shm.mount: Deactivated successfully.
Jan 31 03:27:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c6177bf735eff8a7e9956e732c3d523a0a76c36ee9257961eea414ab6962a474-merged.mount: Deactivated successfully.
Jan 31 03:27:39 np0005603621 nova_compute[247399]: 2026-01-31 08:27:39.426 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:39 np0005603621 nova_compute[247399]: 2026-01-31 08:27:39.466 247403 INFO nova.virt.libvirt.driver [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Snapshot image upload complete#033[00m
Jan 31 03:27:39 np0005603621 nova_compute[247399]: 2026-01-31 08:27:39.466 247403 INFO nova.compute.manager [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Took 17.70 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:27:39 np0005603621 nova_compute[247399]: 2026-01-31 08:27:39.869 247403 DEBUG nova.compute.manager [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 31 03:27:39 np0005603621 nova_compute[247399]: 2026-01-31 08:27:39.869 247403 DEBUG nova.compute.manager [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Jan 31 03:27:39 np0005603621 nova_compute[247399]: 2026-01-31 08:27:39.869 247403 DEBUG nova.compute.manager [None req-378e5283-1c2b-40fe-8731-f340335d119a ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Deleting image 913987a4-398c-4df5-8903-96535743e328 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
Jan 31 03:27:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 3.1 MiB/s wr, 92 op/s
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.239 247403 DEBUG nova.compute.manager [req-1fe272e9-33e0-456c-807f-ab707b99e6e9 req-8112c584-a0e5-4226-a88c-5abb5a92a28b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.240 247403 DEBUG oslo_concurrency.lockutils [req-1fe272e9-33e0-456c-807f-ab707b99e6e9 req-8112c584-a0e5-4226-a88c-5abb5a92a28b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.240 247403 DEBUG oslo_concurrency.lockutils [req-1fe272e9-33e0-456c-807f-ab707b99e6e9 req-8112c584-a0e5-4226-a88c-5abb5a92a28b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.240 247403 DEBUG oslo_concurrency.lockutils [req-1fe272e9-33e0-456c-807f-ab707b99e6e9 req-8112c584-a0e5-4226-a88c-5abb5a92a28b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.240 247403 DEBUG nova.compute.manager [req-1fe272e9-33e0-456c-807f-ab707b99e6e9 req-8112c584-a0e5-4226-a88c-5abb5a92a28b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] No waiting events found dispatching network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:27:40 np0005603621 nova_compute[247399]: 2026-01-31 08:27:40.240 247403 WARNING nova.compute.manager [req-1fe272e9-33e0-456c-807f-ab707b99e6e9 req-8112c584-a0e5-4226-a88c-5abb5a92a28b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received unexpected event network-vif-plugged-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:27:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 31 03:27:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:40.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:40.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:41 np0005603621 podman[328896]: 2026-01-31 08:27:41.116693505 +0000 UTC m=+3.413738065 container cleanup a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:27:41 np0005603621 systemd[1]: libpod-conmon-a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724.scope: Deactivated successfully.
Jan 31 03:27:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 31 03:27:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Jan 31 03:27:42 np0005603621 podman[328958]: 2026-01-31 08:27:42.274585422 +0000 UTC m=+1.139250611 container remove a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.279 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0370f91-48e9-4f45-bc1a-06fdd3201274]: (4, ('Sat Jan 31 08:27:37 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab (a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724)\na9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724\nSat Jan 31 08:27:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab (a9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724)\na9c80720b8fdc4721d754dfd21ba9a93a34657dd6318d6ff248034f8d22f7724\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.280 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7910da-49f1-4578-bbd6-6a513b9773f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.281 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98be5db6-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:27:42 np0005603621 nova_compute[247399]: 2026-01-31 08:27:42.282 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:42 np0005603621 kernel: tap98be5db6-50: left promiscuous mode
Jan 31 03:27:42 np0005603621 nova_compute[247399]: 2026-01-31 08:27:42.288 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.291 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c7bdb1f1-de3f-4431-b956-b6e586cdda4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.309 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[76338c1a-35cc-4385-9d4e-a5092d979b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.310 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[785d5db8-d6e1-4dc7-8139-6284c5eac832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.326 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7784dad7-7f92-4f73-b85d-f8e515f0ff68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 693246, 'reachable_time': 28296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328975, 'error': None, 'target': 'ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 systemd[1]: run-netns-ovnmeta\x2d98be5db6\x2d5633\x2d4d23\x2db9a9\x2d16382d8e99ab.mount: Deactivated successfully.
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.330 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-98be5db6-5633-4d23-b9a9-16382d8e99ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:27:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:42.330 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c16bf5a1-ae1d-4086-b633-1aff923b7948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:27:42 np0005603621 nova_compute[247399]: 2026-01-31 08:27:42.798 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:42.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 31 03:27:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 527 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 31 03:27:44 np0005603621 nova_compute[247399]: 2026-01-31 08:27:44.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 31 03:27:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:44.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:44.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 31 03:27:45 np0005603621 nova_compute[247399]: 2026-01-31 08:27:45.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 505 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 3.6 MiB/s wr, 114 op/s
Jan 31 03:27:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:46.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:46.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.383 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.524 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.524 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.525 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.525 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:27:47 np0005603621 nova_compute[247399]: 2026-01-31 08:27:47.800 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.7 MiB/s wr, 68 op/s
Jan 31 03:27:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:48.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:48.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004156343057882003 of space, bias 1.0, pg target 1.246902917364601 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.6470238199097339 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.008393748255164712 of space, bias 1.0, pg target 2.5097307282942487 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:27:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:27:49 np0005603621 nova_compute[247399]: 2026-01-31 08:27:49.430 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 526 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.9 MiB/s wr, 97 op/s
Jan 31 03:27:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:50 np0005603621 nova_compute[247399]: 2026-01-31 08:27:50.865 247403 INFO nova.virt.libvirt.driver [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Deleting instance files /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683_del#033[00m
Jan 31 03:27:50 np0005603621 nova_compute[247399]: 2026-01-31 08:27:50.866 247403 INFO nova.virt.libvirt.driver [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Deletion of /var/lib/nova/instances/f02cbbe1-1133-4659-a065-630c53ee2683_del complete#033[00m
Jan 31 03:27:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:50 np0005603621 nova_compute[247399]: 2026-01-31 08:27:50.949 247403 INFO nova.compute.manager [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Took 14.01 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:27:50 np0005603621 nova_compute[247399]: 2026-01-31 08:27:50.950 247403 DEBUG oslo.service.loopingcall [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:27:50 np0005603621 nova_compute[247399]: 2026-01-31 08:27:50.950 247403 DEBUG nova.compute.manager [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:27:50 np0005603621 nova_compute[247399]: 2026-01-31 08:27:50.950 247403 DEBUG nova.network.neutron [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:27:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:51.992 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:27:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:27:51.993 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.033 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.038 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.079 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.079 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.080 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.080 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.080 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.080 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:27:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 504 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.234 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.234 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.234 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.235 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.235 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089573599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.654 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.673 247403 DEBUG nova.network.neutron [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.734 247403 INFO nova.compute.manager [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Took 1.78 seconds to deallocate network for instance.#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.770 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848057.7702057, f02cbbe1-1133-4659-a065-630c53ee2683 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.771 247403 INFO nova.compute.manager [-] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.792 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.793 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.802 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.818 247403 DEBUG nova.compute.manager [None req-12939340-b970-412f-8cce-84c1667fcf1c - - - - - -] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.824 247403 DEBUG nova.compute.manager [req-f5c52596-7e66-4c8b-a167-ddb2ac88faa8 req-5f89e439-dc6f-48ed-8fbe-df35a3ab5219 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f02cbbe1-1133-4659-a065-630c53ee2683] Received event network-vif-deleted-cc10268d-b3b3-404e-ba33-00ef9ef3ce4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.829 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.829 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.915 247403 DEBUG oslo_concurrency.processutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.981 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.982 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4144MB free_disk=20.90088653564453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:27:52 np0005603621 nova_compute[247399]: 2026-01-31 08:27:52.983 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:27:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801916818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.451 247403 DEBUG oslo_concurrency.processutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.457 247403 DEBUG nova.compute.provider_tree [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.474 247403 DEBUG nova.scheduler.client.report [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.498 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.500 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.541 247403 INFO nova.scheduler.client.report [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Deleted allocations for instance f02cbbe1-1133-4659-a065-630c53ee2683#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.594 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance cca881fe-18fa-40c1-b9ef-2b1f28855b53 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.594 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.594 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.624 247403 DEBUG oslo_concurrency.lockutils [None req-d8187516-8d6d-46f4-9559-9188c77c8a0c f4d66dd0b7ff443cbcdb6e2c9f5c4c8c cf024d54545b4af882a87c721105742a - - default default] Lock "f02cbbe1-1133-4659-a065-630c53ee2683" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:27:53 np0005603621 nova_compute[247399]: 2026-01-31 08:27:53.652 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:27:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 473 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 03:27:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:54 np0005603621 nova_compute[247399]: 2026-01-31 08:27:54.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:27:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675068662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:27:54 np0005603621 nova_compute[247399]: 2026-01-31 08:27:54.579 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.927s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:27:54 np0005603621 nova_compute[247399]: 2026-01-31 08:27:54.585 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:27:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:54.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:54.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 31 03:27:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 141 op/s
Jan 31 03:27:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 31 03:27:56 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 31 03:27:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:27:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:27:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:57 np0005603621 nova_compute[247399]: 2026-01-31 08:27:57.804 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 834 KiB/s wr, 140 op/s
Jan 31 03:27:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:27:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:27:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:27:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:27:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:27:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:27:58.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:27:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:27:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 31 03:27:59 np0005603621 nova_compute[247399]: 2026-01-31 08:27:59.466 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:27:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 31 03:27:59 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 31 03:28:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 419 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 45 KiB/s wr, 158 op/s
Jan 31 03:28:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:00.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:00.996 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 398 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 44 KiB/s wr, 149 op/s
Jan 31 03:28:02 np0005603621 nova_compute[247399]: 2026-01-31 08:28:02.806 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:02.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:03 np0005603621 nova_compute[247399]: 2026-01-31 08:28:03.334 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:03 np0005603621 nova_compute[247399]: 2026-01-31 08:28:03.457 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:28:03 np0005603621 nova_compute[247399]: 2026-01-31 08:28:03.458 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 9.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:03 np0005603621 nova_compute[247399]: 2026-01-31 08:28:03.576 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:03 np0005603621 nova_compute[247399]: 2026-01-31 08:28:03.577 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:03 np0005603621 nova_compute[247399]: 2026-01-31 08:28:03.577 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 23 KiB/s wr, 45 op/s
Jan 31 03:28:04 np0005603621 nova_compute[247399]: 2026-01-31 08:28:04.469 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:04 np0005603621 podman[329107]: 2026-01-31 08:28:04.491539752 +0000 UTC m=+0.048131538 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:28:04 np0005603621 podman[329108]: 2026-01-31 08:28:04.513525574 +0000 UTC m=+0.070100399 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 31 03:28:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 31 03:28:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:04.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 31 03:28:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:04.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 31 03:28:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 31 03:28:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 31 03:28:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 31 03:28:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 389 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 124 op/s
Jan 31 03:28:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:07 np0005603621 nova_compute[247399]: 2026-01-31 08:28:07.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 393 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.0 MiB/s wr, 143 op/s
Jan 31 03:28:08 np0005603621 ovn_controller[149152]: 2026-01-31T08:28:08Z|00478|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:28:08 np0005603621 nova_compute[247399]: 2026-01-31 08:28:08.363 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:08.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:08.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:09 np0005603621 nova_compute[247399]: 2026-01-31 08:28:09.470 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.632968) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848089633010, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1757, "num_deletes": 272, "total_data_size": 3046602, "memory_usage": 3101072, "flush_reason": "Manual Compaction"}
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848089657257, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2999456, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48790, "largest_seqno": 50546, "table_properties": {"data_size": 2990892, "index_size": 5314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17994, "raw_average_key_size": 20, "raw_value_size": 2973760, "raw_average_value_size": 3461, "num_data_blocks": 230, "num_entries": 859, "num_filter_entries": 859, "num_deletions": 272, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769847925, "oldest_key_time": 1769847925, "file_creation_time": 1769848089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 24340 microseconds, and 4610 cpu microseconds.
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.657304) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2999456 bytes OK
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.657325) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.687520) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.687577) EVENT_LOG_v1 {"time_micros": 1769848089687567, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.687608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3038807, prev total WAL file size 3038807, number of live WAL files 2.
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.688509) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373537' seq:72057594037927935, type:22 .. '6C6F676D0032303133' seq:0, type:0; will stop at (end)
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2929KB)], [107(11MB)]
Jan 31 03:28:09 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848089688545, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 14540437, "oldest_snapshot_seqno": -1}
Jan 31 03:28:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 370 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.1 MiB/s wr, 191 op/s
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 7997 keys, 14407545 bytes, temperature: kUnknown
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848090401975, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 14407545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14351019, "index_size": 35422, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20037, "raw_key_size": 206347, "raw_average_key_size": 25, "raw_value_size": 14205777, "raw_average_value_size": 1776, "num_data_blocks": 1406, "num_entries": 7997, "num_filter_entries": 7997, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.402238) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 14407545 bytes
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.584813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.4 rd, 20.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 11.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 8549, records dropped: 552 output_compression: NoCompression
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.584861) EVENT_LOG_v1 {"time_micros": 1769848090584845, "job": 64, "event": "compaction_finished", "compaction_time_micros": 713539, "compaction_time_cpu_micros": 26014, "output_level": 6, "num_output_files": 1, "total_output_size": 14407545, "num_input_records": 8549, "num_output_records": 7997, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848090585520, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848090587182, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:09.688410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.587341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.587350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.587352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.587354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:28:10.587357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:28:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:10.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:10.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:28:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 346 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 216 op/s
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:12 np0005603621 nova_compute[247399]: 2026-01-31 08:28:12.810 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:12.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:12.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a59893d2-6f13-48f6-8cdd-fdd4a6fa53d6 does not exist
Jan 31 03:28:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev dcec4eeb-a68d-476c-9631-753591c905d1 does not exist
Jan 31 03:28:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b10ca7c7-35ec-41b3-8fda-e144c72ea281 does not exist
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:28:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:28:13 np0005603621 podman[329476]: 2026-01-31 08:28:13.428402282 +0000 UTC m=+0.050814652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:28:13 np0005603621 podman[329476]: 2026-01-31 08:28:13.536922331 +0000 UTC m=+0.159334681 container create 0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:13 np0005603621 systemd[1]: Started libpod-conmon-0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e.scope.
Jan 31 03:28:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:28:13 np0005603621 podman[329476]: 2026-01-31 08:28:13.68766648 +0000 UTC m=+0.310078860 container init 0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:28:13 np0005603621 podman[329476]: 2026-01-31 08:28:13.694585048 +0000 UTC m=+0.316997398 container start 0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:28:13 np0005603621 infallible_boyd[329492]: 167 167
Jan 31 03:28:13 np0005603621 systemd[1]: libpod-0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e.scope: Deactivated successfully.
Jan 31 03:28:13 np0005603621 conmon[329492]: conmon 0952b5d5961670d4334a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e.scope/container/memory.events
Jan 31 03:28:13 np0005603621 podman[329476]: 2026-01-31 08:28:13.766294797 +0000 UTC m=+0.388707147 container attach 0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 03:28:13 np0005603621 podman[329476]: 2026-01-31 08:28:13.76671538 +0000 UTC m=+0.389127750 container died 0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:28:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:28:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:28:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2b4168363a5ef0f68b8ec6097005c3b7bcc9709e97ced294db77402266664a5c-merged.mount: Deactivated successfully.
Jan 31 03:28:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.0 MiB/s wr, 177 op/s
Jan 31 03:28:14 np0005603621 podman[329476]: 2026-01-31 08:28:14.359861686 +0000 UTC m=+0.982274046 container remove 0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:14 np0005603621 systemd[1]: libpod-conmon-0952b5d5961670d4334ac089be9c929524b5ce04d8d20b37026aa0672bbe143e.scope: Deactivated successfully.
Jan 31 03:28:14 np0005603621 nova_compute[247399]: 2026-01-31 08:28:14.518 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:14 np0005603621 podman[329518]: 2026-01-31 08:28:14.459546197 +0000 UTC m=+0.020980382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:28:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 31 03:28:14 np0005603621 podman[329518]: 2026-01-31 08:28:14.613506528 +0000 UTC m=+0.174940693 container create 740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hugle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:28:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 31 03:28:14 np0005603621 systemd[1]: Started libpod-conmon-740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba.scope.
Jan 31 03:28:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 31 03:28:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:28:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4dc7c6e525a1f275083e83d7acf0efa16c42801230680e649f8212d56c276/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4dc7c6e525a1f275083e83d7acf0efa16c42801230680e649f8212d56c276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4dc7c6e525a1f275083e83d7acf0efa16c42801230680e649f8212d56c276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4dc7c6e525a1f275083e83d7acf0efa16c42801230680e649f8212d56c276/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b4dc7c6e525a1f275083e83d7acf0efa16c42801230680e649f8212d56c276/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:14 np0005603621 podman[329518]: 2026-01-31 08:28:14.812015812 +0000 UTC m=+0.373450007 container init 740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:28:14 np0005603621 podman[329518]: 2026-01-31 08:28:14.82183717 +0000 UTC m=+0.383271335 container start 740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hugle, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:28:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:14 np0005603621 podman[329518]: 2026-01-31 08:28:14.944860266 +0000 UTC m=+0.506294451 container attach 740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:28:15 np0005603621 bold_hugle[329534]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:28:15 np0005603621 bold_hugle[329534]: --> relative data size: 1.0
Jan 31 03:28:15 np0005603621 bold_hugle[329534]: --> All data devices are unavailable
Jan 31 03:28:15 np0005603621 systemd[1]: libpod-740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba.scope: Deactivated successfully.
Jan 31 03:28:15 np0005603621 podman[329518]: 2026-01-31 08:28:15.591222958 +0000 UTC m=+1.152657123 container died 740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:28:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 486 KiB/s wr, 109 op/s
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.280 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "25d125af-48f6-4cfa-974d-a2be1548182c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.281 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "25d125af-48f6-4cfa-974d-a2be1548182c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.296 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.360 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.360 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.368 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.369 247403 INFO nova.compute.claims [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:28:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-11b4dc7c6e525a1f275083e83d7acf0efa16c42801230680e649f8212d56c276-merged.mount: Deactivated successfully.
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.500 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:16 np0005603621 podman[329518]: 2026-01-31 08:28:16.6006851 +0000 UTC m=+2.162119265 container remove 740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:28:16 np0005603621 systemd[1]: libpod-conmon-740d336ff38a7bcb18e34350ac72258e1e8b3c67f1ca46e119987ac4acc5f6ba.scope: Deactivated successfully.
Jan 31 03:28:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:16.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:16.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979387540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.943 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.953 247403 DEBUG nova.compute.provider_tree [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:16 np0005603621 nova_compute[247399]: 2026-01-31 08:28:16.969 247403 DEBUG nova.scheduler.client.report [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.020 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.021 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.071 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.089 247403 INFO nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.116 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:28:17 np0005603621 podman[329726]: 2026-01-31 08:28:17.166936659 +0000 UTC m=+0.085491365 container create 7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:28:17 np0005603621 podman[329726]: 2026-01-31 08:28:17.101372773 +0000 UTC m=+0.019927499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.228 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.229 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.229 247403 INFO nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Creating image(s)#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.252 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:17 np0005603621 systemd[1]: Started libpod-conmon-7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921.scope.
Jan 31 03:28:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.347 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.376 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.380 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.407 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "038896ea-1b16-4301-8907-31daac46f76a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.408 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "038896ea-1b16-4301-8907-31daac46f76a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.438 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.439 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.439 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.440 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:17 np0005603621 podman[329726]: 2026-01-31 08:28:17.465007789 +0000 UTC m=+0.383562525 container init 7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.469 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:17 np0005603621 podman[329726]: 2026-01-31 08:28:17.471004718 +0000 UTC m=+0.389559424 container start 7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 03:28:17 np0005603621 elastic_agnesi[329767]: 167 167
Jan 31 03:28:17 np0005603621 systemd[1]: libpod-7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921.scope: Deactivated successfully.
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.477 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 25d125af-48f6-4cfa-974d-a2be1548182c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:17 np0005603621 podman[329726]: 2026-01-31 08:28:17.498593617 +0000 UTC m=+0.417148333 container attach 7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:28:17 np0005603621 podman[329726]: 2026-01-31 08:28:17.499413663 +0000 UTC m=+0.417968359 container died 7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.501 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.601 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.602 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.610 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.610 247403 INFO nova.compute.claims [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:28:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fd8a364dbda323b09ab680a3ffa05953a8422d557f360372f456e26d937afa29-merged.mount: Deactivated successfully.
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.773 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:17 np0005603621 nova_compute[247399]: 2026-01-31 08:28:17.812 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:18 np0005603621 podman[329726]: 2026-01-31 08:28:18.088549912 +0000 UTC m=+1.007104608 container remove 7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:28:18 np0005603621 systemd[1]: libpod-conmon-7bfb740fe74a4fdd5b7624896497bbeb0238a41ffee056f9ad9709ba17615921.scope: Deactivated successfully.
Jan 31 03:28:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 695 KiB/s rd, 649 KiB/s wr, 83 op/s
Jan 31 03:28:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2269315864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.189 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.197 247403 DEBUG nova.compute.provider_tree [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.260 247403 DEBUG nova.scheduler.client.report [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:18 np0005603621 podman[329881]: 2026-01-31 08:28:18.203445122 +0000 UTC m=+0.024963828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:28:18 np0005603621 podman[329881]: 2026-01-31 08:28:18.331254009 +0000 UTC m=+0.152772695 container create 79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.488 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.488 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:28:18 np0005603621 systemd[1]: Started libpod-conmon-79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e.scope.
Jan 31 03:28:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:28:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834e19bd2d2c1913e342f709b0439c82086aaf7ecb36a84f876ceb0e417aa333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834e19bd2d2c1913e342f709b0439c82086aaf7ecb36a84f876ceb0e417aa333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834e19bd2d2c1913e342f709b0439c82086aaf7ecb36a84f876ceb0e417aa333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834e19bd2d2c1913e342f709b0439c82086aaf7ecb36a84f876ceb0e417aa333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.684 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.739 247403 INFO nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:28:18 np0005603621 podman[329881]: 2026-01-31 08:28:18.793496021 +0000 UTC m=+0.615014737 container init 79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:28:18 np0005603621 podman[329881]: 2026-01-31 08:28:18.798773457 +0000 UTC m=+0.620292143 container start 79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.815 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:28:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:18.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:18 np0005603621 nova_compute[247399]: 2026-01-31 08:28:18.899 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 25d125af-48f6-4cfa-974d-a2be1548182c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:18.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:19 np0005603621 podman[329881]: 2026-01-31 08:28:19.114600177 +0000 UTC m=+0.936118863 container attach 79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]: {
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:    "0": [
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:        {
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "devices": [
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "/dev/loop3"
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            ],
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "lv_name": "ceph_lv0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "lv_size": "7511998464",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "name": "ceph_lv0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "tags": {
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.cluster_name": "ceph",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.crush_device_class": "",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.encrypted": "0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.osd_id": "0",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.type": "block",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:                "ceph.vdo": "0"
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            },
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "type": "block",
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:            "vg_name": "ceph_vg0"
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:        }
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]:    ]
Jan 31 03:28:19 np0005603621 awesome_swanson[329901]: }
Jan 31 03:28:19 np0005603621 systemd[1]: libpod-79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e.scope: Deactivated successfully.
Jan 31 03:28:19 np0005603621 podman[329935]: 2026-01-31 08:28:19.595052272 +0000 UTC m=+0.022706446 container died 79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:28:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.645 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.660 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] resizing rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.818 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.819 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.820 247403 INFO nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Creating image(s)#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.845 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.868 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.898 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.903 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.966 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.967 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.967 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:19 np0005603621 nova_compute[247399]: 2026-01-31 08:28:19.968 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.000 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.004 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 038896ea-1b16-4301-8907-31daac46f76a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 355 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 222 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.230 247403 DEBUG nova.objects.instance [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'migration_context' on Instance uuid 25d125af-48f6-4cfa-974d-a2be1548182c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-834e19bd2d2c1913e342f709b0439c82086aaf7ecb36a84f876ceb0e417aa333-merged.mount: Deactivated successfully.
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.362 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.362 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Ensure instance console log exists: /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.363 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.363 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.363 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.365 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.369 247403 WARNING nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.374 247403 DEBUG nova.virt.libvirt.host [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.375 247403 DEBUG nova.virt.libvirt.host [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.379 247403 DEBUG nova.virt.libvirt.host [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.380 247403 DEBUG nova.virt.libvirt.host [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.381 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.382 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.382 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.382 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.383 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.383 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.383 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.383 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.384 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.384 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.384 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.384 247403 DEBUG nova.virt.hardware [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.388 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:28:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/893742631' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.798 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.826 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:20 np0005603621 nova_compute[247399]: 2026-01-31 08:28:20.830 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:20 np0005603621 podman[329935]: 2026-01-31 08:28:20.87323424 +0000 UTC m=+1.300888394 container remove 79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swanson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:28:20 np0005603621 systemd[1]: libpod-conmon-79beebd653beb70cc22061483b9fec717f88b50798f64fbb1c2f1595e815f03e.scope: Deactivated successfully.
Jan 31 03:28:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:28:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289543237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.287 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.292 247403 DEBUG nova.objects.instance [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25d125af-48f6-4cfa-974d-a2be1548182c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.391 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <uuid>25d125af-48f6-4cfa-974d-a2be1548182c</uuid>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <name>instance-0000007c</name>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerShowV247Test-server-641053966</nova:name>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:28:20</nova:creationTime>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:user uuid="4bda95d045de4dfeaa9bb7be3ab9970b">tempest-ServerShowV247Test-790634158-project-member</nova:user>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <nova:project uuid="6ebb12f413f2487db425a12bb8b17261">tempest-ServerShowV247Test-790634158</nova:project>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <entry name="serial">25d125af-48f6-4cfa-974d-a2be1548182c</entry>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <entry name="uuid">25d125af-48f6-4cfa-974d-a2be1548182c</entry>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/25d125af-48f6-4cfa-974d-a2be1548182c_disk">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/25d125af-48f6-4cfa-974d-a2be1548182c_disk.config">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/console.log" append="off"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:28:21 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:28:21 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:28:21 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:28:21 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:28:21 np0005603621 podman[330297]: 2026-01-31 08:28:21.403059771 +0000 UTC m=+0.021194859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:28:21 np0005603621 podman[330297]: 2026-01-31 08:28:21.543450384 +0000 UTC m=+0.161585482 container create b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:28:21 np0005603621 systemd[1]: Started libpod-conmon-b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40.scope.
Jan 31 03:28:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.709 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.710 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.710 247403 INFO nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Using config drive#033[00m
Jan 31 03:28:21 np0005603621 podman[330297]: 2026-01-31 08:28:21.726845112 +0000 UTC m=+0.344980210 container init b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:28:21 np0005603621 podman[330297]: 2026-01-31 08:28:21.733012746 +0000 UTC m=+0.351147814 container start b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:28:21 np0005603621 wizardly_goldwasser[330313]: 167 167
Jan 31 03:28:21 np0005603621 systemd[1]: libpod-b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40.scope: Deactivated successfully.
Jan 31 03:28:21 np0005603621 nova_compute[247399]: 2026-01-31 08:28:21.743 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:21 np0005603621 podman[330297]: 2026-01-31 08:28:21.821004248 +0000 UTC m=+0.439139326 container attach b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:28:21 np0005603621 podman[330297]: 2026-01-31 08:28:21.821446542 +0000 UTC m=+0.439581620 container died b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:28:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f25e896899438af7fce172d1b04a193fd17b2b2503e63ae3e65a92b150835774-merged.mount: Deactivated successfully.
Jan 31 03:28:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 385 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 116 KiB/s rd, 3.9 MiB/s wr, 67 op/s
Jan 31 03:28:22 np0005603621 podman[330297]: 2026-01-31 08:28:22.384115778 +0000 UTC m=+1.002250846 container remove b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:28:22 np0005603621 systemd[1]: libpod-conmon-b3e50f8de44a6ca7770bd9a24affaa8ccb028d9ea2e5178ec04a8afb27c0ed40.scope: Deactivated successfully.
Jan 31 03:28:22 np0005603621 podman[330358]: 2026-01-31 08:28:22.515805066 +0000 UTC m=+0.023358537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:28:22 np0005603621 podman[330358]: 2026-01-31 08:28:22.672853824 +0000 UTC m=+0.180407265 container create f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:28:22 np0005603621 systemd[1]: Started libpod-conmon-f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776.scope.
Jan 31 03:28:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:28:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28a88b25475f571616a9387d590a1562343c34efbe5aef2a4f403d510e3ffc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28a88b25475f571616a9387d590a1562343c34efbe5aef2a4f403d510e3ffc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28a88b25475f571616a9387d590a1562343c34efbe5aef2a4f403d510e3ffc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28a88b25475f571616a9387d590a1562343c34efbe5aef2a4f403d510e3ffc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:28:22 np0005603621 nova_compute[247399]: 2026-01-31 08:28:22.798 247403 INFO nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Creating config drive at /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/disk.config#033[00m
Jan 31 03:28:22 np0005603621 nova_compute[247399]: 2026-01-31 08:28:22.803 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxt5cwvsk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:22 np0005603621 nova_compute[247399]: 2026-01-31 08:28:22.824 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:22 np0005603621 podman[330358]: 2026-01-31 08:28:22.911946786 +0000 UTC m=+0.419500257 container init f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:28:22 np0005603621 podman[330358]: 2026-01-31 08:28:22.917682656 +0000 UTC m=+0.425236117 container start f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:28:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:22 np0005603621 nova_compute[247399]: 2026-01-31 08:28:22.930 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxt5cwvsk" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:22 np0005603621 podman[330358]: 2026-01-31 08:28:22.97621396 +0000 UTC m=+0.483767431 container attach f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:28:23 np0005603621 nova_compute[247399]: 2026-01-31 08:28:23.281 247403 DEBUG nova.storage.rbd_utils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 25d125af-48f6-4cfa-974d-a2be1548182c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:23 np0005603621 nova_compute[247399]: 2026-01-31 08:28:23.285 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/disk.config 25d125af-48f6-4cfa-974d-a2be1548182c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]: {
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:        "osd_id": 0,
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:        "type": "bluestore"
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]:    }
Jan 31 03:28:23 np0005603621 eager_montalcini[330374]: }
Jan 31 03:28:23 np0005603621 systemd[1]: libpod-f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776.scope: Deactivated successfully.
Jan 31 03:28:23 np0005603621 podman[330432]: 2026-01-31 08:28:23.732857387 +0000 UTC m=+0.023437239 container died f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b28a88b25475f571616a9387d590a1562343c34efbe5aef2a4f403d510e3ffc3-merged.mount: Deactivated successfully.
Jan 31 03:28:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 413 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 5.3 MiB/s wr, 86 op/s
Jan 31 03:28:24 np0005603621 podman[330432]: 2026-01-31 08:28:24.278705903 +0000 UTC m=+0.569285735 container remove f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:28:24 np0005603621 systemd[1]: libpod-conmon-f244511fce93329d394104a5f864b0c1984a96cd5ecae929ea9fb978664c0776.scope: Deactivated successfully.
Jan 31 03:28:24 np0005603621 nova_compute[247399]: 2026-01-31 08:28:24.306 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 038896ea-1b16-4301-8907-31daac46f76a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:28:24 np0005603621 nova_compute[247399]: 2026-01-31 08:28:24.414 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] resizing rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:28:24 np0005603621 nova_compute[247399]: 2026-01-31 08:28:24.569 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:28:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9782bdcd-5abb-493d-9ad6-96e67b077b6d does not exist
Jan 31 03:28:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1980e631-f9a9-4762-ac88-107d5f3030f4 does not exist
Jan 31 03:28:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 30de4403-f20d-47b5-b5f0-6cc77b0c9d1e does not exist
Jan 31 03:28:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 390 KiB/s rd, 7.5 MiB/s wr, 148 op/s
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.208 247403 DEBUG nova.objects.instance [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'migration_context' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.337 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.337 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Ensure instance console log exists: /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.338 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.338 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.338 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.339 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.343 247403 WARNING nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.349 247403 DEBUG nova.virt.libvirt.host [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.349 247403 DEBUG nova.virt.libvirt.host [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.352 247403 DEBUG nova.virt.libvirt.host [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.353 247403 DEBUG nova.virt.libvirt.host [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.354 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.354 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.354 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.354 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.354 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.355 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.355 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.355 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.355 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.355 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.355 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.356 247403 DEBUG nova.virt.hardware [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.358 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:28:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258517781' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.846 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.870 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:26 np0005603621 nova_compute[247399]: 2026-01-31 08:28:26.874 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:26.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.126 247403 DEBUG oslo_concurrency.processutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/disk.config 25d125af-48f6-4cfa-974d-a2be1548182c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.841s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.127 247403 INFO nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Deleting local config drive /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c/disk.config because it was imported into RBD.#033[00m
Jan 31 03:28:27 np0005603621 systemd-machined[212769]: New machine qemu-56-instance-0000007c.
Jan 31 03:28:27 np0005603621 systemd[1]: Started Virtual Machine qemu-56-instance-0000007c.
Jan 31 03:28:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:28:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427587762' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.307 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.309 247403 DEBUG nova.objects.instance [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.362 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <uuid>038896ea-1b16-4301-8907-31daac46f76a</uuid>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <name>instance-0000007d</name>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerShowV247Test-server-378815655</nova:name>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:28:26</nova:creationTime>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:user uuid="4bda95d045de4dfeaa9bb7be3ab9970b">tempest-ServerShowV247Test-790634158-project-member</nova:user>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <nova:project uuid="6ebb12f413f2487db425a12bb8b17261">tempest-ServerShowV247Test-790634158</nova:project>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <entry name="serial">038896ea-1b16-4301-8907-31daac46f76a</entry>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <entry name="uuid">038896ea-1b16-4301-8907-31daac46f76a</entry>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/038896ea-1b16-4301-8907-31daac46f76a_disk">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/038896ea-1b16-4301-8907-31daac46f76a_disk.config">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/console.log" append="off"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:28:27 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:28:27 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:28:27 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:28:27 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.577 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.578 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.579 247403 INFO nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Using config drive#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.603 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.827 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.848 247403 INFO nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Creating config drive at /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.853 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp4x0u9qvn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:27 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.975 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp4x0u9qvn" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:27.999 247403 DEBUG nova.storage.rbd_utils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.003 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config 038896ea-1b16-4301-8907-31daac46f76a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 7.5 MiB/s wr, 141 op/s
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.182 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848108.1823905, 25d125af-48f6-4cfa-974d-a2be1548182c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.184 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.186 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.187 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.191 247403 INFO nova.virt.libvirt.driver [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Instance spawned successfully.#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.191 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.250 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.253 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.305 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.306 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.306 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.307 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.307 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.308 247403 DEBUG nova.virt.libvirt.driver [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.322 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.322 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848108.183348, 25d125af-48f6-4cfa-974d-a2be1548182c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.323 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] VM Started (Lifecycle Event)#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.371 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.374 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.413 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.448 247403 INFO nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Took 11.22 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.448 247403 DEBUG nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.621 247403 INFO nova.compute.manager [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Took 12.28 seconds to build instance.#033[00m
Jan 31 03:28:28 np0005603621 nova_compute[247399]: 2026-01-31 08:28:28.679 247403 DEBUG oslo_concurrency.lockutils [None req-555cae4f-ec01-44e3-9a08-c1b8a1684df0 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "25d125af-48f6-4cfa-974d-a2be1548182c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:28.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:29 np0005603621 nova_compute[247399]: 2026-01-31 08:28:29.524 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:29 np0005603621 nova_compute[247399]: 2026-01-31 08:28:29.962 247403 DEBUG oslo_concurrency.processutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config 038896ea-1b16-4301-8907-31daac46f76a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.959s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:29 np0005603621 nova_compute[247399]: 2026-01-31 08:28:29.962 247403 INFO nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deleting local config drive /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config because it was imported into RBD.#033[00m
Jan 31 03:28:30 np0005603621 systemd-machined[212769]: New machine qemu-57-instance-0000007d.
Jan 31 03:28:30 np0005603621 systemd[1]: Started Virtual Machine qemu-57-instance-0000007d.
Jan 31 03:28:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 495 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 7.1 MiB/s wr, 180 op/s
Jan 31 03:28:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:30.510 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:30.511 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:30.511 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.734 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848110.7343357, 038896ea-1b16-4301-8907-31daac46f76a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.735 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.737 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.737 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.740 247403 INFO nova.virt.libvirt.driver [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance spawned successfully.#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.740 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.850 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.856 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.860 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.861 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.862 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.862 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.863 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.864 247403 DEBUG nova.virt.libvirt.driver [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:28:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:30.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.896 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.897 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848110.7373474, 038896ea-1b16-4301-8907-31daac46f76a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.897 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.924 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.928 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:28:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:30.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.937 247403 INFO nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Took 11.12 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.938 247403 DEBUG nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.951 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:28:30 np0005603621 nova_compute[247399]: 2026-01-31 08:28:30.994 247403 INFO nova.compute.manager [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Took 13.41 seconds to build instance.#033[00m
Jan 31 03:28:31 np0005603621 nova_compute[247399]: 2026-01-31 08:28:31.074 247403 DEBUG oslo_concurrency.lockutils [None req-6d5db15f-6de9-4255-87d7-647e9ad897f4 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "038896ea-1b16-4301-8907-31daac46f76a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.7 MiB/s wr, 185 op/s
Jan 31 03:28:32 np0005603621 nova_compute[247399]: 2026-01-31 08:28:32.794 247403 INFO nova.compute.manager [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Rebuilding instance
Jan 31 03:28:32 np0005603621 nova_compute[247399]: 2026-01-31 08:28:32.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:32.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.048 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.066 247403 DEBUG nova.compute.manager [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.115 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'pci_requests' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.126 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'pci_devices' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.141 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'resources' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.152 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'migration_context' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.192 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 03:28:33 np0005603621 nova_compute[247399]: 2026-01-31 08:28:33.196 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 03:28:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 235 op/s
Jan 31 03:28:34 np0005603621 nova_compute[247399]: 2026-01-31 08:28:34.569 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:35 np0005603621 podman[330863]: 2026-01-31 08:28:35.491786551 +0000 UTC m=+0.049177730 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:28:35 np0005603621 podman[330864]: 2026-01-31 08:28:35.525201124 +0000 UTC m=+0.082224522 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:28:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 3.1 MiB/s wr, 375 op/s
Jan 31 03:28:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:36.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:37 np0005603621 nova_compute[247399]: 2026-01-31 08:28:37.830 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 432 KiB/s wr, 306 op/s
Jan 31 03:28:38 np0005603621 nova_compute[247399]: 2026-01-31 08:28:38.229 247403 DEBUG nova.compute.manager [None req-ec1319c0-5ce7-45e1-964f-d805cea6a13b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:28:38
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms']
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:28:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:28:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:38.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:28:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:28:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:38.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:39 np0005603621 nova_compute[247399]: 2026-01-31 08:28:39.571 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 73 KiB/s wr, 301 op/s
Jan 31 03:28:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:40.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 501 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 579 KiB/s wr, 268 op/s
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.491 247403 DEBUG oslo_concurrency.lockutils [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.492 247403 DEBUG oslo_concurrency.lockutils [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.492 247403 DEBUG nova.compute.manager [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.496 247403 DEBUG nova.compute.manager [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.497 247403 DEBUG nova.objects.instance [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'flavor' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.567 247403 DEBUG nova.virt.libvirt.driver [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 31 03:28:42 np0005603621 nova_compute[247399]: 2026-01-31 08:28:42.833 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:42.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:42.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:43 np0005603621 nova_compute[247399]: 2026-01-31 08:28:43.251 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 31 03:28:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 505 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 840 KiB/s wr, 245 op/s
Jan 31 03:28:44 np0005603621 nova_compute[247399]: 2026-01-31 08:28:44.573 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:44.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:44.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 545 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.6 MiB/s wr, 231 op/s
Jan 31 03:28:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:46.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.258 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.258 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.259 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.259 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.836 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:47 np0005603621 kernel: tap109b6929-6b (unregistering): left promiscuous mode
Jan 31 03:28:47 np0005603621 NetworkManager[49013]: <info>  [1769848127.9285] device (tap109b6929-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:28:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:28:47Z|00479|binding|INFO|Releasing lport 109b6929-6b88-494a-b397-b36c434ed7a7 from this chassis (sb_readonly=0)
Jan 31 03:28:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:28:47Z|00480|binding|INFO|Setting lport 109b6929-6b88-494a-b397-b36c434ed7a7 down in Southbound
Jan 31 03:28:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:28:47Z|00481|binding|INFO|Removing iface tap109b6929-6b ovn-installed in OVS
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:47 np0005603621 nova_compute[247399]: 2026-01-31 08:28:47.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:28:47 np0005603621 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 31 03:28:47 np0005603621 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000079.scope: Consumed 19.186s CPU time.
Jan 31 03:28:47 np0005603621 systemd-machined[212769]: Machine qemu-55-instance-00000079 terminated.
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.036 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:b8:22 10.100.0.13'], port_security=['fa:16:3e:06:b8:22 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cca881fe-18fa-40c1-b9ef-2b1f28855b53', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5b1bd8ad-0d2a-4d57-a00a-9a6b59df86e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=109b6929-6b88-494a-b397-b36c434ed7a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.037 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 109b6929-6b88-494a-b397-b36c434ed7a7 in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 unbound from our chassis
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.038 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44469d8b-ad30-4270-88fa-e67c568f3150, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.039 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[60d2fced-d650-4bcd-abc7-d08203cc9b97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.040 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace which is not needed anymore
Jan 31 03:28:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 368 KiB/s rd, 6.1 MiB/s wr, 112 op/s
Jan 31 03:28:48 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [NOTICE]   (326758) : haproxy version is 2.8.14-c23fe91
Jan 31 03:28:48 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [NOTICE]   (326758) : path to executable is /usr/sbin/haproxy
Jan 31 03:28:48 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [WARNING]  (326758) : Exiting Master process...
Jan 31 03:28:48 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [WARNING]  (326758) : Exiting Master process...
Jan 31 03:28:48 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [ALERT]    (326758) : Current worker (326760) exited with code 143 (Terminated)
Jan 31 03:28:48 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[326753]: [WARNING]  (326758) : All workers exited. Exiting... (0)
Jan 31 03:28:48 np0005603621 systemd[1]: libpod-bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818.scope: Deactivated successfully.
Jan 31 03:28:48 np0005603621 podman[330937]: 2026-01-31 08:28:48.289631725 +0000 UTC m=+0.175363296 container died bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.510 247403 DEBUG nova.compute.manager [req-8d6fa2d7-4fa3-4e56-bbd5-576304190f45 req-48846326-1a74-48ef-a3ee-30f72915bd02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-vif-unplugged-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.511 247403 DEBUG oslo_concurrency.lockutils [req-8d6fa2d7-4fa3-4e56-bbd5-576304190f45 req-48846326-1a74-48ef-a3ee-30f72915bd02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.511 247403 DEBUG oslo_concurrency.lockutils [req-8d6fa2d7-4fa3-4e56-bbd5-576304190f45 req-48846326-1a74-48ef-a3ee-30f72915bd02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.511 247403 DEBUG oslo_concurrency.lockutils [req-8d6fa2d7-4fa3-4e56-bbd5-576304190f45 req-48846326-1a74-48ef-a3ee-30f72915bd02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.511 247403 DEBUG nova.compute.manager [req-8d6fa2d7-4fa3-4e56-bbd5-576304190f45 req-48846326-1a74-48ef-a3ee-30f72915bd02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] No waiting events found dispatching network-vif-unplugged-109b6929-6b88-494a-b397-b36c434ed7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.511 247403 WARNING nova.compute.manager [req-8d6fa2d7-4fa3-4e56-bbd5-576304190f45 req-48846326-1a74-48ef-a3ee-30f72915bd02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received unexpected event network-vif-unplugged-109b6929-6b88-494a-b397-b36c434ed7a7 for instance with vm_state active and task_state powering-off.#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.591 247403 INFO nova.virt.libvirt.driver [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance shutdown successfully after 6 seconds.#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.597 247403 INFO nova.virt.libvirt.driver [-] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance destroyed successfully.#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.598 247403 DEBUG nova.objects.instance [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'numa_topology' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:28:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818-userdata-shm.mount: Deactivated successfully.
Jan 31 03:28:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-85b788af7580b439956c915e61dfa68818561c33bd0098155ca99043715be413-merged.mount: Deactivated successfully.
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.686 247403 DEBUG nova.compute.manager [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:28:48 np0005603621 podman[330937]: 2026-01-31 08:28:48.831142074 +0000 UTC m=+0.716873635 container cleanup bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:28:48 np0005603621 systemd[1]: libpod-conmon-bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818.scope: Deactivated successfully.
Jan 31 03:28:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:48.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:48 np0005603621 podman[331027]: 2026-01-31 08:28:48.924715592 +0000 UTC m=+0.071278107 container remove bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.928 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfe5017-9560-4667-a786-bb2a911014a7]: (4, ('Sat Jan 31 08:28:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818)\nbfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818\nSat Jan 31 08:28:48 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (bfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818)\nbfffd12e8a4fb59b1a6d229468532198ac6497348982d1c9dc4718d25086c818\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.929 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[94cb0fb2-977f-4583-bc28-5aebb803cd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.930 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.932 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:48 np0005603621 kernel: tap44469d8b-a0: left promiscuous mode
Jan 31 03:28:48 np0005603621 nova_compute[247399]: 2026-01-31 08:28:48.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.946 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[40e4f29a-cd00-4b20-8f28-cdb24129d23f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.964 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb36089-bb33-41be-830f-ee905c3d5389]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.965 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[906d7487-00f3-480c-a584-38ed121927da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.976 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4940aa8d-4612-4dee-baae-5856a747e74c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 724301, 'reachable_time': 24258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331044, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:48 np0005603621 systemd[1]: run-netns-ovnmeta\x2d44469d8b\x2dad30\x2d4270\x2d88fa\x2de67c568f3150.mount: Deactivated successfully.
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.978 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:28:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:28:48.978 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f9526604-70e0-4b93-8b74-ddef7e42bc42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:28:49 np0005603621 nova_compute[247399]: 2026-01-31 08:28:49.075 247403 DEBUG oslo_concurrency.lockutils [None req-e263c4b1-20b1-48c8-b83e-284f60444756 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 6.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01290578325656058 of space, bias 1.0, pg target 3.8717349769681744 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.6428038630653266 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:28:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:28:49 np0005603621 nova_compute[247399]: 2026-01-31 08:28:49.575 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:49 np0005603621 nova_compute[247399]: 2026-01-31 08:28:49.847 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:28:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 31 03:28:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 31 03:28:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.151 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.152 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.152 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.152 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:28:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 564 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 7.3 MiB/s wr, 132 op/s
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.651 247403 DEBUG nova.compute.manager [req-05aedbf0-ac4c-4a04-99d1-43fe630c8b18 req-a76e7864-f9eb-4fe5-a7e3-4c77e837967f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.651 247403 DEBUG oslo_concurrency.lockutils [req-05aedbf0-ac4c-4a04-99d1-43fe630c8b18 req-a76e7864-f9eb-4fe5-a7e3-4c77e837967f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.651 247403 DEBUG oslo_concurrency.lockutils [req-05aedbf0-ac4c-4a04-99d1-43fe630c8b18 req-a76e7864-f9eb-4fe5-a7e3-4c77e837967f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.652 247403 DEBUG oslo_concurrency.lockutils [req-05aedbf0-ac4c-4a04-99d1-43fe630c8b18 req-a76e7864-f9eb-4fe5-a7e3-4c77e837967f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.652 247403 DEBUG nova.compute.manager [req-05aedbf0-ac4c-4a04-99d1-43fe630c8b18 req-a76e7864-f9eb-4fe5-a7e3-4c77e837967f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] No waiting events found dispatching network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:28:50 np0005603621 nova_compute[247399]: 2026-01-31 08:28:50.652 247403 WARNING nova.compute.manager [req-05aedbf0-ac4c-4a04-99d1-43fe630c8b18 req-a76e7864-f9eb-4fe5-a7e3-4c77e837967f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received unexpected event network-vif-plugged-109b6929-6b88-494a-b397-b36c434ed7a7 for instance with vm_state stopped and task_state resize_prep.#033[00m
Jan 31 03:28:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:50.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:50.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.306 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.307 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.353 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.353 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.354 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.354 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.354 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1860078350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.800 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.943 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.943 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.946 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.946 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.949 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:51 np0005603621 nova_compute[247399]: 2026-01-31 08:28:51.949 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.090 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.091 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=20.717411041259766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.091 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.091 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 590 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 936 KiB/s rd, 7.1 MiB/s wr, 228 op/s
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.586 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating resource usage from migration 45b6c84f-a4f0-4db4-98c8-5e319b46ded0#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.608 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 25d125af-48f6-4cfa-974d-a2be1548182c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.609 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 038896ea-1b16-4301-8907-31daac46f76a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.609 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration 45b6c84f-a4f0-4db4-98c8-5e319b46ded0 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.609 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.609 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.694 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:52 np0005603621 nova_compute[247399]: 2026-01-31 08:28:52.839 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:52.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:52.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:28:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264528252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:28:53 np0005603621 nova_compute[247399]: 2026-01-31 08:28:53.163 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:53 np0005603621 nova_compute[247399]: 2026-01-31 08:28:53.169 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:28:53 np0005603621 nova_compute[247399]: 2026-01-31 08:28:53.237 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:28:53 np0005603621 nova_compute[247399]: 2026-01-31 08:28:53.369 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:28:53 np0005603621 nova_compute[247399]: 2026-01-31 08:28:53.370 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 595 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.1 MiB/s wr, 242 op/s
Jan 31 03:28:54 np0005603621 nova_compute[247399]: 2026-01-31 08:28:54.296 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 31 03:28:54 np0005603621 nova_compute[247399]: 2026-01-31 08:28:54.577 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:28:54 np0005603621 nova_compute[247399]: 2026-01-31 08:28:54.853 247403 DEBUG oslo_concurrency.lockutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:28:54 np0005603621 nova_compute[247399]: 2026-01-31 08:28:54.853 247403 DEBUG oslo_concurrency.lockutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:28:54 np0005603621 nova_compute[247399]: 2026-01-31 08:28:54.854 247403 DEBUG nova.network.neutron [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:28:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:54.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 31 03:28:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:54.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 31 03:28:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.137 247403 DEBUG nova.network.neutron [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:28:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 579 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.1 MiB/s wr, 242 op/s
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.261 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.261 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.270 247403 DEBUG oslo_concurrency.lockutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.616 247403 DEBUG nova.virt.libvirt.driver [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.616 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Creating file /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/b975261f80fd4a3fbeef513908c51f01.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 03:28:56 np0005603621 nova_compute[247399]: 2026-01-31 08:28:56.617 247403 DEBUG oslo_concurrency.processutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/b975261f80fd4a3fbeef513908c51f01.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:28:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:56.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:28:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:56.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.026 247403 DEBUG oslo_concurrency.processutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/b975261f80fd4a3fbeef513908c51f01.tmp" returned: 1 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.028 247403 DEBUG oslo_concurrency.processutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53/b975261f80fd4a3fbeef513908c51f01.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.028 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Creating directory /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.029 247403 DEBUG oslo_concurrency.processutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.216 247403 DEBUG oslo_concurrency.processutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/cca881fe-18fa-40c1-b9ef-2b1f28855b53" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.220 247403 INFO nova.virt.libvirt.driver [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance already shutdown.#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.225 247403 INFO nova.virt.libvirt.driver [-] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Instance destroyed successfully.#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.226 247403 DEBUG nova.virt.libvirt.vif [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:25:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-899650284',display_name='tempest-ServerActionsTestOtherB-server-899650284',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-899650284',id=121,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:25:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-tuc10ywh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:28:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=cca881fe-18fa-40c1-b9ef-2b1f28855b53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:06:b8:22"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.227 247403 DEBUG nova.network.os_vif_util [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:06:b8:22"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.228 247403 DEBUG nova.network.os_vif_util [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.228 247403 DEBUG os_vif [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.230 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.230 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap109b6929-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.232 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.239 247403 INFO os_vif [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b')#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.243 247403 DEBUG nova.virt.libvirt.driver [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.244 247403 DEBUG nova.virt.libvirt.driver [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.386 247403 DEBUG neutronclient.v2_0.client [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 109b6929-6b88-494a-b397-b36c434ed7a7 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.688 247403 DEBUG oslo_concurrency.lockutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.688 247403 DEBUG oslo_concurrency.lockutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:28:57 np0005603621 nova_compute[247399]: 2026-01-31 08:28:57.688 247403 DEBUG oslo_concurrency.lockutils [None req-a5ed510e-3a81-45be-b58e-cb22d8907bb4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:28:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 596 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.2 MiB/s wr, 271 op/s
Jan 31 03:28:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 31 03:28:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 31 03:28:58 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 31 03:28:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:28:58.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:28:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:28:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:28:58.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:28:59 np0005603621 nova_compute[247399]: 2026-01-31 08:28:59.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:28:59 np0005603621 nova_compute[247399]: 2026-01-31 08:28:59.578 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:28:59 np0005603621 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 31 03:28:59 np0005603621 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000007d.scope: Consumed 12.717s CPU time.
Jan 31 03:28:59 np0005603621 systemd-machined[212769]: Machine qemu-57-instance-0000007d terminated.
Jan 31 03:28:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 596 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 139 op/s
Jan 31 03:29:00 np0005603621 nova_compute[247399]: 2026-01-31 08:29:00.320 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance shutdown successfully after 27 seconds.#033[00m
Jan 31 03:29:00 np0005603621 nova_compute[247399]: 2026-01-31 08:29:00.326 247403 INFO nova.virt.libvirt.driver [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance destroyed successfully.#033[00m
Jan 31 03:29:00 np0005603621 nova_compute[247399]: 2026-01-31 08:29:00.330 247403 INFO nova.virt.libvirt.driver [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance destroyed successfully.#033[00m
Jan 31 03:29:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:29:00.512 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:29:00 np0005603621 nova_compute[247399]: 2026-01-31 08:29:00.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:29:00.513 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:29:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:29:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:00.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:29:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:00.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:01 np0005603621 nova_compute[247399]: 2026-01-31 08:29:01.338 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deleting instance files /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a_del#033[00m
Jan 31 03:29:01 np0005603621 nova_compute[247399]: 2026-01-31 08:29:01.339 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deletion of /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a_del complete#033[00m
Jan 31 03:29:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 596 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.4 MiB/s wr, 143 op/s
Jan 31 03:29:02 np0005603621 nova_compute[247399]: 2026-01-31 08:29:02.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:02.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:02.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:03 np0005603621 nova_compute[247399]: 2026-01-31 08:29:03.159 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848128.1587615, cca881fe-18fa-40c1-b9ef-2b1f28855b53 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:03 np0005603621 nova_compute[247399]: 2026-01-31 08:29:03.160 247403 INFO nova.compute.manager [-] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:29:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 580 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.2 MiB/s wr, 90 op/s
Jan 31 03:29:04 np0005603621 nova_compute[247399]: 2026-01-31 08:29:04.463 247403 DEBUG nova.compute.manager [None req-04d130e9-d6f7-44c8-bce4-51cbb67314d0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:04 np0005603621 nova_compute[247399]: 2026-01-31 08:29:04.468 247403 DEBUG nova.compute.manager [None req-04d130e9-d6f7-44c8-bce4-51cbb67314d0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_migrated, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:29:04 np0005603621 nova_compute[247399]: 2026-01-31 08:29:04.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:04 np0005603621 nova_compute[247399]: 2026-01-31 08:29:04.660 247403 INFO nova.compute.manager [None req-04d130e9-d6f7-44c8-bce4-51cbb67314d0 - - - - - -] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 31 03:29:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 31 03:29:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 31 03:29:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 31 03:29:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:04.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:04 np0005603621 nova_compute[247399]: 2026-01-31 08:29:04.989 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:29:04 np0005603621 nova_compute[247399]: 2026-01-31 08:29:04.992 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Creating image(s)#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.018 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.049 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.077 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.081 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.141 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.141 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "365f9823d2619ef09948bdeed685488da63755b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.142 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.142 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.171 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.175 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 038896ea-1b16-4301-8907-31daac46f76a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.777 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 038896ea-1b16-4301-8907-31daac46f76a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:05 np0005603621 nova_compute[247399]: 2026-01-31 08:29:05.848 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] resizing rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.013 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.015 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Ensure instance console log exists: /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.015 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.016 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.016 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.018 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.022 247403 WARNING nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.029 247403 DEBUG nova.virt.libvirt.host [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.031 247403 DEBUG nova.virt.libvirt.host [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.035 247403 DEBUG nova.virt.libvirt.host [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.036 247403 DEBUG nova.virt.libvirt.host [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.038 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.038 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.039 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.039 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.039 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.039 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.039 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.040 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.040 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.040 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.040 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.040 247403 DEBUG nova.virt.hardware [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.041 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 543 KiB/s wr, 76 op/s
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.264 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:06 np0005603621 podman[331310]: 2026-01-31 08:29:06.522369281 +0000 UTC m=+0.074223530 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 03:29:06 np0005603621 podman[331311]: 2026-01-31 08:29:06.526235203 +0000 UTC m=+0.073887409 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:29:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:29:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/686473676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.743 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.770 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:06 np0005603621 nova_compute[247399]: 2026-01-31 08:29:06.774 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:06.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:06.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:29:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452277001' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.309 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.313 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <uuid>038896ea-1b16-4301-8907-31daac46f76a</uuid>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <name>instance-0000007d</name>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerShowV247Test-server-378815655</nova:name>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:29:06</nova:creationTime>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:user uuid="4bda95d045de4dfeaa9bb7be3ab9970b">tempest-ServerShowV247Test-790634158-project-member</nova:user>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <nova:project uuid="6ebb12f413f2487db425a12bb8b17261">tempest-ServerShowV247Test-790634158</nova:project>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="0864ca59-9877-4e6d-adfc-f0a3204ed8f8"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <nova:ports/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <entry name="serial">038896ea-1b16-4301-8907-31daac46f76a</entry>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <entry name="uuid">038896ea-1b16-4301-8907-31daac46f76a</entry>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/038896ea-1b16-4301-8907-31daac46f76a_disk">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/038896ea-1b16-4301-8907-31daac46f76a_disk.config">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/console.log" append="off"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:29:07 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:29:07 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:29:07 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:29:07 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.567 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.568 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.569 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Using config drive#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.604 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.613 247403 DEBUG nova.compute.manager [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Received event network-changed-109b6929-6b88-494a-b397-b36c434ed7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.613 247403 DEBUG nova.compute.manager [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Refreshing instance network info cache due to event network-changed-109b6929-6b88-494a-b397-b36c434ed7a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.613 247403 DEBUG oslo_concurrency.lockutils [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.614 247403 DEBUG oslo_concurrency.lockutils [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.614 247403 DEBUG nova.network.neutron [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Refreshing network info cache for port 109b6929-6b88-494a-b397-b36c434ed7a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.742 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:07 np0005603621 nova_compute[247399]: 2026-01-31 08:29:07.932 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'keypairs' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 62 KiB/s rd, 692 KiB/s wr, 92 op/s
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:08 np0005603621 nova_compute[247399]: 2026-01-31 08:29:08.860 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Creating config drive at /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config#033[00m
Jan 31 03:29:08 np0005603621 nova_compute[247399]: 2026-01-31 08:29:08.864 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_rl_wn7k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:08.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:08.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:08 np0005603621 nova_compute[247399]: 2026-01-31 08:29:08.990 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_rl_wn7k" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:09 np0005603621 nova_compute[247399]: 2026-01-31 08:29:09.018 247403 DEBUG nova.storage.rbd_utils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] rbd image 038896ea-1b16-4301-8907-31daac46f76a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:29:09 np0005603621 nova_compute[247399]: 2026-01-31 08:29:09.022 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config 038896ea-1b16-4301-8907-31daac46f76a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:29:09.515 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:09 np0005603621 nova_compute[247399]: 2026-01-31 08:29:09.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:09 np0005603621 nova_compute[247399]: 2026-01-31 08:29:09.817 247403 DEBUG oslo_concurrency.processutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config 038896ea-1b16-4301-8907-31daac46f76a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.795s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:09 np0005603621 nova_compute[247399]: 2026-01-31 08:29:09.817 247403 INFO nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deleting local config drive /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a/disk.config because it was imported into RBD.#033[00m
Jan 31 03:29:09 np0005603621 systemd-machined[212769]: New machine qemu-58-instance-0000007d.
Jan 31 03:29:09 np0005603621 systemd[1]: Started Virtual Machine qemu-58-instance-0000007d.
Jan 31 03:29:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 539 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 677 KiB/s wr, 90 op/s
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.713 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 038896ea-1b16-4301-8907-31daac46f76a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.714 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848150.713325, 038896ea-1b16-4301-8907-31daac46f76a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.714 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.716 247403 DEBUG nova.compute.manager [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.717 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.719 247403 INFO nova.virt.libvirt.driver [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance spawned successfully.#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.720 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.888 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.891 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.899356) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848150899450, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 869, "num_deletes": 253, "total_data_size": 1219112, "memory_usage": 1238464, "flush_reason": "Manual Compaction"}
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 31 03:29:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:10.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848150923570, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1205899, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50547, "largest_seqno": 51415, "table_properties": {"data_size": 1201456, "index_size": 2095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10360, "raw_average_key_size": 20, "raw_value_size": 1192318, "raw_average_value_size": 2337, "num_data_blocks": 91, "num_entries": 510, "num_filter_entries": 510, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848090, "oldest_key_time": 1769848090, "file_creation_time": 1769848150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 24258 microseconds, and 3528 cpu microseconds.
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.923629) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1205899 bytes OK
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.923651) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.927548) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.927568) EVENT_LOG_v1 {"time_micros": 1769848150927562, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.927593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1214816, prev total WAL file size 1214816, number of live WAL files 2.
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.928127) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1177KB)], [110(13MB)]
Jan 31 03:29:10 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848150928154, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 15613444, "oldest_snapshot_seqno": -1}
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.946 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.947 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.947 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.948 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.948 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.949 247403 DEBUG nova.virt.libvirt.driver [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:29:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:10.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.981 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.982 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848150.716167, 038896ea-1b16-4301-8907-31daac46f76a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:10 np0005603621 nova_compute[247399]: 2026-01-31 08:29:10.982 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] VM Started (Lifecycle Event)#033[00m
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7981 keys, 13671399 bytes, temperature: kUnknown
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848151161127, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 13671399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13615507, "index_size": 34852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 206794, "raw_average_key_size": 25, "raw_value_size": 13471070, "raw_average_value_size": 1687, "num_data_blocks": 1375, "num_entries": 7981, "num_filter_entries": 7981, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.161370) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 13671399 bytes
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.164765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.0 rd, 58.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.7 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(24.3) write-amplify(11.3) OK, records in: 8507, records dropped: 526 output_compression: NoCompression
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.164813) EVENT_LOG_v1 {"time_micros": 1769848151164791, "job": 66, "event": "compaction_finished", "compaction_time_micros": 233050, "compaction_time_cpu_micros": 23863, "output_level": 6, "num_output_files": 1, "total_output_size": 13671399, "num_input_records": 8507, "num_output_records": 7981, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848151165128, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848151166496, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:10.928029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.166595) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.166602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.166604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.166606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:11 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:11.166608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.189 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.192 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.248 247403 DEBUG nova.compute.manager [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.435 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.542 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.543 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.543 247403 DEBUG nova.objects.instance [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:29:11 np0005603621 nova_compute[247399]: 2026-01-31 08:29:11.745 247403 DEBUG oslo_concurrency.lockutils [None req-26033613-5aad-43aa-9dee-cf83a0e95919 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 03:29:12 np0005603621 nova_compute[247399]: 2026-01-31 08:29:12.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:12.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:12.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:13 np0005603621 nova_compute[247399]: 2026-01-31 08:29:13.649 247403 DEBUG nova.network.neutron [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updated VIF entry in instance network info cache for port 109b6929-6b88-494a-b397-b36c434ed7a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:29:13 np0005603621 nova_compute[247399]: 2026-01-31 08:29:13.649 247403 DEBUG nova.network.neutron [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 31 03:29:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 31 03:29:14 np0005603621 nova_compute[247399]: 2026-01-31 08:29:14.144 247403 DEBUG oslo_concurrency.lockutils [req-b6b7fd42-7ec5-4c2a-b0bc-ac2254aa38b9 req-ba833d40-3077-4e92-a7a0-8c02770fe94b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 31 03:29:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 375 KiB/s rd, 2.3 MiB/s wr, 86 op/s
Jan 31 03:29:14 np0005603621 nova_compute[247399]: 2026-01-31 08:29:14.582 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:14.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:14.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.310 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "038896ea-1b16-4301-8907-31daac46f76a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.310 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "038896ea-1b16-4301-8907-31daac46f76a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.311 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "038896ea-1b16-4301-8907-31daac46f76a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.311 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "038896ea-1b16-4301-8907-31daac46f76a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.311 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "038896ea-1b16-4301-8907-31daac46f76a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.313 247403 INFO nova.compute.manager [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Terminating instance#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.314 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "refresh_cache-038896ea-1b16-4301-8907-31daac46f76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.314 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquired lock "refresh_cache-038896ea-1b16-4301-8907-31daac46f76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.314 247403 DEBUG nova.network.neutron [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:29:15 np0005603621 nova_compute[247399]: 2026-01-31 08:29:15.876 247403 DEBUG nova.network.neutron [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:29:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 130 op/s
Jan 31 03:29:16 np0005603621 nova_compute[247399]: 2026-01-31 08:29:16.365 247403 DEBUG nova.network.neutron [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:16.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:16 np0005603621 nova_compute[247399]: 2026-01-31 08:29:16.945 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Releasing lock "refresh_cache-038896ea-1b16-4301-8907-31daac46f76a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:16 np0005603621 nova_compute[247399]: 2026-01-31 08:29:16.946 247403 DEBUG nova.compute.manager [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:29:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:16.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:17 np0005603621 nova_compute[247399]: 2026-01-31 08:29:17.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:17 np0005603621 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 31 03:29:17 np0005603621 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000007d.scope: Consumed 6.800s CPU time.
Jan 31 03:29:17 np0005603621 systemd-machined[212769]: Machine qemu-58-instance-0000007d terminated.
Jan 31 03:29:17 np0005603621 nova_compute[247399]: 2026-01-31 08:29:17.958 247403 INFO nova.virt.libvirt.driver [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance destroyed successfully.#033[00m
Jan 31 03:29:17 np0005603621 nova_compute[247399]: 2026-01-31 08:29:17.959 247403 DEBUG nova.objects.instance [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'resources' on Instance uuid 038896ea-1b16-4301-8907-31daac46f76a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Jan 31 03:29:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:19 np0005603621 nova_compute[247399]: 2026-01-31 08:29:19.583 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 563 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 102 op/s
Jan 31 03:29:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:29:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:20.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:29:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 506 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 14 KiB/s wr, 122 op/s
Jan 31 03:29:22 np0005603621 nova_compute[247399]: 2026-01-31 08:29:22.243 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:22 np0005603621 nova_compute[247399]: 2026-01-31 08:29:22.766 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:22 np0005603621 nova_compute[247399]: 2026-01-31 08:29:22.766 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:22 np0005603621 nova_compute[247399]: 2026-01-31 08:29:22.766 247403 DEBUG nova.compute.manager [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:29:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:22.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.067 247403 DEBUG neutronclient.v2_0.client [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 109b6929-6b88-494a-b397-b36c434ed7a7 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.068 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.068 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.069 247403 DEBUG nova.network.neutron [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.069 247403 DEBUG nova.objects.instance [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'info_cache' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 475 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 126 op/s
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.622 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.898 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:24.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:24 np0005603621 nova_compute[247399]: 2026-01-31 08:29:24.951 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 115 op/s
Jan 31 03:29:26 np0005603621 podman[331769]: 2026-01-31 08:29:26.252262598 +0000 UTC m=+0.338166995 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:26 np0005603621 podman[331790]: 2026-01-31 08:29:26.478016351 +0000 UTC m=+0.138191615 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:29:26 np0005603621 podman[331769]: 2026-01-31 08:29:26.576123001 +0000 UTC m=+0.662027378 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:29:26 np0005603621 nova_compute[247399]: 2026-01-31 08:29:26.691 247403 DEBUG nova.network.neutron [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Updating instance_info_cache with network_info: [{"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:26.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:27 np0005603621 nova_compute[247399]: 2026-01-31 08:29:27.164 247403 INFO nova.virt.libvirt.driver [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deleting instance files /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a_del#033[00m
Jan 31 03:29:27 np0005603621 nova_compute[247399]: 2026-01-31 08:29:27.164 247403 INFO nova.virt.libvirt.driver [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deletion of /var/lib/nova/instances/038896ea-1b16-4301-8907-31daac46f76a_del complete#033[00m
Jan 31 03:29:27 np0005603621 nova_compute[247399]: 2026-01-31 08:29:27.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:27 np0005603621 podman[331924]: 2026-01-31 08:29:27.406106299 +0000 UTC m=+0.153612591 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:29:27 np0005603621 podman[331945]: 2026-01-31 08:29:27.551925683 +0000 UTC m=+0.134479998 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:29:27 np0005603621 podman[331924]: 2026-01-31 08:29:27.671968414 +0000 UTC m=+0.419474706 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:29:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 57 op/s
Jan 31 03:29:28 np0005603621 podman[331988]: 2026-01-31 08:29:28.559507045 +0000 UTC m=+0.364356560 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20)
Jan 31 03:29:28 np0005603621 podman[332008]: 2026-01-31 08:29:28.648064285 +0000 UTC m=+0.072635479 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, release=1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 03:29:28 np0005603621 podman[331988]: 2026-01-31 08:29:28.763545013 +0000 UTC m=+0.568394498 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, release=1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20)
Jan 31 03:29:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:29:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:29:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:28.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:28.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.492 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-cca881fe-18fa-40c1-b9ef-2b1f28855b53" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.492 247403 DEBUG nova.objects.instance [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'migration_context' on Instance uuid cca881fe-18fa-40c1-b9ef-2b1f28855b53 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cc12307e-76bc-4a8a-9960-86cf9845d093 does not exist
Jan 31 03:29:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d5bd4c1f-7720-46cf-817a-778b00b65ce8 does not exist
Jan 31 03:29:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ab41f795-c6c8-4ade-9d83-3bf08253773d does not exist
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.803 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.806 247403 DEBUG nova.storage.rbd_utils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] removing snapshot(nova-resize) on rbd image(cca881fe-18fa-40c1-b9ef-2b1f28855b53_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.810 247403 INFO nova.compute.manager [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Took 12.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.811 247403 DEBUG oslo.service.loopingcall [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.812 247403 DEBUG nova.compute.manager [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.812 247403 DEBUG nova.network.neutron [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:29:29 np0005603621 nova_compute[247399]: 2026-01-31 08:29:29.997 247403 DEBUG nova.network.neutron [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.038 247403 DEBUG nova.network.neutron [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.069828966 +0000 UTC m=+0.041721646 container create 25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.086 247403 INFO nova.compute.manager [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Took 0.27 seconds to deallocate network for instance.#033[00m
Jan 31 03:29:30 np0005603621 systemd[1]: Started libpod-conmon-25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995.scope.
Jan 31 03:29:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.051852489 +0000 UTC m=+0.023745189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.159529231 +0000 UTC m=+0.131421931 container init 25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.166323135 +0000 UTC m=+0.138215815 container start 25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:29:30 np0005603621 systemd[1]: libpod-25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995.scope: Deactivated successfully.
Jan 31 03:29:30 np0005603621 sharp_banzai[332395]: 167 167
Jan 31 03:29:30 np0005603621 conmon[332395]: conmon 25adfcb00c11397623d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995.scope/container/memory.events
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.174253205 +0000 UTC m=+0.146145905 container attach 25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.175492174 +0000 UTC m=+0.147384854 container died 25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 03:29:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 56 op/s
Jan 31 03:29:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8a784520017bff4148f12f7cc33aa9dfa159cd7d251ec050b2e1298cae518813-merged.mount: Deactivated successfully.
Jan 31 03:29:30 np0005603621 podman[332378]: 2026-01-31 08:29:30.244984043 +0000 UTC m=+0.216876723 container remove 25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:29:30 np0005603621 systemd[1]: libpod-conmon-25adfcb00c11397623d262bf4a12303583f43cf6fe5f596e78d1105ce5dc9995.scope: Deactivated successfully.
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.256 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.256 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:30 np0005603621 podman[332418]: 2026-01-31 08:29:30.367828533 +0000 UTC m=+0.043276604 container create ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.412 247403 DEBUG oslo_concurrency.processutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:30 np0005603621 systemd[1]: Started libpod-conmon-ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357.scope.
Jan 31 03:29:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:29:30 np0005603621 podman[332418]: 2026-01-31 08:29:30.348104012 +0000 UTC m=+0.023552103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:29:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23ff671eb2bcb02ff9690b4caa69cf10550eb1db355c814d8f31fbd5dbae240/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23ff671eb2bcb02ff9690b4caa69cf10550eb1db355c814d8f31fbd5dbae240/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23ff671eb2bcb02ff9690b4caa69cf10550eb1db355c814d8f31fbd5dbae240/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23ff671eb2bcb02ff9690b4caa69cf10550eb1db355c814d8f31fbd5dbae240/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e23ff671eb2bcb02ff9690b4caa69cf10550eb1db355c814d8f31fbd5dbae240/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:30 np0005603621 podman[332418]: 2026-01-31 08:29:30.460050358 +0000 UTC m=+0.135498449 container init ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 03:29:30 np0005603621 podman[332418]: 2026-01-31 08:29:30.465568172 +0000 UTC m=+0.141016243 container start ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:29:30 np0005603621 podman[332418]: 2026-01-31 08:29:30.469899369 +0000 UTC m=+0.145347470 container attach ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 03:29:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:29:30.511 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:29:30.511 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:29:30.512 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1410494216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.839 247403 DEBUG oslo_concurrency.processutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.844 247403 DEBUG nova.compute.provider_tree [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.880 247403 DEBUG nova.scheduler.client.report [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:29:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:29:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:30.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:29:30 np0005603621 nova_compute[247399]: 2026-01-31 08:29:30.937 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 31 03:29:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.017 247403 INFO nova.scheduler.client.report [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Deleted allocations for instance 038896ea-1b16-4301-8907-31daac46f76a#033[00m
Jan 31 03:29:31 np0005603621 vigorous_gagarin[332434]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:29:31 np0005603621 vigorous_gagarin[332434]: --> relative data size: 1.0
Jan 31 03:29:31 np0005603621 vigorous_gagarin[332434]: --> All data devices are unavailable
Jan 31 03:29:31 np0005603621 systemd[1]: libpod-ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357.scope: Deactivated successfully.
Jan 31 03:29:31 np0005603621 podman[332418]: 2026-01-31 08:29:31.297192231 +0000 UTC m=+0.972640302 container died ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.461 247403 DEBUG oslo_concurrency.lockutils [None req-a6f37eba-74c9-47ee-9fcc-59b5539fd2cb 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "038896ea-1b16-4301-8907-31daac46f76a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 16.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:31 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 31 03:29:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e23ff671eb2bcb02ff9690b4caa69cf10550eb1db355c814d8f31fbd5dbae240-merged.mount: Deactivated successfully.
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.825 247403 DEBUG nova.virt.libvirt.vif [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:25:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-899650284',display_name='tempest-ServerActionsTestOtherB-server-899650284',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-899650284',id=121,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:29:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-tuc10ywh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:29:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=cca881fe-18fa-40c1-b9ef-2b1f28855b53,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.826 247403 DEBUG nova.network.os_vif_util [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "109b6929-6b88-494a-b397-b36c434ed7a7", "address": "fa:16:3e:06:b8:22", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap109b6929-6b", "ovs_interfaceid": "109b6929-6b88-494a-b397-b36c434ed7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.826 247403 DEBUG nova.network.os_vif_util [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.827 247403 DEBUG os_vif [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.830 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.830 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap109b6929-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.831 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.832 247403 INFO os_vif [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b8:22,bridge_name='br-int',has_traffic_filtering=True,id=109b6929-6b88-494a-b397-b36c434ed7a7,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap109b6929-6b')#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.833 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:31 np0005603621 nova_compute[247399]: 2026-01-31 08:29:31.833 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.016 247403 DEBUG oslo_concurrency.processutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.5 KiB/s wr, 55 op/s
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:32 np0005603621 podman[332418]: 2026-01-31 08:29:32.332884119 +0000 UTC m=+2.008332190 container remove ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:29:32 np0005603621 systemd[1]: libpod-conmon-ee40e9d43c10148704705685c38c1489dcb56225fb2be754dc40279094916357.scope: Deactivated successfully.
Jan 31 03:29:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438069850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.462 247403 DEBUG oslo_concurrency.processutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.469 247403 DEBUG nova.compute.provider_tree [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.537 247403 DEBUG nova.scheduler.client.report [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.770 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.771 247403 DEBUG nova.compute.manager [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: cca881fe-18fa-40c1-b9ef-2b1f28855b53] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.840 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "25d125af-48f6-4cfa-974d-a2be1548182c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.840 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "25d125af-48f6-4cfa-974d-a2be1548182c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.840 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "25d125af-48f6-4cfa-974d-a2be1548182c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.840 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "25d125af-48f6-4cfa-974d-a2be1548182c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.841 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "25d125af-48f6-4cfa-974d-a2be1548182c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.842 247403 INFO nova.compute.manager [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Terminating instance
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.842 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "refresh_cache-25d125af-48f6-4cfa-974d-a2be1548182c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.843 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquired lock "refresh_cache-25d125af-48f6-4cfa-974d-a2be1548182c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.843 247403 DEBUG nova.network.neutron [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:29:32 np0005603621 podman[332647]: 2026-01-31 08:29:32.807730668 +0000 UTC m=+0.021667413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:29:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:32.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.958 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848157.9564183, 038896ea-1b16-4301-8907-31daac46f76a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:29:32 np0005603621 nova_compute[247399]: 2026-01-31 08:29:32.958 247403 INFO nova.compute.manager [-] [instance: 038896ea-1b16-4301-8907-31daac46f76a] VM Stopped (Lifecycle Event)
Jan 31 03:29:32 np0005603621 podman[332647]: 2026-01-31 08:29:32.977158956 +0000 UTC m=+0.191095681 container create 9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:29:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:33.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:33 np0005603621 systemd[1]: Started libpod-conmon-9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c.scope.
Jan 31 03:29:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:29:33 np0005603621 nova_compute[247399]: 2026-01-31 08:29:33.154 247403 DEBUG nova.compute.manager [None req-56b780c5-9b56-4a97-b91a-fb50070f13e3 - - - - - -] [instance: 038896ea-1b16-4301-8907-31daac46f76a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:29:33 np0005603621 podman[332647]: 2026-01-31 08:29:33.19344424 +0000 UTC m=+0.407380985 container init 9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:33 np0005603621 podman[332647]: 2026-01-31 08:29:33.198314173 +0000 UTC m=+0.412250898 container start 9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:33 np0005603621 stoic_chandrasekhar[332663]: 167 167
Jan 31 03:29:33 np0005603621 systemd[1]: libpod-9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c.scope: Deactivated successfully.
Jan 31 03:29:33 np0005603621 podman[332647]: 2026-01-31 08:29:33.394669099 +0000 UTC m=+0.608605824 container attach 9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:33 np0005603621 podman[332647]: 2026-01-31 08:29:33.395575588 +0000 UTC m=+0.609512353 container died 9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:29:33 np0005603621 nova_compute[247399]: 2026-01-31 08:29:33.472 247403 INFO nova.scheduler.client.report [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Deleted allocation for migration 45b6c84f-a4f0-4db4-98c8-5e319b46ded0
Jan 31 03:29:33 np0005603621 nova_compute[247399]: 2026-01-31 08:29:33.583 247403 DEBUG nova.network.neutron [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:29:33 np0005603621 nova_compute[247399]: 2026-01-31 08:29:33.690 247403 DEBUG oslo_concurrency.lockutils [None req-fe84cf8a-e9a0-452f-a876-18e6ed999d61 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "cca881fe-18fa-40c1-b9ef-2b1f28855b53" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 10.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:29:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b46f23c30cfb4fba3f5fe5a2e41f45c9accfdf7d5f839c25f935c84890f8a92a-merged.mount: Deactivated successfully.
Jan 31 03:29:34 np0005603621 podman[332647]: 2026-01-31 08:29:34.112177793 +0000 UTC m=+1.326114518 container remove 9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 03:29:34 np0005603621 systemd[1]: libpod-conmon-9d243747be08e2ae63ed21cf4b38f24acb9fac7fe1154b89dc11d14c6ea1618c.scope: Deactivated successfully.
Jan 31 03:29:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Jan 31 03:29:34 np0005603621 podman[332688]: 2026-01-31 08:29:34.291977898 +0000 UTC m=+0.097612536 container create ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shirley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:29:34 np0005603621 podman[332688]: 2026-01-31 08:29:34.216342165 +0000 UTC m=+0.021976833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:29:34 np0005603621 systemd[1]: Started libpod-conmon-ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c.scope.
Jan 31 03:29:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:29:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19efda0d8dbd7e6bfa89cb8a5d62f1290507b2a553e4b0c0bc68ec25924b226e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19efda0d8dbd7e6bfa89cb8a5d62f1290507b2a553e4b0c0bc68ec25924b226e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19efda0d8dbd7e6bfa89cb8a5d62f1290507b2a553e4b0c0bc68ec25924b226e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19efda0d8dbd7e6bfa89cb8a5d62f1290507b2a553e4b0c0bc68ec25924b226e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:34 np0005603621 podman[332688]: 2026-01-31 08:29:34.484618686 +0000 UTC m=+0.290253344 container init ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:29:34 np0005603621 podman[332688]: 2026-01-31 08:29:34.49043949 +0000 UTC m=+0.296074128 container start ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shirley, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:29:34 np0005603621 nova_compute[247399]: 2026-01-31 08:29:34.543 247403 DEBUG nova.network.neutron [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:29:34 np0005603621 podman[332688]: 2026-01-31 08:29:34.55678679 +0000 UTC m=+0.362421448 container attach ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:34 np0005603621 nova_compute[247399]: 2026-01-31 08:29:34.625 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:29:34 np0005603621 nova_compute[247399]: 2026-01-31 08:29:34.651 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Releasing lock "refresh_cache-25d125af-48f6-4cfa-974d-a2be1548182c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:29:34 np0005603621 nova_compute[247399]: 2026-01-31 08:29:34.651 247403 DEBUG nova.compute.manager [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.798661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848174798828, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 494, "num_deletes": 250, "total_data_size": 497147, "memory_usage": 507160, "flush_reason": "Manual Compaction"}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848174803813, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 400096, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51416, "largest_seqno": 51909, "table_properties": {"data_size": 397401, "index_size": 731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7410, "raw_average_key_size": 20, "raw_value_size": 391745, "raw_average_value_size": 1106, "num_data_blocks": 32, "num_entries": 354, "num_filter_entries": 354, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848152, "oldest_key_time": 1769848152, "file_creation_time": 1769848174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 5197 microseconds, and 2127 cpu microseconds.
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.803872) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 400096 bytes OK
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.803894) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.805303) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.805321) EVENT_LOG_v1 {"time_micros": 1769848174805315, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.805344) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 494255, prev total WAL file size 494255, number of live WAL files 2.
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.805791) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(390KB)], [113(13MB)]
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848174805856, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 14071495, "oldest_snapshot_seqno": -1}
Jan 31 03:29:34 np0005603621 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 31 03:29:34 np0005603621 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d0000007c.scope: Consumed 14.022s CPU time.
Jan 31 03:29:34 np0005603621 systemd-machined[212769]: Machine qemu-56-instance-0000007c terminated.
Jan 31 03:29:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:34.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7825 keys, 10299776 bytes, temperature: kUnknown
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848174985017, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10299776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10249423, "index_size": 29697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 203812, "raw_average_key_size": 26, "raw_value_size": 10112108, "raw_average_value_size": 1292, "num_data_blocks": 1160, "num_entries": 7825, "num_filter_entries": 7825, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848174, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.985267) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10299776 bytes
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.994153) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.5 rd, 57.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(60.9) write-amplify(25.7) OK, records in: 8335, records dropped: 510 output_compression: NoCompression
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.994185) EVENT_LOG_v1 {"time_micros": 1769848174994173, "job": 68, "event": "compaction_finished", "compaction_time_micros": 179251, "compaction_time_cpu_micros": 21021, "output_level": 6, "num_output_files": 1, "total_output_size": 10299776, "num_input_records": 8335, "num_output_records": 7825, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848174994358, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848174995127, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.805692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.995253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.995259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.995260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.995262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:34 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:29:34.995264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:29:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:35.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:35 np0005603621 nova_compute[247399]: 2026-01-31 08:29:35.074 247403 INFO nova.virt.libvirt.driver [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Instance destroyed successfully.#033[00m
Jan 31 03:29:35 np0005603621 nova_compute[247399]: 2026-01-31 08:29:35.074 247403 DEBUG nova.objects.instance [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lazy-loading 'resources' on Instance uuid 25d125af-48f6-4cfa-974d-a2be1548182c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:29:35 np0005603621 charming_shirley[332704]: {
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:    "0": [
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:        {
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "devices": [
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "/dev/loop3"
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            ],
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "lv_name": "ceph_lv0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "lv_size": "7511998464",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "name": "ceph_lv0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "tags": {
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.cluster_name": "ceph",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.crush_device_class": "",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.encrypted": "0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.osd_id": "0",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.type": "block",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:                "ceph.vdo": "0"
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            },
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "type": "block",
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:            "vg_name": "ceph_vg0"
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:        }
Jan 31 03:29:35 np0005603621 charming_shirley[332704]:    ]
Jan 31 03:29:35 np0005603621 charming_shirley[332704]: }
Jan 31 03:29:35 np0005603621 systemd[1]: libpod-ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c.scope: Deactivated successfully.
Jan 31 03:29:35 np0005603621 podman[332688]: 2026-01-31 08:29:35.374463619 +0000 UTC m=+1.180098257 container died ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:29:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-19efda0d8dbd7e6bfa89cb8a5d62f1290507b2a553e4b0c0bc68ec25924b226e-merged.mount: Deactivated successfully.
Jan 31 03:29:35 np0005603621 podman[332688]: 2026-01-31 08:29:35.737564868 +0000 UTC m=+1.543199506 container remove ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:29:35 np0005603621 systemd[1]: libpod-conmon-ffc8c42a69b3f855ddb0acc95cd526f46265c25411d24b1cf1c072c48297856c.scope: Deactivated successfully.
Jan 31 03:29:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.217178197 +0000 UTC m=+0.036865922 container create d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:29:36 np0005603621 systemd[1]: Started libpod-conmon-d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56.scope.
Jan 31 03:29:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.197461966 +0000 UTC m=+0.017149711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.296009731 +0000 UTC m=+0.115697476 container init d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.302218457 +0000 UTC m=+0.121906182 container start d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:29:36 np0005603621 crazy_satoshi[332905]: 167 167
Jan 31 03:29:36 np0005603621 systemd[1]: libpod-d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56.scope: Deactivated successfully.
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.306560633 +0000 UTC m=+0.126248358 container attach d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.307662218 +0000 UTC m=+0.127349973 container died d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:29:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6cfb4c36f160d137c8c94bf7d456cd8946c816f5c6836089cd4baea364a71454-merged.mount: Deactivated successfully.
Jan 31 03:29:36 np0005603621 podman[332889]: 2026-01-31 08:29:36.364911821 +0000 UTC m=+0.184599546 container remove d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:29:36 np0005603621 systemd[1]: libpod-conmon-d7953b9526f2f415ee8b9d5a77f23730597a3cca7b91eb1369df30abe74c9b56.scope: Deactivated successfully.
Jan 31 03:29:36 np0005603621 podman[332931]: 2026-01-31 08:29:36.468081591 +0000 UTC m=+0.020684482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:29:36 np0005603621 podman[332931]: 2026-01-31 08:29:36.672018446 +0000 UTC m=+0.224621307 container create 3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:29:36 np0005603621 systemd[1]: Started libpod-conmon-3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded.scope.
Jan 31 03:29:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:29:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aced5a03eb1c1fe3e21c459c970ebdfd1dd96abfa9578d2ea0b5f6eddf5820/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aced5a03eb1c1fe3e21c459c970ebdfd1dd96abfa9578d2ea0b5f6eddf5820/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aced5a03eb1c1fe3e21c459c970ebdfd1dd96abfa9578d2ea0b5f6eddf5820/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81aced5a03eb1c1fe3e21c459c970ebdfd1dd96abfa9578d2ea0b5f6eddf5820/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:29:36 np0005603621 podman[332945]: 2026-01-31 08:29:36.897662365 +0000 UTC m=+0.182414228 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:29:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:36.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:37 np0005603621 podman[332931]: 2026-01-31 08:29:37.00354652 +0000 UTC m=+0.556149411 container init 3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:29:37 np0005603621 podman[332946]: 2026-01-31 08:29:37.006199884 +0000 UTC m=+0.290881715 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:29:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:37.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:37 np0005603621 podman[332931]: 2026-01-31 08:29:37.011562473 +0000 UTC m=+0.564165334 container start 3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:29:37 np0005603621 podman[332931]: 2026-01-31 08:29:37.095780386 +0000 UTC m=+0.648383267 container attach 3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_newton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.252 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.427 247403 INFO nova.virt.libvirt.driver [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Deleting instance files /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c_del#033[00m
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.428 247403 INFO nova.virt.libvirt.driver [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Deletion of /var/lib/nova/instances/25d125af-48f6-4cfa-974d-a2be1548182c_del complete#033[00m
Jan 31 03:29:37 np0005603621 nice_newton[332984]: {
Jan 31 03:29:37 np0005603621 nice_newton[332984]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:29:37 np0005603621 nice_newton[332984]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:29:37 np0005603621 nice_newton[332984]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:29:37 np0005603621 nice_newton[332984]:        "osd_id": 0,
Jan 31 03:29:37 np0005603621 nice_newton[332984]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:29:37 np0005603621 nice_newton[332984]:        "type": "bluestore"
Jan 31 03:29:37 np0005603621 nice_newton[332984]:    }
Jan 31 03:29:37 np0005603621 nice_newton[332984]: }
Jan 31 03:29:37 np0005603621 systemd[1]: libpod-3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded.scope: Deactivated successfully.
Jan 31 03:29:37 np0005603621 podman[332931]: 2026-01-31 08:29:37.859216836 +0000 UTC m=+1.411819707 container died 3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_newton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.919 247403 INFO nova.compute.manager [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Took 3.27 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.919 247403 DEBUG oslo.service.loopingcall [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.920 247403 DEBUG nova.compute.manager [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:29:37 np0005603621 nova_compute[247399]: 2026-01-31 08:29:37.920 247403 DEBUG nova.network.neutron [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:29:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-81aced5a03eb1c1fe3e21c459c970ebdfd1dd96abfa9578d2ea0b5f6eddf5820-merged.mount: Deactivated successfully.
Jan 31 03:29:38 np0005603621 podman[332931]: 2026-01-31 08:29:38.034105206 +0000 UTC m=+1.586708067 container remove 3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_newton, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:29:38 np0005603621 systemd[1]: libpod-conmon-3161e8843764d50c450187be605ec994d09542bbf61a1cea4ce178673d480ded.scope: Deactivated successfully.
Jan 31 03:29:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:29:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:29:38 np0005603621 nova_compute[247399]: 2026-01-31 08:29:38.174 247403 DEBUG nova.network.neutron [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:29:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a24d3040-2ed1-43bc-a1ab-689a7d74ea64 does not exist
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 809a8a79-9095-40f5-8bdc-58d88434f4f8 does not exist
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7d8c8b7e-869e-4dd8-b3ef-9f9c786f4030 does not exist
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 420 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 19 op/s
Jan 31 03:29:38 np0005603621 nova_compute[247399]: 2026-01-31 08:29:38.379 247403 DEBUG nova.network.neutron [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:29:38
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', 'volumes']
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:29:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:29:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:38.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:39.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:39 np0005603621 nova_compute[247399]: 2026-01-31 08:29:39.089 247403 INFO nova.compute.manager [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Took 1.17 seconds to deallocate network for instance.#033[00m
Jan 31 03:29:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:29:39 np0005603621 nova_compute[247399]: 2026-01-31 08:29:39.588 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:39 np0005603621 nova_compute[247399]: 2026-01-31 08:29:39.588 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:39 np0005603621 nova_compute[247399]: 2026-01-31 08:29:39.626 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:39 np0005603621 nova_compute[247399]: 2026-01-31 08:29:39.678 247403 DEBUG oslo_concurrency.processutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 31 03:29:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 31 03:29:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 31 03:29:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/513565115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:40 np0005603621 nova_compute[247399]: 2026-01-31 08:29:40.121 247403 DEBUG oslo_concurrency.processutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:40 np0005603621 nova_compute[247399]: 2026-01-31 08:29:40.126 247403 DEBUG nova.compute.provider_tree [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:29:40 np0005603621 nova_compute[247399]: 2026-01-31 08:29:40.179 247403 DEBUG nova.scheduler.client.report [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:29:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 420 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 117 B/s wr, 3 op/s
Jan 31 03:29:40 np0005603621 nova_compute[247399]: 2026-01-31 08:29:40.247 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:40 np0005603621 nova_compute[247399]: 2026-01-31 08:29:40.333 247403 INFO nova.scheduler.client.report [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Deleted allocations for instance 25d125af-48f6-4cfa-974d-a2be1548182c#033[00m
Jan 31 03:29:40 np0005603621 nova_compute[247399]: 2026-01-31 08:29:40.908 247403 DEBUG oslo_concurrency.lockutils [None req-585c42a4-73f3-4c9e-8ddb-3a9a638e64be 4bda95d045de4dfeaa9bb7be3ab9970b 6ebb12f413f2487db425a12bb8b17261 - - default default] Lock "25d125af-48f6-4cfa-974d-a2be1548182c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:40.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:41.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:42 np0005603621 nova_compute[247399]: 2026-01-31 08:29:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 35 op/s
Jan 31 03:29:42 np0005603621 nova_compute[247399]: 2026-01-31 08:29:42.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:43.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Jan 31 03:29:44 np0005603621 nova_compute[247399]: 2026-01-31 08:29:44.628 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:44.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:45.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 31 03:29:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:46.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:47.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:47 np0005603621 nova_compute[247399]: 2026-01-31 08:29:47.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:47 np0005603621 nova_compute[247399]: 2026-01-31 08:29:47.296 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Jan 31 03:29:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:29:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1560 writes, 7420 keys, 1558 commit groups, 1.0 writes per commit group, ingest: 10.91 MB, 0.02 MB/s#012Interval WAL: 1560 writes, 1558 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     22.9      3.06              0.16        34    0.090       0      0       0.0       0.0#012  L6      1/0    9.82 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.6     45.1     38.0      8.43              0.69        33    0.255    207K    18K       0.0       0.0#012 Sum      1/0    9.82 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.6     33.1     33.9     11.48              0.86        67    0.171    207K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     22.7     22.4      4.09              0.18        14    0.292     57K   3677       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     45.1     38.0      8.43              0.69        33    0.255    207K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     22.9      3.05              0.16        33    0.092       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.068, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.38 GB write, 0.09 MB/s write, 0.37 GB read, 0.09 MB/s read, 11.5 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 4.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 42.00 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000343 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2435,40.44 MB,13.3014%) FilterBlock(68,581.61 KB,0.186835%) IndexBlock(68,1019.83 KB,0.327607%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:29:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:48.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:49.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:49 np0005603621 nova_compute[247399]: 2026-01-31 08:29:49.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:49 np0005603621 nova_compute[247399]: 2026-01-31 08:29:49.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004343367590266146 of space, bias 1.0, pg target 1.3030102770798437 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.647132508607854 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004069646554531919 of space, bias 1.0, pg target 1.216824319805044 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:29:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:29:49 np0005603621 nova_compute[247399]: 2026-01-31 08:29:49.342 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:29:49 np0005603621 nova_compute[247399]: 2026-01-31 08:29:49.342 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:49 np0005603621 nova_compute[247399]: 2026-01-31 08:29:49.343 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:29:49 np0005603621 nova_compute[247399]: 2026-01-31 08:29:49.632 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:50 np0005603621 nova_compute[247399]: 2026-01-31 08:29:50.073 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848175.0718138, 25d125af-48f6-4cfa-974d-a2be1548182c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:29:50 np0005603621 nova_compute[247399]: 2026-01-31 08:29:50.073 247403 INFO nova.compute.manager [-] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:29:50 np0005603621 nova_compute[247399]: 2026-01-31 08:29:50.128 247403 DEBUG nova.compute.manager [None req-e3f6fb8e-ad09-4590-959a-b623b804ad1a - - - - - -] [instance: 25d125af-48f6-4cfa-974d-a2be1548182c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:29:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Jan 31 03:29:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:50.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:51.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.251 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.251 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.251 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.252 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.252 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574545810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.659 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.832 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.834 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4275MB free_disk=20.89712142944336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.834 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:29:51 np0005603621 nova_compute[247399]: 2026-01-31 08:29:51.834 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.030 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.031 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.053 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:29:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 KiB/s wr, 85 op/s
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:29:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091772568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.595 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.600 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.660 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.721 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:29:52 np0005603621 nova_compute[247399]: 2026-01-31 08:29:52.721 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:29:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:52.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:53.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:53 np0005603621 nova_compute[247399]: 2026-01-31 08:29:53.722 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:54 np0005603621 nova_compute[247399]: 2026-01-31 08:29:54.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.9 KiB/s wr, 72 op/s
Jan 31 03:29:54 np0005603621 nova_compute[247399]: 2026-01-31 08:29:54.634 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:29:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:54.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:55.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.9 KiB/s wr, 79 op/s
Jan 31 03:29:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:29:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:56.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:29:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:57.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:57 np0005603621 nova_compute[247399]: 2026-01-31 08:29:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:29:57 np0005603621 nova_compute[247399]: 2026-01-31 08:29:57.302 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 348 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.9 KiB/s wr, 79 op/s
Jan 31 03:29:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:29:58.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:29:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:29:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:29:59.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:29:59 np0005603621 nova_compute[247399]: 2026-01-31 08:29:59.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:29:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 03:30:00 np0005603621 nova_compute[247399]: 2026-01-31 08:30:00.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 348 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 7.9 KiB/s wr, 79 op/s
Jan 31 03:30:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:00.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:01.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:01 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 03:30:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.7 MiB/s wr, 152 op/s
Jan 31 03:30:02 np0005603621 nova_compute[247399]: 2026-01-31 08:30:02.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:02.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 320 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.4 MiB/s wr, 99 op/s
Jan 31 03:30:04 np0005603621 nova_compute[247399]: 2026-01-31 08:30:04.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:05.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:05 np0005603621 nova_compute[247399]: 2026-01-31 08:30:05.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:05.780 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:05.781 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:30:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 106 op/s
Jan 31 03:30:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:07.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:07 np0005603621 nova_compute[247399]: 2026-01-31 08:30:07.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:07 np0005603621 podman[333213]: 2026-01-31 08:30:07.498840255 +0000 UTC m=+0.055277232 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:30:07 np0005603621 podman[333214]: 2026-01-31 08:30:07.524482293 +0000 UTC m=+0.080908110 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:09.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.162 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.163 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.194 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.295 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.296 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.304 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.304 247403 INFO nova.compute.claims [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.528 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.639 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:09.783 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2152842667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.943 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:09 np0005603621 nova_compute[247399]: 2026-01-31 08:30:09.949 247403 DEBUG nova.compute.provider_tree [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.188 247403 DEBUG nova.scheduler.client.report [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:30:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 99 op/s
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.269 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.270 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.510 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.510 247403 DEBUG nova.network.neutron [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.677 247403 INFO nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:30:10 np0005603621 nova_compute[247399]: 2026-01-31 08:30:10.792 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:30:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:10.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.157 247403 DEBUG nova.policy [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba81d91ee3fa41f0b2aec26a89112489', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3e158f8d290b47cbb39903484e3df783', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.374 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.375 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.375 247403 INFO nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Creating image(s)#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.395 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.417 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.438 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.441 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.513 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.514 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.514 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.515 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.536 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:11 np0005603621 nova_compute[247399]: 2026-01-31 08:30:11.539 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e9770be2-d264-4fa4-a72b-53341db043cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 106 op/s
Jan 31 03:30:12 np0005603621 nova_compute[247399]: 2026-01-31 08:30:12.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:12 np0005603621 nova_compute[247399]: 2026-01-31 08:30:12.643 247403 DEBUG nova.network.neutron [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Successfully created port: 6e5e4657-aea9-460d-85e3-fd231edfbce9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:30:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 31 03:30:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:12.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:13.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 31 03:30:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 31 03:30:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 757 KiB/s rd, 1.9 MiB/s wr, 47 op/s
Jan 31 03:30:14 np0005603621 nova_compute[247399]: 2026-01-31 08:30:14.642 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:14.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:15.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:15 np0005603621 nova_compute[247399]: 2026-01-31 08:30:15.440 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e9770be2-d264-4fa4-a72b-53341db043cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.901s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:15 np0005603621 nova_compute[247399]: 2026-01-31 08:30:15.741 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] resizing rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:30:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Jan 31 03:30:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:16.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.036 247403 DEBUG nova.network.neutron [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Successfully updated port: 6e5e4657-aea9-460d-85e3-fd231edfbce9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:30:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.091 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "refresh_cache-e9770be2-d264-4fa4-a72b-53341db043cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.092 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquired lock "refresh_cache-e9770be2-d264-4fa4-a72b-53341db043cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.092 247403 DEBUG nova.network.neutron [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.172 247403 DEBUG nova.objects.instance [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lazy-loading 'migration_context' on Instance uuid e9770be2-d264-4fa4-a72b-53341db043cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.194 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.194 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Ensure instance console log exists: /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.195 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.195 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.195 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.311 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.412 247403 DEBUG nova.compute.manager [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-changed-6e5e4657-aea9-460d-85e3-fd231edfbce9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.413 247403 DEBUG nova.compute.manager [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Refreshing instance network info cache due to event network-changed-6e5e4657-aea9-460d-85e3-fd231edfbce9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.413 247403 DEBUG oslo_concurrency.lockutils [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e9770be2-d264-4fa4-a72b-53341db043cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:30:17 np0005603621 nova_compute[247399]: 2026-01-31 08:30:17.554 247403 DEBUG nova.network.neutron [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:30:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 383 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 31 03:30:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.643 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.764 247403 DEBUG nova.network.neutron [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Updating instance_info_cache with network_info: [{"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:30:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 31 03:30:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 31 03:30:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.916 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Releasing lock "refresh_cache-e9770be2-d264-4fa4-a72b-53341db043cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.916 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Instance network_info: |[{"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.917 247403 DEBUG oslo_concurrency.lockutils [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e9770be2-d264-4fa4-a72b-53341db043cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.917 247403 DEBUG nova.network.neutron [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Refreshing network info cache for port 6e5e4657-aea9-460d-85e3-fd231edfbce9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.920 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Start _get_guest_xml network_info=[{"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.924 247403 WARNING nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.930 247403 DEBUG nova.virt.libvirt.host [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.931 247403 DEBUG nova.virt.libvirt.host [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.941 247403 DEBUG nova.virt.libvirt.host [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.942 247403 DEBUG nova.virt.libvirt.host [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.944 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.944 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.944 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.945 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.945 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.945 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.945 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.946 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.946 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.946 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.946 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.947 247403 DEBUG nova.virt.hardware [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:30:19 np0005603621 nova_compute[247399]: 2026-01-31 08:30:19.950 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 383 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 154 op/s
Jan 31 03:30:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:30:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249005224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.392 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.425 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.429 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:30:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601515333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.939 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.941 247403 DEBUG nova.virt.libvirt.vif [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:30:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-193408205',display_name='tempest-ServerAddressesNegativeTestJSON-server-193408205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-193408205',id=127,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e158f8d290b47cbb39903484e3df783',ramdisk_id='',reservation_id='r-i41ih04r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-804677957',owner_user_name='tempest-ServerAddressesNegativeTestJSON-804677957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:30:11Z,user_data=None,user_id='ba81d91ee3fa41f0b2aec26a89112489',uuid=e9770be2-d264-4fa4-a72b-53341db043cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.941 247403 DEBUG nova.network.os_vif_util [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Converting VIF {"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.942 247403 DEBUG nova.network.os_vif_util [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:30:20 np0005603621 nova_compute[247399]: 2026-01-31 08:30:20.943 247403 DEBUG nova.objects.instance [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9770be2-d264-4fa4-a72b-53341db043cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:20.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.046 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <uuid>e9770be2-d264-4fa4-a72b-53341db043cd</uuid>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <name>instance-0000007f</name>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-193408205</nova:name>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:30:19</nova:creationTime>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:user uuid="ba81d91ee3fa41f0b2aec26a89112489">tempest-ServerAddressesNegativeTestJSON-804677957-project-member</nova:user>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:project uuid="3e158f8d290b47cbb39903484e3df783">tempest-ServerAddressesNegativeTestJSON-804677957</nova:project>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <nova:port uuid="6e5e4657-aea9-460d-85e3-fd231edfbce9">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <entry name="serial">e9770be2-d264-4fa4-a72b-53341db043cd</entry>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <entry name="uuid">e9770be2-d264-4fa4-a72b-53341db043cd</entry>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e9770be2-d264-4fa4-a72b-53341db043cd_disk">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e9770be2-d264-4fa4-a72b-53341db043cd_disk.config">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:20:28:fb"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <target dev="tap6e5e4657-ae"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/console.log" append="off"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:30:21 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:30:21 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:30:21 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:30:21 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.047 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Preparing to wait for external event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.047 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.047 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.047 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.048 247403 DEBUG nova.virt.libvirt.vif [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:30:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-193408205',display_name='tempest-ServerAddressesNegativeTestJSON-server-193408205',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-193408205',id=127,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3e158f8d290b47cbb39903484e3df783',ramdisk_id='',reservation_id='r-i41ih04r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-804677957',owner_user_name='tempest-ServerAddressesNegativeTestJSON-804677957-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:30:11Z,user_data=None,user_id='ba81d91ee3fa41f0b2aec26a89112489',uuid=e9770be2-d264-4fa4-a72b-53341db043cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.048 247403 DEBUG nova.network.os_vif_util [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Converting VIF {"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.049 247403 DEBUG nova.network.os_vif_util [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.049 247403 DEBUG os_vif [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.050 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.050 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.050 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.053 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.053 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e5e4657-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.054 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e5e4657-ae, col_values=(('external_ids', {'iface-id': '6e5e4657-aea9-460d-85e3-fd231edfbce9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:28:fb', 'vm-uuid': 'e9770be2-d264-4fa4-a72b-53341db043cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:21 np0005603621 NetworkManager[49013]: <info>  [1769848221.0565] manager: (tap6e5e4657-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 31 03:30:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.058 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.062 247403 INFO os_vif [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae')#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.258 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.258 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.258 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] No VIF found with MAC fa:16:3e:20:28:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.259 247403 INFO nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Using config drive#033[00m
Jan 31 03:30:21 np0005603621 nova_compute[247399]: 2026-01-31 08:30:21.288 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Jan 31 03:30:22 np0005603621 nova_compute[247399]: 2026-01-31 08:30:22.639 247403 INFO nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Creating config drive at /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/disk.config#033[00m
Jan 31 03:30:22 np0005603621 nova_compute[247399]: 2026-01-31 08:30:22.643 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsh6qa4ka execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:22.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:22 np0005603621 nova_compute[247399]: 2026-01-31 08:30:22.994 247403 DEBUG nova.network.neutron [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Updated VIF entry in instance network info cache for port 6e5e4657-aea9-460d-85e3-fd231edfbce9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:30:22 np0005603621 nova_compute[247399]: 2026-01-31 08:30:22.995 247403 DEBUG nova.network.neutron [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Updating instance_info_cache with network_info: [{"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:30:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:23.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.161 247403 DEBUG oslo_concurrency.lockutils [req-9b4043a1-8b70-419b-bfd0-5bc02c634674 req-8f290e56-70e7-44d2-83ce-70fa2dccfc71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e9770be2-d264-4fa4-a72b-53341db043cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.376 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsh6qa4ka" returned: 0 in 0.733s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.407 247403 DEBUG nova.storage.rbd_utils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] rbd image e9770be2-d264-4fa4-a72b-53341db043cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.412 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/disk.config e9770be2-d264-4fa4-a72b-53341db043cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.680 247403 DEBUG oslo_concurrency.processutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/disk.config e9770be2-d264-4fa4-a72b-53341db043cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.681 247403 INFO nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Deleting local config drive /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd/disk.config because it was imported into RBD.#033[00m
Jan 31 03:30:23 np0005603621 kernel: tap6e5e4657-ae: entered promiscuous mode
Jan 31 03:30:23 np0005603621 NetworkManager[49013]: <info>  [1769848223.7174] manager: (tap6e5e4657-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Jan 31 03:30:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:23Z|00482|binding|INFO|Claiming lport 6e5e4657-aea9-460d-85e3-fd231edfbce9 for this chassis.
Jan 31 03:30:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:23Z|00483|binding|INFO|6e5e4657-aea9-460d-85e3-fd231edfbce9: Claiming fa:16:3e:20:28:fb 10.100.0.12
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.717 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.720 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:23 np0005603621 systemd-udevd[333634]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.743 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.745 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:28:fb 10.100.0.12'], port_security=['fa:16:3e:20:28:fb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e9770be2-d264-4fa4-a72b-53341db043cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e158f8d290b47cbb39903484e3df783', 'neutron:revision_number': '2', 'neutron:security_group_ids': '996ac298-41a5-4316-a202-3d7208a84ab8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d96d24d-3042-43ca-bddb-bc9d3b19ab74, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6e5e4657-aea9-460d-85e3-fd231edfbce9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.747 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6e5e4657-aea9-460d-85e3-fd231edfbce9 in datapath b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 bound to our chassis#033[00m
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.747 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.749 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b41aa345-cf8e-4ffd-8915-2ff219f3c7f6#033[00m
Jan 31 03:30:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:23Z|00484|binding|INFO|Setting lport 6e5e4657-aea9-460d-85e3-fd231edfbce9 ovn-installed in OVS
Jan 31 03:30:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:23Z|00485|binding|INFO|Setting lport 6e5e4657-aea9-460d-85e3-fd231edfbce9 up in Southbound
Jan 31 03:30:23 np0005603621 nova_compute[247399]: 2026-01-31 08:30:23.751 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:23 np0005603621 NetworkManager[49013]: <info>  [1769848223.7532] device (tap6e5e4657-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:30:23 np0005603621 NetworkManager[49013]: <info>  [1769848223.7543] device (tap6e5e4657-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.757 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80f7b052-5844-4097-8837-ef96cb53c2a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.758 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb41aa345-c1 in ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.760 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb41aa345-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.760 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7391c3d2-0850-4f14-8274-174b3b2b5432]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.760 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[97359c63-96b0-41f5-a236-d7d2e60b2b78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.770 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[2a134643-e67d-4dc1-abaf-c03534b513d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.780 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6dc466-0fc5-4247-aa3f-20f9cce4ddba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 systemd-machined[212769]: New machine qemu-59-instance-0000007f.
Jan 31 03:30:23 np0005603621 systemd[1]: Started Virtual Machine qemu-59-instance-0000007f.
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.803 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3b5f90-7e04-41ef-87cc-bebcc80f9731]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 NetworkManager[49013]: <info>  [1769848223.8089] manager: (tapb41aa345-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.808 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb78f5a-6d8c-4eda-b64b-ed9ba52523aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.835 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7abcceb2-34af-4892-a851-49f60c2da784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.838 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[81fe750d-9748-471b-bed0-208884d17372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 NetworkManager[49013]: <info>  [1769848223.8602] device (tapb41aa345-c0): carrier: link connected
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.865 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[aab2ca5e-31f7-4330-b037-054fe7b5a16f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.876 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8f592a-ab6b-4477-ba6f-b93b10ce8d51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb41aa345-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:bd:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752937, 'reachable_time': 36763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333670, 'error': None, 'target': 'ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.887 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3b67055c-2aec-408e-b5fa-6b5974ec90e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:bd1d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752937, 'tstamp': 752937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333671, 'error': None, 'target': 'ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.899 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a8dbfebc-a5c2-4fc8-9dd0-1cc67bf9c83d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb41aa345-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:bd:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752937, 'reachable_time': 36763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333672, 'error': None, 'target': 'ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.914 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa75858-e201-400a-8b71-edf5f83e99f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.955 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5e20e395-7951-4a6e-ac64-cee05d7c9158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.956 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb41aa345-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.957 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:30:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:23.957 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb41aa345-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:24 np0005603621 kernel: tapb41aa345-c0: entered promiscuous mode
Jan 31 03:30:24 np0005603621 NetworkManager[49013]: <info>  [1769848224.0035] manager: (tapb41aa345-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:24.005 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb41aa345-c0, col_values=(('external_ids', {'iface-id': 'b8052fe5-da98-495e-929b-174fe92704d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:24Z|00486|binding|INFO|Releasing lport b8052fe5-da98-495e-929b-174fe92704d0 from this chassis (sb_readonly=0)
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.011 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:24.012 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b41aa345-cf8e-4ffd-8915-2ff219f3c7f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b41aa345-cf8e-4ffd-8915-2ff219f3c7f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:24.013 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c00016bc-e804-4d8a-bf08-b79d568b8ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:24.014 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/b41aa345-cf8e-4ffd-8915-2ff219f3c7f6.pid.haproxy
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID b41aa345-cf8e-4ffd-8915-2ff219f3c7f6
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:30:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:24.014 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'env', 'PROCESS_TAG=haproxy-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b41aa345-cf8e-4ffd-8915-2ff219f3c7f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:30:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 332 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 128 op/s
Jan 31 03:30:24 np0005603621 podman[333706]: 2026-01-31 08:30:24.286439082 +0000 UTC m=+0.017755120 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.425 247403 DEBUG nova.compute.manager [req-7c7bd889-b1c9-4789-99f9-8022952547eb req-0e6e7a73-93c5-443f-8052-2ee1496177ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.425 247403 DEBUG oslo_concurrency.lockutils [req-7c7bd889-b1c9-4789-99f9-8022952547eb req-0e6e7a73-93c5-443f-8052-2ee1496177ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.426 247403 DEBUG oslo_concurrency.lockutils [req-7c7bd889-b1c9-4789-99f9-8022952547eb req-0e6e7a73-93c5-443f-8052-2ee1496177ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.426 247403 DEBUG oslo_concurrency.lockutils [req-7c7bd889-b1c9-4789-99f9-8022952547eb req-0e6e7a73-93c5-443f-8052-2ee1496177ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.427 247403 DEBUG nova.compute.manager [req-7c7bd889-b1c9-4789-99f9-8022952547eb req-0e6e7a73-93c5-443f-8052-2ee1496177ff fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Processing event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:30:24 np0005603621 podman[333706]: 2026-01-31 08:30:24.500064832 +0000 UTC m=+0.231380860 container create 89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:30:24 np0005603621 systemd[1]: Started libpod-conmon-89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707.scope.
Jan 31 03:30:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e29795d5dbe3795486dfb682c2a3ea3d5b21e8162bc35a0747effdf2c3e2ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.644 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:24 np0005603621 podman[333706]: 2026-01-31 08:30:24.702549281 +0000 UTC m=+0.433865319 container init 89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:30:24 np0005603621 podman[333706]: 2026-01-31 08:30:24.707644161 +0000 UTC m=+0.438960179 container start 89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:30:24 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [NOTICE]   (333743) : New worker (333745) forked
Jan 31 03:30:24 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [NOTICE]   (333743) : Loading success.
Jan 31 03:30:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.920 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.921 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848224.9192688, e9770be2-d264-4fa4-a72b-53341db043cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.921 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] VM Started (Lifecycle Event)#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.925 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.928 247403 INFO nova.virt.libvirt.driver [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Instance spawned successfully.#033[00m
Jan 31 03:30:24 np0005603621 nova_compute[247399]: 2026-01-31 08:30:24.928 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:30:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:24.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.004 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.007 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:30:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:25.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.121 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.122 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.122 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.123 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.123 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.124 247403 DEBUG nova.virt.libvirt.driver [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.389 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.389 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848224.9195824, e9770be2-d264-4fa4-a72b-53341db043cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.389 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.569 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.572 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848224.9253554, e9770be2-d264-4fa4-a72b-53341db043cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.573 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.614 247403 INFO nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Took 14.24 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.615 247403 DEBUG nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.735 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.739 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:30:25 np0005603621 nova_compute[247399]: 2026-01-31 08:30:25.883 247403 INFO nova.compute.manager [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Took 16.63 seconds to build instance.#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.111 247403 DEBUG oslo_concurrency.lockutils [None req-24739eca-4450-4567-9f00-484512bf9970 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 118 op/s
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.646 247403 DEBUG nova.compute.manager [req-6550c881-fc52-4d85-88d7-79526f0f57c6 req-5f77161a-d5d6-486a-a33e-e12736a83ca6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.646 247403 DEBUG oslo_concurrency.lockutils [req-6550c881-fc52-4d85-88d7-79526f0f57c6 req-5f77161a-d5d6-486a-a33e-e12736a83ca6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.646 247403 DEBUG oslo_concurrency.lockutils [req-6550c881-fc52-4d85-88d7-79526f0f57c6 req-5f77161a-d5d6-486a-a33e-e12736a83ca6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.647 247403 DEBUG oslo_concurrency.lockutils [req-6550c881-fc52-4d85-88d7-79526f0f57c6 req-5f77161a-d5d6-486a-a33e-e12736a83ca6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.647 247403 DEBUG nova.compute.manager [req-6550c881-fc52-4d85-88d7-79526f0f57c6 req-5f77161a-d5d6-486a-a33e-e12736a83ca6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] No waiting events found dispatching network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:26 np0005603621 nova_compute[247399]: 2026-01-31 08:30:26.647 247403 WARNING nova.compute.manager [req-6550c881-fc52-4d85-88d7-79526f0f57c6 req-5f77161a-d5d6-486a-a33e-e12736a83ca6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received unexpected event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:30:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Jan 31 03:30:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:28.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.480 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.481 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.482 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.482 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.482 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.483 247403 INFO nova.compute.manager [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Terminating instance#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.484 247403 DEBUG nova.compute.manager [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:30:29 np0005603621 kernel: tap6e5e4657-ae (unregistering): left promiscuous mode
Jan 31 03:30:29 np0005603621 NetworkManager[49013]: <info>  [1769848229.5332] device (tap6e5e4657-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:30:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:29Z|00487|binding|INFO|Releasing lport 6e5e4657-aea9-460d-85e3-fd231edfbce9 from this chassis (sb_readonly=0)
Jan 31 03:30:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:29Z|00488|binding|INFO|Setting lport 6e5e4657-aea9-460d-85e3-fd231edfbce9 down in Southbound
Jan 31 03:30:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:30:29Z|00489|binding|INFO|Removing iface tap6e5e4657-ae ovn-installed in OVS
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.543 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.545 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.552 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:29 np0005603621 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 31 03:30:29 np0005603621 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d0000007f.scope: Consumed 5.679s CPU time.
Jan 31 03:30:29 np0005603621 systemd-machined[212769]: Machine qemu-59-instance-0000007f terminated.
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.646 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.731 247403 INFO nova.virt.libvirt.driver [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Instance destroyed successfully.#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.732 247403 DEBUG nova.objects.instance [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lazy-loading 'resources' on Instance uuid e9770be2-d264-4fa4-a72b-53341db043cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:30:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:29.800 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:28:fb 10.100.0.12'], port_security=['fa:16:3e:20:28:fb 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e9770be2-d264-4fa4-a72b-53341db043cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3e158f8d290b47cbb39903484e3df783', 'neutron:revision_number': '4', 'neutron:security_group_ids': '996ac298-41a5-4316-a202-3d7208a84ab8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2d96d24d-3042-43ca-bddb-bc9d3b19ab74, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6e5e4657-aea9-460d-85e3-fd231edfbce9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:30:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:29.801 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6e5e4657-aea9-460d-85e3-fd231edfbce9 in datapath b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 unbound from our chassis#033[00m
Jan 31 03:30:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:29.803 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:30:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:29.804 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6651cdb1-de29-4402-bc3b-02232af2dfa4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:29.805 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 namespace which is not needed anymore#033[00m
Jan 31 03:30:29 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [NOTICE]   (333743) : haproxy version is 2.8.14-c23fe91
Jan 31 03:30:29 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [NOTICE]   (333743) : path to executable is /usr/sbin/haproxy
Jan 31 03:30:29 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [WARNING]  (333743) : Exiting Master process...
Jan 31 03:30:29 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [WARNING]  (333743) : Exiting Master process...
Jan 31 03:30:29 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [ALERT]    (333743) : Current worker (333745) exited with code 143 (Terminated)
Jan 31 03:30:29 np0005603621 neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6[333721]: [WARNING]  (333743) : All workers exited. Exiting... (0)
Jan 31 03:30:29 np0005603621 systemd[1]: libpod-89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707.scope: Deactivated successfully.
Jan 31 03:30:29 np0005603621 podman[333866]: 2026-01-31 08:30:29.950267013 +0000 UTC m=+0.073076773 container died 89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.965 247403 DEBUG nova.virt.libvirt.vif [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:30:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-193408205',display_name='tempest-ServerAddressesNegativeTestJSON-server-193408205',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-193408205',id=127,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:30:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3e158f8d290b47cbb39903484e3df783',ramdisk_id='',reservation_id='r-i41ih04r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_
hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-804677957',owner_user_name='tempest-ServerAddressesNegativeTestJSON-804677957-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:30:25Z,user_data=None,user_id='ba81d91ee3fa41f0b2aec26a89112489',uuid=e9770be2-d264-4fa4-a72b-53341db043cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.965 247403 DEBUG nova.network.os_vif_util [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Converting VIF {"id": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "address": "fa:16:3e:20:28:fb", "network": {"id": "b41aa345-cf8e-4ffd-8915-2ff219f3c7f6", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-36276392-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3e158f8d290b47cbb39903484e3df783", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e5e4657-ae", "ovs_interfaceid": "6e5e4657-aea9-460d-85e3-fd231edfbce9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.966 247403 DEBUG nova.network.os_vif_util [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.966 247403 DEBUG os_vif [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.968 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e5e4657-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:29 np0005603621 nova_compute[247399]: 2026-01-31 08:30:29.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.001 247403 INFO os_vif [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:28:fb,bridge_name='br-int',has_traffic_filtering=True,id=6e5e4657-aea9-460d-85e3-fd231edfbce9,network=Network(b41aa345-cf8e-4ffd-8915-2ff219f3c7f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e5e4657-ae')#033[00m
Jan 31 03:30:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707-userdata-shm.mount: Deactivated successfully.
Jan 31 03:30:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-09e29795d5dbe3795486dfb682c2a3ea3d5b21e8162bc35a0747effdf2c3e2ab-merged.mount: Deactivated successfully.
Jan 31 03:30:30 np0005603621 podman[333866]: 2026-01-31 08:30:30.206272218 +0000 UTC m=+0.329081948 container cleanup 89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:30:30 np0005603621 systemd[1]: libpod-conmon-89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707.scope: Deactivated successfully.
Jan 31 03:30:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 31 03:30:30 np0005603621 podman[333916]: 2026-01-31 08:30:30.283342166 +0000 UTC m=+0.061110956 container remove 89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.287 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3b40017c-3fed-479d-a163-e1c1f034b3b2]: (4, ('Sat Jan 31 08:30:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 (89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707)\n89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707\nSat Jan 31 08:30:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 (89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707)\n89ea5ebf732258557621209aee0234def6fd37895b126107144207fdf2302707\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.288 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[12bab539-69f5-47f1-8e6f-5a550dfe3eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.289 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb41aa345-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.291 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:30 np0005603621 kernel: tapb41aa345-c0: left promiscuous mode
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.295 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6324fb88-6a49-41d1-b79b-49e0a36ed533]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.312 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eea49877-e7d1-465e-a0af-65875923dae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.313 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7daecb4f-51fa-4415-abf1-6f9650a35331]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.327 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d60da8-3a46-4eb5-9418-8c532a87aab6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752931, 'reachable_time': 19004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333930, 'error': None, 'target': 'ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 systemd[1]: run-netns-ovnmeta\x2db41aa345\x2dcf8e\x2d4ffd\x2d8915\x2d2ff219f3c7f6.mount: Deactivated successfully.
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.330 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b41aa345-cf8e-4ffd-8915-2ff219f3c7f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.331 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[517f2b02-225a-444e-a41b-19105814ad2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.332 247403 DEBUG nova.compute.manager [req-b9ced6d1-65e7-4465-bb9e-c7a488bb3ca6 req-d67e8a52-ab72-4353-ac8f-0db5b632ca33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-vif-unplugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.333 247403 DEBUG oslo_concurrency.lockutils [req-b9ced6d1-65e7-4465-bb9e-c7a488bb3ca6 req-d67e8a52-ab72-4353-ac8f-0db5b632ca33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.333 247403 DEBUG oslo_concurrency.lockutils [req-b9ced6d1-65e7-4465-bb9e-c7a488bb3ca6 req-d67e8a52-ab72-4353-ac8f-0db5b632ca33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.333 247403 DEBUG oslo_concurrency.lockutils [req-b9ced6d1-65e7-4465-bb9e-c7a488bb3ca6 req-d67e8a52-ab72-4353-ac8f-0db5b632ca33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.333 247403 DEBUG nova.compute.manager [req-b9ced6d1-65e7-4465-bb9e-c7a488bb3ca6 req-d67e8a52-ab72-4353-ac8f-0db5b632ca33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] No waiting events found dispatching network-vif-unplugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:30:30 np0005603621 nova_compute[247399]: 2026-01-31 08:30:30.334 247403 DEBUG nova.compute.manager [req-b9ced6d1-65e7-4465-bb9e-c7a488bb3ca6 req-d67e8a52-ab72-4353-ac8f-0db5b632ca33 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-vif-unplugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.512 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.513 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:30:30.513 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:30.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:31.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:31 np0005603621 nova_compute[247399]: 2026-01-31 08:30:31.533 247403 INFO nova.virt.libvirt.driver [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Deleting instance files /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd_del
Jan 31 03:30:31 np0005603621 nova_compute[247399]: 2026-01-31 08:30:31.533 247403 INFO nova.virt.libvirt.driver [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Deletion of /var/lib/nova/instances/e9770be2-d264-4fa4-a72b-53341db043cd_del complete
Jan 31 03:30:31 np0005603621 nova_compute[247399]: 2026-01-31 08:30:31.800 247403 INFO nova.compute.manager [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Took 2.32 seconds to destroy the instance on the hypervisor.
Jan 31 03:30:31 np0005603621 nova_compute[247399]: 2026-01-31 08:30:31.801 247403 DEBUG oslo.service.loopingcall [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 31 03:30:31 np0005603621 nova_compute[247399]: 2026-01-31 08:30:31.801 247403 DEBUG nova.compute.manager [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 31 03:30:31 np0005603621 nova_compute[247399]: 2026-01-31 08:30:31.801 247403 DEBUG nova.network.neutron [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 31 03:30:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 168 op/s
Jan 31 03:30:32 np0005603621 nova_compute[247399]: 2026-01-31 08:30:32.584 247403 DEBUG nova.compute.manager [req-557b5d58-1bd7-45a3-8f77-de793e4088fe req-7cde5d2b-881c-42c2-943a-5dbe4364e061 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:30:32 np0005603621 nova_compute[247399]: 2026-01-31 08:30:32.585 247403 DEBUG oslo_concurrency.lockutils [req-557b5d58-1bd7-45a3-8f77-de793e4088fe req-7cde5d2b-881c-42c2-943a-5dbe4364e061 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:30:32 np0005603621 nova_compute[247399]: 2026-01-31 08:30:32.585 247403 DEBUG oslo_concurrency.lockutils [req-557b5d58-1bd7-45a3-8f77-de793e4088fe req-7cde5d2b-881c-42c2-943a-5dbe4364e061 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:30:32 np0005603621 nova_compute[247399]: 2026-01-31 08:30:32.585 247403 DEBUG oslo_concurrency.lockutils [req-557b5d58-1bd7-45a3-8f77-de793e4088fe req-7cde5d2b-881c-42c2-943a-5dbe4364e061 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:30:32 np0005603621 nova_compute[247399]: 2026-01-31 08:30:32.586 247403 DEBUG nova.compute.manager [req-557b5d58-1bd7-45a3-8f77-de793e4088fe req-7cde5d2b-881c-42c2-943a-5dbe4364e061 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] No waiting events found dispatching network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:30:32 np0005603621 nova_compute[247399]: 2026-01-31 08:30:32.586 247403 WARNING nova.compute.manager [req-557b5d58-1bd7-45a3-8f77-de793e4088fe req-7cde5d2b-881c-42c2-943a-5dbe4364e061 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received unexpected event network-vif-plugged-6e5e4657-aea9-460d-85e3-fd231edfbce9 for instance with vm_state active and task_state deleting.
Jan 31 03:30:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:32.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.321 247403 DEBUG nova.network.neutron [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.327 247403 DEBUG nova.compute.manager [req-381b7498-7fcd-4673-a7be-333eec8a2c63 req-eec0b457-3794-4f70-832b-b0c0b986414a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Received event network-vif-deleted-6e5e4657-aea9-460d-85e3-fd231edfbce9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.328 247403 INFO nova.compute.manager [req-381b7498-7fcd-4673-a7be-333eec8a2c63 req-eec0b457-3794-4f70-832b-b0c0b986414a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Neutron deleted interface 6e5e4657-aea9-460d-85e3-fd231edfbce9; detaching it from the instance and deleting it from the info cache
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.328 247403 DEBUG nova.network.neutron [req-381b7498-7fcd-4673-a7be-333eec8a2c63 req-eec0b457-3794-4f70-832b-b0c0b986414a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.346 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.445 247403 INFO nova.compute.manager [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Took 1.64 seconds to deallocate network for instance.
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.473 247403 DEBUG nova.compute.manager [req-381b7498-7fcd-4673-a7be-333eec8a2c63 req-eec0b457-3794-4f70-832b-b0c0b986414a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Detach interface failed, port_id=6e5e4657-aea9-460d-85e3-fd231edfbce9, reason: Instance e9770be2-d264-4fa4-a72b-53341db043cd could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.852 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:30:33 np0005603621 nova_compute[247399]: 2026-01-31 08:30:33.852 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:30:34 np0005603621 nova_compute[247399]: 2026-01-31 08:30:34.006 247403 DEBUG oslo_concurrency.processutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:30:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 353 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 157 op/s
Jan 31 03:30:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257459909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:34 np0005603621 nova_compute[247399]: 2026-01-31 08:30:34.433 247403 DEBUG oslo_concurrency.processutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:30:34 np0005603621 nova_compute[247399]: 2026-01-31 08:30:34.438 247403 DEBUG nova.compute.provider_tree [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:30:34 np0005603621 nova_compute[247399]: 2026-01-31 08:30:34.648 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:30:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:34 np0005603621 nova_compute[247399]: 2026-01-31 08:30:34.997 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:30:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:34.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:35 np0005603621 nova_compute[247399]: 2026-01-31 08:30:35.024 247403 DEBUG nova.scheduler.client.report [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:30:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:35 np0005603621 nova_compute[247399]: 2026-01-31 08:30:35.365 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:30:35 np0005603621 nova_compute[247399]: 2026-01-31 08:30:35.892 247403 INFO nova.scheduler.client.report [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Deleted allocations for instance e9770be2-d264-4fa4-a72b-53341db043cd
Jan 31 03:30:36 np0005603621 nova_compute[247399]: 2026-01-31 08:30:36.183 247403 DEBUG oslo_concurrency.lockutils [None req-265865ff-c8f7-43c6-9279-41b188ac2bb2 ba81d91ee3fa41f0b2aec26a89112489 3e158f8d290b47cbb39903484e3df783 - - default default] Lock "e9770be2-d264-4fa4-a72b-53341db043cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:30:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 327 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.6 MiB/s wr, 231 op/s
Jan 31 03:30:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:36.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 327 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 31 KiB/s wr, 163 op/s
Jan 31 03:30:38 np0005603621 podman[333958]: 2026-01-31 08:30:38.492680747 +0000 UTC m=+0.052060921 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:30:38 np0005603621 podman[333959]: 2026-01-31 08:30:38.545796581 +0000 UTC m=+0.100574590 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:30:38
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:30:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:30:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:39.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:30:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a28d6971-35aa-4c9d-88f6-3adfa816c794 does not exist
Jan 31 03:30:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev dc6ebcf0-739c-4a94-a8cc-eb019b933266 does not exist
Jan 31 03:30:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 25575d43-60b5-4b2a-98e4-7a9f056f3005 does not exist
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:30:39 np0005603621 nova_compute[247399]: 2026-01-31 08:30:39.650 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:30:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:39 np0005603621 nova_compute[247399]: 2026-01-31 08:30:39.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.086434096 +0000 UTC m=+0.075251916 container create d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.031539286 +0000 UTC m=+0.020357136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:30:40 np0005603621 systemd[1]: Started libpod-conmon-d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61.scope.
Jan 31 03:30:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.182356828 +0000 UTC m=+0.171174668 container init d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.189173343 +0000 UTC m=+0.177991163 container start d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:40 np0005603621 systemd[1]: libpod-d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61.scope: Deactivated successfully.
Jan 31 03:30:40 np0005603621 laughing_archimedes[334292]: 167 167
Jan 31 03:30:40 np0005603621 conmon[334292]: conmon d4a159f92a8edb9b650c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61.scope/container/memory.events
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.197026703 +0000 UTC m=+0.185844553 container attach d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.198100537 +0000 UTC m=+0.186918367 container died d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-78fc5573960056e1cce535f220b9562f80a1044609ec3d869b6eb91bc1d17412-merged.mount: Deactivated successfully.
Jan 31 03:30:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 327 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 30 KiB/s wr, 150 op/s
Jan 31 03:30:40 np0005603621 podman[334275]: 2026-01-31 08:30:40.257008494 +0000 UTC m=+0.245826314 container remove d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_archimedes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:30:40 np0005603621 systemd[1]: libpod-conmon-d4a159f92a8edb9b650c82abd4d913c4cd68b113e87b5cc051317be2db106b61.scope: Deactivated successfully.
Jan 31 03:30:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:30:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:30:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:30:40 np0005603621 podman[334315]: 2026-01-31 08:30:40.40384868 +0000 UTC m=+0.048010573 container create 332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:40 np0005603621 systemd[1]: Started libpod-conmon-332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f.scope.
Jan 31 03:30:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693689112e3b3e6cb2e4255defa0c77e3e58f17dd25609f9f33bd5380969dce3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693689112e3b3e6cb2e4255defa0c77e3e58f17dd25609f9f33bd5380969dce3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693689112e3b3e6cb2e4255defa0c77e3e58f17dd25609f9f33bd5380969dce3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693689112e3b3e6cb2e4255defa0c77e3e58f17dd25609f9f33bd5380969dce3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693689112e3b3e6cb2e4255defa0c77e3e58f17dd25609f9f33bd5380969dce3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:40 np0005603621 podman[334315]: 2026-01-31 08:30:40.37926934 +0000 UTC m=+0.023431253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:30:40 np0005603621 podman[334315]: 2026-01-31 08:30:40.475994277 +0000 UTC m=+0.120156200 container init 332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamport, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:40 np0005603621 podman[334315]: 2026-01-31 08:30:40.481037477 +0000 UTC m=+0.125199360 container start 332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamport, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:40 np0005603621 podman[334315]: 2026-01-31 08:30:40.485045735 +0000 UTC m=+0.129207648 container attach 332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamport, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:30:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:41.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:41.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:41 np0005603621 musing_lamport[334332]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:30:41 np0005603621 musing_lamport[334332]: --> relative data size: 1.0
Jan 31 03:30:41 np0005603621 musing_lamport[334332]: --> All data devices are unavailable
Jan 31 03:30:41 np0005603621 systemd[1]: libpod-332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f.scope: Deactivated successfully.
Jan 31 03:30:41 np0005603621 podman[334315]: 2026-01-31 08:30:41.368580436 +0000 UTC m=+1.012742329 container died 332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamport, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:30:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-693689112e3b3e6cb2e4255defa0c77e3e58f17dd25609f9f33bd5380969dce3-merged.mount: Deactivated successfully.
Jan 31 03:30:41 np0005603621 podman[334315]: 2026-01-31 08:30:41.545535277 +0000 UTC m=+1.189697170 container remove 332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lamport, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:30:41 np0005603621 systemd[1]: libpod-conmon-332f209d7afb6ec37e445818322f44594da4f7eb766799321904aca246f94c6f.scope: Deactivated successfully.
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.060241575 +0000 UTC m=+0.038052427 container create 2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 03:30:42 np0005603621 systemd[1]: Started libpod-conmon-2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed.scope.
Jan 31 03:30:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.128368435 +0000 UTC m=+0.106179307 container init 2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.13452378 +0000 UTC m=+0.112334632 container start 2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:30:42 np0005603621 keen_dhawan[334517]: 167 167
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.042188852 +0000 UTC m=+0.019999724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:30:42 np0005603621 systemd[1]: libpod-2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed.scope: Deactivated successfully.
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.146555962 +0000 UTC m=+0.124366824 container attach 2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.148928687 +0000 UTC m=+0.126739539 container died 2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:30:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-058149261a64293411da0ab1c9bf9b8a5a5f7a591a08787479b2c7459f8acd8e-merged.mount: Deactivated successfully.
Jan 31 03:30:42 np0005603621 podman[334499]: 2026-01-31 08:30:42.20325837 +0000 UTC m=+0.181069222 container remove 2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:42 np0005603621 systemd[1]: libpod-conmon-2e0862048cd89df50c18ef2e445d0e40c40a6bc00b399028466e2fa96572f0ed.scope: Deactivated successfully.
Jan 31 03:30:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 314 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 191 op/s
Jan 31 03:30:42 np0005603621 podman[334543]: 2026-01-31 08:30:42.332381803 +0000 UTC m=+0.047896489 container create 1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hoover, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 03:30:42 np0005603621 podman[334543]: 2026-01-31 08:30:42.307001119 +0000 UTC m=+0.022515825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:30:42 np0005603621 systemd[1]: Started libpod-conmon-1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e.scope.
Jan 31 03:30:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87bb8ce7d160d7c0974425599a1efc1ec25958e8f1fcb2c0ffe5b6c5db56902e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87bb8ce7d160d7c0974425599a1efc1ec25958e8f1fcb2c0ffe5b6c5db56902e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87bb8ce7d160d7c0974425599a1efc1ec25958e8f1fcb2c0ffe5b6c5db56902e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87bb8ce7d160d7c0974425599a1efc1ec25958e8f1fcb2c0ffe5b6c5db56902e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:42 np0005603621 podman[334543]: 2026-01-31 08:30:42.547493423 +0000 UTC m=+0.263008139 container init 1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:30:42 np0005603621 podman[334543]: 2026-01-31 08:30:42.55368626 +0000 UTC m=+0.269200946 container start 1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:30:42 np0005603621 podman[334543]: 2026-01-31 08:30:42.59847091 +0000 UTC m=+0.313985626 container attach 1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:30:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:43.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:43.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]: {
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:    "0": [
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:        {
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "devices": [
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "/dev/loop3"
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            ],
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "lv_name": "ceph_lv0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "lv_size": "7511998464",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "name": "ceph_lv0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "tags": {
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.cluster_name": "ceph",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.crush_device_class": "",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.encrypted": "0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.osd_id": "0",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.type": "block",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:                "ceph.vdo": "0"
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            },
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "type": "block",
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:            "vg_name": "ceph_vg0"
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:        }
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]:    ]
Jan 31 03:30:43 np0005603621 trusting_hoover[334559]: }
Jan 31 03:30:43 np0005603621 systemd[1]: libpod-1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e.scope: Deactivated successfully.
Jan 31 03:30:43 np0005603621 podman[334543]: 2026-01-31 08:30:43.32456338 +0000 UTC m=+1.040078066 container died 1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:30:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-87bb8ce7d160d7c0974425599a1efc1ec25958e8f1fcb2c0ffe5b6c5db56902e-merged.mount: Deactivated successfully.
Jan 31 03:30:43 np0005603621 podman[334543]: 2026-01-31 08:30:43.425694076 +0000 UTC m=+1.141208762 container remove 1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 03:30:43 np0005603621 systemd[1]: libpod-conmon-1221e962175425940c4becf572c1a88b6124a178765c1d9b23c1628ae802293e.scope: Deactivated successfully.
Jan 31 03:30:43 np0005603621 podman[334720]: 2026-01-31 08:30:43.938151513 +0000 UTC m=+0.047192937 container create 8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:30:43 np0005603621 systemd[1]: Started libpod-conmon-8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089.scope.
Jan 31 03:30:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:44 np0005603621 podman[334720]: 2026-01-31 08:30:43.915904928 +0000 UTC m=+0.024946372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:30:44 np0005603621 podman[334720]: 2026-01-31 08:30:44.021727423 +0000 UTC m=+0.130768847 container init 8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 03:30:44 np0005603621 podman[334720]: 2026-01-31 08:30:44.029564572 +0000 UTC m=+0.138606006 container start 8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:30:44 np0005603621 wizardly_keller[334736]: 167 167
Jan 31 03:30:44 np0005603621 systemd[1]: libpod-8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089.scope: Deactivated successfully.
Jan 31 03:30:44 np0005603621 conmon[334736]: conmon 8268c6d855dddb459455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089.scope/container/memory.events
Jan 31 03:30:44 np0005603621 podman[334720]: 2026-01-31 08:30:44.045053343 +0000 UTC m=+0.154094787 container attach 8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:30:44 np0005603621 podman[334720]: 2026-01-31 08:30:44.045523058 +0000 UTC m=+0.154564482 container died 8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:30:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0a4604c953a0c4180445856693b7cd82a6999de2e6a9a78fddf12991755bd3ea-merged.mount: Deactivated successfully.
Jan 31 03:30:44 np0005603621 podman[334720]: 2026-01-31 08:30:44.10741283 +0000 UTC m=+0.216454254 container remove 8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_keller, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:30:44 np0005603621 systemd[1]: libpod-conmon-8268c6d855dddb4594559cb843161811073aacf5681591effa3bc9e5587b3089.scope: Deactivated successfully.
Jan 31 03:30:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 306 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 31 03:30:44 np0005603621 podman[334763]: 2026-01-31 08:30:44.249893887 +0000 UTC m=+0.058545187 container create 994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:30:44 np0005603621 systemd[1]: Started libpod-conmon-994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74.scope.
Jan 31 03:30:44 np0005603621 podman[334763]: 2026-01-31 08:30:44.21752037 +0000 UTC m=+0.026171690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:30:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c8dbc70e60daf868b53f0031421bff3b8312a808121ddd2548bb40515d828d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c8dbc70e60daf868b53f0031421bff3b8312a808121ddd2548bb40515d828d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c8dbc70e60daf868b53f0031421bff3b8312a808121ddd2548bb40515d828d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c8dbc70e60daf868b53f0031421bff3b8312a808121ddd2548bb40515d828d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:30:44 np0005603621 podman[334763]: 2026-01-31 08:30:44.34745343 +0000 UTC m=+0.156104760 container init 994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:30:44 np0005603621 nova_compute[247399]: 2026-01-31 08:30:44.346 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:44 np0005603621 podman[334763]: 2026-01-31 08:30:44.352849071 +0000 UTC m=+0.161500371 container start 994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:30:44 np0005603621 podman[334763]: 2026-01-31 08:30:44.367074532 +0000 UTC m=+0.175725872 container attach 994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:30:44 np0005603621 nova_compute[247399]: 2026-01-31 08:30:44.652 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:44 np0005603621 nova_compute[247399]: 2026-01-31 08:30:44.727 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848229.7255588, e9770be2-d264-4fa4-a72b-53341db043cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:30:44 np0005603621 nova_compute[247399]: 2026-01-31 08:30:44.727 247403 INFO nova.compute.manager [-] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:30:44 np0005603621 nova_compute[247399]: 2026-01-31 08:30:44.804 247403 DEBUG nova.compute.manager [None req-7855567b-7dec-4698-a3d9-66eb95521a2c - - - - - -] [instance: e9770be2-d264-4fa4-a72b-53341db043cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:30:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:45 np0005603621 nova_compute[247399]: 2026-01-31 08:30:45.001 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:45.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:45.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]: {
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:        "osd_id": 0,
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:        "type": "bluestore"
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]:    }
Jan 31 03:30:45 np0005603621 beautiful_torvalds[334780]: }
Jan 31 03:30:45 np0005603621 systemd[1]: libpod-994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74.scope: Deactivated successfully.
Jan 31 03:30:45 np0005603621 podman[334801]: 2026-01-31 08:30:45.32428298 +0000 UTC m=+0.026939435 container died 994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:30:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1c8dbc70e60daf868b53f0031421bff3b8312a808121ddd2548bb40515d828d1-merged.mount: Deactivated successfully.
Jan 31 03:30:45 np0005603621 podman[334801]: 2026-01-31 08:30:45.420442649 +0000 UTC m=+0.123099084 container remove 994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:30:45 np0005603621 systemd[1]: libpod-conmon-994a90415c581181a25e8f1ffca6e9b9c2c06f6661cdb563abc711345cb4ee74.scope: Deactivated successfully.
Jan 31 03:30:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:30:45 np0005603621 nova_compute[247399]: 2026-01-31 08:30:45.608 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:30:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:30:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Jan 31 03:30:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:30:46 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 76091122-81b8-4659-bc1f-ed9facb37693 does not exist
Jan 31 03:30:46 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a34243a6-e83f-4f90-97e2-b7c684e72cc6 does not exist
Jan 31 03:30:46 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b260ead0-3fc9-4045-b945-3c46ece40a0f does not exist
Jan 31 03:30:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:47.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:30:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:30:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 294 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 2.5 MiB/s wr, 77 op/s
Jan 31 03:30:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:49.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:49.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:49 np0005603621 nova_compute[247399]: 2026-01-31 08:30:49.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:49 np0005603621 nova_compute[247399]: 2026-01-31 08:30:49.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:49 np0005603621 nova_compute[247399]: 2026-01-31 08:30:49.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:30:49 np0005603621 nova_compute[247399]: 2026-01-31 08:30:49.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:30:49 np0005603621 nova_compute[247399]: 2026-01-31 08:30:49.293 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004648895519262982 of space, bias 1.0, pg target 1.3946686557788945 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.647132508607854 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:30:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:30:49 np0005603621 nova_compute[247399]: 2026-01-31 08:30:49.654 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:50 np0005603621 nova_compute[247399]: 2026-01-31 08:30:50.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:50 np0005603621 nova_compute[247399]: 2026-01-31 08:30:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:50 np0005603621 nova_compute[247399]: 2026-01-31 08:30:50.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:30:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 294 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 65 KiB/s rd, 2.5 MiB/s wr, 67 op/s
Jan 31 03:30:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:30:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:51.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:30:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 316 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 233 KiB/s rd, 3.8 MiB/s wr, 93 op/s
Jan 31 03:30:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:53.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:30:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:53.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.290 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.290 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.291 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.291 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.291 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3021669682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.854 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.983 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.985 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4262MB free_disk=20.877696990966797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.985 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:30:53 np0005603621 nova_compute[247399]: 2026-01-31 08:30:53.985 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.097 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.097 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.140 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:30:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 318 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 260 KiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 31 03:30:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:30:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2463420384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.589 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.593 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.656 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.669 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.797 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:30:54 np0005603621 nova_compute[247399]: 2026-01-31 08:30:54.798 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:30:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:30:55 np0005603621 nova_compute[247399]: 2026-01-31 08:30:55.004 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:55.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:55 np0005603621 nova_compute[247399]: 2026-01-31 08:30:55.800 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:55 np0005603621 nova_compute[247399]: 2026-01-31 08:30:55.800 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:56 np0005603621 nova_compute[247399]: 2026-01-31 08:30:56.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 323 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 31 03:30:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:57.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:57.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:30:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/284393595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:30:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:30:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/284393595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:30:58 np0005603621 nova_compute[247399]: 2026-01-31 08:30:58.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:30:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Jan 31 03:30:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:30:59.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:30:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:30:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:30:59.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:30:59 np0005603621 nova_compute[247399]: 2026-01-31 08:30:59.658 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:30:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:00 np0005603621 nova_compute[247399]: 2026-01-31 08:31:00.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 133 op/s
Jan 31 03:31:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:01.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:02 np0005603621 nova_compute[247399]: 2026-01-31 08:31:02.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 148 op/s
Jan 31 03:31:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 150 KiB/s wr, 123 op/s
Jan 31 03:31:04 np0005603621 nova_compute[247399]: 2026-01-31 08:31:04.660 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:05 np0005603621 nova_compute[247399]: 2026-01-31 08:31:05.009 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 337 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 127 op/s
Jan 31 03:31:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:08 np0005603621 nova_compute[247399]: 2026-01-31 08:31:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:08 np0005603621 nova_compute[247399]: 2026-01-31 08:31:08.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 1.0 MiB/s wr, 65 op/s
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:09.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:09 np0005603621 podman[334975]: 2026-01-31 08:31:09.502043777 +0000 UTC m=+0.054319343 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:31:09 np0005603621 podman[334976]: 2026-01-31 08:31:09.528427313 +0000 UTC m=+0.080699659 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 03:31:09 np0005603621 nova_compute[247399]: 2026-01-31 08:31:09.661 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:09 np0005603621 nova_compute[247399]: 2026-01-31 08:31:09.985 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:09 np0005603621 nova_compute[247399]: 2026-01-31 08:31:09.985 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.0 MiB/s wr, 34 op/s
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.430 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:31:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:10.691 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:31:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:10.691 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.934 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.934 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.944 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:31:10 np0005603621 nova_compute[247399]: 2026-01-31 08:31:10.945 247403 INFO nova.compute.claims [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:31:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:11.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:11.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:11 np0005603621 nova_compute[247399]: 2026-01-31 08:31:11.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 355 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Jan 31 03:31:12 np0005603621 nova_compute[247399]: 2026-01-31 08:31:12.338 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:12.694 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3526591616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:12 np0005603621 nova_compute[247399]: 2026-01-31 08:31:12.765 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:12 np0005603621 nova_compute[247399]: 2026-01-31 08:31:12.769 247403 DEBUG nova.compute.provider_tree [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:13.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:13 np0005603621 nova_compute[247399]: 2026-01-31 08:31:13.086 247403 DEBUG nova.scheduler.client.report [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:13.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:13 np0005603621 nova_compute[247399]: 2026-01-31 08:31:13.329 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:31:13 np0005603621 nova_compute[247399]: 2026-01-31 08:31:13.330 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:31:13 np0005603621 nova_compute[247399]: 2026-01-31 08:31:13.786 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:31:13 np0005603621 nova_compute[247399]: 2026-01-31 08:31:13.786 247403 DEBUG nova.network.neutron [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:31:14 np0005603621 nova_compute[247399]: 2026-01-31 08:31:14.070 247403 INFO nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:31:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 256 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 03:31:14 np0005603621 nova_compute[247399]: 2026-01-31 08:31:14.328 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:31:14 np0005603621 nova_compute[247399]: 2026-01-31 08:31:14.638 247403 DEBUG nova.policy [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb3f20f0143d465ebfe98f6a13200890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '40db421b27d84f809f8074c58151327f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:31:14 np0005603621 nova_compute[247399]: 2026-01-31 08:31:14.716 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:31:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:31:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3344330242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:31:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:31:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3344330242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:31:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:15.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.058 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:31:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:15.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.544 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.546 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.546 247403 INFO nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Creating image(s)
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.568 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.593 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.618 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.623 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.679 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.680 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.681 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.681 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.708 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:31:15 np0005603621 nova_compute[247399]: 2026-01-31 08:31:15.712 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 3a47b87f-9141-4131-b272-3ff82f226681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.079 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 3a47b87f-9141-4131-b272-3ff82f226681_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.155 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] resizing rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:31:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.330 247403 DEBUG nova.objects.instance [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'migration_context' on Instance uuid 3a47b87f-9141-4131-b272-3ff82f226681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.580 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.581 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Ensure instance console log exists: /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.581 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.582 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:31:16 np0005603621 nova_compute[247399]: 2026-01-31 08:31:16.582 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:31:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:17.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:17.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 373 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 31 03:31:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:31:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:31:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:19.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:19 np0005603621 nova_compute[247399]: 2026-01-31 08:31:19.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:31:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:20 np0005603621 nova_compute[247399]: 2026-01-31 08:31:20.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:31:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 373 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 1.7 MiB/s wr, 60 op/s
Jan 31 03:31:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:21.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:21.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:21 np0005603621 nova_compute[247399]: 2026-01-31 08:31:21.485 247403 DEBUG nova.network.neutron [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Successfully created port: d0ae5b47-a954-49b7-b658-33bf23e73e22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:31:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 2.9 MiB/s wr, 73 op/s
Jan 31 03:31:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:31:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:31:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:23.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 03:31:24 np0005603621 nova_compute[247399]: 2026-01-31 08:31:24.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:31:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:24 np0005603621 nova_compute[247399]: 2026-01-31 08:31:24.898 247403 DEBUG nova.network.neutron [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Successfully updated port: d0ae5b47-a954-49b7-b658-33bf23e73e22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:31:24 np0005603621 nova_compute[247399]: 2026-01-31 08:31:24.933 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "refresh_cache-3a47b87f-9141-4131-b272-3ff82f226681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:31:24 np0005603621 nova_compute[247399]: 2026-01-31 08:31:24.934 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquired lock "refresh_cache-3a47b87f-9141-4131-b272-3ff82f226681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:31:24 np0005603621 nova_compute[247399]: 2026-01-31 08:31:24.934 247403 DEBUG nova.network.neutron [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:31:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:25.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:25 np0005603621 nova_compute[247399]: 2026-01-31 08:31:25.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:31:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:25 np0005603621 nova_compute[247399]: 2026-01-31 08:31:25.195 247403 DEBUG nova.compute.manager [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-changed-d0ae5b47-a954-49b7-b658-33bf23e73e22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:31:25 np0005603621 nova_compute[247399]: 2026-01-31 08:31:25.196 247403 DEBUG nova.compute.manager [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Refreshing instance network info cache due to event network-changed-d0ae5b47-a954-49b7-b658-33bf23e73e22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:31:25 np0005603621 nova_compute[247399]: 2026-01-31 08:31:25.196 247403 DEBUG oslo_concurrency.lockutils [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-3a47b87f-9141-4131-b272-3ff82f226681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:31:25 np0005603621 nova_compute[247399]: 2026-01-31 08:31:25.624 247403 DEBUG nova.network.neutron [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:31:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 03:31:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:27.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:27 np0005603621 nova_compute[247399]: 2026-01-31 08:31:27.835 247403 DEBUG nova.network.neutron [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Updating instance_info_cache with network_info: [{"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:31:27 np0005603621 nova_compute[247399]: 2026-01-31 08:31:27.995 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Releasing lock "refresh_cache-3a47b87f-9141-4131-b272-3ff82f226681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:31:27 np0005603621 nova_compute[247399]: 2026-01-31 08:31:27.996 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Instance network_info: |[{"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 03:31:27 np0005603621 nova_compute[247399]: 2026-01-31 08:31:27.996 247403 DEBUG oslo_concurrency.lockutils [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-3a47b87f-9141-4131-b272-3ff82f226681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:31:27 np0005603621 nova_compute[247399]: 2026-01-31 08:31:27.997 247403 DEBUG nova.network.neutron [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Refreshing network info cache for port d0ae5b47-a954-49b7-b658-33bf23e73e22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:27.999 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Start _get_guest_xml network_info=[{"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.003 247403 WARNING nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.011 247403 DEBUG nova.virt.libvirt.host [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.012 247403 DEBUG nova.virt.libvirt.host [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.019 247403 DEBUG nova.virt.libvirt.host [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.020 247403 DEBUG nova.virt.libvirt.host [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.021 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.021 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.022 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.022 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.022 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.022 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.022 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.023 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.023 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.023 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.023 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.024 247403 DEBUG nova.virt.hardware [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.027 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 03:31:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:31:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2548768671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.468 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.496 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.500 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:31:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835917206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.939 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.941 247403 DEBUG nova.virt.libvirt.vif [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1783939189',display_name='tempest-ServersTestJSON-server-1783939189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1783939189',id=130,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-9vo2mh1n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:14Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=3a47b87f-9141-4131-b272-3ff82f226681,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.941 247403 DEBUG nova.network.os_vif_util [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.942 247403 DEBUG nova.network.os_vif_util [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:28 np0005603621 nova_compute[247399]: 2026-01-31 08:31:28.944 247403 DEBUG nova.objects.instance [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a47b87f-9141-4131-b272-3ff82f226681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:31:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:29.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.065 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <uuid>3a47b87f-9141-4131-b272-3ff82f226681</uuid>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <name>instance-00000082</name>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersTestJSON-server-1783939189</nova:name>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:31:28</nova:creationTime>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:user uuid="fb3f20f0143d465ebfe98f6a13200890">tempest-ServersTestJSON-1064072764-project-member</nova:user>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:project uuid="40db421b27d84f809f8074c58151327f">tempest-ServersTestJSON-1064072764</nova:project>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <nova:port uuid="d0ae5b47-a954-49b7-b658-33bf23e73e22">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <entry name="serial">3a47b87f-9141-4131-b272-3ff82f226681</entry>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <entry name="uuid">3a47b87f-9141-4131-b272-3ff82f226681</entry>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/3a47b87f-9141-4131-b272-3ff82f226681_disk">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/3a47b87f-9141-4131-b272-3ff82f226681_disk.config">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1e:46:d1"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <target dev="tapd0ae5b47-a9"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/console.log" append="off"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:31:29 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:31:29 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:31:29 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:31:29 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.067 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Preparing to wait for external event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.067 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.067 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.068 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.068 247403 DEBUG nova.virt.libvirt.vif [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1783939189',display_name='tempest-ServersTestJSON-server-1783939189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1783939189',id=130,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-9vo2mh1n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:14Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=3a47b87f-9141-4131-b272-3ff82f226681,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.069 247403 DEBUG nova.network.os_vif_util [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.069 247403 DEBUG nova.network.os_vif_util [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.070 247403 DEBUG os_vif [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.070 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.071 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.071 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.074 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.075 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0ae5b47-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.075 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0ae5b47-a9, col_values=(('external_ids', {'iface-id': 'd0ae5b47-a954-49b7-b658-33bf23e73e22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:46:d1', 'vm-uuid': '3a47b87f-9141-4131-b272-3ff82f226681'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.077 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:29 np0005603621 NetworkManager[49013]: <info>  [1769848289.0777] manager: (tapd0ae5b47-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.082 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.083 247403 INFO os_vif [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9')#033[00m
Jan 31 03:31:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:29.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.462 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.463 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.463 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No VIF found with MAC fa:16:3e:1e:46:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.463 247403 INFO nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Using config drive#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.484 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:29 np0005603621 nova_compute[247399]: 2026-01-31 08:31:29.759 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 31 03:31:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:30.513 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:30.513 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:30.513 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:30 np0005603621 nova_compute[247399]: 2026-01-31 08:31:30.656 247403 INFO nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Creating config drive at /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/disk.config#033[00m
Jan 31 03:31:30 np0005603621 nova_compute[247399]: 2026-01-31 08:31:30.659 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbgiwx3du execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:30 np0005603621 nova_compute[247399]: 2026-01-31 08:31:30.789 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbgiwx3du" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:30 np0005603621 nova_compute[247399]: 2026-01-31 08:31:30.822 247403 DEBUG nova.storage.rbd_utils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 3a47b87f-9141-4131-b272-3ff82f226681_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:31:30 np0005603621 nova_compute[247399]: 2026-01-31 08:31:30.826 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/disk.config 3a47b87f-9141-4131-b272-3ff82f226681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:31.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:31.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.242 247403 DEBUG oslo_concurrency.processutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/disk.config 3a47b87f-9141-4131-b272-3ff82f226681_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.243 247403 INFO nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Deleting local config drive /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681/disk.config because it was imported into RBD.#033[00m
Jan 31 03:31:31 np0005603621 kernel: tapd0ae5b47-a9: entered promiscuous mode
Jan 31 03:31:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:31Z|00490|binding|INFO|Claiming lport d0ae5b47-a954-49b7-b658-33bf23e73e22 for this chassis.
Jan 31 03:31:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:31Z|00491|binding|INFO|d0ae5b47-a954-49b7-b658-33bf23e73e22: Claiming fa:16:3e:1e:46:d1 10.100.0.3
Jan 31 03:31:31 np0005603621 NetworkManager[49013]: <info>  [1769848291.2937] manager: (tapd0ae5b47-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.295 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.307 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:46:d1 10.100.0.3'], port_security=['fa:16:3e:1e:46:d1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3a47b87f-9141-4131-b272-3ff82f226681', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d0ae5b47-a954-49b7-b658-33bf23e73e22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.308 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d0ae5b47-a954-49b7-b658-33bf23e73e22 in datapath f6071a46-64a6-45aa-97c6-06e6c564195b bound to our chassis#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.310 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6071a46-64a6-45aa-97c6-06e6c564195b#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 systemd-udevd[335454]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:31:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:31Z|00492|binding|INFO|Setting lport d0ae5b47-a954-49b7-b658-33bf23e73e22 ovn-installed in OVS
Jan 31 03:31:31 np0005603621 systemd-machined[212769]: New machine qemu-60-instance-00000082.
Jan 31 03:31:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:31Z|00493|binding|INFO|Setting lport d0ae5b47-a954-49b7-b658-33bf23e73e22 up in Southbound
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.321 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[16430257-6603-4e59-800f-adf245604ec4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.323 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6071a46-61 in ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.325 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6071a46-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.325 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4592e2c2-4665-458d-8868-eb5adc9b4b2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.326 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0870413-e9f9-43cf-a946-c8bc58319ec0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 NetworkManager[49013]: <info>  [1769848291.3296] device (tapd0ae5b47-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:31:31 np0005603621 NetworkManager[49013]: <info>  [1769848291.3304] device (tapd0ae5b47-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:31:31 np0005603621 systemd[1]: Started Virtual Machine qemu-60-instance-00000082.
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.334 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9e8ba3-07b9-4a3c-9e93-8eeb6b7b3f24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.355 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7c86d384-c0bc-473b-b1b6-e2fae9a11061]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.380 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c9151d4b-4212-4097-be63-3e042b98bb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.384 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8e88df27-3f51-4470-b076-cc9a60c180f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 NetworkManager[49013]: <info>  [1769848291.3851] manager: (tapf6071a46-60): new Veth device (/org/freedesktop/NetworkManager/Devices/220)
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.406 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0853b646-05c2-4cf7-ac6c-dfb137b3bcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.408 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1d408b-0045-4ae1-9778-082a87226d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 NetworkManager[49013]: <info>  [1769848291.4275] device (tapf6071a46-60): carrier: link connected
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.433 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb3b690-0d36-4c6f-80f6-959a982d4cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.444 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f7d66b-7595-45b3-9514-0bbdacd2944a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759694, 'reachable_time': 16494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335486, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.455 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1920971d-f470-414b-abe1-ef2f1ef96dbd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:8c48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 759694, 'tstamp': 759694}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335487, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.467 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[498e4700-807c-4685-b583-b654a551996c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759694, 'reachable_time': 16494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335488, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.492 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a217549d-8727-46bd-b408-d876fb51bd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.526 247403 DEBUG nova.network.neutron [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Updated VIF entry in instance network info cache for port d0ae5b47-a954-49b7-b658-33bf23e73e22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.527 247403 DEBUG nova.network.neutron [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Updating instance_info_cache with network_info: [{"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.533 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbbadd8-7581-4989-92c1-35b522be2385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.535 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.535 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.536 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6071a46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:31 np0005603621 kernel: tapf6071a46-60: entered promiscuous mode
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.537 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 NetworkManager[49013]: <info>  [1769848291.5383] manager: (tapf6071a46-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.541 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6071a46-60, col_values=(('external_ids', {'iface-id': 'e9a7861c-c6ea-4166-9252-dc2aacdf4771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.542 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:31Z|00494|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.543 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.544 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6268d3-75d7-4155-94fb-5ba3c39b44e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.545 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f6071a46-64a6-45aa-97c6-06e6c564195b
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f6071a46-64a6-45aa-97c6-06e6c564195b
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:31:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:31.546 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'env', 'PROCESS_TAG=haproxy-f6071a46-64a6-45aa-97c6-06e6c564195b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6071a46-64a6-45aa-97c6-06e6c564195b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.603 247403 DEBUG oslo_concurrency.lockutils [req-56193b04-2959-4412-bc84-3b6feb6132fc req-5eb719e4-1199-4826-af05-fabbb3cc9ffb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-3a47b87f-9141-4131-b272-3ff82f226681" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:31:31 np0005603621 podman[335520]: 2026-01-31 08:31:31.879631137 +0000 UTC m=+0.049717018 container create 6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:31:31 np0005603621 systemd[1]: Started libpod-conmon-6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d.scope.
Jan 31 03:31:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:31:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e61463b8c8bdba8f2982647eb090fc9119c920496f983edf4c946190ee7e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:31:31 np0005603621 podman[335520]: 2026-01-31 08:31:31.848569772 +0000 UTC m=+0.018655683 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:31:31 np0005603621 podman[335520]: 2026-01-31 08:31:31.972017776 +0000 UTC m=+0.142103687 container init 6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:31:31 np0005603621 podman[335520]: 2026-01-31 08:31:31.976141657 +0000 UTC m=+0.146227538 container start 6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:31:31 np0005603621 nova_compute[247399]: 2026-01-31 08:31:31.990 247403 DEBUG nova.compute.manager [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:31:31 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [NOTICE]   (335540) : New worker (335542) forked
Jan 31 03:31:31 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [NOTICE]   (335540) : Loading success.
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.038 247403 DEBUG nova.compute.manager [req-40a59158-3a10-4aa4-8679-6cbe135eb44a req-c7ad9512-5253-4b88-904e-57602f7e635a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.039 247403 DEBUG oslo_concurrency.lockutils [req-40a59158-3a10-4aa4-8679-6cbe135eb44a req-c7ad9512-5253-4b88-904e-57602f7e635a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.039 247403 DEBUG oslo_concurrency.lockutils [req-40a59158-3a10-4aa4-8679-6cbe135eb44a req-c7ad9512-5253-4b88-904e-57602f7e635a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.039 247403 DEBUG oslo_concurrency.lockutils [req-40a59158-3a10-4aa4-8679-6cbe135eb44a req-c7ad9512-5253-4b88-904e-57602f7e635a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.040 247403 DEBUG nova.compute.manager [req-40a59158-3a10-4aa4-8679-6cbe135eb44a req-c7ad9512-5253-4b88-904e-57602f7e635a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Processing event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.131 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.131 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.186 247403 DEBUG nova.objects.instance [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'pci_requests' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.221 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.222 247403 INFO nova.compute.claims [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.222 247403 DEBUG nova.objects.instance [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'resources' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.242 247403 DEBUG nova.objects.instance [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'pci_devices' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.317 247403 INFO nova.compute.resource_tracker [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating resource usage from migration 3a66ac3c-101b-497a-a72d-758b98e95184#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.318 247403 DEBUG nova.compute.resource_tracker [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Starting to track incoming migration 3a66ac3c-101b-497a-a72d-758b98e95184 with flavor f75c4aee-d826-4343-a7e3-f06a4b21de52 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.466 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.681 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.682 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848292.6809075, 3a47b87f-9141-4131-b272-3ff82f226681 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.682 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] VM Started (Lifecycle Event)#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.691 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.694 247403 INFO nova.virt.libvirt.driver [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Instance spawned successfully.#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.695 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.705 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.709 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.740 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.740 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848292.681058, 3a47b87f-9141-4131-b272-3ff82f226681 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.741 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.743 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.743 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.744 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.744 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.745 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.745 247403 DEBUG nova.virt.libvirt.driver [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.794 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.796 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848292.6860754, 3a47b87f-9141-4131-b272-3ff82f226681 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.797 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.825 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.827 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.857 247403 INFO nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Took 17.31 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.857 247403 DEBUG nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.863 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:31:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755942008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.894 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.899 247403 DEBUG nova.compute.provider_tree [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.946 247403 DEBUG nova.scheduler.client.report [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:32 np0005603621 nova_compute[247399]: 2026-01-31 08:31:32.950 247403 INFO nova.compute.manager [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Took 22.06 seconds to build instance.#033[00m
Jan 31 03:31:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:33 np0005603621 nova_compute[247399]: 2026-01-31 08:31:33.075 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:33 np0005603621 nova_compute[247399]: 2026-01-31 08:31:33.076 247403 INFO nova.compute.manager [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Migrating#033[00m
Jan 31 03:31:33 np0005603621 nova_compute[247399]: 2026-01-31 08:31:33.091 247403 DEBUG oslo_concurrency.lockutils [None req-f6c46657-4958-479e-a7da-b2bb98eedc4a fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:33.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.076 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 424 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 832 KiB/s wr, 5 op/s
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.432 247403 DEBUG nova.compute.manager [req-10149157-d306-4fc1-950e-163cfa89c4e9 req-d2445104-f568-4abc-9967-e84b539a606a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.432 247403 DEBUG oslo_concurrency.lockutils [req-10149157-d306-4fc1-950e-163cfa89c4e9 req-d2445104-f568-4abc-9967-e84b539a606a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.433 247403 DEBUG oslo_concurrency.lockutils [req-10149157-d306-4fc1-950e-163cfa89c4e9 req-d2445104-f568-4abc-9967-e84b539a606a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.433 247403 DEBUG oslo_concurrency.lockutils [req-10149157-d306-4fc1-950e-163cfa89c4e9 req-d2445104-f568-4abc-9967-e84b539a606a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.433 247403 DEBUG nova.compute.manager [req-10149157-d306-4fc1-950e-163cfa89c4e9 req-d2445104-f568-4abc-9967-e84b539a606a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] No waiting events found dispatching network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.433 247403 WARNING nova.compute.manager [req-10149157-d306-4fc1-950e-163cfa89c4e9 req-d2445104-f568-4abc-9967-e84b539a606a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received unexpected event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:31:34 np0005603621 nova_compute[247399]: 2026-01-31 08:31:34.762 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:35.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Jan 31 03:31:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:37.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:37 np0005603621 systemd-logind[818]: New session 66 of user nova.
Jan 31 03:31:37 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:31:37 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:31:37 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:31:37 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:31:37 np0005603621 systemd[335622]: Queued start job for default target Main User Target.
Jan 31 03:31:37 np0005603621 systemd[335622]: Created slice User Application Slice.
Jan 31 03:31:37 np0005603621 systemd[335622]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:31:37 np0005603621 systemd[335622]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:31:37 np0005603621 systemd[335622]: Reached target Paths.
Jan 31 03:31:37 np0005603621 systemd[335622]: Reached target Timers.
Jan 31 03:31:37 np0005603621 systemd[335622]: Starting D-Bus User Message Bus Socket...
Jan 31 03:31:37 np0005603621 systemd[335622]: Starting Create User's Volatile Files and Directories...
Jan 31 03:31:37 np0005603621 systemd[335622]: Finished Create User's Volatile Files and Directories.
Jan 31 03:31:37 np0005603621 systemd[335622]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:31:37 np0005603621 systemd[335622]: Reached target Sockets.
Jan 31 03:31:37 np0005603621 systemd[335622]: Reached target Basic System.
Jan 31 03:31:37 np0005603621 systemd[335622]: Reached target Main User Target.
Jan 31 03:31:37 np0005603621 systemd[335622]: Startup finished in 120ms.
Jan 31 03:31:37 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:31:37 np0005603621 systemd[1]: Started Session 66 of User nova.
Jan 31 03:31:37 np0005603621 systemd[1]: session-66.scope: Deactivated successfully.
Jan 31 03:31:37 np0005603621 systemd-logind[818]: Session 66 logged out. Waiting for processes to exit.
Jan 31 03:31:37 np0005603621 systemd-logind[818]: Removed session 66.
Jan 31 03:31:38 np0005603621 systemd-logind[818]: New session 68 of user nova.
Jan 31 03:31:38 np0005603621 systemd[1]: Started Session 68 of User nova.
Jan 31 03:31:38 np0005603621 systemd[1]: session-68.scope: Deactivated successfully.
Jan 31 03:31:38 np0005603621 systemd-logind[818]: Session 68 logged out. Waiting for processes to exit.
Jan 31 03:31:38 np0005603621 systemd-logind[818]: Removed session 68.
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:31:38
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.mgr', 'volumes', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data']
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:31:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:31:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:39.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.077 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:39.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.437 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.438 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.438 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.439 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.439 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.440 247403 INFO nova.compute.manager [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Terminating instance#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.441 247403 DEBUG nova.compute.manager [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:31:39 np0005603621 kernel: tapd0ae5b47-a9 (unregistering): left promiscuous mode
Jan 31 03:31:39 np0005603621 NetworkManager[49013]: <info>  [1769848299.6324] device (tapd0ae5b47-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:31:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:39Z|00495|binding|INFO|Releasing lport d0ae5b47-a954-49b7-b658-33bf23e73e22 from this chassis (sb_readonly=0)
Jan 31 03:31:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:39Z|00496|binding|INFO|Setting lport d0ae5b47-a954-49b7-b658-33bf23e73e22 down in Southbound
Jan 31 03:31:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:39Z|00497|binding|INFO|Removing iface tapd0ae5b47-a9 ovn-installed in OVS
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.639 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.647 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:31:39Z|00498|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:31:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:39.650 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:46:d1 10.100.0.3'], port_security=['fa:16:3e:1e:46:d1 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '3a47b87f-9141-4131-b272-3ff82f226681', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d0ae5b47-a954-49b7-b658-33bf23e73e22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:31:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:39.652 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d0ae5b47-a954-49b7-b658-33bf23e73e22 in datapath f6071a46-64a6-45aa-97c6-06e6c564195b unbound from our chassis#033[00m
Jan 31 03:31:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:39.653 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6071a46-64a6-45aa-97c6-06e6c564195b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:31:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:39.655 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2af494fa-08cc-4bc6-847a-741314a972dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:39.656 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b namespace which is not needed anymore#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 31 03:31:39 np0005603621 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000082.scope: Consumed 8.063s CPU time.
Jan 31 03:31:39 np0005603621 systemd-machined[212769]: Machine qemu-60-instance-00000082 terminated.
Jan 31 03:31:39 np0005603621 podman[335646]: 2026-01-31 08:31:39.70542711 +0000 UTC m=+0.050055708 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:31:39 np0005603621 podman[335650]: 2026-01-31 08:31:39.730567237 +0000 UTC m=+0.073965675 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.860 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.876 247403 INFO nova.virt.libvirt.driver [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Instance destroyed successfully.#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.877 247403 DEBUG nova.objects.instance [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'resources' on Instance uuid 3a47b87f-9141-4131-b272-3ff82f226681 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.897 247403 DEBUG nova.virt.libvirt.vif [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:31:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1783939189',display_name='tempest-ServersTestJSON-server-1783939189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1783939189',id=130,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:31:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-9vo2mh1n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:31:32Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=3a47b87f-9141-4131-b272-3ff82f226681,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.898 247403 DEBUG nova.network.os_vif_util [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "address": "fa:16:3e:1e:46:d1", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0ae5b47-a9", "ovs_interfaceid": "d0ae5b47-a954-49b7-b658-33bf23e73e22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.899 247403 DEBUG nova.network.os_vif_util [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.899 247403 DEBUG os_vif [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.900 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.900 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ae5b47-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.903 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:39 np0005603621 nova_compute[247399]: 2026-01-31 08:31:39.905 247403 INFO os_vif [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:46:d1,bridge_name='br-int',has_traffic_filtering=True,id=d0ae5b47-a954-49b7-b658-33bf23e73e22,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0ae5b47-a9')#033[00m
Jan 31 03:31:40 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [NOTICE]   (335540) : haproxy version is 2.8.14-c23fe91
Jan 31 03:31:40 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [NOTICE]   (335540) : path to executable is /usr/sbin/haproxy
Jan 31 03:31:40 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [WARNING]  (335540) : Exiting Master process...
Jan 31 03:31:40 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [ALERT]    (335540) : Current worker (335542) exited with code 143 (Terminated)
Jan 31 03:31:40 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[335536]: [WARNING]  (335540) : All workers exited. Exiting... (0)
Jan 31 03:31:40 np0005603621 systemd[1]: libpod-6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d.scope: Deactivated successfully.
Jan 31 03:31:40 np0005603621 podman[335711]: 2026-01-31 08:31:40.103043966 +0000 UTC m=+0.375336201 container died 6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:31:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:31:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:31:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d9e61463b8c8bdba8f2982647eb090fc9119c920496f983edf4c946190ee7e8a-merged.mount: Deactivated successfully.
Jan 31 03:31:40 np0005603621 podman[335711]: 2026-01-31 08:31:40.713343576 +0000 UTC m=+0.985635821 container cleanup 6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:31:40 np0005603621 systemd[1]: libpod-conmon-6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d.scope: Deactivated successfully.
Jan 31 03:31:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:41.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:41.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:41 np0005603621 podman[335774]: 2026-01-31 08:31:41.374038073 +0000 UTC m=+0.641538761 container remove 6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.377 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[76570486-592c-4fe5-a58c-bf19d2d989a8]: (4, ('Sat Jan 31 08:31:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b (6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d)\n6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d\nSat Jan 31 08:31:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b (6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d)\n6493064c9de6f2fa9ef0995c923969ee41015b34c71df14abbec11ac856fe79d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.379 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eacd3582-9478-4ab6-85b0-af4517d66a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.380 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:31:41 np0005603621 nova_compute[247399]: 2026-01-31 08:31:41.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:41 np0005603621 kernel: tapf6071a46-60: left promiscuous mode
Jan 31 03:31:41 np0005603621 nova_compute[247399]: 2026-01-31 08:31:41.388 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.391 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d7169f-2194-4f68-af48-ee26b8bdd8f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.417 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e71e0bbe-3e18-48d1-a6e8-f810336b8640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.418 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a37c59-3442-4020-b9a8-7c07c9ed306e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.428 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dfde95e3-1247-4b4e-894e-f80d1b9d929a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759689, 'reachable_time': 37539, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335789, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:41 np0005603621 systemd[1]: run-netns-ovnmeta\x2df6071a46\x2d64a6\x2d45aa\x2d97c6\x2d06e6c564195b.mount: Deactivated successfully.
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.431 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:31:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:31:41.431 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[76e8550e-68f6-4b60-86d8-6136e8e65121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:31:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 31 03:31:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:43.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:43.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:43 np0005603621 nova_compute[247399]: 2026-01-31 08:31:43.587 247403 DEBUG nova.compute.manager [req-93f6d186-4e76-43ae-b5b7-07b28757b3b5 req-b2a665cd-772d-4a05-9807-193b618d11ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-vif-unplugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:43 np0005603621 nova_compute[247399]: 2026-01-31 08:31:43.587 247403 DEBUG oslo_concurrency.lockutils [req-93f6d186-4e76-43ae-b5b7-07b28757b3b5 req-b2a665cd-772d-4a05-9807-193b618d11ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:43 np0005603621 nova_compute[247399]: 2026-01-31 08:31:43.588 247403 DEBUG oslo_concurrency.lockutils [req-93f6d186-4e76-43ae-b5b7-07b28757b3b5 req-b2a665cd-772d-4a05-9807-193b618d11ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:43 np0005603621 nova_compute[247399]: 2026-01-31 08:31:43.588 247403 DEBUG oslo_concurrency.lockutils [req-93f6d186-4e76-43ae-b5b7-07b28757b3b5 req-b2a665cd-772d-4a05-9807-193b618d11ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:43 np0005603621 nova_compute[247399]: 2026-01-31 08:31:43.588 247403 DEBUG nova.compute.manager [req-93f6d186-4e76-43ae-b5b7-07b28757b3b5 req-b2a665cd-772d-4a05-9807-193b618d11ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] No waiting events found dispatching network-vif-unplugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:43 np0005603621 nova_compute[247399]: 2026-01-31 08:31:43.588 247403 DEBUG nova.compute.manager [req-93f6d186-4e76-43ae-b5b7-07b28757b3b5 req-b2a665cd-772d-4a05-9807-193b618d11ad fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-vif-unplugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:31:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 31 03:31:44 np0005603621 nova_compute[247399]: 2026-01-31 08:31:44.691 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:44 np0005603621 nova_compute[247399]: 2026-01-31 08:31:44.765 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:44 np0005603621 nova_compute[247399]: 2026-01-31 08:31:44.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:45.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.216 247403 DEBUG nova.compute.manager [req-a950222a-26c7-47bf-8341-3dcb75b457ed req-8db48737-7094-4dfd-bae8-61efb5b2eb92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-unplugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.216 247403 DEBUG oslo_concurrency.lockutils [req-a950222a-26c7-47bf-8341-3dcb75b457ed req-8db48737-7094-4dfd-bae8-61efb5b2eb92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.216 247403 DEBUG oslo_concurrency.lockutils [req-a950222a-26c7-47bf-8341-3dcb75b457ed req-8db48737-7094-4dfd-bae8-61efb5b2eb92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.217 247403 DEBUG oslo_concurrency.lockutils [req-a950222a-26c7-47bf-8341-3dcb75b457ed req-8db48737-7094-4dfd-bae8-61efb5b2eb92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.217 247403 DEBUG nova.compute.manager [req-a950222a-26c7-47bf-8341-3dcb75b457ed req-8db48737-7094-4dfd-bae8-61efb5b2eb92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-unplugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.217 247403 WARNING nova.compute.manager [req-a950222a-26c7-47bf-8341-3dcb75b457ed req-8db48737-7094-4dfd-bae8-61efb5b2eb92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-unplugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.809 247403 DEBUG nova.compute.manager [req-610ba499-4fa8-478b-997e-3c06c558b3bd req-66830e61-801a-4b5f-bd2a-9b23bef767e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.809 247403 DEBUG oslo_concurrency.lockutils [req-610ba499-4fa8-478b-997e-3c06c558b3bd req-66830e61-801a-4b5f-bd2a-9b23bef767e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.809 247403 DEBUG oslo_concurrency.lockutils [req-610ba499-4fa8-478b-997e-3c06c558b3bd req-66830e61-801a-4b5f-bd2a-9b23bef767e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.809 247403 DEBUG oslo_concurrency.lockutils [req-610ba499-4fa8-478b-997e-3c06c558b3bd req-66830e61-801a-4b5f-bd2a-9b23bef767e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.810 247403 DEBUG nova.compute.manager [req-610ba499-4fa8-478b-997e-3c06c558b3bd req-66830e61-801a-4b5f-bd2a-9b23bef767e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] No waiting events found dispatching network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:45 np0005603621 nova_compute[247399]: 2026-01-31 08:31:45.810 247403 WARNING nova.compute.manager [req-610ba499-4fa8-478b-997e-3c06c558b3bd req-66830e61-801a-4b5f-bd2a-9b23bef767e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received unexpected event network-vif-plugged-d0ae5b47-a954-49b7-b658-33bf23e73e22 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:31:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 420 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1016 KiB/s wr, 113 op/s
Jan 31 03:31:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:47.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:47 np0005603621 nova_compute[247399]: 2026-01-31 08:31:47.901 247403 DEBUG nova.compute.manager [req-38c9c760-c661-40a0-8ae8-850b662454ee req-6edd6131-1104-4be3-a624-5e4d7727359e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:47 np0005603621 nova_compute[247399]: 2026-01-31 08:31:47.903 247403 DEBUG oslo_concurrency.lockutils [req-38c9c760-c661-40a0-8ae8-850b662454ee req-6edd6131-1104-4be3-a624-5e4d7727359e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:47 np0005603621 nova_compute[247399]: 2026-01-31 08:31:47.903 247403 DEBUG oslo_concurrency.lockutils [req-38c9c760-c661-40a0-8ae8-850b662454ee req-6edd6131-1104-4be3-a624-5e4d7727359e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:47 np0005603621 nova_compute[247399]: 2026-01-31 08:31:47.903 247403 DEBUG oslo_concurrency.lockutils [req-38c9c760-c661-40a0-8ae8-850b662454ee req-6edd6131-1104-4be3-a624-5e4d7727359e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:47 np0005603621 nova_compute[247399]: 2026-01-31 08:31:47.903 247403 DEBUG nova.compute.manager [req-38c9c760-c661-40a0-8ae8-850b662454ee req-6edd6131-1104-4be3-a624-5e4d7727359e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:31:47 np0005603621 nova_compute[247399]: 2026-01-31 08:31:47.903 247403 WARNING nova.compute.manager [req-38c9c760-c661-40a0-8ae8-850b662454ee req-6edd6131-1104-4be3-a624-5e4d7727359e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:31:48 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:31:48 np0005603621 systemd[335622]: Activating special unit Exit the Session...
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped target Main User Target.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped target Basic System.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped target Paths.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped target Sockets.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped target Timers.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:31:48 np0005603621 systemd[335622]: Closed D-Bus User Message Bus Socket.
Jan 31 03:31:48 np0005603621 systemd[335622]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:31:48 np0005603621 systemd[335622]: Removed slice User Application Slice.
Jan 31 03:31:48 np0005603621 systemd[335622]: Reached target Shutdown.
Jan 31 03:31:48 np0005603621 systemd[335622]: Finished Exit the Session.
Jan 31 03:31:48 np0005603621 systemd[335622]: Reached target Exit the Session.
Jan 31 03:31:48 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:31:48 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:31:48 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:31:48 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:31:48 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:31:48 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:31:48 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:31:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 656 KiB/s rd, 18 KiB/s wr, 40 op/s
Jan 31 03:31:48 np0005603621 nova_compute[247399]: 2026-01-31 08:31:48.284 247403 INFO nova.network.neutron [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating port 02df5608-7a85-4d54-b5ac-628d6c8e8179 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:31:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:49.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007509334868788386 of space, bias 1.0, pg target 2.2528004606365157 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.6448598606458217 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:31:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:31:49 np0005603621 nova_compute[247399]: 2026-01-31 08:31:49.767 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:49 np0005603621 nova_compute[247399]: 2026-01-31 08:31:49.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:31:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 17 op/s
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.345 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.346 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.346 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.346 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.347 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.508 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.757 247403 DEBUG nova.compute.manager [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-changed-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.758 247403 DEBUG nova.compute.manager [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Refreshing instance network info cache due to event network-changed-02df5608-7a85-4d54-b5ac-628d6c8e8179. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:31:50 np0005603621 nova_compute[247399]: 2026-01-31 08:31:50.758 247403 DEBUG oslo_concurrency.lockutils [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:31:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:51.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:51.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 30 KiB/s wr, 26 op/s
Jan 31 03:31:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:53.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:53.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 14 KiB/s wr, 21 op/s
Jan 31 03:31:54 np0005603621 nova_compute[247399]: 2026-01-31 08:31:54.768 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:54 np0005603621 nova_compute[247399]: 2026-01-31 08:31:54.873 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848299.872196, 3a47b87f-9141-4131-b272-3ff82f226681 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:31:54 np0005603621 nova_compute[247399]: 2026-01-31 08:31:54.874 247403 INFO nova.compute.manager [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:31:54 np0005603621 nova_compute[247399]: 2026-01-31 08:31:54.905 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.059 247403 DEBUG nova.compute.manager [None req-67e42d87-c293-48c9-9722-f390a61de44c - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.064 247403 DEBUG nova.compute.manager [None req-67e42d87-c293-48c9-9722-f390a61de44c - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:31:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:55.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:55.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.172 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating instance_info_cache with network_info: [{"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.247 247403 INFO nova.compute.manager [None req-67e42d87-c293-48c9-9722-f390a61de44c - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.488 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.489 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.489 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.489 247403 DEBUG nova.network.neutron [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.490 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.491 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.914 247403 WARNING nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 1 instances on the hypervisor.#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.914 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid a15175ec-85fd-457c-870b-8a6d7c13c906 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.914 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 3a47b87f-9141-4131-b272-3ff82f226681 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.915 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.916 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.916 247403 INFO nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] During sync_power_state the instance has a pending task (resize_migrated). Skip.#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.916 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.917 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "3a47b87f-9141-4131-b272-3ff82f226681" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.917 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.917 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:31:55 np0005603621 nova_compute[247399]: 2026-01-31 08:31:55.917 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:31:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.043 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.043 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.044 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.044 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.044 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1023 KiB/s rd, 22 KiB/s wr, 65 op/s
Jan 31 03:31:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3884569195' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.565 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.840 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.840 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:31:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.962 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.963 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4242MB free_disk=20.830535888671875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.963 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:31:56 np0005603621 nova_compute[247399]: 2026-01-31 08:31:56.963 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:31:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:31:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:57.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.289 247403 INFO nova.virt.libvirt.driver [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Deleting instance files /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681_del#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.290 247403 INFO nova.virt.libvirt.driver [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Deletion of /var/lib/nova/instances/3a47b87f-9141-4131-b272-3ff82f226681_del complete#033[00m
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:31:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.587 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Applying migration context for instance a15175ec-85fd-457c-870b-8a6d7c13c906 as it has an incoming, in-progress migration 3a66ac3c-101b-497a-a72d-758b98e95184. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.588 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating resource usage from migration 3a66ac3c-101b-497a-a72d-758b98e95184#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.659 247403 INFO nova.compute.manager [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Took 18.22 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.660 247403 DEBUG oslo.service.loopingcall [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.661 247403 DEBUG nova.compute.manager [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.661 247403 DEBUG nova.network.neutron [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.885 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3a47b87f-9141-4131-b272-3ff82f226681 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.885 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance a15175ec-85fd-457c-870b-8a6d7c13c906 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.886 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:31:57 np0005603621 nova_compute[247399]: 2026-01-31 08:31:57.886 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:31:58 np0005603621 nova_compute[247399]: 2026-01-31 08:31:58.141 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:31:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d64958d4-f294-41aa-8c90-d3605fb3e094 does not exist
Jan 31 03:31:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4e1fdc86-28e1-4d02-a196-caae46d838db does not exist
Jan 31 03:31:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4c9ff234-a83d-455e-863a-5426836ca0bd does not exist
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:31:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 88 op/s
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972227613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:31:58 np0005603621 nova_compute[247399]: 2026-01-31 08:31:58.590 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:31:58 np0005603621 nova_compute[247399]: 2026-01-31 08:31:58.596 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:31:58 np0005603621 nova_compute[247399]: 2026-01-31 08:31:58.622 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:31:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:31:58 np0005603621 nova_compute[247399]: 2026-01-31 08:31:58.704 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:31:58 np0005603621 nova_compute[247399]: 2026-01-31 08:31:58.704 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:31:58 np0005603621 podman[336169]: 2026-01-31 08:31:58.708270988 +0000 UTC m=+0.018740506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:31:59 np0005603621 podman[336169]: 2026-01-31 08:31:59.004187629 +0000 UTC m=+0.314657127 container create f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_clarke, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:31:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:31:59.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:31:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:31:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:31:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:31:59 np0005603621 systemd[1]: Started libpod-conmon-f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6.scope.
Jan 31 03:31:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:31:59 np0005603621 podman[336169]: 2026-01-31 08:31:59.396973433 +0000 UTC m=+0.707442981 container init f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_clarke, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:31:59 np0005603621 podman[336169]: 2026-01-31 08:31:59.403592032 +0000 UTC m=+0.714061530 container start f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:31:59 np0005603621 brave_clarke[336185]: 167 167
Jan 31 03:31:59 np0005603621 systemd[1]: libpod-f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6.scope: Deactivated successfully.
Jan 31 03:31:59 np0005603621 conmon[336185]: conmon f90b0a058f46494e6caf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6.scope/container/memory.events
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.413 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.414 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.415 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:31:59 np0005603621 podman[336169]: 2026-01-31 08:31:59.608815789 +0000 UTC m=+0.919285337 container attach f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:31:59 np0005603621 podman[336169]: 2026-01-31 08:31:59.609347736 +0000 UTC m=+0.919817244 container died f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_clarke, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.769 247403 DEBUG nova.compute.manager [req-8f081e8d-d018-4e07-8880-abf16672fe1e req-cf8ae660-5698-447b-a040-165080d391bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Received event network-vif-deleted-d0ae5b47-a954-49b7-b658-33bf23e73e22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.769 247403 INFO nova.compute.manager [req-8f081e8d-d018-4e07-8880-abf16672fe1e req-cf8ae660-5698-447b-a040-165080d391bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Neutron deleted interface d0ae5b47-a954-49b7-b658-33bf23e73e22; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.770 247403 DEBUG nova.network.neutron [req-8f081e8d-d018-4e07-8880-abf16672fe1e req-cf8ae660-5698-447b-a040-165080d391bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.776 247403 DEBUG nova.network.neutron [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.803 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:31:59 np0005603621 nova_compute[247399]: 2026-01-31 08:31:59.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.123 247403 DEBUG nova.network.neutron [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating instance_info_cache with network_info: [{"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5c2727b2fc3fb386f87a4ed585dd03984a526cfcfd3837b63de40bd312a851c3-merged.mount: Deactivated successfully.
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.217 247403 INFO nova.compute.manager [-] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Took 2.56 seconds to deallocate network for instance.#033[00m
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.223 247403 DEBUG nova.compute.manager [req-8f081e8d-d018-4e07-8880-abf16672fe1e req-cf8ae660-5698-447b-a040-165080d391bc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] Detach interface failed, port_id=d0ae5b47-a954-49b7-b658-33bf23e73e22, reason: Instance 3a47b87f-9141-4131-b272-3ff82f226681 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:32:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 85 op/s
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.492 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.496 247403 DEBUG oslo_concurrency.lockutils [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.496 247403 DEBUG nova.network.neutron [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Refreshing network info cache for port 02df5608-7a85-4d54-b5ac-628d6c8e8179 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.700 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.700 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:00 np0005603621 podman[336169]: 2026-01-31 08:32:00.712884774 +0000 UTC m=+2.023354272 container remove f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_clarke, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:32:00 np0005603621 systemd[1]: libpod-conmon-f90b0a058f46494e6caf6e13a3bcd955ecbd2b7f1523e11baa7efc6d1d6cd7b6.scope: Deactivated successfully.
Jan 31 03:32:00 np0005603621 nova_compute[247399]: 2026-01-31 08:32:00.784 247403 DEBUG oslo_concurrency.processutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:00 np0005603621 podman[336210]: 2026-01-31 08:32:00.817528381 +0000 UTC m=+0.018428695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:32:00 np0005603621 podman[336210]: 2026-01-31 08:32:00.964294444 +0000 UTC m=+0.165194738 container create e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 03:32:01 np0005603621 systemd[1]: Started libpod-conmon-e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050.scope.
Jan 31 03:32:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d33207a28f576fd96787ce18d313dc267e96754d3c3970b8823390a6cbe9b11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d33207a28f576fd96787ce18d313dc267e96754d3c3970b8823390a6cbe9b11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d33207a28f576fd96787ce18d313dc267e96754d3c3970b8823390a6cbe9b11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d33207a28f576fd96787ce18d313dc267e96754d3c3970b8823390a6cbe9b11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d33207a28f576fd96787ce18d313dc267e96754d3c3970b8823390a6cbe9b11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.094 247403 DEBUG os_brick.utils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.097 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.124 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.124 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9fe15c-d691-4c9e-ba9d-be19847e479e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.125 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.132 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.132 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[42ffcca4-c819-4c0a-b6a6-7c55fbe16352]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.133 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.140 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.141 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[db1d20a7-aa74-4baa-bb8d-92e2dc000f88]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.142 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[764334c7-395e-4121-9c61-8240192715f2]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.143 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.165 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:01.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.167 247403 DEBUG os_brick.initiator.connectors.lightos [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.167 247403 DEBUG os_brick.initiator.connectors.lightos [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.167 247403 DEBUG os_brick.initiator.connectors.lightos [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.168 247403 DEBUG os_brick.utils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:32:01 np0005603621 podman[336210]: 2026-01-31 08:32:01.189439962 +0000 UTC m=+0.390340286 container init e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:32:01 np0005603621 podman[336210]: 2026-01-31 08:32:01.196341701 +0000 UTC m=+0.397241995 container start e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:32:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:32:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 35K writes, 139K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.96 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4370 writes, 17K keys, 4370 commit groups, 1.0 writes per commit group, ingest: 16.37 MB, 0.03 MB/s#012Interval WAL: 4370 writes, 1575 syncs, 2.77 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:32:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377140046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:01 np0005603621 podman[336210]: 2026-01-31 08:32:01.237813266 +0000 UTC m=+0.438713560 container attach e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.251 247403 DEBUG oslo_concurrency.processutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.257 247403 DEBUG nova.compute.provider_tree [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.427 247403 DEBUG nova.scheduler.client.report [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.576 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:01 np0005603621 nova_compute[247399]: 2026-01-31 08:32:01.837 247403 INFO nova.scheduler.client.report [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Deleted allocations for instance 3a47b87f-9141-4131-b272-3ff82f226681#033[00m
Jan 31 03:32:01 np0005603621 infallible_carson[336247]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:32:01 np0005603621 infallible_carson[336247]: --> relative data size: 1.0
Jan 31 03:32:01 np0005603621 infallible_carson[336247]: --> All data devices are unavailable
Jan 31 03:32:01 np0005603621 systemd[1]: libpod-e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050.scope: Deactivated successfully.
Jan 31 03:32:02 np0005603621 podman[336271]: 2026-01-31 08:32:02.023053702 +0000 UTC m=+0.026613505 container died e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.093 247403 DEBUG oslo_concurrency.lockutils [None req-6f2a0680-a711-4bdb-9bc5-e70c8706cd33 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "3a47b87f-9141-4131-b272-3ff82f226681" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 22.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.096 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "3a47b87f-9141-4131-b272-3ff82f226681" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.097 247403 INFO nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3a47b87f-9141-4131-b272-3ff82f226681] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.097 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "3a47b87f-9141-4131-b272-3ff82f226681" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8d33207a28f576fd96787ce18d313dc267e96754d3c3970b8823390a6cbe9b11-merged.mount: Deactivated successfully.
Jan 31 03:32:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 87 op/s
Jan 31 03:32:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:32:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972302893' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:32:02 np0005603621 podman[336271]: 2026-01-31 08:32:02.470112425 +0000 UTC m=+0.473672218 container remove e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 03:32:02 np0005603621 systemd[1]: libpod-conmon-e36900459facfe7c16e353b90486ac40e6e1c8e4f904127a7342aceeab100050.scope: Deactivated successfully.
Jan 31 03:32:02 np0005603621 podman[336428]: 2026-01-31 08:32:02.95504786 +0000 UTC m=+0.053961922 container create 71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.991 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.995 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:32:02 np0005603621 nova_compute[247399]: 2026-01-31 08:32:02.996 247403 INFO nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Creating image(s)#033[00m
Jan 31 03:32:03 np0005603621 systemd[1]: Started libpod-conmon-71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77.scope.
Jan 31 03:32:03 np0005603621 podman[336428]: 2026-01-31 08:32:02.921690343 +0000 UTC m=+0.020604425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:32:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:03 np0005603621 nova_compute[247399]: 2026-01-31 08:32:03.038 247403 DEBUG nova.storage.rbd_utils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(nova-resize) on rbd image(a15175ec-85fd-457c-870b-8a6d7c13c906_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:32:03 np0005603621 podman[336428]: 2026-01-31 08:32:03.058358135 +0000 UTC m=+0.157272227 container init 71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:32:03 np0005603621 podman[336428]: 2026-01-31 08:32:03.064883142 +0000 UTC m=+0.163797204 container start 71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:32:03 np0005603621 wizardly_banzai[336444]: 167 167
Jan 31 03:32:03 np0005603621 systemd[1]: libpod-71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77.scope: Deactivated successfully.
Jan 31 03:32:03 np0005603621 podman[336428]: 2026-01-31 08:32:03.076068736 +0000 UTC m=+0.174982798 container attach 71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:32:03 np0005603621 podman[336428]: 2026-01-31 08:32:03.07650175 +0000 UTC m=+0.175415812 container died 71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:32:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a4575edd68033e731f79d64aa92c9683c1c9759691e84a3a76bc8f3ceb0fe882-merged.mount: Deactivated successfully.
Jan 31 03:32:03 np0005603621 podman[336428]: 2026-01-31 08:32:03.150318381 +0000 UTC m=+0.249232443 container remove 71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:32:03 np0005603621 systemd[1]: libpod-conmon-71a9e6ea48955bf183d860e86f8793026f95fc8887243cf0945c81e6fd5d6f77.scope: Deactivated successfully.
Jan 31 03:32:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:32:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:03.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:32:03 np0005603621 podman[336507]: 2026-01-31 08:32:03.350124165 +0000 UTC m=+0.108960605 container create ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:32:03 np0005603621 podman[336507]: 2026-01-31 08:32:03.270287314 +0000 UTC m=+0.029123774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:32:03 np0005603621 systemd[1]: Started libpod-conmon-ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a.scope.
Jan 31 03:32:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5293463b10c544d45c086123b1a5c3f60fe3d5b82e13c946af366a2394bfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5293463b10c544d45c086123b1a5c3f60fe3d5b82e13c946af366a2394bfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5293463b10c544d45c086123b1a5c3f60fe3d5b82e13c946af366a2394bfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4a5293463b10c544d45c086123b1a5c3f60fe3d5b82e13c946af366a2394bfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:03 np0005603621 podman[336507]: 2026-01-31 08:32:03.448410361 +0000 UTC m=+0.207246821 container init ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 03:32:03 np0005603621 podman[336507]: 2026-01-31 08:32:03.454102052 +0000 UTC m=+0.212938482 container start ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:32:03 np0005603621 podman[336507]: 2026-01-31 08:32:03.478141104 +0000 UTC m=+0.236977534 container attach ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:32:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 31 03:32:04 np0005603621 nova_compute[247399]: 2026-01-31 08:32:04.031 247403 DEBUG nova.network.neutron [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updated VIF entry in instance network info cache for port 02df5608-7a85-4d54-b5ac-628d6c8e8179. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:32:04 np0005603621 nova_compute[247399]: 2026-01-31 08:32:04.032 247403 DEBUG nova.network.neutron [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating instance_info_cache with network_info: [{"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:04 np0005603621 nova_compute[247399]: 2026-01-31 08:32:04.057 247403 DEBUG oslo_concurrency.lockutils [req-5548df7c-22d0-4646-a4a2-fbccb70fd5ec req-e951292e-cc6b-407a-95c1-3df387e2a1d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]: {
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:    "0": [
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:        {
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "devices": [
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "/dev/loop3"
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            ],
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "lv_name": "ceph_lv0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "lv_size": "7511998464",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "name": "ceph_lv0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "tags": {
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.cluster_name": "ceph",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.crush_device_class": "",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.encrypted": "0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.osd_id": "0",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.type": "block",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:                "ceph.vdo": "0"
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            },
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "type": "block",
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:            "vg_name": "ceph_vg0"
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:        }
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]:    ]
Jan 31 03:32:04 np0005603621 objective_meninsky[336523]: }
Jan 31 03:32:04 np0005603621 nova_compute[247399]: 2026-01-31 08:32:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:04 np0005603621 systemd[1]: libpod-ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a.scope: Deactivated successfully.
Jan 31 03:32:04 np0005603621 podman[336507]: 2026-01-31 08:32:04.214869502 +0000 UTC m=+0.973705932 container died ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:32:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.8 KiB/s wr, 78 op/s
Jan 31 03:32:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d4a5293463b10c544d45c086123b1a5c3f60fe3d5b82e13c946af366a2394bfd-merged.mount: Deactivated successfully.
Jan 31 03:32:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 31 03:32:04 np0005603621 nova_compute[247399]: 2026-01-31 08:32:04.805 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:04 np0005603621 podman[336507]: 2026-01-31 08:32:04.809201574 +0000 UTC m=+1.568038014 container remove ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_meninsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:32:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 31 03:32:04 np0005603621 systemd[1]: libpod-conmon-ffd5009e00fcd654b9f44c5a1e05841c399f70c25e1fc4b7b2c00806c9528a3a.scope: Deactivated successfully.
Jan 31 03:32:04 np0005603621 nova_compute[247399]: 2026-01-31 08:32:04.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:05.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:05.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:05 np0005603621 podman[336687]: 2026-01-31 08:32:05.278332129 +0000 UTC m=+0.017735504 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:32:05 np0005603621 podman[336687]: 2026-01-31 08:32:05.399413387 +0000 UTC m=+0.138816742 container create a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.443 247403 DEBUG nova.objects.instance [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'trusted_certs' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:05 np0005603621 systemd[1]: Started libpod-conmon-a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5.scope.
Jan 31 03:32:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.563 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.564 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Ensure instance console log exists: /var/lib/nova/instances/a15175ec-85fd-457c-870b-8a6d7c13c906/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.565 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.566 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.566 247403 DEBUG oslo_concurrency.lockutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.570 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Start _get_guest_xml network_info=[{"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:dd:59:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-901896ec-4cee-48ca-89ea-1ef061e9fbf3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '901896ec-4cee-48ca-89ea-1ef061e9fbf3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'a15175ec-85fd-457c-870b-8a6d7c13c906', 'attached_at': '2026-01-31T08:32:02.000000', 'detached_at': '', 'volume_id': '901896ec-4cee-48ca-89ea-1ef061e9fbf3', 'serial': '901896ec-4cee-48ca-89ea-1ef061e9fbf3'}, 'device_type': 'disk', 'boot_index': None, 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'attachment_id': '0f56aada-8f7d-4f1b-be22-beb78a3a90f0', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.575 247403 WARNING nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.579 247403 DEBUG nova.virt.libvirt.host [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.580 247403 DEBUG nova.virt.libvirt.host [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.584 247403 DEBUG nova.virt.libvirt.host [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.585 247403 DEBUG nova.virt.libvirt.host [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.586 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.586 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f75c4aee-d826-4343-a7e3-f06a4b21de52',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.587 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.587 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.587 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.587 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.587 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.588 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.588 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.588 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.588 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.588 247403 DEBUG nova.virt.hardware [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.589 247403 DEBUG nova.objects.instance [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'vcpu_model' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:05 np0005603621 nova_compute[247399]: 2026-01-31 08:32:05.608 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:05 np0005603621 podman[336687]: 2026-01-31 08:32:05.731691432 +0000 UTC m=+0.471094807 container init a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:32:05 np0005603621 podman[336687]: 2026-01-31 08:32:05.73728876 +0000 UTC m=+0.476692115 container start a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:32:05 np0005603621 cranky_leavitt[336737]: 167 167
Jan 31 03:32:05 np0005603621 systemd[1]: libpod-a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5.scope: Deactivated successfully.
Jan 31 03:32:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:32:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280551440' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.022 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.057 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:06 np0005603621 podman[336687]: 2026-01-31 08:32:06.095084713 +0000 UTC m=+0.834488078 container attach a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:32:06 np0005603621 podman[336687]: 2026-01-31 08:32:06.096804367 +0000 UTC m=+0.836207722 container died a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:32:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 409 B/s wr, 46 op/s
Jan 31 03:32:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-82cb29266bcb8038214868b921536d6ddbc84dd9b6180da09d1af1c5fa270fd3-merged.mount: Deactivated successfully.
Jan 31 03:32:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:32:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31009335' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.513 247403 DEBUG oslo_concurrency.processutils [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:06 np0005603621 podman[336687]: 2026-01-31 08:32:06.554765607 +0000 UTC m=+1.294168952 container remove a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_leavitt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 03:32:06 np0005603621 systemd[1]: libpod-conmon-a07a700f96e74408c4d8545d461ff95e017582e8a8e53c975eff4891a4886de5.scope: Deactivated successfully.
Jan 31 03:32:06 np0005603621 podman[336827]: 2026-01-31 08:32:06.676037522 +0000 UTC m=+0.038607595 container create a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:32:06 np0005603621 systemd[1]: Started libpod-conmon-a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29.scope.
Jan 31 03:32:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ad676c099010f260eea57e897c480683af89437fd08ccd58b8e1ff4f630bb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ad676c099010f260eea57e897c480683af89437fd08ccd58b8e1ff4f630bb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ad676c099010f260eea57e897c480683af89437fd08ccd58b8e1ff4f630bb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ad676c099010f260eea57e897c480683af89437fd08ccd58b8e1ff4f630bb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:06 np0005603621 podman[336827]: 2026-01-31 08:32:06.65830911 +0000 UTC m=+0.020879193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:32:06 np0005603621 podman[336827]: 2026-01-31 08:32:06.756790223 +0000 UTC m=+0.119360326 container init a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:32:06 np0005603621 podman[336827]: 2026-01-31 08:32:06.76239606 +0000 UTC m=+0.124966123 container start a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:32:06 np0005603621 podman[336827]: 2026-01-31 08:32:06.767058618 +0000 UTC m=+0.129628701 container attach a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.896 247403 DEBUG nova.virt.libvirt.vif [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:30:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2097097080',display_name='tempest-ServerActionsTestOtherB-server-2097097080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2097097080',id=128,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:30:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-41gbj3yx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=a15175ec-85fd-457c-870b-8a6d7c13c906,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:dd:59:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.899 247403 DEBUG nova.network.os_vif_util [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:dd:59:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.901 247403 DEBUG nova.network.os_vif_util [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.906 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <uuid>a15175ec-85fd-457c-870b-8a6d7c13c906</uuid>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <name>instance-00000080</name>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <memory>196608</memory>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerActionsTestOtherB-server-2097097080</nova:name>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:32:05</nova:creationTime>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.micro">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:memory>192</nova:memory>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:user uuid="ef51681d234a4abc88ff433d0640b6e7">tempest-ServerActionsTestOtherB-1048458052-project-member</nova:user>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:project uuid="953a213fa5cb435ab3c04ad96152685f">tempest-ServerActionsTestOtherB-1048458052</nova:project>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <nova:port uuid="02df5608-7a85-4d54-b5ac-628d6c8e8179">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <entry name="serial">a15175ec-85fd-457c-870b-8a6d7c13c906</entry>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <entry name="uuid">a15175ec-85fd-457c-870b-8a6d7c13c906</entry>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a15175ec-85fd-457c-870b-8a6d7c13c906_disk">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a15175ec-85fd-457c-870b-8a6d7c13c906_disk.config">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-901896ec-4cee-48ca-89ea-1ef061e9fbf3">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <serial>901896ec-4cee-48ca-89ea-1ef061e9fbf3</serial>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:dd:59:a9"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <target dev="tap02df5608-7a"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/a15175ec-85fd-457c-870b-8a6d7c13c906/console.log" append="off"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:32:06 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:32:06 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:32:06 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:32:06 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.909 247403 DEBUG nova.virt.libvirt.vif [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:30:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2097097080',display_name='tempest-ServerActionsTestOtherB-server-2097097080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2097097080',id=128,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:30:35Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-41gbj3yx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:31:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=a15175ec-85fd-457c-870b-8a6d7c13c906,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:dd:59:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.909 247403 DEBUG nova.network.os_vif_util [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-2130829654-network", "vif_mac": "fa:16:3e:dd:59:a9"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.910 247403 DEBUG nova.network.os_vif_util [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.911 247403 DEBUG os_vif [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.912 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.913 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.916 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.917 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02df5608-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.918 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap02df5608-7a, col_values=(('external_ids', {'iface-id': '02df5608-7a85-4d54-b5ac-628d6c8e8179', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:59:a9', 'vm-uuid': 'a15175ec-85fd-457c-870b-8a6d7c13c906'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:06 np0005603621 NetworkManager[49013]: <info>  [1769848326.9209] manager: (tap02df5608-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.924 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.927 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:06 np0005603621 nova_compute[247399]: 2026-01-31 08:32:06.928 247403 INFO os_vif [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a')#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.032 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.032 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.033 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.033 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No VIF found with MAC fa:16:3e:dd:59:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.034 247403 INFO nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Using config drive#033[00m
Jan 31 03:32:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:07 np0005603621 kernel: tap02df5608-7a: entered promiscuous mode
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.1120] manager: (tap02df5608-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Jan 31 03:32:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:07Z|00499|binding|INFO|Claiming lport 02df5608-7a85-4d54-b5ac-628d6c8e8179 for this chassis.
Jan 31 03:32:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:07Z|00500|binding|INFO|02df5608-7a85-4d54-b5ac-628d6c8e8179: Claiming fa:16:3e:dd:59:a9 10.100.0.10
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.130 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.1441] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.1450] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 31 03:32:07 np0005603621 systemd-udevd[336879]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.153 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:59:a9 10.100.0.10'], port_security=['fa:16:3e:dd:59:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a15175ec-85fd-457c-870b-8a6d7c13c906', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5b1bd8ad-0d2a-4d57-a00a-9a6b59df86e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=02df5608-7a85-4d54-b5ac-628d6c8e8179) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.154 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 02df5608-7a85-4d54-b5ac-628d6c8e8179 in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 bound to our chassis#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.158 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44469d8b-ad30-4270-88fa-e67c568f3150#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.1715] device (tap02df5608-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.168 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8add6e6c-51ab-4b52-b65a-9320e7927df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.169 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44469d8b-a1 in ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.170 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44469d8b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.171 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eaa798cd-04ff-4500-9b22-76c3a944bf4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.1721] device (tap02df5608-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.171 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[747df64c-5c2a-467e-a720-b5b17f927c53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:07.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:07 np0005603621 systemd-machined[212769]: New machine qemu-61-instance-00000080.
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.180 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fec2efbc-2dde-42e9-bedd-29ecee2d18a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 systemd[1]: Started Virtual Machine qemu-61-instance-00000080.
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.202 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7206beb8-9cf9-4362-821b-0fe515f949f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.206 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.226 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[51a8f69e-5f8e-4922-820e-5c37679b0245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.231 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0ecf4f1e-f916-4773-9b9f-4743cdf56042]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.2323] manager: (tap44469d8b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Jan 31 03:32:07 np0005603621 systemd-udevd[336881]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:32:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:07Z|00501|binding|INFO|Setting lport 02df5608-7a85-4d54-b5ac-628d6c8e8179 ovn-installed in OVS
Jan 31 03:32:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:07Z|00502|binding|INFO|Setting lport 02df5608-7a85-4d54-b5ac-628d6c8e8179 up in Southbound
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.234 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.258 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[302cc299-09f4-4467-a714-1697c9eb9d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.261 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bd47d6-899e-46c2-99e0-570193964657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.2790] device (tap44469d8b-a0): carrier: link connected
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.283 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[28241856-9844-4ee0-ad07-316be51f5c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.297 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc577a2-b29a-4c26-9fdc-8d26ca6be7e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763279, 'reachable_time': 24170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336913, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.315 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a346dee2-f2ed-4edf-8338-27858c367f63]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:9820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763279, 'tstamp': 763279}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336914, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.328 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[795dc9d5-e5f6-47f2-b4fc-b70a15df2dcf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763279, 'reachable_time': 24170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336915, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.347 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0c08418f-adb1-4c70-99f4-effaa1e3ebe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.385 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[edc2498f-e8a8-4ec4-b8d2-8e288abe0f8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.386 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.386 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.387 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44469d8b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:07 np0005603621 NetworkManager[49013]: <info>  [1769848327.3892] manager: (tap44469d8b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Jan 31 03:32:07 np0005603621 kernel: tap44469d8b-a0: entered promiscuous mode
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.388 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.390 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.391 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44469d8b-a0, col_values=(('external_ids', {'iface-id': '7e288124-e200-4c03-8a4a-baab3e3f3d7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:07Z|00503|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.394 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.395 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ad764ab9-ae25-4e5d-9548-7f43fb2efde0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.396 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:32:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:07.396 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'env', 'PROCESS_TAG=haproxy-44469d8b-ad30-4270-88fa-e67c568f3150', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44469d8b-ad30-4270-88fa-e67c568f3150.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:32:07 np0005603621 nova_compute[247399]: 2026-01-31 08:32:07.398 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]: {
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:        "osd_id": 0,
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:        "type": "bluestore"
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]:    }
Jan 31 03:32:07 np0005603621 nice_montalcini[336843]: }
Jan 31 03:32:07 np0005603621 systemd[1]: libpod-a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29.scope: Deactivated successfully.
Jan 31 03:32:07 np0005603621 podman[336827]: 2026-01-31 08:32:07.577928826 +0000 UTC m=+0.940498939 container died a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:32:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f5ad676c099010f260eea57e897c480683af89437fd08ccd58b8e1ff4f630bb5-merged.mount: Deactivated successfully.
Jan 31 03:32:07 np0005603621 podman[336827]: 2026-01-31 08:32:07.652721118 +0000 UTC m=+1.015291191 container remove a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_montalcini, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:32:07 np0005603621 systemd[1]: libpod-conmon-a2ce526ba3d284b6e51287b87e56a591521936204223d46b2796f11313f71b29.scope: Deactivated successfully.
Jan 31 03:32:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:32:07 np0005603621 podman[336974]: 2026-01-31 08:32:07.747791442 +0000 UTC m=+0.059817418 container create ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:32:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:32:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:32:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:32:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0b2dd1d6-dc09-49bc-b26e-96ce9a6e6cbc does not exist
Jan 31 03:32:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b7210e3d-5d3d-4e8a-93dc-2ab25e388146 does not exist
Jan 31 03:32:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d92fdc35-ab55-4693-b93a-52a6e8a2a27c does not exist
Jan 31 03:32:07 np0005603621 systemd[1]: Started libpod-conmon-ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d.scope.
Jan 31 03:32:07 np0005603621 podman[336974]: 2026-01-31 08:32:07.71840746 +0000 UTC m=+0.030433456 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:32:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/159537e909dfe90952a24a970b2f6dc07c705d89a167cb4569c82912e4f6ebb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:07 np0005603621 podman[336974]: 2026-01-31 08:32:07.859471532 +0000 UTC m=+0.171497539 container init ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:32:07 np0005603621 podman[336974]: 2026-01-31 08:32:07.865653759 +0000 UTC m=+0.177679725 container start ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:32:07 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [NOTICE]   (337023) : New worker (337043) forked
Jan 31 03:32:07 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [NOTICE]   (337023) : Loading success.
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 411 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 111 KiB/s rd, 839 KiB/s wr, 38 op/s
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.355 247403 DEBUG nova.compute.manager [req-41d27d7b-3a89-4b76-a82e-5d6c3fb783c6 req-fd777b72-a437-4c15-9e5c-72dec2d22f74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.355 247403 DEBUG oslo_concurrency.lockutils [req-41d27d7b-3a89-4b76-a82e-5d6c3fb783c6 req-fd777b72-a437-4c15-9e5c-72dec2d22f74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.356 247403 DEBUG oslo_concurrency.lockutils [req-41d27d7b-3a89-4b76-a82e-5d6c3fb783c6 req-fd777b72-a437-4c15-9e5c-72dec2d22f74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.356 247403 DEBUG oslo_concurrency.lockutils [req-41d27d7b-3a89-4b76-a82e-5d6c3fb783c6 req-fd777b72-a437-4c15-9e5c-72dec2d22f74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.356 247403 DEBUG nova.compute.manager [req-41d27d7b-3a89-4b76-a82e-5d6c3fb783c6 req-fd777b72-a437-4c15-9e5c-72dec2d22f74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.356 247403 WARNING nova.compute.manager [req-41d27d7b-3a89-4b76-a82e-5d6c3fb783c6 req-fd777b72-a437-4c15-9e5c-72dec2d22f74 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.452 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848328.4516726, a15175ec-85fd-457c-870b-8a6d7c13c906 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.452 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.454 247403 DEBUG nova.compute.manager [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.456 247403 INFO nova.virt.libvirt.driver [-] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Instance running successfully.#033[00m
Jan 31 03:32:08 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.458 247403 DEBUG nova.virt.libvirt.guest [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.459 247403 DEBUG nova.virt.libvirt.driver [None req-aa86097b-9337-407f-9ec4-ac81e4a5ff8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.517 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.522 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.593 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.593 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848328.453723, a15175ec-85fd-457c-870b-8a6d7c13c906 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.594 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] VM Started (Lifecycle Event)#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.805 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:08 np0005603621 nova_compute[247399]: 2026-01-31 08:32:08.808 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:32:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:32:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:09.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:09 np0005603621 nova_compute[247399]: 2026-01-31 08:32:09.410 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:32:09 np0005603621 nova_compute[247399]: 2026-01-31 08:32:09.806 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:09 np0005603621 podman[337139]: 2026-01-31 08:32:09.885907889 +0000 UTC m=+0.054812418 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:32:09 np0005603621 podman[337140]: 2026-01-31 08:32:09.910029684 +0000 UTC m=+0.077762056 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:32:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 411 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 111 KiB/s rd, 839 KiB/s wr, 38 op/s
Jan 31 03:32:10 np0005603621 nova_compute[247399]: 2026-01-31 08:32:10.928 247403 DEBUG nova.compute.manager [req-f7bbbae5-31fd-46dd-a61e-77e896054b84 req-c217e6a1-018d-412c-8962-bcf30f321d35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:10 np0005603621 nova_compute[247399]: 2026-01-31 08:32:10.929 247403 DEBUG oslo_concurrency.lockutils [req-f7bbbae5-31fd-46dd-a61e-77e896054b84 req-c217e6a1-018d-412c-8962-bcf30f321d35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:10 np0005603621 nova_compute[247399]: 2026-01-31 08:32:10.929 247403 DEBUG oslo_concurrency.lockutils [req-f7bbbae5-31fd-46dd-a61e-77e896054b84 req-c217e6a1-018d-412c-8962-bcf30f321d35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:10 np0005603621 nova_compute[247399]: 2026-01-31 08:32:10.930 247403 DEBUG oslo_concurrency.lockutils [req-f7bbbae5-31fd-46dd-a61e-77e896054b84 req-c217e6a1-018d-412c-8962-bcf30f321d35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:10 np0005603621 nova_compute[247399]: 2026-01-31 08:32:10.930 247403 DEBUG nova.compute.manager [req-f7bbbae5-31fd-46dd-a61e-77e896054b84 req-c217e6a1-018d-412c-8962-bcf30f321d35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:10 np0005603621 nova_compute[247399]: 2026-01-31 08:32:10.930 247403 WARNING nova.compute.manager [req-f7bbbae5-31fd-46dd-a61e-77e896054b84 req-c217e6a1-018d-412c-8962-bcf30f321d35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:32:11 np0005603621 nova_compute[247399]: 2026-01-31 08:32:11.088 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:11.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 03:32:11 np0005603621 nova_compute[247399]: 2026-01-31 08:32:11.919 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:12.178 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:12.179 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:32:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 436 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 166 op/s
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.824 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.825 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.860 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.966 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.967 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.974 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:32:12 np0005603621 nova_compute[247399]: 2026-01-31 08:32:12.975 247403 INFO nova.compute.claims [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.082 247403 DEBUG nova.scheduler.client.report [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:32:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:13.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.111 247403 DEBUG nova.scheduler.client.report [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.112 247403 DEBUG nova.compute.provider_tree [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.135 247403 DEBUG nova.scheduler.client.report [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.168 247403 DEBUG nova.scheduler.client.report [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:32:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.248 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327156697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.661 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.665 247403 DEBUG nova.compute.provider_tree [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.714 247403 DEBUG nova.scheduler.client.report [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.799 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.800 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.998 247403 DEBUG nova.network.neutron [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Port 02df5608-7a85-4d54-b5ac-628d6c8e8179 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.999 247403 DEBUG oslo_concurrency.lockutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:32:13 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.999 247403 DEBUG oslo_concurrency.lockutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:13.999 247403 DEBUG nova.network.neutron [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:14.004 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:14.004 247403 DEBUG nova.network.neutron [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:14.212 247403 INFO nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:32:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 188 op/s
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:14.354 247403 DEBUG nova.policy [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb3f20f0143d465ebfe98f6a13200890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '40db421b27d84f809f8074c58151327f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:14.615 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:32:14 np0005603621 nova_compute[247399]: 2026-01-31 08:32:14.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:32:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458707067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:32:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:32:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458707067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:32:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:15.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:32:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:15.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:32:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:15.182 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.063 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.064 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.065 247403 INFO nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Creating image(s)#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.178 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.207 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.238 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.243 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.313 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.316 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.317 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.318 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.388 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.391 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 4914607d-e887-4890-a7dd-9d020c2104b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.832 247403 DEBUG nova.network.neutron [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Successfully created port: 2f175855-d786-42a8-81b9-d274c83adeac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:32:16 np0005603621 nova_compute[247399]: 2026-01-31 08:32:16.921 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:17.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:17.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.176 247403 DEBUG nova.network.neutron [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating instance_info_cache with network_info: [{"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.242 247403 DEBUG oslo_concurrency.lockutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 31 03:32:18 np0005603621 kernel: tap02df5608-7a (unregistering): left promiscuous mode
Jan 31 03:32:18 np0005603621 NetworkManager[49013]: <info>  [1769848338.3944] device (tap02df5608-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:32:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:18Z|00504|binding|INFO|Releasing lport 02df5608-7a85-4d54-b5ac-628d6c8e8179 from this chassis (sb_readonly=0)
Jan 31 03:32:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:18Z|00505|binding|INFO|Setting lport 02df5608-7a85-4d54-b5ac-628d6c8e8179 down in Southbound
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.403 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:18Z|00506|binding|INFO|Removing iface tap02df5608-7a ovn-installed in OVS
Jan 31 03:32:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:18.411 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:59:a9 10.100.0.10'], port_security=['fa:16:3e:dd:59:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'a15175ec-85fd-457c-870b-8a6d7c13c906', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5b1bd8ad-0d2a-4d57-a00a-9a6b59df86e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=02df5608-7a85-4d54-b5ac-628d6c8e8179) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:18.412 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 02df5608-7a85-4d54-b5ac-628d6c8e8179 in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 unbound from our chassis#033[00m
Jan 31 03:32:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:18.414 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44469d8b-ad30-4270-88fa-e67c568f3150, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.417 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:18.415 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[05081486-cfdc-4aed-8c5e-7bf90ab992a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:18.416 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace which is not needed anymore#033[00m
Jan 31 03:32:18 np0005603621 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 31 03:32:18 np0005603621 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000080.scope: Consumed 11.107s CPU time.
Jan 31 03:32:18 np0005603621 systemd-machined[212769]: Machine qemu-61-instance-00000080 terminated.
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.509 247403 INFO nova.virt.libvirt.driver [-] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Instance destroyed successfully.#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.509 247403 DEBUG nova.objects.instance [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'resources' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:18 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [NOTICE]   (337023) : haproxy version is 2.8.14-c23fe91
Jan 31 03:32:18 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [NOTICE]   (337023) : path to executable is /usr/sbin/haproxy
Jan 31 03:32:18 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [WARNING]  (337023) : Exiting Master process...
Jan 31 03:32:18 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [ALERT]    (337023) : Current worker (337043) exited with code 143 (Terminated)
Jan 31 03:32:18 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[336993]: [WARNING]  (337023) : All workers exited. Exiting... (0)
Jan 31 03:32:18 np0005603621 systemd[1]: libpod-ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d.scope: Deactivated successfully.
Jan 31 03:32:18 np0005603621 podman[337356]: 2026-01-31 08:32:18.547297885 +0000 UTC m=+0.046750363 container died ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:32:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:32:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-159537e909dfe90952a24a970b2f6dc07c705d89a167cb4569c82912e4f6ebb7-merged.mount: Deactivated successfully.
Jan 31 03:32:18 np0005603621 podman[337356]: 2026-01-31 08:32:18.600319776 +0000 UTC m=+0.099772254 container cleanup ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:32:18 np0005603621 systemd[1]: libpod-conmon-ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d.scope: Deactivated successfully.
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.616 247403 DEBUG nova.virt.libvirt.vif [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:30:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2097097080',display_name='tempest-ServerActionsTestOtherB-server-2097097080',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2097097080',id=128,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:32:10Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-41gbj3yx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:32:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=a15175ec-85fd-457c-870b-8a6d7c13c906,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", 
"subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.617 247403 DEBUG nova.network.os_vif_util [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.618 247403 DEBUG nova.network.os_vif_util [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.619 247403 DEBUG os_vif [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.621 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.621 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02df5608-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.623 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.625 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:18 np0005603621 nova_compute[247399]: 2026-01-31 08:32:18.627 247403 INFO os_vif [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dd:59:a9,bridge_name='br-int',has_traffic_filtering=True,id=02df5608-7a85-4d54-b5ac-628d6c8e8179,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap02df5608-7a')#033[00m
Jan 31 03:32:19 np0005603621 podman[337391]: 2026-01-31 08:32:19.031027951 +0000 UTC m=+0.417165147 container remove ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.036 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[816eaf80-1c2f-4e6c-8313-9a8819163c09]: (4, ('Sat Jan 31 08:32:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d)\nae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d\nSat Jan 31 08:32:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (ae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d)\nae20fc1d4fe4dbcce972f98438285df33ff32f6f34aa25cbb6e38e575fc44c1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.037 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[792cd03b-e3ee-497e-bf06-3a7a94cb024e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.038 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.040 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:19 np0005603621 kernel: tap44469d8b-a0: left promiscuous mode
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.046 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.049 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7d27bd5d-cb5c-43d5-9929-bec79b7e0b14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.068 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[59a42cdb-615f-4e2e-83ae-908ab69e12d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.070 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ce956384-0bbc-4297-a864-33ab90d6cef9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.084 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c6352c22-0e0c-4d08-af1f-ffb25a6291cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763273, 'reachable_time': 35944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337406, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.087 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:32:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:19.087 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[72e26f46-ee24-497c-b490-7623d6223d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:19 np0005603621 systemd[1]: run-netns-ovnmeta\x2d44469d8b\x2dad30\x2d4270\x2d88fa\x2de67c568f3150.mount: Deactivated successfully.
Jan 31 03:32:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:19.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:19.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.455 247403 DEBUG oslo_concurrency.lockutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.455 247403 DEBUG oslo_concurrency.lockutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.638 247403 DEBUG nova.objects.instance [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'migration_context' on Instance uuid a15175ec-85fd-457c-870b-8a6d7c13c906 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.809 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.915 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.966 247403 DEBUG nova.compute.manager [req-3c72f3db-0840-4239-a18e-855945238e68 req-95d9a412-4d08-468a-8b26-59b82390c244 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-unplugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.966 247403 DEBUG oslo_concurrency.lockutils [req-3c72f3db-0840-4239-a18e-855945238e68 req-95d9a412-4d08-468a-8b26-59b82390c244 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.967 247403 DEBUG oslo_concurrency.lockutils [req-3c72f3db-0840-4239-a18e-855945238e68 req-95d9a412-4d08-468a-8b26-59b82390c244 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.967 247403 DEBUG oslo_concurrency.lockutils [req-3c72f3db-0840-4239-a18e-855945238e68 req-95d9a412-4d08-468a-8b26-59b82390c244 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.967 247403 DEBUG nova.compute.manager [req-3c72f3db-0840-4239-a18e-855945238e68 req-95d9a412-4d08-468a-8b26-59b82390c244 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-unplugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:19 np0005603621 nova_compute[247399]: 2026-01-31 08:32:19.967 247403 WARNING nova.compute.manager [req-3c72f3db-0840-4239-a18e-855945238e68 req-95d9a412-4d08-468a-8b26-59b82390c244 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-unplugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state resized and task_state resize_reverting.#033[00m
Jan 31 03:32:20 np0005603621 nova_compute[247399]: 2026-01-31 08:32:20.204 247403 DEBUG oslo_concurrency.processutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 127 op/s
Jan 31 03:32:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/143906672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:20 np0005603621 nova_compute[247399]: 2026-01-31 08:32:20.644 247403 DEBUG oslo_concurrency.processutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:20 np0005603621 nova_compute[247399]: 2026-01-31 08:32:20.650 247403 DEBUG nova.compute.provider_tree [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:32:20 np0005603621 nova_compute[247399]: 2026-01-31 08:32:20.742 247403 DEBUG nova.scheduler.client.report [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:32:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:21.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:21 np0005603621 nova_compute[247399]: 2026-01-31 08:32:21.937 247403 DEBUG oslo_concurrency.lockutils [None req-8164fb27-073f-45ea-84c4-9cbcd9d0826b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.116 247403 DEBUG nova.network.neutron [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Successfully updated port: 2f175855-d786-42a8-81b9-d274c83adeac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:32:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 128 op/s
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.448 247403 DEBUG nova.compute.manager [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.448 247403 DEBUG oslo_concurrency.lockutils [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.449 247403 DEBUG oslo_concurrency.lockutils [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.449 247403 DEBUG oslo_concurrency.lockutils [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.449 247403 DEBUG nova.compute.manager [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.449 247403 WARNING nova.compute.manager [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state resized and task_state resize_reverting.#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.450 247403 DEBUG nova.compute.manager [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-changed-2f175855-d786-42a8-81b9-d274c83adeac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.450 247403 DEBUG nova.compute.manager [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Refreshing instance network info cache due to event network-changed-2f175855-d786-42a8-81b9-d274c83adeac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.450 247403 DEBUG oslo_concurrency.lockutils [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-4914607d-e887-4890-a7dd-9d020c2104b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.450 247403 DEBUG oslo_concurrency.lockutils [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-4914607d-e887-4890-a7dd-9d020c2104b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.451 247403 DEBUG nova.network.neutron [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Refreshing network info cache for port 2f175855-d786-42a8-81b9-d274c83adeac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:32:22 np0005603621 nova_compute[247399]: 2026-01-31 08:32:22.466 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "refresh_cache-4914607d-e887-4890-a7dd-9d020c2104b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:32:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:23.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:23 np0005603621 nova_compute[247399]: 2026-01-31 08:32:23.386 247403 DEBUG nova.network.neutron [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:32:23 np0005603621 nova_compute[247399]: 2026-01-31 08:32:23.625 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:23 np0005603621 nova_compute[247399]: 2026-01-31 08:32:23.994 247403 DEBUG nova.network.neutron [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:24 np0005603621 nova_compute[247399]: 2026-01-31 08:32:24.063 247403 DEBUG oslo_concurrency.lockutils [req-8891d946-9608-4206-9af9-04a676c1a601 req-e331167b-71ad-4455-91c6-eb26b0d16f2c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-4914607d-e887-4890-a7dd-9d020c2104b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:24 np0005603621 nova_compute[247399]: 2026-01-31 08:32:24.063 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquired lock "refresh_cache-4914607d-e887-4890-a7dd-9d020c2104b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:24 np0005603621 nova_compute[247399]: 2026-01-31 08:32:24.063 247403 DEBUG nova.network.neutron [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:32:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 290 KiB/s rd, 47 KiB/s wr, 20 op/s
Jan 31 03:32:24 np0005603621 nova_compute[247399]: 2026-01-31 08:32:24.716 247403 DEBUG nova.network.neutron [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:32:24 np0005603621 nova_compute[247399]: 2026-01-31 08:32:24.811 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:25.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:26 np0005603621 nova_compute[247399]: 2026-01-31 08:32:26.235 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 4914607d-e887-4890-a7dd-9d020c2104b9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 9.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 472 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.2 MiB/s wr, 18 op/s
Jan 31 03:32:26 np0005603621 nova_compute[247399]: 2026-01-31 08:32:26.319 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] resizing rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:32:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:27.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:27 np0005603621 nova_compute[247399]: 2026-01-31 08:32:27.299 247403 DEBUG nova.network.neutron [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Updating instance_info_cache with network_info: [{"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:27 np0005603621 nova_compute[247399]: 2026-01-31 08:32:27.405 247403 DEBUG nova.compute.manager [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-changed-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:27 np0005603621 nova_compute[247399]: 2026-01-31 08:32:27.405 247403 DEBUG nova.compute.manager [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Refreshing instance network info cache due to event network-changed-02df5608-7a85-4d54-b5ac-628d6c8e8179. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:32:27 np0005603621 nova_compute[247399]: 2026-01-31 08:32:27.405 247403 DEBUG oslo_concurrency.lockutils [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:32:27 np0005603621 nova_compute[247399]: 2026-01-31 08:32:27.405 247403 DEBUG oslo_concurrency.lockutils [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:32:27 np0005603621 nova_compute[247399]: 2026-01-31 08:32:27.406 247403 DEBUG nova.network.neutron [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Refreshing network info cache for port 02df5608-7a85-4d54-b5ac-628d6c8e8179 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:32:28 np0005603621 nova_compute[247399]: 2026-01-31 08:32:28.176 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Releasing lock "refresh_cache-4914607d-e887-4890-a7dd-9d020c2104b9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:28 np0005603621 nova_compute[247399]: 2026-01-31 08:32:28.176 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Instance network_info: |[{"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:32:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Jan 31 03:32:28 np0005603621 nova_compute[247399]: 2026-01-31 08:32:28.671 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:29.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.426 247403 DEBUG nova.objects.instance [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'migration_context' on Instance uuid 4914607d-e887-4890-a7dd-9d020c2104b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.795 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.796 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Ensure instance console log exists: /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.796 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.796 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.797 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.798 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Start _get_guest_xml network_info=[{"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.802 247403 WARNING nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.810 247403 DEBUG nova.virt.libvirt.host [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.811 247403 DEBUG nova.virt.libvirt.host [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.813 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.815 247403 DEBUG nova.virt.libvirt.host [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.816 247403 DEBUG nova.virt.libvirt.host [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.817 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.817 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.818 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.818 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.819 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.819 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.819 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.819 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.820 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.820 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.820 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.820 247403 DEBUG nova.virt.hardware [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:32:29 np0005603621 nova_compute[247399]: 2026-01-31 08:32:29.824 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:32:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3719394546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.272 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.295 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.298 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:30.513 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:30.514 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:30.514 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:32:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028847623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.839 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.841 247403 DEBUG nova.virt.libvirt.vif [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:32:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1969929545',display_name='tempest-ServersTestJSON-server-1969929545',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1969929545',id=132,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPllj1PzTA4aI2C2vdRzmYzFE+q+40xMYsNLNHhuI2MsNoGAy3qBRcov6m8Oy3gHIcufC+QZfdZcURYcjbAEKzSfX8DNUmWOpzxMOaunggTGxYbKzV6n6v+uYPvNGf84zQ==',key_name='tempest-key-24355983',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-tdl821sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:32:15Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=4914607d-e887-4890-a7dd-9d020c2104b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.841 247403 DEBUG nova.network.os_vif_util [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.843 247403 DEBUG nova.network.os_vif_util [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.845 247403 DEBUG nova.objects.instance [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4914607d-e887-4890-a7dd-9d020c2104b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.949 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <uuid>4914607d-e887-4890-a7dd-9d020c2104b9</uuid>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <name>instance-00000084</name>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersTestJSON-server-1969929545</nova:name>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:32:29</nova:creationTime>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:user uuid="fb3f20f0143d465ebfe98f6a13200890">tempest-ServersTestJSON-1064072764-project-member</nova:user>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:project uuid="40db421b27d84f809f8074c58151327f">tempest-ServersTestJSON-1064072764</nova:project>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <nova:port uuid="2f175855-d786-42a8-81b9-d274c83adeac">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <entry name="serial">4914607d-e887-4890-a7dd-9d020c2104b9</entry>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <entry name="uuid">4914607d-e887-4890-a7dd-9d020c2104b9</entry>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/4914607d-e887-4890-a7dd-9d020c2104b9_disk">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/4914607d-e887-4890-a7dd-9d020c2104b9_disk.config">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:dd:e1:03"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <target dev="tap2f175855-d7"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/console.log" append="off"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:32:30 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:32:30 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:32:30 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:32:30 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.950 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Preparing to wait for external event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.950 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.951 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.951 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.952 247403 DEBUG nova.virt.libvirt.vif [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:32:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1969929545',display_name='tempest-ServersTestJSON-server-1969929545',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1969929545',id=132,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPllj1PzTA4aI2C2vdRzmYzFE+q+40xMYsNLNHhuI2MsNoGAy3qBRcov6m8Oy3gHIcufC+QZfdZcURYcjbAEKzSfX8DNUmWOpzxMOaunggTGxYbKzV6n6v+uYPvNGf84zQ==',key_name='tempest-key-24355983',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-tdl821sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:32:15Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=4914607d-e887-4890-a7dd-9d020c2104b9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.952 247403 DEBUG nova.network.os_vif_util [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.953 247403 DEBUG nova.network.os_vif_util [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.954 247403 DEBUG os_vif [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.955 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.955 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.955 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.958 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.958 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f175855-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.958 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f175855-d7, col_values=(('external_ids', {'iface-id': '2f175855-d786-42a8-81b9-d274c83adeac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:e1:03', 'vm-uuid': '4914607d-e887-4890-a7dd-9d020c2104b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.959 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:30 np0005603621 NetworkManager[49013]: <info>  [1769848350.9607] manager: (tap2f175855-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:32:30 np0005603621 nova_compute[247399]: 2026-01-31 08:32:30.965 247403 INFO os_vif [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7')#033[00m
Jan 31 03:32:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:31.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:31.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.332 247403 DEBUG nova.network.neutron [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updated VIF entry in instance network info cache for port 02df5608-7a85-4d54-b5ac-628d6c8e8179. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.333 247403 DEBUG nova.network.neutron [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Updating instance_info_cache with network_info: [{"id": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "address": "fa:16:3e:dd:59:a9", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap02df5608-7a", "ovs_interfaceid": "02df5608-7a85-4d54-b5ac-628d6c8e8179", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.412 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.413 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.413 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No VIF found with MAC fa:16:3e:dd:e1:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.414 247403 INFO nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Using config drive#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.436 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:31 np0005603621 nova_compute[247399]: 2026-01-31 08:32:31.555 247403 DEBUG oslo_concurrency.lockutils [req-b3d048e7-a622-4fa9-958f-2eab5483d904 req-8a47665f-bec5-4725-a079-281023b9a616 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-a15175ec-85fd-457c-870b-8a6d7c13c906" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:32:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 31 03:32:32 np0005603621 nova_compute[247399]: 2026-01-31 08:32:32.336 247403 INFO nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Creating config drive at /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/disk.config#033[00m
Jan 31 03:32:32 np0005603621 nova_compute[247399]: 2026-01-31 08:32:32.341 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzvpehtem execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:32 np0005603621 nova_compute[247399]: 2026-01-31 08:32:32.471 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpzvpehtem" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:32 np0005603621 nova_compute[247399]: 2026-01-31 08:32:32.497 247403 DEBUG nova.storage.rbd_utils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 4914607d-e887-4890-a7dd-9d020c2104b9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:32:32 np0005603621 nova_compute[247399]: 2026-01-31 08:32:32.501 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/disk.config 4914607d-e887-4890-a7dd-9d020c2104b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:33.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:32:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:33.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:32:33 np0005603621 nova_compute[247399]: 2026-01-31 08:32:33.508 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848338.5068617, a15175ec-85fd-457c-870b-8a6d7c13c906 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:33 np0005603621 nova_compute[247399]: 2026-01-31 08:32:33.508 247403 INFO nova.compute.manager [-] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:32:33 np0005603621 nova_compute[247399]: 2026-01-31 08:32:33.830 247403 DEBUG nova.compute.manager [None req-bdc366b2-de64-454b-b81b-e1e5e48fe34d - - - - - -] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 496 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 MiB/s wr, 31 op/s
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.526 247403 DEBUG oslo_concurrency.processutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/disk.config 4914607d-e887-4890-a7dd-9d020c2104b9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.527 247403 INFO nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Deleting local config drive /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9/disk.config because it was imported into RBD.#033[00m
Jan 31 03:32:34 np0005603621 kernel: tap2f175855-d7: entered promiscuous mode
Jan 31 03:32:34 np0005603621 NetworkManager[49013]: <info>  [1769848354.5747] manager: (tap2f175855-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Jan 31 03:32:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:34Z|00507|binding|INFO|Claiming lport 2f175855-d786-42a8-81b9-d274c83adeac for this chassis.
Jan 31 03:32:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:34Z|00508|binding|INFO|2f175855-d786-42a8-81b9-d274c83adeac: Claiming fa:16:3e:dd:e1:03 10.100.0.10
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.575 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:34Z|00509|binding|INFO|Setting lport 2f175855-d786-42a8-81b9-d274c83adeac ovn-installed in OVS
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.584 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.587 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 systemd-udevd[337695]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:32:34 np0005603621 NetworkManager[49013]: <info>  [1769848354.6124] device (tap2f175855-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:32:34 np0005603621 NetworkManager[49013]: <info>  [1769848354.6135] device (tap2f175855-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:32:34 np0005603621 systemd-machined[212769]: New machine qemu-62-instance-00000084.
Jan 31 03:32:34 np0005603621 systemd[1]: Started Virtual Machine qemu-62-instance-00000084.
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.702 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:e1:03 10.100.0.10'], port_security=['fa:16:3e:dd:e1:03 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4914607d-e887-4890-a7dd-9d020c2104b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=2f175855-d786-42a8-81b9-d274c83adeac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:34Z|00510|binding|INFO|Setting lport 2f175855-d786-42a8-81b9-d274c83adeac up in Southbound
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.703 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 2f175855-d786-42a8-81b9-d274c83adeac in datapath f6071a46-64a6-45aa-97c6-06e6c564195b bound to our chassis#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.705 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6071a46-64a6-45aa-97c6-06e6c564195b#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.714 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80efdc6b-d2c9-4164-a32f-cb02f5d062c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.715 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6071a46-61 in ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.717 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6071a46-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.718 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6d76dd-ee98-4a43-b706-20390fa245d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.718 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[661c37ee-5822-45fc-ab59-94167579e43d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.728 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[89debae4-c85a-4fdf-827f-7b76af06a5d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.737 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1b49ebb9-1daa-4578-8c93-140ddb3cc23b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.761 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[314c26dd-ea62-422a-9d8c-a71410ebe5e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 NetworkManager[49013]: <info>  [1769848354.7675] manager: (tapf6071a46-60): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Jan 31 03:32:34 np0005603621 systemd-udevd[337700]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.769 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2920e659-d7bb-46c0-b88e-d7628d31dd50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.802 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[107321cb-9d8c-40f4-b453-ad701cfa66bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.806 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[436c4e76-488a-4903-8edc-6d24588d93a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.814 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 NetworkManager[49013]: <info>  [1769848354.8304] device (tapf6071a46-60): carrier: link connected
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.841 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[13ab8e5e-9978-4f33-8c9e-2ec1ae79a9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.856 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8c267c-12ef-4680-bfb6-a4b5466f36e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766034, 'reachable_time': 16879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337731, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.867 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[586d17e8-68d4-4ea3-9fb8-dcca1ba80096]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:8c48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766034, 'tstamp': 766034}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337732, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.878 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[122b55e0-5fe9-4de9-aeef-b00bbcdac3c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766034, 'reachable_time': 16879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337733, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.901 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6a70e685-6d40-4eed-bdc8-b79762f37ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.948 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[efa3122a-1d63-4473-a631-ac75101bfb56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.949 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.949 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.950 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6071a46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:34 np0005603621 NetworkManager[49013]: <info>  [1769848354.9520] manager: (tapf6071a46-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.951 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 kernel: tapf6071a46-60: entered promiscuous mode
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.955 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.956 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6071a46-60, col_values=(('external_ids', {'iface-id': 'e9a7861c-c6ea-4166-9252-dc2aacdf4771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:34Z|00511|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 nova_compute[247399]: 2026-01-31 08:32:34.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.965 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.965 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4a16fd-5fcc-43ba-a2ed-4039dbac8891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.966 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f6071a46-64a6-45aa-97c6-06e6c564195b
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f6071a46-64a6-45aa-97c6-06e6c564195b
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:32:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:34.967 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'env', 'PROCESS_TAG=haproxy-f6071a46-64a6-45aa-97c6-06e6c564195b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6071a46-64a6-45aa-97c6-06e6c564195b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:32:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:35.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:35.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:32:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2561903065' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:32:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:35 np0005603621 podman[337765]: 2026-01-31 08:32:35.253305943 +0000 UTC m=+0.022304028 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:32:35 np0005603621 podman[337765]: 2026-01-31 08:32:35.642037127 +0000 UTC m=+0.411035192 container create ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:32:35 np0005603621 systemd[1]: Started libpod-conmon-ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e.scope.
Jan 31 03:32:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:32:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36132a5f565632c2c05443201999eb21962e5c347226f55194ac742e301038f0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:32:35 np0005603621 podman[337765]: 2026-01-31 08:32:35.937287818 +0000 UTC m=+0.706285893 container init ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:32:35 np0005603621 podman[337765]: 2026-01-31 08:32:35.941975727 +0000 UTC m=+0.710973782 container start ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:32:35 np0005603621 nova_compute[247399]: 2026-01-31 08:32:35.959 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848355.9584615, 4914607d-e887-4890-a7dd-9d020c2104b9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:35 np0005603621 nova_compute[247399]: 2026-01-31 08:32:35.959 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] VM Started (Lifecycle Event)#033[00m
Jan 31 03:32:35 np0005603621 nova_compute[247399]: 2026-01-31 08:32:35.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:35 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [NOTICE]   (337827) : New worker (337829) forked
Jan 31 03:32:35 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [NOTICE]   (337827) : Loading success.
Jan 31 03:32:36 np0005603621 nova_compute[247399]: 2026-01-31 08:32:36.044 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:36 np0005603621 nova_compute[247399]: 2026-01-31 08:32:36.049 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848355.9607623, 4914607d-e887-4890-a7dd-9d020c2104b9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:36 np0005603621 nova_compute[247399]: 2026-01-31 08:32:36.049 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:32:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 31 03:32:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 3.6 MiB/s wr, 55 op/s
Jan 31 03:32:36 np0005603621 nova_compute[247399]: 2026-01-31 08:32:36.333 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:36 np0005603621 nova_compute[247399]: 2026-01-31 08:32:36.337 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:36 np0005603621 nova_compute[247399]: 2026-01-31 08:32:36.551 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:32:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 31 03:32:36 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 31 03:32:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:37.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.701 247403 DEBUG nova.compute.manager [req-1d7b591a-3f25-4912-9345-b8b6b465b715 req-df460512-4858-46d9-9b18-b59dd81e8e3d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.702 247403 DEBUG oslo_concurrency.lockutils [req-1d7b591a-3f25-4912-9345-b8b6b465b715 req-df460512-4858-46d9-9b18-b59dd81e8e3d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.702 247403 DEBUG oslo_concurrency.lockutils [req-1d7b591a-3f25-4912-9345-b8b6b465b715 req-df460512-4858-46d9-9b18-b59dd81e8e3d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.703 247403 DEBUG oslo_concurrency.lockutils [req-1d7b591a-3f25-4912-9345-b8b6b465b715 req-df460512-4858-46d9-9b18-b59dd81e8e3d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.703 247403 DEBUG nova.compute.manager [req-1d7b591a-3f25-4912-9345-b8b6b465b715 req-df460512-4858-46d9-9b18-b59dd81e8e3d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Processing event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.704 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.709 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848357.708863, 4914607d-e887-4890-a7dd-9d020c2104b9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.709 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.713 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.717 247403 INFO nova.virt.libvirt.driver [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Instance spawned successfully.#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.717 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.920 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.924 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.935 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.935 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.936 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.936 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.937 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:32:37 np0005603621 nova_compute[247399]: 2026-01-31 08:32:37.937 247403 DEBUG nova.virt.libvirt.driver [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 03:32:38 np0005603621 nova_compute[247399]: 2026-01-31 08:32:38.472 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:32:38
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'vms', 'images', 'volumes', 'default.rgw.control']
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:32:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:32:39 np0005603621 nova_compute[247399]: 2026-01-31 08:32:39.041 247403 INFO nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Took 22.98 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:32:39 np0005603621 nova_compute[247399]: 2026-01-31 08:32:39.042 247403 DEBUG nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:39.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:39.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:39 np0005603621 nova_compute[247399]: 2026-01-31 08:32:39.349 247403 INFO nova.compute.manager [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Took 26.42 seconds to build instance.#033[00m
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.461317) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848359461355, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1920, "num_deletes": 253, "total_data_size": 3293268, "memory_usage": 3351584, "flush_reason": "Manual Compaction"}
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848359552376, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3233228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51910, "largest_seqno": 53829, "table_properties": {"data_size": 3224598, "index_size": 5252, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18800, "raw_average_key_size": 20, "raw_value_size": 3207014, "raw_average_value_size": 3520, "num_data_blocks": 229, "num_entries": 911, "num_filter_entries": 911, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848176, "oldest_key_time": 1769848176, "file_creation_time": 1769848359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 91153 microseconds, and 6201 cpu microseconds.
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:39 np0005603621 nova_compute[247399]: 2026-01-31 08:32:39.557 247403 DEBUG oslo_concurrency.lockutils [None req-152a3bb7-c620-4ca7-9ebc-7e289e31fbfc fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.552459) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3233228 bytes OK
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.552493) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.582138) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.582196) EVENT_LOG_v1 {"time_micros": 1769848359582184, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.582228) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3285227, prev total WAL file size 3285227, number of live WAL files 2.
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.583110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3157KB)], [116(10058KB)]
Jan 31 03:32:39 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848359583225, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 13533004, "oldest_snapshot_seqno": -1}
Jan 31 03:32:39 np0005603621 nova_compute[247399]: 2026-01-31 08:32:39.816 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8210 keys, 11456128 bytes, temperature: kUnknown
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848360089932, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11456128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11402326, "index_size": 32186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 212699, "raw_average_key_size": 25, "raw_value_size": 11257484, "raw_average_value_size": 1371, "num_data_blocks": 1261, "num_entries": 8210, "num_filter_entries": 8210, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.090544) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11456128 bytes
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.141665) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 26.7 rd, 22.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 8736, records dropped: 526 output_compression: NoCompression
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.141713) EVENT_LOG_v1 {"time_micros": 1769848360141697, "job": 70, "event": "compaction_finished", "compaction_time_micros": 507028, "compaction_time_cpu_micros": 37905, "output_level": 6, "num_output_files": 1, "total_output_size": 11456128, "num_input_records": 8736, "num_output_records": 8210, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848360142294, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848360143376, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:39.582959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.143467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.143478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.143482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.143486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:32:40.143490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:32:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 03:32:40 np0005603621 podman[337841]: 2026-01-31 08:32:40.494139351 +0000 UTC m=+0.049175551 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:32:40 np0005603621 podman[337842]: 2026-01-31 08:32:40.524820854 +0000 UTC m=+0.078494690 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:32:40 np0005603621 nova_compute[247399]: 2026-01-31 08:32:40.962 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:41.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:41.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:41 np0005603621 nova_compute[247399]: 2026-01-31 08:32:41.518 247403 DEBUG nova.compute.manager [req-b0072070-eff4-4cdd-a959-1f46696afe1c req-36e6ec39-f5f6-4330-b0fa-37debbeb667c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:41 np0005603621 nova_compute[247399]: 2026-01-31 08:32:41.519 247403 DEBUG oslo_concurrency.lockutils [req-b0072070-eff4-4cdd-a959-1f46696afe1c req-36e6ec39-f5f6-4330-b0fa-37debbeb667c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:41 np0005603621 nova_compute[247399]: 2026-01-31 08:32:41.519 247403 DEBUG oslo_concurrency.lockutils [req-b0072070-eff4-4cdd-a959-1f46696afe1c req-36e6ec39-f5f6-4330-b0fa-37debbeb667c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:41 np0005603621 nova_compute[247399]: 2026-01-31 08:32:41.519 247403 DEBUG oslo_concurrency.lockutils [req-b0072070-eff4-4cdd-a959-1f46696afe1c req-36e6ec39-f5f6-4330-b0fa-37debbeb667c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:41 np0005603621 nova_compute[247399]: 2026-01-31 08:32:41.519 247403 DEBUG nova.compute.manager [req-b0072070-eff4-4cdd-a959-1f46696afe1c req-36e6ec39-f5f6-4330-b0fa-37debbeb667c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] No waiting events found dispatching network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:41 np0005603621 nova_compute[247399]: 2026-01-31 08:32:41.519 247403 WARNING nova.compute.manager [req-b0072070-eff4-4cdd-a959-1f46696afe1c req-36e6ec39-f5f6-4330-b0fa-37debbeb667c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received unexpected event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac for instance with vm_state active and task_state None.#033[00m
Jan 31 03:32:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 115 op/s
Jan 31 03:32:42 np0005603621 nova_compute[247399]: 2026-01-31 08:32:42.733 247403 DEBUG nova.compute.manager [req-64ba190d-478a-49cc-9cf9-39d1177d9758 req-82298b42-6e03-456f-bd8e-cab36bcb18a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:42 np0005603621 nova_compute[247399]: 2026-01-31 08:32:42.733 247403 DEBUG oslo_concurrency.lockutils [req-64ba190d-478a-49cc-9cf9-39d1177d9758 req-82298b42-6e03-456f-bd8e-cab36bcb18a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:42 np0005603621 nova_compute[247399]: 2026-01-31 08:32:42.734 247403 DEBUG oslo_concurrency.lockutils [req-64ba190d-478a-49cc-9cf9-39d1177d9758 req-82298b42-6e03-456f-bd8e-cab36bcb18a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:42 np0005603621 nova_compute[247399]: 2026-01-31 08:32:42.734 247403 DEBUG oslo_concurrency.lockutils [req-64ba190d-478a-49cc-9cf9-39d1177d9758 req-82298b42-6e03-456f-bd8e-cab36bcb18a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a15175ec-85fd-457c-870b-8a6d7c13c906-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:42 np0005603621 nova_compute[247399]: 2026-01-31 08:32:42.734 247403 DEBUG nova.compute.manager [req-64ba190d-478a-49cc-9cf9-39d1177d9758 req-82298b42-6e03-456f-bd8e-cab36bcb18a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] No waiting events found dispatching network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:42 np0005603621 nova_compute[247399]: 2026-01-31 08:32:42.734 247403 WARNING nova.compute.manager [req-64ba190d-478a-49cc-9cf9-39d1177d9758 req-82298b42-6e03-456f-bd8e-cab36bcb18a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a15175ec-85fd-457c-870b-8a6d7c13c906] Received unexpected event network-vif-plugged-02df5608-7a85-4d54-b5ac-628d6c8e8179 for instance with vm_state resized and task_state resize_reverting.#033[00m
Jan 31 03:32:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:43.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.396 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.399 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.400 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.400 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.401 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.403 247403 INFO nova.compute.manager [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Terminating instance#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.405 247403 DEBUG nova.compute.manager [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:32:43 np0005603621 kernel: tap2f175855-d7 (unregistering): left promiscuous mode
Jan 31 03:32:43 np0005603621 NetworkManager[49013]: <info>  [1769848363.4983] device (tap2f175855-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:32:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:43Z|00512|binding|INFO|Releasing lport 2f175855-d786-42a8-81b9-d274c83adeac from this chassis (sb_readonly=0)
Jan 31 03:32:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:43Z|00513|binding|INFO|Setting lport 2f175855-d786-42a8-81b9-d274c83adeac down in Southbound
Jan 31 03:32:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:32:43Z|00514|binding|INFO|Removing iface tap2f175855-d7 ovn-installed in OVS
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.502 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.512 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 31 03:32:43 np0005603621 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000084.scope: Consumed 6.872s CPU time.
Jan 31 03:32:43 np0005603621 systemd-machined[212769]: Machine qemu-62-instance-00000084 terminated.
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.617 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:e1:03 10.100.0.10'], port_security=['fa:16:3e:dd:e1:03 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4914607d-e887-4890-a7dd-9d020c2104b9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=2f175855-d786-42a8-81b9-d274c83adeac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.618 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 2f175855-d786-42a8-81b9-d274c83adeac in datapath f6071a46-64a6-45aa-97c6-06e6c564195b unbound from our chassis#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.620 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6071a46-64a6-45aa-97c6-06e6c564195b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.620 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d537b44-9236-47ae-ae4f-9e53c547073c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.621 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b namespace which is not needed anymore#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.639 247403 INFO nova.virt.libvirt.driver [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Instance destroyed successfully.#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.639 247403 DEBUG nova.objects.instance [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'resources' on Instance uuid 4914607d-e887-4890-a7dd-9d020c2104b9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:32:43 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [NOTICE]   (337827) : haproxy version is 2.8.14-c23fe91
Jan 31 03:32:43 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [NOTICE]   (337827) : path to executable is /usr/sbin/haproxy
Jan 31 03:32:43 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [WARNING]  (337827) : Exiting Master process...
Jan 31 03:32:43 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [WARNING]  (337827) : Exiting Master process...
Jan 31 03:32:43 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [ALERT]    (337827) : Current worker (337829) exited with code 143 (Terminated)
Jan 31 03:32:43 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[337817]: [WARNING]  (337827) : All workers exited. Exiting... (0)
Jan 31 03:32:43 np0005603621 systemd[1]: libpod-ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e.scope: Deactivated successfully.
Jan 31 03:32:43 np0005603621 podman[337924]: 2026-01-31 08:32:43.743402297 +0000 UTC m=+0.048365794 container died ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:32:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:32:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-36132a5f565632c2c05443201999eb21962e5c347226f55194ac742e301038f0-merged.mount: Deactivated successfully.
Jan 31 03:32:43 np0005603621 podman[337924]: 2026-01-31 08:32:43.793063401 +0000 UTC m=+0.098026898 container cleanup ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 03:32:43 np0005603621 systemd[1]: libpod-conmon-ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e.scope: Deactivated successfully.
Jan 31 03:32:43 np0005603621 podman[337954]: 2026-01-31 08:32:43.845399311 +0000 UTC m=+0.035340862 container remove ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.850 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb7ab7b-e848-4dd7-bf7f-7e7a7b515104]: (4, ('Sat Jan 31 08:32:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b (ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e)\nac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e\nSat Jan 31 08:32:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b (ac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e)\nac82bb055ec58f767acdfc485db2f3b615a86aeeb085bea0aa2f3f09148fea5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.852 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4183e848-b1d7-4f8e-90c2-84cf01836648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.853 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.855 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 kernel: tapf6071a46-60: left promiscuous mode
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.865 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.868 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d8be93-608a-4123-b1fa-d5ccf9f9809d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.881 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[132a6ba1-f820-4f75-9640-33d379f6c375]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.882 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[232f1c33-dbd6-4d74-9a00-73efe3652335]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.896 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f12d723e-58c8-4be3-8ec1-dfdcf38735b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766027, 'reachable_time': 37933, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337976, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 systemd[1]: run-netns-ovnmeta\x2df6071a46\x2d64a6\x2d45aa\x2d97c6\x2d06e6c564195b.mount: Deactivated successfully.
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.899 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:32:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:32:43.899 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[a3658b45-ac60-41e7-9f70-3c737d8b7c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.960 247403 DEBUG nova.virt.libvirt.vif [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:32:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1969929545',display_name='tempest-ServersTestJSON-server-1969929545',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1969929545',id=132,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPllj1PzTA4aI2C2vdRzmYzFE+q+40xMYsNLNHhuI2MsNoGAy3qBRcov6m8Oy3gHIcufC+QZfdZcURYcjbAEKzSfX8DNUmWOpzxMOaunggTGxYbKzV6n6v+uYPvNGf84zQ==',key_name='tempest-key-24355983',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:32:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-tdl821sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:32:39Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=4914607d-e887-4890-a7dd-9d020c2104b9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.961 247403 DEBUG nova.network.os_vif_util [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "2f175855-d786-42a8-81b9-d274c83adeac", "address": "fa:16:3e:dd:e1:03", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f175855-d7", "ovs_interfaceid": "2f175855-d786-42a8-81b9-d274c83adeac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.961 247403 DEBUG nova.network.os_vif_util [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.962 247403 DEBUG os_vif [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.963 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.964 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f175855-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.965 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.967 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:43 np0005603621 nova_compute[247399]: 2026-01-31 08:32:43.971 247403 INFO os_vif [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:e1:03,bridge_name='br-int',has_traffic_filtering=True,id=2f175855-d786-42a8-81b9-d274c83adeac,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f175855-d7')#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 531 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 146 op/s
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.817 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.981 247403 DEBUG nova.compute.manager [req-2d2df33b-93e2-45b8-a7b2-5b9b18d9e289 req-8de2d0f2-20f8-4557-b6fa-6d1bc3903607 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-vif-unplugged-2f175855-d786-42a8-81b9-d274c83adeac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.981 247403 DEBUG oslo_concurrency.lockutils [req-2d2df33b-93e2-45b8-a7b2-5b9b18d9e289 req-8de2d0f2-20f8-4557-b6fa-6d1bc3903607 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.981 247403 DEBUG oslo_concurrency.lockutils [req-2d2df33b-93e2-45b8-a7b2-5b9b18d9e289 req-8de2d0f2-20f8-4557-b6fa-6d1bc3903607 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.982 247403 DEBUG oslo_concurrency.lockutils [req-2d2df33b-93e2-45b8-a7b2-5b9b18d9e289 req-8de2d0f2-20f8-4557-b6fa-6d1bc3903607 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.982 247403 DEBUG nova.compute.manager [req-2d2df33b-93e2-45b8-a7b2-5b9b18d9e289 req-8de2d0f2-20f8-4557-b6fa-6d1bc3903607 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] No waiting events found dispatching network-vif-unplugged-2f175855-d786-42a8-81b9-d274c83adeac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:44 np0005603621 nova_compute[247399]: 2026-01-31 08:32:44.982 247403 DEBUG nova.compute.manager [req-2d2df33b-93e2-45b8-a7b2-5b9b18d9e289 req-8de2d0f2-20f8-4557-b6fa-6d1bc3903607 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-vif-unplugged-2f175855-d786-42a8-81b9-d274c83adeac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:32:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:45.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 31 03:32:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 31 03:32:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 31 03:32:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 496 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 60 KiB/s wr, 285 op/s
Jan 31 03:32:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:47.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:47.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:47 np0005603621 nova_compute[247399]: 2026-01-31 08:32:47.220 247403 DEBUG nova.compute.manager [req-bd222d63-5b16-4ba0-bcb3-ce6b4c200674 req-e05621a2-8062-4c03-afa4-f80fd08de115 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:47 np0005603621 nova_compute[247399]: 2026-01-31 08:32:47.221 247403 DEBUG oslo_concurrency.lockutils [req-bd222d63-5b16-4ba0-bcb3-ce6b4c200674 req-e05621a2-8062-4c03-afa4-f80fd08de115 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:47 np0005603621 nova_compute[247399]: 2026-01-31 08:32:47.221 247403 DEBUG oslo_concurrency.lockutils [req-bd222d63-5b16-4ba0-bcb3-ce6b4c200674 req-e05621a2-8062-4c03-afa4-f80fd08de115 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:47 np0005603621 nova_compute[247399]: 2026-01-31 08:32:47.221 247403 DEBUG oslo_concurrency.lockutils [req-bd222d63-5b16-4ba0-bcb3-ce6b4c200674 req-e05621a2-8062-4c03-afa4-f80fd08de115 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:47 np0005603621 nova_compute[247399]: 2026-01-31 08:32:47.221 247403 DEBUG nova.compute.manager [req-bd222d63-5b16-4ba0-bcb3-ce6b4c200674 req-e05621a2-8062-4c03-afa4-f80fd08de115 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] No waiting events found dispatching network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:32:47 np0005603621 nova_compute[247399]: 2026-01-31 08:32:47.222 247403 WARNING nova.compute.manager [req-bd222d63-5b16-4ba0-bcb3-ce6b4c200674 req-e05621a2-8062-4c03-afa4-f80fd08de115 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received unexpected event network-vif-plugged-2f175855-d786-42a8-81b9-d274c83adeac for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:32:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 42 KiB/s wr, 288 op/s
Jan 31 03:32:48 np0005603621 nova_compute[247399]: 2026-01-31 08:32:48.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:49.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009685289875302438 of space, bias 1.0, pg target 2.9055869625907316 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021759550065140517 of space, bias 1.0, pg target 0.6484345919411874 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:32:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:32:49 np0005603621 nova_compute[247399]: 2026-01-31 08:32:49.819 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.003 247403 INFO nova.virt.libvirt.driver [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Deleting instance files /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9_del#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.004 247403 INFO nova.virt.libvirt.driver [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Deletion of /var/lib/nova/instances/4914607d-e887-4890-a7dd-9d020c2104b9_del complete#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.255 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.256 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:32:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 42 KiB/s wr, 288 op/s
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.331 247403 INFO nova.compute.manager [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Took 6.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.332 247403 DEBUG oslo.service.loopingcall [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.333 247403 DEBUG nova.compute.manager [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:32:50 np0005603621 nova_compute[247399]: 2026-01-31 08:32:50.334 247403 DEBUG nova.network.neutron [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:32:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:51.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:52 np0005603621 nova_compute[247399]: 2026-01-31 08:32:52.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:52 np0005603621 nova_compute[247399]: 2026-01-31 08:32:52.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:32:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 485 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 27 KiB/s wr, 229 op/s
Jan 31 03:32:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:53.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.340 247403 DEBUG nova.network.neutron [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.488 247403 INFO nova.compute.manager [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Took 3.15 seconds to deallocate network for instance.#033[00m
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.678 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.678 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.744 247403 DEBUG oslo_concurrency.processutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:53 np0005603621 nova_compute[247399]: 2026-01-31 08:32:53.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2091353331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.179 247403 DEBUG oslo_concurrency.processutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.183 247403 DEBUG nova.compute.provider_tree [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.232 247403 DEBUG nova.scheduler.client.report [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.275 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 159 KiB/s wr, 205 op/s
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.391 247403 INFO nova.scheduler.client.report [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Deleted allocations for instance 4914607d-e887-4890-a7dd-9d020c2104b9#033[00m
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.758 247403 DEBUG oslo_concurrency.lockutils [None req-0de65dd0-f193-47fe-93e8-12168be6911b fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "4914607d-e887-4890-a7dd-9d020c2104b9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:54 np0005603621 nova_compute[247399]: 2026-01-31 08:32:54.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:32:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:55.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:55.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.332 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.333 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.333 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.333 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.333 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.685 247403 DEBUG nova.compute.manager [req-58cf3b15-dba4-4ac6-8a03-c51f6c8e6fac req-79af17e4-ff50-4b59-b6df-57e8199b5d0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Received event network-vif-deleted-2f175855-d786-42a8-81b9-d274c83adeac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:32:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869763726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.750 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.893 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.894 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4222MB free_disk=20.78363800048828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.894 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:32:55 np0005603621 nova_compute[247399]: 2026-01-31 08:32:55.894 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.085 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.085 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.108 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:32:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 289 op/s
Jan 31 03:32:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:32:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922998187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.521 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.526 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.653 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.852 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:32:56 np0005603621 nova_compute[247399]: 2026-01-31 08:32:56.852 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:32:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:57.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:57.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 31 03:32:58 np0005603621 nova_compute[247399]: 2026-01-31 08:32:58.638 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848363.6369488, 4914607d-e887-4890-a7dd-9d020c2104b9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:32:58 np0005603621 nova_compute[247399]: 2026-01-31 08:32:58.638 247403 INFO nova.compute.manager [-] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:32:58 np0005603621 nova_compute[247399]: 2026-01-31 08:32:58.848 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:58 np0005603621 nova_compute[247399]: 2026-01-31 08:32:58.919 247403 DEBUG nova.compute.manager [None req-8f00f86e-4551-4f0b-9c50-ce307aa3ff1e - - - - - -] [instance: 4914607d-e887-4890-a7dd-9d020c2104b9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:32:58 np0005603621 nova_compute[247399]: 2026-01-31 08:32:58.922 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:58 np0005603621 nova_compute[247399]: 2026-01-31 08:32:58.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:32:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:32:59.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:59 np0005603621 nova_compute[247399]: 2026-01-31 08:32:59.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:32:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:32:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:32:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:32:59.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:32:59 np0005603621 nova_compute[247399]: 2026-01-31 08:32:59.824 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 451 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 652 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 31 03:33:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:01.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.036 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "0236c768-8b91-4ca0-94b9-ec028c671266" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.037 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.074 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.266 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.266 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.273 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.273 247403 INFO nova.compute.claims [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:33:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 453 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 655 KiB/s rd, 1.8 MiB/s wr, 125 op/s
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.539 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/972866114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.966 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.972 247403 DEBUG nova.compute.provider_tree [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:33:02 np0005603621 nova_compute[247399]: 2026-01-31 08:33:02.987 247403 DEBUG nova.scheduler.client.report [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.033 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.034 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.113 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.114 247403 DEBUG nova.network.neutron [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:33:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:03.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.143 247403 INFO nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.164 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:33:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:03.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.327 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.328 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.328 247403 INFO nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Creating image(s)
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.358 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.383 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.412 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.415 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.435 247403 DEBUG nova.policy [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb3f20f0143d465ebfe98f6a13200890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '40db421b27d84f809f8074c58151327f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.466 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.467 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.467 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.468 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.491 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.495 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0236c768-8b91-4ca0-94b9-ec028c671266_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.780 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0236c768-8b91-4ca0-94b9-ec028c671266_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.846 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] resizing rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:33:03 np0005603621 nova_compute[247399]: 2026-01-31 08:33:03.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.157 247403 DEBUG nova.objects.instance [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'migration_context' on Instance uuid 0236c768-8b91-4ca0-94b9-ec028c671266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.267 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.267 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Ensure instance console log exists: /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.268 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.268 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.269 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 453 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 649 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 03:33:04 np0005603621 nova_compute[247399]: 2026-01-31 08:33:04.825 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:05 np0005603621 nova_compute[247399]: 2026-01-31 08:33:05.342 247403 DEBUG nova.network.neutron [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Successfully created port: 7f2a0280-14bd-4a11-9882-dc3c57cd2563 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:33:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 198 op/s
Jan 31 03:33:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:07.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:08 np0005603621 nova_compute[247399]: 2026-01-31 08:33:08.713 247403 DEBUG nova.network.neutron [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Successfully updated port: 7f2a0280-14bd-4a11-9882-dc3c57cd2563 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ba64b580-f52e-4816-bb80-e8b440a03698 does not exist
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 034f5937-e7a1-4ffe-8c6e-2d7a6aa91a7b does not exist
Jan 31 03:33:08 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 11b24b8c-81b7-4d9f-ba66-e0d4b7b2da3c does not exist
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:33:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:33:08 np0005603621 nova_compute[247399]: 2026-01-31 08:33:08.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:08 np0005603621 nova_compute[247399]: 2026-01-31 08:33:08.976 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "refresh_cache-0236c768-8b91-4ca0-94b9-ec028c671266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:33:08 np0005603621 nova_compute[247399]: 2026-01-31 08:33:08.976 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquired lock "refresh_cache-0236c768-8b91-4ca0-94b9-ec028c671266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:33:08 np0005603621 nova_compute[247399]: 2026-01-31 08:33:08.976 247403 DEBUG nova.network.neutron [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:33:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:09.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:09 np0005603621 nova_compute[247399]: 2026-01-31 08:33:09.291 247403 DEBUG nova.network.neutron [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.364795402 +0000 UTC m=+0.036138497 container create b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:33:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:33:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:33:09 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:33:09 np0005603621 systemd[1]: Started libpod-conmon-b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf.scope.
Jan 31 03:33:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.346472911 +0000 UTC m=+0.017816026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.500211785 +0000 UTC m=+0.171554970 container init b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.50511059 +0000 UTC m=+0.176453685 container start b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:33:09 np0005603621 systemd[1]: libpod-b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf.scope: Deactivated successfully.
Jan 31 03:33:09 np0005603621 admiring_bartik[338600]: 167 167
Jan 31 03:33:09 np0005603621 conmon[338600]: conmon b1b592d4bcea8557bc94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf.scope/container/memory.events
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.558656058 +0000 UTC m=+0.229999273 container attach b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.559151944 +0000 UTC m=+0.230495049 container died b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bartik, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:33:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-687d2f210cba4d45b1d1d54f0d10303d231d665df9c5d6e2de1b906cb72ac2af-merged.mount: Deactivated successfully.
Jan 31 03:33:09 np0005603621 nova_compute[247399]: 2026-01-31 08:33:09.862 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:33:09 np0005603621 podman[338584]: 2026-01-31 08:33:09.872570261 +0000 UTC m=+0.543913356 container remove b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bartik, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:09 np0005603621 systemd[1]: libpod-conmon-b1b592d4bcea8557bc94299852d86e0c2ee42a8e1e2bd392318f311eceb6f8bf.scope: Deactivated successfully.
Jan 31 03:33:10 np0005603621 podman[338625]: 2026-01-31 08:33:10.013422627 +0000 UTC m=+0.067362827 container create cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lichterman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:33:10 np0005603621 podman[338625]: 2026-01-31 08:33:09.967604964 +0000 UTC m=+0.021545204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:33:10 np0005603621 systemd[1]: Started libpod-conmon-cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c.scope.
Jan 31 03:33:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8a2ac55322d377aa29d4b4d8c892adc3b124d0db4a2f39a4df30894cdd4858/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8a2ac55322d377aa29d4b4d8c892adc3b124d0db4a2f39a4df30894cdd4858/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8a2ac55322d377aa29d4b4d8c892adc3b124d0db4a2f39a4df30894cdd4858/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8a2ac55322d377aa29d4b4d8c892adc3b124d0db4a2f39a4df30894cdd4858/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:10 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8a2ac55322d377aa29d4b4d8c892adc3b124d0db4a2f39a4df30894cdd4858/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:10 np0005603621 podman[338625]: 2026-01-31 08:33:10.168850264 +0000 UTC m=+0.222790444 container init cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lichterman, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:10 np0005603621 podman[338625]: 2026-01-31 08:33:10.176467075 +0000 UTC m=+0.230407255 container start cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:33:10 np0005603621 podman[338625]: 2026-01-31 08:33:10.241794667 +0000 UTC m=+0.295734847 container attach cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:33:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:33:10 np0005603621 nova_compute[247399]: 2026-01-31 08:33:10.389 247403 DEBUG nova.compute.manager [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Received event network-changed-7f2a0280-14bd-4a11-9882-dc3c57cd2563 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:10 np0005603621 nova_compute[247399]: 2026-01-31 08:33:10.389 247403 DEBUG nova.compute.manager [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Refreshing instance network info cache due to event network-changed-7f2a0280-14bd-4a11-9882-dc3c57cd2563. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:33:10 np0005603621 nova_compute[247399]: 2026-01-31 08:33:10.390 247403 DEBUG oslo_concurrency.lockutils [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0236c768-8b91-4ca0-94b9-ec028c671266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:10 np0005603621 pensive_lichterman[338643]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:33:10 np0005603621 pensive_lichterman[338643]: --> relative data size: 1.0
Jan 31 03:33:10 np0005603621 pensive_lichterman[338643]: --> All data devices are unavailable
Jan 31 03:33:10 np0005603621 systemd[1]: libpod-cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c.scope: Deactivated successfully.
Jan 31 03:33:10 np0005603621 podman[338709]: 2026-01-31 08:33:10.984526515 +0000 UTC m=+0.024426806 container died cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lichterman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:33:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ca8a2ac55322d377aa29d4b4d8c892adc3b124d0db4a2f39a4df30894cdd4858-merged.mount: Deactivated successfully.
Jan 31 03:33:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:11 np0005603621 podman[338709]: 2026-01-31 08:33:11.300501333 +0000 UTC m=+0.340401574 container remove cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lichterman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:33:11 np0005603621 systemd[1]: libpod-conmon-cb40a31146c133c1d106d848d815c0d131252c071d7f5a873e6cb768d24ce83c.scope: Deactivated successfully.
Jan 31 03:33:11 np0005603621 podman[338708]: 2026-01-31 08:33:11.358362857 +0000 UTC m=+0.378250943 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Jan 31 03:33:11 np0005603621 podman[338718]: 2026-01-31 08:33:11.37958302 +0000 UTC m=+0.397405631 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:33:11 np0005603621 podman[338912]: 2026-01-31 08:33:11.773951974 +0000 UTC m=+0.018386335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:33:11 np0005603621 podman[338912]: 2026-01-31 08:33:11.900887787 +0000 UTC m=+0.145322128 container create 338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:33:12 np0005603621 systemd[1]: Started libpod-conmon-338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b.scope.
Jan 31 03:33:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:12 np0005603621 podman[338912]: 2026-01-31 08:33:12.083983752 +0000 UTC m=+0.328418123 container init 338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 03:33:12 np0005603621 podman[338912]: 2026-01-31 08:33:12.091136469 +0000 UTC m=+0.335570810 container start 338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:33:12 np0005603621 peaceful_darwin[338928]: 167 167
Jan 31 03:33:12 np0005603621 systemd[1]: libpod-338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b.scope: Deactivated successfully.
Jan 31 03:33:12 np0005603621 podman[338912]: 2026-01-31 08:33:12.107386025 +0000 UTC m=+0.351820496 container attach 338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:33:12 np0005603621 podman[338912]: 2026-01-31 08:33:12.109037517 +0000 UTC m=+0.353471858 container died 338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:33:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5721c4e1ff953bbe8887b4baac00b897a71c03d34021dee5388380b1aa3a75c4-merged.mount: Deactivated successfully.
Jan 31 03:33:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:33:12 np0005603621 podman[338912]: 2026-01-31 08:33:12.369830555 +0000 UTC m=+0.614264906 container remove 338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_darwin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:33:12 np0005603621 systemd[1]: libpod-conmon-338904baa1e41daac6486f0d005a5a21c7d98cfeb9bc38fdc6ff018cb91ad76b.scope: Deactivated successfully.
Jan 31 03:33:12 np0005603621 podman[338953]: 2026-01-31 08:33:12.51316339 +0000 UTC m=+0.045238045 container create 59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:33:12 np0005603621 systemd[1]: Started libpod-conmon-59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef.scope.
Jan 31 03:33:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969074a016fb3a66df4938c24f1898a889aca1a6a58e63f4b4c80d2e55f76572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969074a016fb3a66df4938c24f1898a889aca1a6a58e63f4b4c80d2e55f76572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969074a016fb3a66df4938c24f1898a889aca1a6a58e63f4b4c80d2e55f76572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969074a016fb3a66df4938c24f1898a889aca1a6a58e63f4b4c80d2e55f76572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:12 np0005603621 podman[338953]: 2026-01-31 08:33:12.587382253 +0000 UTC m=+0.119456928 container init 59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_golick, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:33:12 np0005603621 podman[338953]: 2026-01-31 08:33:12.494193468 +0000 UTC m=+0.026268153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:33:12 np0005603621 podman[338953]: 2026-01-31 08:33:12.593685103 +0000 UTC m=+0.125759758 container start 59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:33:12 np0005603621 podman[338953]: 2026-01-31 08:33:12.59771302 +0000 UTC m=+0.129787665 container attach 59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_golick, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:33:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:13.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:13.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:13 np0005603621 keen_golick[338969]: {
Jan 31 03:33:13 np0005603621 keen_golick[338969]:    "0": [
Jan 31 03:33:13 np0005603621 keen_golick[338969]:        {
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "devices": [
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "/dev/loop3"
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            ],
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "lv_name": "ceph_lv0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "lv_size": "7511998464",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "name": "ceph_lv0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "tags": {
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.cluster_name": "ceph",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.crush_device_class": "",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.encrypted": "0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.osd_id": "0",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.type": "block",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:                "ceph.vdo": "0"
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            },
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "type": "block",
Jan 31 03:33:13 np0005603621 keen_golick[338969]:            "vg_name": "ceph_vg0"
Jan 31 03:33:13 np0005603621 keen_golick[338969]:        }
Jan 31 03:33:13 np0005603621 keen_golick[338969]:    ]
Jan 31 03:33:13 np0005603621 keen_golick[338969]: }
Jan 31 03:33:13 np0005603621 systemd[1]: libpod-59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef.scope: Deactivated successfully.
Jan 31 03:33:13 np0005603621 podman[338953]: 2026-01-31 08:33:13.34995454 +0000 UTC m=+0.882029205 container died 59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:33:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-969074a016fb3a66df4938c24f1898a889aca1a6a58e63f4b4c80d2e55f76572-merged.mount: Deactivated successfully.
Jan 31 03:33:13 np0005603621 podman[338953]: 2026-01-31 08:33:13.396084632 +0000 UTC m=+0.928159277 container remove 59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:33:13 np0005603621 systemd[1]: libpod-conmon-59d53a1db4acada3d2bb87d5bcac36df20a182b5855a71e26258dfef224dfeef.scope: Deactivated successfully.
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.579 247403 DEBUG nova.network.neutron [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Updating instance_info_cache with network_info: [{"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.707 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Releasing lock "refresh_cache-0236c768-8b91-4ca0-94b9-ec028c671266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.708 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Instance network_info: |[{"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.709 247403 DEBUG oslo_concurrency.lockutils [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0236c768-8b91-4ca0-94b9-ec028c671266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.709 247403 DEBUG nova.network.neutron [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Refreshing network info cache for port 7f2a0280-14bd-4a11-9882-dc3c57cd2563 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.714 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Start _get_guest_xml network_info=[{"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.722 247403 WARNING nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.728 247403 DEBUG nova.virt.libvirt.host [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.731 247403 DEBUG nova.virt.libvirt.host [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.737 247403 DEBUG nova.virt.libvirt.host [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.737 247403 DEBUG nova.virt.libvirt.host [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.741 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.741 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.742 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.742 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.743 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.743 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.743 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.744 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.744 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.745 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.745 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.746 247403 DEBUG nova.virt.hardware [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.752 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:13 np0005603621 podman[339132]: 2026-01-31 08:33:13.929463263 +0000 UTC m=+0.042852450 container create 90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:33:13 np0005603621 systemd[1]: Started libpod-conmon-90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b.scope.
Jan 31 03:33:13 np0005603621 nova_compute[247399]: 2026-01-31 08:33:13.975 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:14 np0005603621 podman[339132]: 2026-01-31 08:33:14.000599398 +0000 UTC m=+0.113988585 container init 90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_burnell, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:33:14 np0005603621 podman[339132]: 2026-01-31 08:33:13.91109208 +0000 UTC m=+0.024481317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:33:14 np0005603621 podman[339132]: 2026-01-31 08:33:14.00822595 +0000 UTC m=+0.121615137 container start 90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_burnell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:33:14 np0005603621 podman[339132]: 2026-01-31 08:33:14.011779652 +0000 UTC m=+0.125168879 container attach 90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_burnell, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:33:14 np0005603621 great_burnell[339167]: 167 167
Jan 31 03:33:14 np0005603621 systemd[1]: libpod-90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b.scope: Deactivated successfully.
Jan 31 03:33:14 np0005603621 podman[339132]: 2026-01-31 08:33:14.014625773 +0000 UTC m=+0.128014960 container died 90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:33:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-74d4445b7e2d9a19c26520ff2115b19e5d29556997d54960a7e44db77da5cfab-merged.mount: Deactivated successfully.
Jan 31 03:33:14 np0005603621 podman[339132]: 2026-01-31 08:33:14.05866744 +0000 UTC m=+0.172056627 container remove 90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:33:14 np0005603621 systemd[1]: libpod-conmon-90039d27d786f3c7dbba0e126fe0de76da9654fcf0587e96d7c9b82ae7f6982b.scope: Deactivated successfully.
Jan 31 03:33:14 np0005603621 podman[339193]: 2026-01-31 08:33:14.176780083 +0000 UTC m=+0.038113470 container create ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:33:14 np0005603621 systemd[1]: Started libpod-conmon-ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8.scope.
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.224 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abb3d1cb2eccf140bdc0ad8980c2835b6c43d3daa84ce61f7a8b9b3ca43e22c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abb3d1cb2eccf140bdc0ad8980c2835b6c43d3daa84ce61f7a8b9b3ca43e22c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abb3d1cb2eccf140bdc0ad8980c2835b6c43d3daa84ce61f7a8b9b3ca43e22c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abb3d1cb2eccf140bdc0ad8980c2835b6c43d3daa84ce61f7a8b9b3ca43e22c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:14 np0005603621 podman[339193]: 2026-01-31 08:33:14.254916851 +0000 UTC m=+0.116250267 container init ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:33:14 np0005603621 podman[339193]: 2026-01-31 08:33:14.161163599 +0000 UTC m=+0.022497005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:33:14 np0005603621 podman[339193]: 2026-01-31 08:33:14.260963182 +0000 UTC m=+0.122296578 container start ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:33:14 np0005603621 podman[339193]: 2026-01-31 08:33:14.26464456 +0000 UTC m=+0.125977976 container attach ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.270 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.276 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 514 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 96 op/s
Jan 31 03:33:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631447108' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.734 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.736 247403 DEBUG nova.virt.libvirt.vif [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-29964497',display_name='tempest-ServersTestJSON-server-29964497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-29964497',id=134,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-yo24wivb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:03Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=0236c768-8b91-4ca0-94b9-ec028c671266,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.736 247403 DEBUG nova.network.os_vif_util [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.737 247403 DEBUG nova.network.os_vif_util [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.739 247403 DEBUG nova.objects.instance [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'pci_devices' on Instance uuid 0236c768-8b91-4ca0-94b9-ec028c671266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:14.756 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:14.757 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.757 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.775 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <uuid>0236c768-8b91-4ca0-94b9-ec028c671266</uuid>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <name>instance-00000086</name>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersTestJSON-server-29964497</nova:name>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:33:13</nova:creationTime>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:user uuid="fb3f20f0143d465ebfe98f6a13200890">tempest-ServersTestJSON-1064072764-project-member</nova:user>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:project uuid="40db421b27d84f809f8074c58151327f">tempest-ServersTestJSON-1064072764</nova:project>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <nova:port uuid="7f2a0280-14bd-4a11-9882-dc3c57cd2563">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <entry name="serial">0236c768-8b91-4ca0-94b9-ec028c671266</entry>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <entry name="uuid">0236c768-8b91-4ca0-94b9-ec028c671266</entry>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0236c768-8b91-4ca0-94b9-ec028c671266_disk">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0236c768-8b91-4ca0-94b9-ec028c671266_disk.config">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:24:da:b6"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <target dev="tap7f2a0280-14"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/console.log" append="off"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:33:14 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:33:14 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:33:14 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:33:14 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.777 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Preparing to wait for external event network-vif-plugged-7f2a0280-14bd-4a11-9882-dc3c57cd2563 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.777 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.777 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.778 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.779 247403 DEBUG nova.virt.libvirt.vif [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-29964497',display_name='tempest-ServersTestJSON-server-29964497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-29964497',id=134,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-yo24wivb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:03Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=0236c768-8b91-4ca0-94b9-ec028c671266,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.779 247403 DEBUG nova.network.os_vif_util [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.780 247403 DEBUG nova.network.os_vif_util [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.781 247403 DEBUG os_vif [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.783 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.784 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.784 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.788 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f2a0280-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.788 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f2a0280-14, col_values=(('external_ids', {'iface-id': '7f2a0280-14bd-4a11-9882-dc3c57cd2563', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:da:b6', 'vm-uuid': '0236c768-8b91-4ca0-94b9-ec028c671266'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.790 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:14 np0005603621 NetworkManager[49013]: <info>  [1769848394.7909] manager: (tap7f2a0280-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.793 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.797 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.798 247403 INFO os_vif [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14')#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.857 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.857 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.858 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No VIF found with MAC fa:16:3e:24:da:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.858 247403 INFO nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Using config drive#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.886 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:14 np0005603621 nova_compute[247399]: 2026-01-31 08:33:14.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]: {
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:        "osd_id": 0,
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:        "type": "bluestore"
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]:    }
Jan 31 03:33:15 np0005603621 xenodochial_mclaren[339210]: }
Jan 31 03:33:15 np0005603621 systemd[1]: libpod-ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8.scope: Deactivated successfully.
Jan 31 03:33:15 np0005603621 podman[339193]: 2026-01-31 08:33:15.061032229 +0000 UTC m=+0.922365625 container died ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:33:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3abb3d1cb2eccf140bdc0ad8980c2835b6c43d3daa84ce61f7a8b9b3ca43e22c-merged.mount: Deactivated successfully.
Jan 31 03:33:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:15.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:15 np0005603621 podman[339193]: 2026-01-31 08:33:15.202541145 +0000 UTC m=+1.063874531 container remove ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 03:33:15 np0005603621 systemd[1]: libpod-conmon-ef967a8586d34460abdd7985a6f329e13d80f714c023094d3e339d8c1ee344a8.scope: Deactivated successfully.
Jan 31 03:33:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:33:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:33:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:33:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:33:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev dfe6a21c-2e40-4ee0-b906-4646c0bd98bd does not exist
Jan 31 03:33:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7520e922-1577-4609-95c2-b08d5da4a541 does not exist
Jan 31 03:33:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 108ac013-0ac5-44de-9d60-42bc20fcfda2 does not exist
Jan 31 03:33:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.471 247403 INFO nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Creating config drive at /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/disk.config#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.475 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgonbqvwq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.600 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgonbqvwq" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.630 247403 DEBUG nova.storage.rbd_utils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 0236c768-8b91-4ca0-94b9-ec028c671266_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.635 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/disk.config 0236c768-8b91-4ca0-94b9-ec028c671266_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.823 247403 DEBUG oslo_concurrency.processutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/disk.config 0236c768-8b91-4ca0-94b9-ec028c671266_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.825 247403 INFO nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Deleting local config drive /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266/disk.config because it was imported into RBD.#033[00m
Jan 31 03:33:15 np0005603621 kernel: tap7f2a0280-14: entered promiscuous mode
Jan 31 03:33:15 np0005603621 NetworkManager[49013]: <info>  [1769848395.8664] manager: (tap7f2a0280-14): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 31 03:33:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:15Z|00515|binding|INFO|Claiming lport 7f2a0280-14bd-4a11-9882-dc3c57cd2563 for this chassis.
Jan 31 03:33:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:15Z|00516|binding|INFO|7f2a0280-14bd-4a11-9882-dc3c57cd2563: Claiming fa:16:3e:24:da:b6 10.100.0.14
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.869 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:15Z|00517|binding|INFO|Setting lport 7f2a0280-14bd-4a11-9882-dc3c57cd2563 ovn-installed in OVS
Jan 31 03:33:15 np0005603621 nova_compute[247399]: 2026-01-31 08:33:15.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:15 np0005603621 systemd-machined[212769]: New machine qemu-63-instance-00000086.
Jan 31 03:33:15 np0005603621 systemd-udevd[339407]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:33:15 np0005603621 NetworkManager[49013]: <info>  [1769848395.8987] device (tap7f2a0280-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:33:15 np0005603621 NetworkManager[49013]: <info>  [1769848395.8995] device (tap7f2a0280-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:33:15 np0005603621 systemd[1]: Started Virtual Machine qemu-63-instance-00000086.
Jan 31 03:33:15 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:15Z|00518|binding|INFO|Setting lport 7f2a0280-14bd-4a11-9882-dc3c57cd2563 up in Southbound
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.926 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:da:b6 10.100.0.14'], port_security=['fa:16:3e:24:da:b6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0236c768-8b91-4ca0-94b9-ec028c671266', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=7f2a0280-14bd-4a11-9882-dc3c57cd2563) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.927 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 7f2a0280-14bd-4a11-9882-dc3c57cd2563 in datapath f6071a46-64a6-45aa-97c6-06e6c564195b bound to our chassis#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.929 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6071a46-64a6-45aa-97c6-06e6c564195b#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.939 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6f19a20b-6602-4eff-a203-ec2ec4da9491]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.940 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf6071a46-61 in ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.942 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf6071a46-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.943 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2bf9f2-07c2-4cc8-bf5f-2f1abb771c37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.943 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f73a8b0c-baec-4718-9865-f91b66a24255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.952 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[cec82eb7-1fc1-436f-90f9-1c063b2ffdc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.961 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[200476fa-470b-4ca2-a1ef-9901a41aa77d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.978 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[40580a73-9701-42e3-acbd-9bf4b27c16fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:15 np0005603621 NetworkManager[49013]: <info>  [1769848395.9848] manager: (tapf6071a46-60): new Veth device (/org/freedesktop/NetworkManager/Devices/234)
Jan 31 03:33:15 np0005603621 systemd-udevd[339409]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:33:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:15.986 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdaa129-c79f-4561-8b6a-7e1374c451f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.019 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd30564-9327-49a8-9394-1629cf9b6029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.022 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[658231ed-5714-42df-980c-6418ca1425ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 NetworkManager[49013]: <info>  [1769848396.0389] device (tapf6071a46-60): carrier: link connected
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.043 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3f666868-52f1-4d07-9b8d-b1d3a3107381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.055 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[242e0402-72df-4a7c-b721-ff89bd75c4d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770155, 'reachable_time': 21267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339440, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.067 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f385688d-b688-4221-948d-a23f3b60b3ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:8c48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770155, 'tstamp': 770155}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339441, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.080 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1e68b9-350d-4439-906a-9b221857f2a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770155, 'reachable_time': 21267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339442, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.103 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3e37b67e-e570-403f-bcfd-a3d0c48ff067]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.147 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b5697a26-0d96-46f8-bc1d-749625198299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.149 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.149 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.150 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6071a46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.151 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:16 np0005603621 kernel: tapf6071a46-60: entered promiscuous mode
Jan 31 03:33:16 np0005603621 NetworkManager[49013]: <info>  [1769848396.1534] manager: (tapf6071a46-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.156 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6071a46-60, col_values=(('external_ids', {'iface-id': 'e9a7861c-c6ea-4166-9252-dc2aacdf4771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.158 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:16Z|00519|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.160 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.162 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.161 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[769475f6-f5e6-4cb1-9250-3f52c1fb4ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.164 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-f6071a46-64a6-45aa-97c6-06e6c564195b
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/f6071a46-64a6-45aa-97c6-06e6c564195b.pid.haproxy
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID f6071a46-64a6-45aa-97c6-06e6c564195b
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:33:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:16.165 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'env', 'PROCESS_TAG=haproxy-f6071a46-64a6-45aa-97c6-06e6c564195b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f6071a46-64a6-45aa-97c6-06e6c564195b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:33:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 570 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.3 MiB/s wr, 174 op/s
Jan 31 03:33:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.439 247403 DEBUG nova.compute.manager [req-7cb75cbf-c955-427b-9a11-bc1b96dca6d7 req-4a128c2a-9f46-4cd5-8997-2cf0664be07b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Received event network-vif-plugged-7f2a0280-14bd-4a11-9882-dc3c57cd2563 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.440 247403 DEBUG oslo_concurrency.lockutils [req-7cb75cbf-c955-427b-9a11-bc1b96dca6d7 req-4a128c2a-9f46-4cd5-8997-2cf0664be07b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.440 247403 DEBUG oslo_concurrency.lockutils [req-7cb75cbf-c955-427b-9a11-bc1b96dca6d7 req-4a128c2a-9f46-4cd5-8997-2cf0664be07b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.441 247403 DEBUG oslo_concurrency.lockutils [req-7cb75cbf-c955-427b-9a11-bc1b96dca6d7 req-4a128c2a-9f46-4cd5-8997-2cf0664be07b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.441 247403 DEBUG nova.compute.manager [req-7cb75cbf-c955-427b-9a11-bc1b96dca6d7 req-4a128c2a-9f46-4cd5-8997-2cf0664be07b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Processing event network-vif-plugged-7f2a0280-14bd-4a11-9882-dc3c57cd2563 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:33:16 np0005603621 podman[339474]: 2026-01-31 08:33:16.49902234 +0000 UTC m=+0.061380087 container create ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:33:16 np0005603621 systemd[1]: Started libpod-conmon-ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1.scope.
Jan 31 03:33:16 np0005603621 podman[339474]: 2026-01-31 08:33:16.462854963 +0000 UTC m=+0.025212740 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.566 247403 DEBUG nova.network.neutron [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Updated VIF entry in instance network info cache for port 7f2a0280-14bd-4a11-9882-dc3c57cd2563. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.567 247403 DEBUG nova.network.neutron [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Updating instance_info_cache with network_info: [{"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:33:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf1d97674b84c300e078edffd74d3f1154c2c3eab006cbafef2fff32d71b6be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.588 247403 DEBUG oslo_concurrency.lockutils [req-a2955f8c-ca05-4d17-9511-676b54302ac6 req-cfe6b1cd-ba75-4328-9a02-34094b83c4a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0236c768-8b91-4ca0-94b9-ec028c671266" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:16 np0005603621 podman[339474]: 2026-01-31 08:33:16.59995249 +0000 UTC m=+0.162310257 container init ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:33:16 np0005603621 podman[339474]: 2026-01-31 08:33:16.605160325 +0000 UTC m=+0.167518072 container start ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:33:16 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [NOTICE]   (339528) : New worker (339532) forked
Jan 31 03:33:16 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [NOTICE]   (339528) : Loading success.
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.716 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.718 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848396.7180607, 0236c768-8b91-4ca0-94b9-ec028c671266 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.718 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] VM Started (Lifecycle Event)#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.722 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.726 247403 INFO nova.virt.libvirt.driver [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Instance spawned successfully.#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.726 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.748 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.755 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.756 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.756 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.757 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.757 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.758 247403 DEBUG nova.virt.libvirt.driver [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.762 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.814 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.814 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848396.7191167, 0236c768-8b91-4ca0-94b9-ec028c671266 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.815 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.867 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.872 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848396.7223527, 0236c768-8b91-4ca0-94b9-ec028c671266 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.872 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.879 247403 INFO nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Took 13.55 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.879 247403 DEBUG nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.929 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.932 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.963 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:33:16 np0005603621 nova_compute[247399]: 2026-01-31 08:33:16.986 247403 INFO nova.compute.manager [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Took 14.76 seconds to build instance.#033[00m
Jan 31 03:33:17 np0005603621 nova_compute[247399]: 2026-01-31 08:33:17.015 247403 DEBUG oslo_concurrency.lockutils [None req-4cde7d40-e52d-42a6-898f-4a2d7b15dd2d fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:17.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 550 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 718 KiB/s rd, 4.5 MiB/s wr, 101 op/s
Jan 31 03:33:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:18.759 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:19.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.421 247403 DEBUG nova.compute.manager [req-1dc13020-3d87-4516-b7f7-f67708b23830 req-2a08d266-75f1-4db8-ba87-2092584e628a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Received event network-vif-plugged-7f2a0280-14bd-4a11-9882-dc3c57cd2563 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.421 247403 DEBUG oslo_concurrency.lockutils [req-1dc13020-3d87-4516-b7f7-f67708b23830 req-2a08d266-75f1-4db8-ba87-2092584e628a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.421 247403 DEBUG oslo_concurrency.lockutils [req-1dc13020-3d87-4516-b7f7-f67708b23830 req-2a08d266-75f1-4db8-ba87-2092584e628a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.421 247403 DEBUG oslo_concurrency.lockutils [req-1dc13020-3d87-4516-b7f7-f67708b23830 req-2a08d266-75f1-4db8-ba87-2092584e628a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.422 247403 DEBUG nova.compute.manager [req-1dc13020-3d87-4516-b7f7-f67708b23830 req-2a08d266-75f1-4db8-ba87-2092584e628a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] No waiting events found dispatching network-vif-plugged-7f2a0280-14bd-4a11-9882-dc3c57cd2563 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.422 247403 WARNING nova.compute.manager [req-1dc13020-3d87-4516-b7f7-f67708b23830 req-2a08d266-75f1-4db8-ba87-2092584e628a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Received unexpected event network-vif-plugged-7f2a0280-14bd-4a11-9882-dc3c57cd2563 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.791 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:19 np0005603621 nova_compute[247399]: 2026-01-31 08:33:19.865 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 550 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 31 03:33:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:21.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Jan 31 03:33:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:23.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:23.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.418 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.418 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.473 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.655 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.655 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.663 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.663 247403 INFO nova.compute.claims [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:33:23 np0005603621 nova_compute[247399]: 2026-01-31 08:33:23.837 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/658712853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.256 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.262 247403 DEBUG nova.compute.provider_tree [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 221 op/s
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.340 247403 DEBUG nova.scheduler.client.report [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.395 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.396 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.521 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.521 247403 DEBUG nova.network.neutron [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.655 247403 INFO nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.754 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.793 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:24 np0005603621 nova_compute[247399]: 2026-01-31 08:33:24.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.102 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.103 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.104 247403 INFO nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Creating image(s)#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.135 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:25.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.169 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.207 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.212 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:25.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.267 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.268 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.269 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.269 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.295 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.298 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:25 np0005603621 nova_compute[247399]: 2026-01-31 08:33:25.368 247403 DEBUG nova.policy [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb3f20f0143d465ebfe98f6a13200890', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '40db421b27d84f809f8074c58151327f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:33:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 452 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 276 op/s
Jan 31 03:33:26 np0005603621 nova_compute[247399]: 2026-01-31 08:33:26.976 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.048 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] resizing rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:33:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:27.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.516 247403 DEBUG nova.objects.instance [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'migration_context' on Instance uuid 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.604 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.604 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Ensure instance console log exists: /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.605 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.606 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.606 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:27 np0005603621 nova_compute[247399]: 2026-01-31 08:33:27.681 247403 DEBUG nova.network.neutron [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Successfully created port: a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:33:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 771 KiB/s wr, 211 op/s
Jan 31 03:33:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:29 np0005603621 nova_compute[247399]: 2026-01-31 08:33:29.833 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:29 np0005603621 nova_compute[247399]: 2026-01-31 08:33:29.869 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:30 np0005603621 nova_compute[247399]: 2026-01-31 08:33:30.097 247403 DEBUG nova.network.neutron [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Successfully updated port: a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:33:30 np0005603621 nova_compute[247399]: 2026-01-31 08:33:30.186 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "refresh_cache-52f0a44e-9891-4fc6-a679-3804d0eb6ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:30 np0005603621 nova_compute[247399]: 2026-01-31 08:33:30.186 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquired lock "refresh_cache-52f0a44e-9891-4fc6-a679-3804d0eb6ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:30 np0005603621 nova_compute[247399]: 2026-01-31 08:33:30.187 247403 DEBUG nova.network.neutron [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:33:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 463 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 440 KiB/s wr, 193 op/s
Jan 31 03:33:30 np0005603621 nova_compute[247399]: 2026-01-31 08:33:30.487 247403 DEBUG nova.network.neutron [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:30.515 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:30.516 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:30.516 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:31.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:31Z|00520|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:33:31 np0005603621 nova_compute[247399]: 2026-01-31 08:33:31.505 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:31Z|00521|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:33:31 np0005603621 nova_compute[247399]: 2026-01-31 08:33:31.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:31 np0005603621 nova_compute[247399]: 2026-01-31 08:33:31.766 247403 DEBUG nova.compute.manager [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-changed-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:31 np0005603621 nova_compute[247399]: 2026-01-31 08:33:31.767 247403 DEBUG nova.compute.manager [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Refreshing instance network info cache due to event network-changed-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:33:31 np0005603621 nova_compute[247399]: 2026-01-31 08:33:31.768 247403 DEBUG oslo_concurrency.lockutils [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-52f0a44e-9891-4fc6-a679-3804d0eb6ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:31 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 31 03:33:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 516 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 242 op/s
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.319 247403 DEBUG nova.network.neutron [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Updating instance_info_cache with network_info: [{"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:32 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.551 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Releasing lock "refresh_cache-52f0a44e-9891-4fc6-a679-3804d0eb6ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.552 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Instance network_info: |[{"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.553 247403 DEBUG oslo_concurrency.lockutils [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-52f0a44e-9891-4fc6-a679-3804d0eb6ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.553 247403 DEBUG nova.network.neutron [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Refreshing network info cache for port a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.556 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Start _get_guest_xml network_info=[{"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.560 247403 WARNING nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.571 247403 DEBUG nova.virt.libvirt.host [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.572 247403 DEBUG nova.virt.libvirt.host [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.575 247403 DEBUG nova.virt.libvirt.host [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.575 247403 DEBUG nova.virt.libvirt.host [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.576 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.576 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.577 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.577 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.577 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.578 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.578 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.578 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.578 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.579 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.579 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.579 247403 DEBUG nova.virt.hardware [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.582 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:33:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180378927' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:32 np0005603621 nova_compute[247399]: 2026-01-31 08:33:32.987 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.013 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.018 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:33:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:33.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:33.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327839594' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.433 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.435 247403 DEBUG nova.virt.libvirt.vif [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-29964497',display_name='tempest-ServersTestJSON-server-29964497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-29964497',id=136,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-mp6d9vbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:24Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=52f0a44e-9891-4fc6-a679-3804d0eb6ab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.435 247403 DEBUG nova.network.os_vif_util [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.436 247403 DEBUG nova.network.os_vif_util [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.437 247403 DEBUG nova.objects.instance [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.479 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <uuid>52f0a44e-9891-4fc6-a679-3804d0eb6ab5</uuid>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <name>instance-00000088</name>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersTestJSON-server-29964497</nova:name>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:33:32</nova:creationTime>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:user uuid="fb3f20f0143d465ebfe98f6a13200890">tempest-ServersTestJSON-1064072764-project-member</nova:user>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:project uuid="40db421b27d84f809f8074c58151327f">tempest-ServersTestJSON-1064072764</nova:project>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <nova:port uuid="a8c825b4-71f1-4ad9-bbc3-ff780cc658e9">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <entry name="serial">52f0a44e-9891-4fc6-a679-3804d0eb6ab5</entry>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <entry name="uuid">52f0a44e-9891-4fc6-a679-3804d0eb6ab5</entry>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk.config">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:00:49:73"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <target dev="tapa8c825b4-71"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/console.log" append="off"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:33:33 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:33:33 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:33:33 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:33:33 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.480 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Preparing to wait for external event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.481 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.481 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.481 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.482 247403 DEBUG nova.virt.libvirt.vif [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-29964497',display_name='tempest-ServersTestJSON-server-29964497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-29964497',id=136,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-mp6d9vbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:24Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=52f0a44e-9891-4fc6-a679-3804d0eb6ab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.482 247403 DEBUG nova.network.os_vif_util [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.483 247403 DEBUG nova.network.os_vif_util [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.483 247403 DEBUG os_vif [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.484 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.484 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.485 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.487 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.487 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8c825b4-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.487 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8c825b4-71, col_values=(('external_ids', {'iface-id': 'a8c825b4-71f1-4ad9-bbc3-ff780cc658e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:49:73', 'vm-uuid': '52f0a44e-9891-4fc6-a679-3804d0eb6ab5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.489 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:33 np0005603621 NetworkManager[49013]: <info>  [1769848413.4900] manager: (tapa8c825b4-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.491 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.495 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.496 247403 INFO os_vif [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71')#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.922 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.924 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.924 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] No VIF found with MAC fa:16:3e:00:49:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.925 247403 INFO nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Using config drive#033[00m
Jan 31 03:33:33 np0005603621 nova_compute[247399]: 2026-01-31 08:33:33.947 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:34Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:da:b6 10.100.0.14
Jan 31 03:33:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:34Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:da:b6 10.100.0.14
Jan 31 03:33:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 519 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 168 op/s
Jan 31 03:33:34 np0005603621 nova_compute[247399]: 2026-01-31 08:33:34.661 247403 INFO nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Creating config drive at /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/disk.config#033[00m
Jan 31 03:33:34 np0005603621 nova_compute[247399]: 2026-01-31 08:33:34.665 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprl3h8qej execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:34 np0005603621 nova_compute[247399]: 2026-01-31 08:33:34.800 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprl3h8qej" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:34 np0005603621 nova_compute[247399]: 2026-01-31 08:33:34.831 247403 DEBUG nova.storage.rbd_utils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] rbd image 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:34 np0005603621 nova_compute[247399]: 2026-01-31 08:33:34.834 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/disk.config 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:34 np0005603621 nova_compute[247399]: 2026-01-31 08:33:34.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:35.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:35.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.323 247403 DEBUG nova.network.neutron [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Updated VIF entry in instance network info cache for port a8c825b4-71f1-4ad9-bbc3-ff780cc658e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.324 247403 DEBUG nova.network.neutron [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Updating instance_info_cache with network_info: [{"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.342 247403 DEBUG oslo_concurrency.lockutils [req-df6801e5-c00d-4c09-9d05-6d3f66aae58c req-90f4ab1d-621b-416f-8706-78263dd0bfc5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-52f0a44e-9891-4fc6-a679-3804d0eb6ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.901 247403 DEBUG oslo_concurrency.processutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/disk.config 52f0a44e-9891-4fc6-a679-3804d0eb6ab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.901 247403 INFO nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Deleting local config drive /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5/disk.config because it was imported into RBD.#033[00m
Jan 31 03:33:35 np0005603621 kernel: tapa8c825b4-71: entered promiscuous mode
Jan 31 03:33:35 np0005603621 NetworkManager[49013]: <info>  [1769848415.9460] manager: (tapa8c825b4-71): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.947 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:35Z|00522|binding|INFO|Claiming lport a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 for this chassis.
Jan 31 03:33:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:35Z|00523|binding|INFO|a8c825b4-71f1-4ad9-bbc3-ff780cc658e9: Claiming fa:16:3e:00:49:73 10.100.0.5
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:35.957 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:49:73 10.100.0.5'], port_security=['fa:16:3e:00:49:73 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f0a44e-9891-4fc6-a679-3804d0eb6ab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:35Z|00524|binding|INFO|Setting lport a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 ovn-installed in OVS
Jan 31 03:33:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:35Z|00525|binding|INFO|Setting lport a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 up in Southbound
Jan 31 03:33:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:35.959 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 in datapath f6071a46-64a6-45aa-97c6-06e6c564195b bound to our chassis#033[00m
Jan 31 03:33:35 np0005603621 nova_compute[247399]: 2026-01-31 08:33:35.959 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:35.961 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6071a46-64a6-45aa-97c6-06e6c564195b#033[00m
Jan 31 03:33:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:35.972 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[99f5eb9d-053e-457b-9d87-004364c32768]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:35 np0005603621 systemd-udevd[339931]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:33:35 np0005603621 systemd-machined[212769]: New machine qemu-64-instance-00000088.
Jan 31 03:33:35 np0005603621 NetworkManager[49013]: <info>  [1769848415.9859] device (tapa8c825b4-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:33:35 np0005603621 NetworkManager[49013]: <info>  [1769848415.9864] device (tapa8c825b4-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:33:35 np0005603621 systemd[1]: Started Virtual Machine qemu-64-instance-00000088.
Jan 31 03:33:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:35.996 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a3bf5e52-1b08-4ced-9520-d41791ad510e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:35.999 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0f6303-69a6-44a9-9dfa-6bcd06aaad23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.019 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5470baf3-955b-4fbb-8d77-b3198d050bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.031 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3707f6c1-b9d9-4fa8-8cef-4388a3498a2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770155, 'reachable_time': 21267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339944, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.041 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f699e051-14bd-49bc-b329-e1d794a3bc42]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6071a46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770163, 'tstamp': 770163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339946, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6071a46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770165, 'tstamp': 770165}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339946, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.043 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.044 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.045 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6071a46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.045 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.046 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6071a46-60, col_values=(('external_ids', {'iface-id': 'e9a7861c-c6ea-4166-9252-dc2aacdf4771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:36.046 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 550 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 233 op/s
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.435 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848416.4348629, 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.436 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] VM Started (Lifecycle Event)#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.482 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.485 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848416.4351447, 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.485 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.509 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.512 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.536 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.567 247403 DEBUG nova.compute.manager [req-e4dc5cdb-b874-45ce-bc60-0133213607f3 req-e922422f-2523-40d6-b27f-fe386dbd03fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.568 247403 DEBUG oslo_concurrency.lockutils [req-e4dc5cdb-b874-45ce-bc60-0133213607f3 req-e922422f-2523-40d6-b27f-fe386dbd03fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.569 247403 DEBUG oslo_concurrency.lockutils [req-e4dc5cdb-b874-45ce-bc60-0133213607f3 req-e922422f-2523-40d6-b27f-fe386dbd03fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.569 247403 DEBUG oslo_concurrency.lockutils [req-e4dc5cdb-b874-45ce-bc60-0133213607f3 req-e922422f-2523-40d6-b27f-fe386dbd03fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.569 247403 DEBUG nova.compute.manager [req-e4dc5cdb-b874-45ce-bc60-0133213607f3 req-e922422f-2523-40d6-b27f-fe386dbd03fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Processing event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.571 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.573 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848416.5735636, 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.574 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.577 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.580 247403 INFO nova.virt.libvirt.driver [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Instance spawned successfully.#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.581 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.608 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.612 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.619 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.620 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.620 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.621 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.621 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.621 247403 DEBUG nova.virt.libvirt.driver [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.643 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.696 247403 INFO nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Took 11.59 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.697 247403 DEBUG nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.805 247403 INFO nova.compute.manager [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Took 13.19 seconds to build instance.#033[00m
Jan 31 03:33:36 np0005603621 nova_compute[247399]: 2026-01-31 08:33:36.843 247403 DEBUG oslo_concurrency.lockutils [None req-9e56184d-469b-492b-a5f4-7c92b59444b2 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:37.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.0 MiB/s wr, 223 op/s
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.491 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:33:38
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.log', 'volumes', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr']
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.738 247403 DEBUG nova.compute.manager [req-0c4585db-5730-414e-b9c5-4b9987705093 req-2b23e6d0-588b-4b14-9d74-30adeae6c641 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.739 247403 DEBUG oslo_concurrency.lockutils [req-0c4585db-5730-414e-b9c5-4b9987705093 req-2b23e6d0-588b-4b14-9d74-30adeae6c641 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.739 247403 DEBUG oslo_concurrency.lockutils [req-0c4585db-5730-414e-b9c5-4b9987705093 req-2b23e6d0-588b-4b14-9d74-30adeae6c641 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.740 247403 DEBUG oslo_concurrency.lockutils [req-0c4585db-5730-414e-b9c5-4b9987705093 req-2b23e6d0-588b-4b14-9d74-30adeae6c641 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.740 247403 DEBUG nova.compute.manager [req-0c4585db-5730-414e-b9c5-4b9987705093 req-2b23e6d0-588b-4b14-9d74-30adeae6c641 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] No waiting events found dispatching network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:33:38 np0005603621 nova_compute[247399]: 2026-01-31 08:33:38.740 247403 WARNING nova.compute.manager [req-0c4585db-5730-414e-b9c5-4b9987705093 req-2b23e6d0-588b-4b14-9d74-30adeae6c641 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received unexpected event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:33:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:33:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:39.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:39.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.776 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.777 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.777 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.777 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.778 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.779 247403 INFO nova.compute.manager [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Terminating instance#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.780 247403 DEBUG nova.compute.manager [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:33:39 np0005603621 kernel: tapa8c825b4-71 (unregistering): left promiscuous mode
Jan 31 03:33:39 np0005603621 NetworkManager[49013]: <info>  [1769848419.8243] device (tapa8c825b4-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:39Z|00526|binding|INFO|Releasing lport a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 from this chassis (sb_readonly=0)
Jan 31 03:33:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:39Z|00527|binding|INFO|Setting lport a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 down in Southbound
Jan 31 03:33:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:39Z|00528|binding|INFO|Removing iface tapa8c825b4-71 ovn-installed in OVS
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.831 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.835 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.863 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:49:73 10.100.0.5'], port_security=['fa:16:3e:00:49:73 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f0a44e-9891-4fc6-a679-3804d0eb6ab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.864 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 in datapath f6071a46-64a6-45aa-97c6-06e6c564195b unbound from our chassis#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.866 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f6071a46-64a6-45aa-97c6-06e6c564195b#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.878 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e31023b9-d2ff-4a9b-9116-ff773aca5de4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:39 np0005603621 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000088.scope: Deactivated successfully.
Jan 31 03:33:39 np0005603621 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000088.scope: Consumed 3.713s CPU time.
Jan 31 03:33:39 np0005603621 systemd-machined[212769]: Machine qemu-64-instance-00000088 terminated.
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.898 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[217d4f6e-1676-4887-88c7-910f66e335d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.902 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8387b1fa-3a65-464d-a87b-e89b68623ba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.920 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[83e8301b-f2a2-455d-bf66-0194c9a2425e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.933 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b82dfdc4-b622-43a7-af4e-13d3da2f958c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf6071a46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:8c:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 7, 'rx_bytes': 574, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770155, 'reachable_time': 21267, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340002, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.945 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6a14ab-55da-4131-b19f-72deedd45640]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf6071a46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770163, 'tstamp': 770163}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340003, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf6071a46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770165, 'tstamp': 770165}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340003, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.946 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.948 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:39 np0005603621 nova_compute[247399]: 2026-01-31 08:33:39.951 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.951 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf6071a46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.951 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.952 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf6071a46-60, col_values=(('external_ids', {'iface-id': 'e9a7861c-c6ea-4166-9252-dc2aacdf4771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:39.952 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.008 247403 INFO nova.virt.libvirt.driver [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Instance destroyed successfully.#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.009 247403 DEBUG nova.objects.instance [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'resources' on Instance uuid 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.055 247403 DEBUG nova.virt.libvirt.vif [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:33:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-29964497',display_name='tempest-ServersTestJSON-server-29964497',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-29964497',id=136,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:33:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-mp6d9vbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:33:36Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=52f0a44e-9891-4fc6-a679-3804d0eb6ab5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.056 247403 DEBUG nova.network.os_vif_util [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "address": "fa:16:3e:00:49:73", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8c825b4-71", "ovs_interfaceid": "a8c825b4-71f1-4ad9-bbc3-ff780cc658e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.056 247403 DEBUG nova.network.os_vif_util [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.057 247403 DEBUG os_vif [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.059 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.059 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8c825b4-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:40 np0005603621 nova_compute[247399]: 2026-01-31 08:33:40.063 247403 INFO os_vif [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:49:73,bridge_name='br-int',has_traffic_filtering=True,id=a8c825b4-71f1-4ad9-bbc3-ff780cc658e9,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8c825b4-71')#033[00m
Jan 31 03:33:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 554 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 937 KiB/s rd, 5.6 MiB/s wr, 210 op/s
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.157 247403 DEBUG nova.compute.manager [req-7269a2a8-8fce-4d51-802c-3d7d7ded2d7c req-6a728584-d677-486b-ae26-f16d57523136 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-vif-unplugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.158 247403 DEBUG oslo_concurrency.lockutils [req-7269a2a8-8fce-4d51-802c-3d7d7ded2d7c req-6a728584-d677-486b-ae26-f16d57523136 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.158 247403 DEBUG oslo_concurrency.lockutils [req-7269a2a8-8fce-4d51-802c-3d7d7ded2d7c req-6a728584-d677-486b-ae26-f16d57523136 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.158 247403 DEBUG oslo_concurrency.lockutils [req-7269a2a8-8fce-4d51-802c-3d7d7ded2d7c req-6a728584-d677-486b-ae26-f16d57523136 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.158 247403 DEBUG nova.compute.manager [req-7269a2a8-8fce-4d51-802c-3d7d7ded2d7c req-6a728584-d677-486b-ae26-f16d57523136 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] No waiting events found dispatching network-vif-unplugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.159 247403 DEBUG nova.compute.manager [req-7269a2a8-8fce-4d51-802c-3d7d7ded2d7c req-6a728584-d677-486b-ae26-f16d57523136 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-vif-unplugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:33:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.482 247403 INFO nova.virt.libvirt.driver [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Deleting instance files /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5_del#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.484 247403 INFO nova.virt.libvirt.driver [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Deletion of /var/lib/nova/instances/52f0a44e-9891-4fc6-a679-3804d0eb6ab5_del complete#033[00m
Jan 31 03:33:41 np0005603621 podman[340036]: 2026-01-31 08:33:41.51563531 +0000 UTC m=+0.061233042 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 31 03:33:41 np0005603621 podman[340037]: 2026-01-31 08:33:41.541253043 +0000 UTC m=+0.086295487 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.584 247403 INFO nova.compute.manager [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Took 1.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.585 247403 DEBUG oslo.service.loopingcall [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.585 247403 DEBUG nova.compute.manager [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:33:41 np0005603621 nova_compute[247399]: 2026-01-31 08:33:41.585 247403 DEBUG nova.network.neutron [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:33:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 459 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 385 op/s
Jan 31 03:33:42 np0005603621 nova_compute[247399]: 2026-01-31 08:33:42.707 247403 DEBUG nova.network.neutron [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:42 np0005603621 nova_compute[247399]: 2026-01-31 08:33:42.736 247403 INFO nova.compute.manager [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Took 1.15 seconds to deallocate network for instance.#033[00m
Jan 31 03:33:42 np0005603621 nova_compute[247399]: 2026-01-31 08:33:42.822 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:42 np0005603621 nova_compute[247399]: 2026-01-31 08:33:42.823 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:42 np0005603621 nova_compute[247399]: 2026-01-31 08:33:42.908 247403 DEBUG oslo_concurrency.processutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:43.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1776453377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.332 247403 DEBUG nova.compute.manager [req-b4a80c93-8634-4919-a4e0-9cc60c387312 req-0a7c9923-166a-486d-987e-45adec95a3b2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.332 247403 DEBUG oslo_concurrency.lockutils [req-b4a80c93-8634-4919-a4e0-9cc60c387312 req-0a7c9923-166a-486d-987e-45adec95a3b2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.333 247403 DEBUG oslo_concurrency.lockutils [req-b4a80c93-8634-4919-a4e0-9cc60c387312 req-0a7c9923-166a-486d-987e-45adec95a3b2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.333 247403 DEBUG oslo_concurrency.lockutils [req-b4a80c93-8634-4919-a4e0-9cc60c387312 req-0a7c9923-166a-486d-987e-45adec95a3b2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.333 247403 DEBUG nova.compute.manager [req-b4a80c93-8634-4919-a4e0-9cc60c387312 req-0a7c9923-166a-486d-987e-45adec95a3b2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] No waiting events found dispatching network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.334 247403 WARNING nova.compute.manager [req-b4a80c93-8634-4919-a4e0-9cc60c387312 req-0a7c9923-166a-486d-987e-45adec95a3b2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received unexpected event network-vif-plugged-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.334 247403 DEBUG oslo_concurrency.processutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.339 247403 DEBUG nova.compute.provider_tree [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.370 247403 DEBUG nova.scheduler.client.report [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.419 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.459 247403 INFO nova.scheduler.client.report [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Deleted allocations for instance 52f0a44e-9891-4fc6-a679-3804d0eb6ab5#033[00m
Jan 31 03:33:43 np0005603621 nova_compute[247399]: 2026-01-31 08:33:43.651 247403 DEBUG oslo_concurrency.lockutils [None req-6d765aba-ad10-4a77-a063-bb45e94e5798 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "52f0a44e-9891-4fc6-a679-3804d0eb6ab5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.147 247403 DEBUG nova.compute.manager [req-c6f11bc7-c54e-4634-83b5-b5007d206596 req-c8c958ae-b1dc-497d-9f5d-686f648a87c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Received event network-vif-deleted-a8c825b4-71f1-4ad9-bbc3-ff780cc658e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 438 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 382 op/s
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.343 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "0236c768-8b91-4ca0-94b9-ec028c671266" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.343 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.344 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.344 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.344 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.346 247403 INFO nova.compute.manager [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Terminating instance#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.347 247403 DEBUG nova.compute.manager [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:33:44 np0005603621 kernel: tap7f2a0280-14 (unregistering): left promiscuous mode
Jan 31 03:33:44 np0005603621 NetworkManager[49013]: <info>  [1769848424.4923] device (tap7f2a0280-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:33:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:44Z|00529|binding|INFO|Releasing lport 7f2a0280-14bd-4a11-9882-dc3c57cd2563 from this chassis (sb_readonly=0)
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:44Z|00530|binding|INFO|Setting lport 7f2a0280-14bd-4a11-9882-dc3c57cd2563 down in Southbound
Jan 31 03:33:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:44Z|00531|binding|INFO|Removing iface tap7f2a0280-14 ovn-installed in OVS
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.500 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.507 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 31 03:33:44 np0005603621 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000086.scope: Consumed 13.311s CPU time.
Jan 31 03:33:44 np0005603621 systemd-machined[212769]: Machine qemu-63-instance-00000086 terminated.
Jan 31 03:33:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:44Z|00532|binding|INFO|Releasing lport e9a7861c-c6ea-4166-9252-dc2aacdf4771 from this chassis (sb_readonly=0)
Jan 31 03:33:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:44.549 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:da:b6 10.100.0.14'], port_security=['fa:16:3e:24:da:b6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0236c768-8b91-4ca0-94b9-ec028c671266', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f6071a46-64a6-45aa-97c6-06e6c564195b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '40db421b27d84f809f8074c58151327f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '986b09c9-4243-429e-9b6e-93ffcacf8cb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=111856e4-2ce2-4b64-a82d-6a5bd7b8a457, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=7f2a0280-14bd-4a11-9882-dc3c57cd2563) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:44.550 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 7f2a0280-14bd-4a11-9882-dc3c57cd2563 in datapath f6071a46-64a6-45aa-97c6-06e6c564195b unbound from our chassis#033[00m
Jan 31 03:33:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:44.554 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f6071a46-64a6-45aa-97c6-06e6c564195b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:33:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:44.555 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c49b2ef8-cb92-44bd-b807-a46e94fc830e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:44.556 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b namespace which is not needed anymore#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.568 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.573 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.578 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.589 247403 INFO nova.virt.libvirt.driver [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Instance destroyed successfully.#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.590 247403 DEBUG nova.objects.instance [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lazy-loading 'resources' on Instance uuid 0236c768-8b91-4ca0-94b9-ec028c671266 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.628 247403 DEBUG nova.virt.libvirt.vif [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:33:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-29964497',display_name='tempest-ServersTestJSON-server-29964497',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-29964497',id=134,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:33:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='40db421b27d84f809f8074c58151327f',ramdisk_id='',reservation_id='r-yo24wivb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_
ram='0',owner_project_name='tempest-ServersTestJSON-1064072764',owner_user_name='tempest-ServersTestJSON-1064072764-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:33:16Z,user_data=None,user_id='fb3f20f0143d465ebfe98f6a13200890',uuid=0236c768-8b91-4ca0-94b9-ec028c671266,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.628 247403 DEBUG nova.network.os_vif_util [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converting VIF {"id": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "address": "fa:16:3e:24:da:b6", "network": {"id": "f6071a46-64a6-45aa-97c6-06e6c564195b", "bridge": "br-int", "label": "tempest-ServersTestJSON-1491278061-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "40db421b27d84f809f8074c58151327f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f2a0280-14", "ovs_interfaceid": "7f2a0280-14bd-4a11-9882-dc3c57cd2563", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.629 247403 DEBUG nova.network.os_vif_util [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.630 247403 DEBUG os_vif [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.633 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f2a0280-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.639 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.641 247403 INFO os_vif [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:da:b6,bridge_name='br-int',has_traffic_filtering=True,id=7f2a0280-14bd-4a11-9882-dc3c57cd2563,network=Network(f6071a46-64a6-45aa-97c6-06e6c564195b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f2a0280-14')#033[00m
Jan 31 03:33:44 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [NOTICE]   (339528) : haproxy version is 2.8.14-c23fe91
Jan 31 03:33:44 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [NOTICE]   (339528) : path to executable is /usr/sbin/haproxy
Jan 31 03:33:44 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [WARNING]  (339528) : Exiting Master process...
Jan 31 03:33:44 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [ALERT]    (339528) : Current worker (339532) exited with code 143 (Terminated)
Jan 31 03:33:44 np0005603621 neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b[339506]: [WARNING]  (339528) : All workers exited. Exiting... (0)
Jan 31 03:33:44 np0005603621 systemd[1]: libpod-ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1.scope: Deactivated successfully.
Jan 31 03:33:44 np0005603621 podman[340138]: 2026-01-31 08:33:44.714699055 +0000 UTC m=+0.074699209 container died ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:33:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1-userdata-shm.mount: Deactivated successfully.
Jan 31 03:33:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bdf1d97674b84c300e078edffd74d3f1154c2c3eab006cbafef2fff32d71b6be-merged.mount: Deactivated successfully.
Jan 31 03:33:44 np0005603621 nova_compute[247399]: 2026-01-31 08:33:44.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:44 np0005603621 podman[340138]: 2026-01-31 08:33:44.940890257 +0000 UTC m=+0.300890411 container cleanup ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Jan 31 03:33:44 np0005603621 systemd[1]: libpod-conmon-ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1.scope: Deactivated successfully.
Jan 31 03:33:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:45.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:45 np0005603621 podman[340186]: 2026-01-31 08:33:45.277504878 +0000 UTC m=+0.320749899 container remove ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:33:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.281 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[368d55db-fa93-4fcf-bb4f-106803c78bd8]: (4, ('Sat Jan 31 08:33:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b (ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1)\necdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1\nSat Jan 31 08:33:44 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b (ecdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1)\necdd81208f5d2487f17164175d1629c62a579d2b733b07a0d55efc52184217c1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.283 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4298fa-21de-42dd-a094-3491121c6b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.284 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf6071a46-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:45 np0005603621 kernel: tapf6071a46-60: left promiscuous mode
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.299 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e149a43-5c50-4d4e-a950-5cfa59611e62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.316 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c32293-26c8-4a28-b449-e5a417111a91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.318 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37adee70-9860-45b3-9e4f-4d8609b5b1ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.334 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e90bf37b-9ca2-4262-baab-9ec3688e772d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770149, 'reachable_time': 25979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340202, 'error': None, 'target': 'ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 systemd[1]: run-netns-ovnmeta\x2df6071a46\x2d64a6\x2d45aa\x2d97c6\x2d06e6c564195b.mount: Deactivated successfully.
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.337 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f6071a46-64a6-45aa-97c6-06e6c564195b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:33:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:45.338 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c2779f1b-acc0-4678-8808-1877661e15df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.765 247403 INFO nova.virt.libvirt.driver [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Deleting instance files /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266_del#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.766 247403 INFO nova.virt.libvirt.driver [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Deletion of /var/lib/nova/instances/0236c768-8b91-4ca0-94b9-ec028c671266_del complete#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.977 247403 INFO nova.compute.manager [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Took 1.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.977 247403 DEBUG oslo.service.loopingcall [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.977 247403 DEBUG nova.compute.manager [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:33:45 np0005603621 nova_compute[247399]: 2026-01-31 08:33:45.978 247403 DEBUG nova.network.neutron [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:33:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 377 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 408 op/s
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.159 247403 DEBUG nova.network.neutron [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.357 247403 DEBUG nova.compute.manager [req-478d6602-c038-4284-8f68-aebbfa2e2925 req-7eb4c6d9-64cf-41c5-b61a-d6fc564c510c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Received event network-vif-deleted-7f2a0280-14bd-4a11-9882-dc3c57cd2563 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.358 247403 INFO nova.compute.manager [req-478d6602-c038-4284-8f68-aebbfa2e2925 req-7eb4c6d9-64cf-41c5-b61a-d6fc564c510c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Neutron deleted interface 7f2a0280-14bd-4a11-9882-dc3c57cd2563; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.358 247403 DEBUG nova.network.neutron [req-478d6602-c038-4284-8f68-aebbfa2e2925 req-7eb4c6d9-64cf-41c5-b61a-d6fc564c510c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.361 247403 INFO nova.compute.manager [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Took 1.38 seconds to deallocate network for instance.#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.574 247403 DEBUG nova.compute.manager [req-478d6602-c038-4284-8f68-aebbfa2e2925 req-7eb4c6d9-64cf-41c5-b61a-d6fc564c510c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Detach interface failed, port_id=7f2a0280-14bd-4a11-9882-dc3c57cd2563, reason: Instance 0236c768-8b91-4ca0-94b9-ec028c671266 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.774 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.775 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:47 np0005603621 nova_compute[247399]: 2026-01-31 08:33:47.828 247403 DEBUG oslo_concurrency.processutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/256080249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:48 np0005603621 nova_compute[247399]: 2026-01-31 08:33:48.286 247403 DEBUG oslo_concurrency.processutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:48 np0005603621 nova_compute[247399]: 2026-01-31 08:33:48.292 247403 DEBUG nova.compute.provider_tree [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 518 KiB/s wr, 318 op/s
Jan 31 03:33:48 np0005603621 nova_compute[247399]: 2026-01-31 08:33:48.344 247403 DEBUG nova.scheduler.client.report [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:48 np0005603621 nova_compute[247399]: 2026-01-31 08:33:48.404 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:48 np0005603621 nova_compute[247399]: 2026-01-31 08:33:48.509 247403 INFO nova.scheduler.client.report [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Deleted allocations for instance 0236c768-8b91-4ca0-94b9-ec028c671266#033[00m
Jan 31 03:33:48 np0005603621 nova_compute[247399]: 2026-01-31 08:33:48.796 247403 DEBUG oslo_concurrency.lockutils [None req-0e2f1d87-713b-404c-b258-9c37645019a5 fb3f20f0143d465ebfe98f6a13200890 40db421b27d84f809f8074c58151327f - - default default] Lock "0236c768-8b91-4ca0-94b9-ec028c671266" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006509507898287735 of space, bias 1.0, pg target 1.9528523694863207 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00216414101758794 of space, bias 1.0, pg target 0.647078164258794 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:33:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.496 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.498 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.540 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.714 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.715 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.722 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.722 247403 INFO nova.compute.claims [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:33:49 np0005603621 nova_compute[247399]: 2026-01-31 08:33:49.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.051 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.201 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.201 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.249 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.249 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:33:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 121 KiB/s wr, 272 op/s
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.503 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.509 247403 DEBUG nova.compute.provider_tree [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.638 247403 DEBUG nova.scheduler.client.report [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.702 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.703 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.789 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.790 247403 DEBUG nova.network.neutron [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.860 247403 INFO nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:33:50 np0005603621 nova_compute[247399]: 2026-01-31 08:33:50.926 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.080 247403 DEBUG nova.policy [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef51681d234a4abc88ff433d0640b6e7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '953a213fa5cb435ab3c04ad96152685f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:33:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:51.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.219 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.221 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.221 247403 INFO nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Creating image(s)#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.253 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.294 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.324 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.330 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.401 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.402 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.402 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.403 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.433 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.437 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.761 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.826 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] resizing rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:33:51 np0005603621 nova_compute[247399]: 2026-01-31 08:33:51.961 247403 DEBUG nova.objects.instance [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'migration_context' on Instance uuid 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:52 np0005603621 nova_compute[247399]: 2026-01-31 08:33:52.123 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:33:52 np0005603621 nova_compute[247399]: 2026-01-31 08:33:52.123 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Ensure instance console log exists: /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:33:52 np0005603621 nova_compute[247399]: 2026-01-31 08:33:52.123 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:52 np0005603621 nova_compute[247399]: 2026-01-31 08:33:52.124 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:52 np0005603621 nova_compute[247399]: 2026-01-31 08:33:52.124 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 279 op/s
Jan 31 03:33:53 np0005603621 nova_compute[247399]: 2026-01-31 08:33:53.068 247403 DEBUG nova.network.neutron [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Successfully created port: 9f6617a9-7a5b-49df-8f47-eb9e246413dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:33:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:53.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:53 np0005603621 nova_compute[247399]: 2026-01-31 08:33:53.242 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:53.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:33:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 304 KiB/s rd, 477 KiB/s wr, 105 op/s
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.484 247403 DEBUG nova.network.neutron [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Successfully updated port: 9f6617a9-7a5b-49df-8f47-eb9e246413dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.566 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.566 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.566 247403 DEBUG nova.network.neutron [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.679 247403 DEBUG nova.compute.manager [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received event network-changed-9f6617a9-7a5b-49df-8f47-eb9e246413dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.679 247403 DEBUG nova.compute.manager [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Refreshing instance network info cache due to event network-changed-9f6617a9-7a5b-49df-8f47-eb9e246413dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.680 247403 DEBUG oslo_concurrency.lockutils [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:33:54 np0005603621 nova_compute[247399]: 2026-01-31 08:33:54.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.008 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848420.0070546, 52f0a44e-9891-4fc6-a679-3804d0eb6ab5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.009 247403 INFO nova.compute.manager [-] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.053 247403 DEBUG nova.compute.manager [None req-68c4b9cf-1327-4b44-a1a1-92d4af60e65b - - - - - -] [instance: 52f0a44e-9891-4fc6-a679-3804d0eb6ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:55.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.310 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.310 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.310 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.311 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.311 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.370 247403 DEBUG nova.network.neutron [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:33:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840651802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.762 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.890 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.891 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4190MB free_disk=20.846515655517578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.892 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:55 np0005603621 nova_compute[247399]: 2026-01-31 08:33:55.892 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.219 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.219 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.220 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.279 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 405 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 86 op/s
Jan 31 03:33:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:33:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1851879015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.719 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.723 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.783 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.833 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:33:56 np0005603621 nova_compute[247399]: 2026-01-31 08:33:56.833 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.160 247403 DEBUG nova.network.neutron [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Updating instance_info_cache with network_info: [{"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:57.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.201 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.202 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance network_info: |[{"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.202 247403 DEBUG oslo_concurrency.lockutils [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.202 247403 DEBUG nova.network.neutron [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Refreshing network info cache for port 9f6617a9-7a5b-49df-8f47-eb9e246413dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.205 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Start _get_guest_xml network_info=[{"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.208 247403 WARNING nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.212 247403 DEBUG nova.virt.libvirt.host [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.213 247403 DEBUG nova.virt.libvirt.host [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.217 247403 DEBUG nova.virt.libvirt.host [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.217 247403 DEBUG nova.virt.libvirt.host [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.219 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.219 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.219 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.219 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.220 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.220 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.220 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.220 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.220 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.220 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.221 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.221 247403 DEBUG nova.virt.hardware [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.224 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127718213' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.644 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.665 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:57 np0005603621 nova_compute[247399]: 2026-01-31 08:33:57.668 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:33:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180854024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.126 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.127 247403 DEBUG nova.virt.libvirt.vif [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1695242703',display_name='tempest-ServerActionsTestOtherB-server-1695242703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1695242703',id=137,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-laclk3ex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:50Z,user_data=None,user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.128 247403 DEBUG nova.network.os_vif_util [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.129 247403 DEBUG nova.network.os_vif_util [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.130 247403 DEBUG nova.objects.instance [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'pci_devices' on Instance uuid 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.200 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <uuid>176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc</uuid>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <name>instance-00000089</name>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerActionsTestOtherB-server-1695242703</nova:name>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:33:57</nova:creationTime>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:user uuid="ef51681d234a4abc88ff433d0640b6e7">tempest-ServerActionsTestOtherB-1048458052-project-member</nova:user>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:project uuid="953a213fa5cb435ab3c04ad96152685f">tempest-ServerActionsTestOtherB-1048458052</nova:project>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <nova:port uuid="9f6617a9-7a5b-49df-8f47-eb9e246413dc">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <entry name="serial">176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc</entry>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <entry name="uuid">176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc</entry>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk.config">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:79:85:d5"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <target dev="tap9f6617a9-7a"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/console.log" append="off"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:33:58 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:33:58 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:33:58 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:33:58 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.201 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Preparing to wait for external event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.201 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.201 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.201 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.202 247403 DEBUG nova.virt.libvirt.vif [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:33:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1695242703',display_name='tempest-ServerActionsTestOtherB-server-1695242703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1695242703',id=137,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-laclk3ex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:33:50Z,user_data=None,user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.202 247403 DEBUG nova.network.os_vif_util [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.203 247403 DEBUG nova.network.os_vif_util [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.203 247403 DEBUG os_vif [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.204 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.204 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.205 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.207 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.207 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f6617a9-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.207 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f6617a9-7a, col_values=(('external_ids', {'iface-id': '9f6617a9-7a5b-49df-8f47-eb9e246413dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:85:d5', 'vm-uuid': '176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.208 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:58 np0005603621 NetworkManager[49013]: <info>  [1769848438.2094] manager: (tap9f6617a9-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.214 247403 INFO os_vif [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a')#033[00m
Jan 31 03:33:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 423 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 2.4 MiB/s wr, 38 op/s
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.417 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.417 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.417 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No VIF found with MAC fa:16:3e:79:85:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.418 247403 INFO nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Using config drive#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.438 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.833 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:58 np0005603621 nova_compute[247399]: 2026-01-31 08:33:58.833 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:33:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:33:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:33:59.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.220 247403 INFO nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Creating config drive at /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/disk.config#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.224 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsu4z70_y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:33:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:33:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:33:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.351 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpsu4z70_y" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.376 247403 DEBUG nova.storage.rbd_utils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.380 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/disk.config 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.487 247403 DEBUG nova.network.neutron [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Updated VIF entry in instance network info cache for port 9f6617a9-7a5b-49df-8f47-eb9e246413dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.488 247403 DEBUG nova.network.neutron [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Updating instance_info_cache with network_info: [{"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.560 247403 DEBUG oslo_concurrency.lockutils [req-ce672056-d911-49d6-8136-dbb069730b16 req-df40fe89-9f37-4f4f-86b0-f6b36d2a16ef fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.561 247403 DEBUG oslo_concurrency.processutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/disk.config 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.561 247403 INFO nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Deleting local config drive /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc/disk.config because it was imported into RBD.#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.584 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848424.583719, 0236c768-8b91-4ca0-94b9-ec028c671266 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.585 247403 INFO nova.compute.manager [-] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:33:59 np0005603621 kernel: tap9f6617a9-7a: entered promiscuous mode
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.6021] manager: (tap9f6617a9-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.603 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:59Z|00533|binding|INFO|Claiming lport 9f6617a9-7a5b-49df-8f47-eb9e246413dc for this chassis.
Jan 31 03:33:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:59Z|00534|binding|INFO|9f6617a9-7a5b-49df-8f47-eb9e246413dc: Claiming fa:16:3e:79:85:d5 10.100.0.8
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.606 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.608 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.610 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 systemd-machined[212769]: New machine qemu-65-instance-00000089.
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.640 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.6421] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.6431] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 31 03:33:59 np0005603621 systemd[1]: Started Virtual Machine qemu-65-instance-00000089.
Jan 31 03:33:59 np0005603621 systemd-udevd[340651]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.6636] device (tap9f6617a9-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.6645] device (tap9f6617a9-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.708 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:85:d5 10.100.0.8'], port_security=['fa:16:3e:79:85:d5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8c881e0-722d-4784-9f91-71ffaeb0ba02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=9f6617a9-7a5b-49df-8f47-eb9e246413dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.709 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 9f6617a9-7a5b-49df-8f47-eb9e246413dc in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 bound to our chassis#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.710 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44469d8b-ad30-4270-88fa-e67c568f3150#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.718 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2dffb6-baae-4020-b81e-ff5e3bf2dfcf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.719 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44469d8b-a1 in ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.721 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44469d8b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.721 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e6109cac-6a06-4c72-827e-c98a7d1e124e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.722 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.722 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[949f93bc-7f60-4446-98eb-232bb506bf62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.731 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6a30ff-d90a-48e1-9135-366d4166decb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.739 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0feaf6e-d663-4a44-9dc1-6ffa22f9620f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.743 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:59Z|00535|binding|INFO|Setting lport 9f6617a9-7a5b-49df-8f47-eb9e246413dc ovn-installed in OVS
Jan 31 03:33:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:59Z|00536|binding|INFO|Setting lport 9f6617a9-7a5b-49df-8f47-eb9e246413dc up in Southbound
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.751 247403 DEBUG nova.compute.manager [None req-361986cf-6350-4545-ae8f-edc378621a05 - - - - - -] [instance: 0236c768-8b91-4ca0-94b9-ec028c671266] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.762 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0c87b96f-3769-4bfe-b71a-fa989fb6776e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 systemd-udevd[340657]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.7674] manager: (tap44469d8b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.769 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1b629fcf-134f-43d8-894a-bd73de54b6c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.789 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f98ba3b0-c65a-459a-aff5-17bb33ee926f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.792 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2e4f1d66-5445-4d14-968d-699b60483a86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.8061] device (tap44469d8b-a0): carrier: link connected
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.809 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb62e5b-6cca-47ef-8432-5450b63c95f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.820 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e151aeea-ffc3-48b3-bd67-ee8a15c80556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774532, 'reachable_time': 15506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340684, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.830 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[808e0bbb-527f-4366-a97a-04ba043ca7f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:9820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774532, 'tstamp': 774532}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340685, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.842 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4a2739-ee5d-4afb-ab2d-650c9d7ab853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774532, 'reachable_time': 15506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340686, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.862 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6d90f8-9300-4b0c-9c27-d9b898e9526a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.880 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.904 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c08fbeaf-db4e-444b-8fd1-fff5a300a005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.906 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.906 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.906 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44469d8b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 kernel: tap44469d8b-a0: entered promiscuous mode
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.909 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 NetworkManager[49013]: <info>  [1769848439.9099] manager: (tap44469d8b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.911 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44469d8b-a0, col_values=(('external_ids', {'iface-id': '7e288124-e200-4c03-8a4a-baab3e3f3d7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:33:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:33:59Z|00537|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.912 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.912 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.913 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[32cbfab2-649c-4551-ba61-b0770f39d373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.914 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:33:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:33:59.915 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'env', 'PROCESS_TAG=haproxy-44469d8b-ad30-4270-88fa-e67c568f3150', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44469d8b-ad30-4270-88fa-e67c568f3150.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:33:59 np0005603621 nova_compute[247399]: 2026-01-31 08:33:59.917 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.075 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848440.0754504, 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.076 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] VM Started (Lifecycle Event)#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.114 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.119 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848440.0755408, 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.119 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.159 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.163 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.220 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:34:00 np0005603621 podman[340761]: 2026-01-31 08:34:00.28733848 +0000 UTC m=+0.100196408 container create 7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:34:00 np0005603621 podman[340761]: 2026-01-31 08:34:00.20911759 +0000 UTC m=+0.021975548 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:34:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 423 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.4 MiB/s wr, 35 op/s
Jan 31 03:34:00 np0005603621 systemd[1]: Started libpod-conmon-7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca.scope.
Jan 31 03:34:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01cfe1a3e56839c2e7de5ddfc79172ce689dfc8cb116319646ae823fec694830/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:00 np0005603621 podman[340761]: 2026-01-31 08:34:00.357581317 +0000 UTC m=+0.170439265 container init 7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:34:00 np0005603621 podman[340761]: 2026-01-31 08:34:00.361703667 +0000 UTC m=+0.174561595 container start 7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:34:00 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [NOTICE]   (340780) : New worker (340782) forked
Jan 31 03:34:00 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [NOTICE]   (340780) : Loading success.
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.460 247403 DEBUG nova.compute.manager [req-5a5885a1-a66f-4721-a8be-69aa55332fe1 req-ee6c069e-ad70-4fd9-a995-2cd13b14ee48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.460 247403 DEBUG oslo_concurrency.lockutils [req-5a5885a1-a66f-4721-a8be-69aa55332fe1 req-ee6c069e-ad70-4fd9-a995-2cd13b14ee48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.461 247403 DEBUG oslo_concurrency.lockutils [req-5a5885a1-a66f-4721-a8be-69aa55332fe1 req-ee6c069e-ad70-4fd9-a995-2cd13b14ee48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.461 247403 DEBUG oslo_concurrency.lockutils [req-5a5885a1-a66f-4721-a8be-69aa55332fe1 req-ee6c069e-ad70-4fd9-a995-2cd13b14ee48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.461 247403 DEBUG nova.compute.manager [req-5a5885a1-a66f-4721-a8be-69aa55332fe1 req-ee6c069e-ad70-4fd9-a995-2cd13b14ee48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Processing event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.462 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.467 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.468 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848440.4667046, 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.468 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.475 247403 INFO nova.virt.libvirt.driver [-] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance spawned successfully.#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.476 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.609 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.613 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.644 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.648 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.648 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.649 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.649 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.650 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.650 247403 DEBUG nova.virt.libvirt.driver [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.755 247403 INFO nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Took 9.53 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.755 247403 DEBUG nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.887 247403 INFO nova.compute.manager [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Took 11.19 seconds to build instance.#033[00m
Jan 31 03:34:00 np0005603621 nova_compute[247399]: 2026-01-31 08:34:00.960 247403 DEBUG oslo_concurrency.lockutils [None req-31cdb728-3e1b-42ad-88aa-7631e7e7cf7d ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:01.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 481 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 4.7 MiB/s wr, 85 op/s
Jan 31 03:34:02 np0005603621 nova_compute[247399]: 2026-01-31 08:34:02.593 247403 DEBUG nova.compute.manager [req-1787293c-edd9-4d13-8997-1b330f836c37 req-9c1d84b8-aeb9-423f-bd5b-031ee5a8cf25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:02 np0005603621 nova_compute[247399]: 2026-01-31 08:34:02.593 247403 DEBUG oslo_concurrency.lockutils [req-1787293c-edd9-4d13-8997-1b330f836c37 req-9c1d84b8-aeb9-423f-bd5b-031ee5a8cf25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:02 np0005603621 nova_compute[247399]: 2026-01-31 08:34:02.593 247403 DEBUG oslo_concurrency.lockutils [req-1787293c-edd9-4d13-8997-1b330f836c37 req-9c1d84b8-aeb9-423f-bd5b-031ee5a8cf25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:02 np0005603621 nova_compute[247399]: 2026-01-31 08:34:02.593 247403 DEBUG oslo_concurrency.lockutils [req-1787293c-edd9-4d13-8997-1b330f836c37 req-9c1d84b8-aeb9-423f-bd5b-031ee5a8cf25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:02 np0005603621 nova_compute[247399]: 2026-01-31 08:34:02.593 247403 DEBUG nova.compute.manager [req-1787293c-edd9-4d13-8997-1b330f836c37 req-9c1d84b8-aeb9-423f-bd5b-031ee5a8cf25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] No waiting events found dispatching network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:02 np0005603621 nova_compute[247399]: 2026-01-31 08:34:02.594 247403 WARNING nova.compute.manager [req-1787293c-edd9-4d13-8997-1b330f836c37 req-9c1d84b8-aeb9-423f-bd5b-031ee5a8cf25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received unexpected event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc for instance with vm_state active and task_state None.#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.164 247403 INFO nova.compute.manager [None req-4f67df9f-dda5-4e10-9459-1b729b9631b4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Pausing#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.165 247403 DEBUG nova.objects.instance [None req-4f67df9f-dda5-4e10-9459-1b729b9631b4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'flavor' on Instance uuid 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:03.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.208 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848443.20833, 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.209 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.212 247403 DEBUG nova.compute.manager [None req-4f67df9f-dda5-4e10-9459-1b729b9631b4 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.253 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.257 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:03.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:03Z|00538|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:34:03 np0005603621 nova_compute[247399]: 2026-01-31 08:34:03.473 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 497 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 5.3 MiB/s wr, 106 op/s
Jan 31 03:34:04 np0005603621 nova_compute[247399]: 2026-01-31 08:34:04.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:05 np0005603621 nova_compute[247399]: 2026-01-31 08:34:05.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:05.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.9 MiB/s wr, 200 op/s
Jan 31 03:34:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:07.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:07.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:07 np0005603621 nova_compute[247399]: 2026-01-31 08:34:07.807 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:07 np0005603621 nova_compute[247399]: 2026-01-31 08:34:07.807 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:07 np0005603621 nova_compute[247399]: 2026-01-31 08:34:07.807 247403 INFO nova.compute.manager [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Shelving#033[00m
Jan 31 03:34:07 np0005603621 kernel: tap9f6617a9-7a (unregistering): left promiscuous mode
Jan 31 03:34:07 np0005603621 NetworkManager[49013]: <info>  [1769848447.8765] device (tap9f6617a9-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:34:07 np0005603621 nova_compute[247399]: 2026-01-31 08:34:07.881 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:07Z|00539|binding|INFO|Releasing lport 9f6617a9-7a5b-49df-8f47-eb9e246413dc from this chassis (sb_readonly=0)
Jan 31 03:34:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:07Z|00540|binding|INFO|Setting lport 9f6617a9-7a5b-49df-8f47-eb9e246413dc down in Southbound
Jan 31 03:34:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:07Z|00541|binding|INFO|Removing iface tap9f6617a9-7a ovn-installed in OVS
Jan 31 03:34:07 np0005603621 nova_compute[247399]: 2026-01-31 08:34:07.884 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:07 np0005603621 nova_compute[247399]: 2026-01-31 08:34:07.891 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:07.891 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:85:d5 10.100.0.8'], port_security=['fa:16:3e:79:85:d5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8c881e0-722d-4784-9f91-71ffaeb0ba02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=9f6617a9-7a5b-49df-8f47-eb9e246413dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:07.893 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 9f6617a9-7a5b-49df-8f47-eb9e246413dc in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 unbound from our chassis#033[00m
Jan 31 03:34:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:07.894 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44469d8b-ad30-4270-88fa-e67c568f3150, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:34:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:07.895 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ea5eeb-4e02-42b9-ad57-3faecd320776]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:07.895 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace which is not needed anymore#033[00m
Jan 31 03:34:07 np0005603621 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 31 03:34:07 np0005603621 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000089.scope: Consumed 3.289s CPU time.
Jan 31 03:34:07 np0005603621 systemd-machined[212769]: Machine qemu-65-instance-00000089 terminated.
Jan 31 03:34:08 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [NOTICE]   (340780) : haproxy version is 2.8.14-c23fe91
Jan 31 03:34:08 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [NOTICE]   (340780) : path to executable is /usr/sbin/haproxy
Jan 31 03:34:08 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [WARNING]  (340780) : Exiting Master process...
Jan 31 03:34:08 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [ALERT]    (340780) : Current worker (340782) exited with code 143 (Terminated)
Jan 31 03:34:08 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[340776]: [WARNING]  (340780) : All workers exited. Exiting... (0)
Jan 31 03:34:08 np0005603621 systemd[1]: libpod-7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca.scope: Deactivated successfully.
Jan 31 03:34:08 np0005603621 podman[340820]: 2026-01-31 08:34:08.023021046 +0000 UTC m=+0.041042903 container died 7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:34:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca-userdata-shm.mount: Deactivated successfully.
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-01cfe1a3e56839c2e7de5ddfc79172ce689dfc8cb116319646ae823fec694830-merged.mount: Deactivated successfully.
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.051 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:08 np0005603621 podman[340820]: 2026-01-31 08:34:08.064322665 +0000 UTC m=+0.082344542 container cleanup 7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.070 247403 INFO nova.virt.libvirt.driver [-] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance destroyed successfully.#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.071 247403 DEBUG nova.objects.instance [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'numa_topology' on Instance uuid 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:08 np0005603621 systemd[1]: libpod-conmon-7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca.scope: Deactivated successfully.
Jan 31 03:34:08 np0005603621 podman[340860]: 2026-01-31 08:34:08.126800456 +0000 UTC m=+0.041700254 container remove 7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.130 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[506274ab-4c1c-4fe6-b79c-75b775a2b0ca]: (4, ('Sat Jan 31 08:34:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca)\n7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca\nSat Jan 31 08:34:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca)\n7350954e58f1afdc12e5ebcebc5ce16cc8991bc16ba4448d8adbe3a6dc7f4aca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.132 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5951b4-5b14-413a-92f1-d460bd6e28f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.133 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:08 np0005603621 kernel: tap44469d8b-a0: left promiscuous mode
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.146 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[48c920e0-fd2b-4a8a-85d1-254ce5e82f5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.170 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c624f57d-88df-4d13-8d98-0c4fa8149cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.171 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3279c4-9912-41b3-b9bc-173252030afb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.184 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b80078e1-6690-4f83-ab83-f296ca34bdc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774527, 'reachable_time': 37286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340882, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.187 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:34:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:08.187 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fb080265-8db2-4873-b282-3566fe6b195c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:08 np0005603621 systemd[1]: run-netns-ovnmeta\x2d44469d8b\x2dad30\x2d4270\x2d88fa\x2de67c568f3150.mount: Deactivated successfully.
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.242 247403 DEBUG nova.compute.manager [req-2fcd04c6-48e3-4a0f-ab23-036a10667fc5 req-6978cbb4-762f-48a1-8bf0-ae1d403ba606 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received event network-vif-unplugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.242 247403 DEBUG oslo_concurrency.lockutils [req-2fcd04c6-48e3-4a0f-ab23-036a10667fc5 req-6978cbb4-762f-48a1-8bf0-ae1d403ba606 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.242 247403 DEBUG oslo_concurrency.lockutils [req-2fcd04c6-48e3-4a0f-ab23-036a10667fc5 req-6978cbb4-762f-48a1-8bf0-ae1d403ba606 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.243 247403 DEBUG oslo_concurrency.lockutils [req-2fcd04c6-48e3-4a0f-ab23-036a10667fc5 req-6978cbb4-762f-48a1-8bf0-ae1d403ba606 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.243 247403 DEBUG nova.compute.manager [req-2fcd04c6-48e3-4a0f-ab23-036a10667fc5 req-6978cbb4-762f-48a1-8bf0-ae1d403ba606 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] No waiting events found dispatching network-vif-unplugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.243 247403 WARNING nova.compute.manager [req-2fcd04c6-48e3-4a0f-ab23-036a10667fc5 req-6978cbb4-762f-48a1-8bf0-ae1d403ba606 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received unexpected event network-vif-unplugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc for instance with vm_state paused and task_state shelving.#033[00m
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.710 247403 INFO nova.virt.libvirt.driver [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Beginning cold snapshot process#033[00m
Jan 31 03:34:08 np0005603621 nova_compute[247399]: 2026-01-31 08:34:08.941 247403 DEBUG nova.virt.libvirt.imagebackend [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:34:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:09.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:09 np0005603621 nova_compute[247399]: 2026-01-31 08:34:09.309 247403 DEBUG nova.storage.rbd_utils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(0723cbff235b449bb40e6b5159593f6f) on rbd image(176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:34:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 31 03:34:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 31 03:34:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 31 03:34:09 np0005603621 nova_compute[247399]: 2026-01-31 08:34:09.781 247403 DEBUG nova.storage.rbd_utils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] cloning vms/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk@0723cbff235b449bb40e6b5159593f6f to images/e787704c-b374-4706-807f-a0d18a0ac398 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:34:09 np0005603621 nova_compute[247399]: 2026-01-31 08:34:09.885 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:09 np0005603621 nova_compute[247399]: 2026-01-31 08:34:09.923 247403 DEBUG nova.storage.rbd_utils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] flattening images/e787704c-b374-4706-807f-a0d18a0ac398 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:34:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 240 op/s
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.436 247403 DEBUG nova.storage.rbd_utils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] removing snapshot(0723cbff235b449bb40e6b5159593f6f) on rbd image(176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.481 247403 DEBUG nova.compute.manager [req-1bfa943c-f165-475a-862a-4442624a820d req-d229241b-8a33-4eaa-80b9-16cde143dd08 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.482 247403 DEBUG oslo_concurrency.lockutils [req-1bfa943c-f165-475a-862a-4442624a820d req-d229241b-8a33-4eaa-80b9-16cde143dd08 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.482 247403 DEBUG oslo_concurrency.lockutils [req-1bfa943c-f165-475a-862a-4442624a820d req-d229241b-8a33-4eaa-80b9-16cde143dd08 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.482 247403 DEBUG oslo_concurrency.lockutils [req-1bfa943c-f165-475a-862a-4442624a820d req-d229241b-8a33-4eaa-80b9-16cde143dd08 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.482 247403 DEBUG nova.compute.manager [req-1bfa943c-f165-475a-862a-4442624a820d req-d229241b-8a33-4eaa-80b9-16cde143dd08 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] No waiting events found dispatching network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.483 247403 WARNING nova.compute.manager [req-1bfa943c-f165-475a-862a-4442624a820d req-d229241b-8a33-4eaa-80b9-16cde143dd08 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received unexpected event network-vif-plugged-9f6617a9-7a5b-49df-8f47-eb9e246413dc for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Jan 31 03:34:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 31 03:34:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 31 03:34:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 31 03:34:10 np0005603621 nova_compute[247399]: 2026-01-31 08:34:10.720 247403 DEBUG nova.storage.rbd_utils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] creating snapshot(snap) on rbd image(e787704c-b374-4706-807f-a0d18a0ac398) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:34:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:11.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:11.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 31 03:34:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 31 03:34:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 31 03:34:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 530 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.8 MiB/s wr, 221 op/s
Jan 31 03:34:12 np0005603621 podman[341076]: 2026-01-31 08:34:12.488706288 +0000 UTC m=+0.038611336 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:34:12 np0005603621 podman[341077]: 2026-01-31 08:34:12.525959938 +0000 UTC m=+0.075874137 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 03:34:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:13.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:13 np0005603621 nova_compute[247399]: 2026-01-31 08:34:13.213 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:13.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 31 03:34:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 31 03:34:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 31 03:34:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 525 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 4.6 MiB/s wr, 309 op/s
Jan 31 03:34:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 31 03:34:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 31 03:34:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 31 03:34:14 np0005603621 nova_compute[247399]: 2026-01-31 08:34:14.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.079 247403 INFO nova.virt.libvirt.driver [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Snapshot image upload complete#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.080 247403 DEBUG nova.compute.manager [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.242 247403 INFO nova.compute.manager [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Shelve offloading#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.248 247403 INFO nova.virt.libvirt.driver [-] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance destroyed successfully.#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.248 247403 DEBUG nova.compute.manager [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.250 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.250 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:15 np0005603621 nova_compute[247399]: 2026-01-31 08:34:15.250 247403 DEBUG nova.network.neutron [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:34:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:15.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 31 03:34:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 31 03:34:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:34:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 530 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.2 MiB/s wr, 317 op/s
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:34:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 69ec6d7c-e71d-4671-a565-37d1e0c2b04f does not exist
Jan 31 03:34:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 36e5de6e-026f-4d6c-85ef-f69f161b1556 does not exist
Jan 31 03:34:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0e61ad71-17a2-480c-96b9-9991d1db9b91 does not exist
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:34:16 np0005603621 nova_compute[247399]: 2026-01-31 08:34:16.682 247403 DEBUG nova.network.neutron [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Updating instance_info_cache with network_info: [{"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.760255735 +0000 UTC m=+0.037401877 container create 539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:34:16 np0005603621 nova_compute[247399]: 2026-01-31 08:34:16.772 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.741944214 +0000 UTC m=+0.019090376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:34:16 np0005603621 systemd[1]: Started libpod-conmon-539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4.scope.
Jan 31 03:34:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.891883018 +0000 UTC m=+0.169029180 container init 539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.898502807 +0000 UTC m=+0.175648949 container start 539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:34:16 np0005603621 lucid_torvalds[341409]: 167 167
Jan 31 03:34:16 np0005603621 systemd[1]: libpod-539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4.scope: Deactivated successfully.
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.910439636 +0000 UTC m=+0.187585778 container attach 539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.911333805 +0000 UTC m=+0.188479947 container died 539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:34:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:34:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bd634cfd9b26b4697f9fb030c3dca8bbd92ba3848dbac5ace152c805c94c1af0-merged.mount: Deactivated successfully.
Jan 31 03:34:16 np0005603621 podman[341393]: 2026-01-31 08:34:16.982860262 +0000 UTC m=+0.260006404 container remove 539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:34:16 np0005603621 systemd[1]: libpod-conmon-539fbfcc9b28506500e44abfc30f4a0a58d3541bab496d08a6ab2c446bd16aa4.scope: Deactivated successfully.
Jan 31 03:34:17 np0005603621 podman[341436]: 2026-01-31 08:34:17.120435134 +0000 UTC m=+0.067446980 container create e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shtern, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:17 np0005603621 systemd[1]: Started libpod-conmon-e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351.scope.
Jan 31 03:34:17 np0005603621 podman[341436]: 2026-01-31 08:34:17.075468228 +0000 UTC m=+0.022480084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:34:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d7dd4642dacbee2cae40b32b4bf7f47735d4c36fd7a7075bd42e98a58b0070/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d7dd4642dacbee2cae40b32b4bf7f47735d4c36fd7a7075bd42e98a58b0070/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d7dd4642dacbee2cae40b32b4bf7f47735d4c36fd7a7075bd42e98a58b0070/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d7dd4642dacbee2cae40b32b4bf7f47735d4c36fd7a7075bd42e98a58b0070/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d7dd4642dacbee2cae40b32b4bf7f47735d4c36fd7a7075bd42e98a58b0070/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:17 np0005603621 podman[341436]: 2026-01-31 08:34:17.19064696 +0000 UTC m=+0.137658836 container init e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:34:17 np0005603621 podman[341436]: 2026-01-31 08:34:17.196489315 +0000 UTC m=+0.143501181 container start e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shtern, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:34:17 np0005603621 podman[341436]: 2026-01-31 08:34:17.2073592 +0000 UTC m=+0.154371076 container attach e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shtern, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:34:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:17.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:17 np0005603621 bold_shtern[341453]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:34:17 np0005603621 bold_shtern[341453]: --> relative data size: 1.0
Jan 31 03:34:17 np0005603621 bold_shtern[341453]: --> All data devices are unavailable
Jan 31 03:34:18 np0005603621 systemd[1]: libpod-e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351.scope: Deactivated successfully.
Jan 31 03:34:18 np0005603621 conmon[341453]: conmon e010e69a2c6e4cb3b7d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351.scope/container/memory.events
Jan 31 03:34:18 np0005603621 podman[341436]: 2026-01-31 08:34:18.002861651 +0000 UTC m=+0.949873507 container died e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shtern, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:34:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83d7dd4642dacbee2cae40b32b4bf7f47735d4c36fd7a7075bd42e98a58b0070-merged.mount: Deactivated successfully.
Jan 31 03:34:18 np0005603621 podman[341436]: 2026-01-31 08:34:18.052192045 +0000 UTC m=+0.999203901 container remove e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:34:18 np0005603621 systemd[1]: libpod-conmon-e010e69a2c6e4cb3b7d2adc7b030c20214a0f1afe108e862d4c3230cd7a9a351.scope: Deactivated successfully.
Jan 31 03:34:18 np0005603621 nova_compute[247399]: 2026-01-31 08:34:18.181 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:18.181 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:18.183 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:34:18 np0005603621 nova_compute[247399]: 2026-01-31 08:34:18.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 544 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.4 MiB/s wr, 268 op/s
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.619495322 +0000 UTC m=+0.077273602 container create db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hellman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.565730497 +0000 UTC m=+0.023508797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:34:18 np0005603621 systemd[1]: Started libpod-conmon-db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901.scope.
Jan 31 03:34:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.736062257 +0000 UTC m=+0.193840567 container init db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.743312766 +0000 UTC m=+0.201091046 container start db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.746542349 +0000 UTC m=+0.204320629 container attach db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:34:18 np0005603621 flamboyant_hellman[341635]: 167 167
Jan 31 03:34:18 np0005603621 systemd[1]: libpod-db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901.scope: Deactivated successfully.
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.750873987 +0000 UTC m=+0.208652267 container died db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hellman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 31 03:34:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5102bf645e1d2531ca5b9ed2ed74c4fda8c3c5d54492fea7ee3b3faa102f7bc3-merged.mount: Deactivated successfully.
Jan 31 03:34:18 np0005603621 podman[341621]: 2026-01-31 08:34:18.791426622 +0000 UTC m=+0.249204902 container remove db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 31 03:34:18 np0005603621 systemd[1]: libpod-conmon-db4ba0c26b017deb6283085cfe94014bc58b22a3d4790108cdb334447b33e901.scope: Deactivated successfully.
Jan 31 03:34:18 np0005603621 podman[341661]: 2026-01-31 08:34:18.910331902 +0000 UTC m=+0.043119278 container create bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:34:18 np0005603621 systemd[1]: Started libpod-conmon-bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987.scope.
Jan 31 03:34:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3660a0a5e7be3b91c40d0f5c856a57a38d4c3f6ba4bbaee2c47b0dfd80f50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3660a0a5e7be3b91c40d0f5c856a57a38d4c3f6ba4bbaee2c47b0dfd80f50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3660a0a5e7be3b91c40d0f5c856a57a38d4c3f6ba4bbaee2c47b0dfd80f50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc3660a0a5e7be3b91c40d0f5c856a57a38d4c3f6ba4bbaee2c47b0dfd80f50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:18 np0005603621 podman[341661]: 2026-01-31 08:34:18.889399878 +0000 UTC m=+0.022187344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:34:18 np0005603621 podman[341661]: 2026-01-31 08:34:18.992950811 +0000 UTC m=+0.125738207 container init bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:34:18 np0005603621 podman[341661]: 2026-01-31 08:34:18.997251528 +0000 UTC m=+0.130038904 container start bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:34:19 np0005603621 podman[341661]: 2026-01-31 08:34:19.000951665 +0000 UTC m=+0.133739061 container attach bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.166 247403 INFO nova.virt.libvirt.driver [-] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Instance destroyed successfully.#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.168 247403 DEBUG nova.objects.instance [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'resources' on Instance uuid 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.193 247403 DEBUG nova.virt.libvirt.vif [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:33:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1695242703',display_name='tempest-ServerActionsTestOtherB-server-1695242703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1695242703',id=137,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:34:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-laclk3ex',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member',shelved_at='2026-01-31T08:34:15.080005',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e787704c-b374-4706-807f-a0d18a0ac398'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:34:08Z,user_data=None,user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.194 247403 DEBUG nova.network.os_vif_util [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.195 247403 DEBUG nova.network.os_vif_util [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.195 247403 DEBUG os_vif [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.198 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.198 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f6617a9-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:19.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.223 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.231 247403 INFO os_vif [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:85:d5,bridge_name='br-int',has_traffic_filtering=True,id=9f6617a9-7a5b-49df-8f47-eb9e246413dc,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f6617a9-7a')#033[00m
Jan 31 03:34:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.550 247403 DEBUG nova.compute.manager [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Received event network-changed-9f6617a9-7a5b-49df-8f47-eb9e246413dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.551 247403 DEBUG nova.compute.manager [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Refreshing instance network info cache due to event network-changed-9f6617a9-7a5b-49df-8f47-eb9e246413dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.551 247403 DEBUG oslo_concurrency.lockutils [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.551 247403 DEBUG oslo_concurrency.lockutils [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.552 247403 DEBUG nova.network.neutron [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Refreshing network info cache for port 9f6617a9-7a5b-49df-8f47-eb9e246413dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]: {
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:    "0": [
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:        {
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "devices": [
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "/dev/loop3"
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            ],
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "lv_name": "ceph_lv0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "lv_size": "7511998464",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "name": "ceph_lv0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "tags": {
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.cluster_name": "ceph",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.crush_device_class": "",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.encrypted": "0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.osd_id": "0",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.type": "block",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:                "ceph.vdo": "0"
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            },
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "type": "block",
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:            "vg_name": "ceph_vg0"
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:        }
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]:    ]
Jan 31 03:34:19 np0005603621 affectionate_antonelli[341678]: }
Jan 31 03:34:19 np0005603621 systemd[1]: libpod-bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987.scope: Deactivated successfully.
Jan 31 03:34:19 np0005603621 podman[341661]: 2026-01-31 08:34:19.841929898 +0000 UTC m=+0.974717274 container died bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:34:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fcc3660a0a5e7be3b91c40d0f5c856a57a38d4c3f6ba4bbaee2c47b0dfd80f50-merged.mount: Deactivated successfully.
Jan 31 03:34:19 np0005603621 nova_compute[247399]: 2026-01-31 08:34:19.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:19 np0005603621 podman[341661]: 2026-01-31 08:34:19.891072786 +0000 UTC m=+1.023860162 container remove bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:19 np0005603621 systemd[1]: libpod-conmon-bc3d519deb70dde30e2b96a95afaa6de1286f807d22eb01637ab32deb0e24987.scope: Deactivated successfully.
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.186 247403 INFO nova.virt.libvirt.driver [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Deleting instance files /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_del#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.187 247403 INFO nova.virt.libvirt.driver [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Deletion of /var/lib/nova/instances/176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc_del complete#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.310 247403 INFO nova.scheduler.client.report [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Deleted allocations for instance 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc#033[00m
Jan 31 03:34:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 31 03:34:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 544 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.3 MiB/s wr, 178 op/s
Jan 31 03:34:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 31 03:34:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.363 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.364 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.387 247403 DEBUG oslo_concurrency.processutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.417862367 +0000 UTC m=+0.035974271 container create 0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:34:20 np0005603621 systemd[1]: Started libpod-conmon-0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb.scope.
Jan 31 03:34:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.488022162 +0000 UTC m=+0.106134236 container init 0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.496085047 +0000 UTC m=+0.114196951 container start 0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.400970272 +0000 UTC m=+0.019082196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.50153508 +0000 UTC m=+0.119646984 container attach 0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:34:20 np0005603621 silly_raman[341877]: 167 167
Jan 31 03:34:20 np0005603621 systemd[1]: libpod-0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb.scope: Deactivated successfully.
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.502866993 +0000 UTC m=+0.120978897 container died 0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:34:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9714e357086492c8e6e30787a8033faa39b9ce5a7e2605ed0437493156c07035-merged.mount: Deactivated successfully.
Jan 31 03:34:20 np0005603621 podman[341860]: 2026-01-31 08:34:20.554606823 +0000 UTC m=+0.172718727 container remove 0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:20 np0005603621 systemd[1]: libpod-conmon-0515a9a422d523153587282109d0ac2f1529e670c6b8392bd02ab2b74b452ebb.scope: Deactivated successfully.
Jan 31 03:34:20 np0005603621 podman[341922]: 2026-01-31 08:34:20.67658275 +0000 UTC m=+0.041437724 container create 590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hawking, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:20 np0005603621 systemd[1]: Started libpod-conmon-590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb.scope.
Jan 31 03:34:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4b506005d47b2fa09286bcf6d74a9605fc53f754abea2577f068bfb3ff6dc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4b506005d47b2fa09286bcf6d74a9605fc53f754abea2577f068bfb3ff6dc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4b506005d47b2fa09286bcf6d74a9605fc53f754abea2577f068bfb3ff6dc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4b506005d47b2fa09286bcf6d74a9605fc53f754abea2577f068bfb3ff6dc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:20 np0005603621 podman[341922]: 2026-01-31 08:34:20.750033719 +0000 UTC m=+0.114888713 container init 590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:34:20 np0005603621 podman[341922]: 2026-01-31 08:34:20.655065048 +0000 UTC m=+0.019920052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:34:20 np0005603621 podman[341922]: 2026-01-31 08:34:20.755357808 +0000 UTC m=+0.120212782 container start 590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hawking, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:20 np0005603621 podman[341922]: 2026-01-31 08:34:20.759443647 +0000 UTC m=+0.124298671 container attach 590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hawking, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:34:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432861401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.846 247403 DEBUG oslo_concurrency.processutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.853 247403 DEBUG nova.compute.provider_tree [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.872 247403 DEBUG nova.scheduler.client.report [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:20 np0005603621 nova_compute[247399]: 2026-01-31 08:34:20.954 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:21 np0005603621 nova_compute[247399]: 2026-01-31 08:34:21.039 247403 DEBUG oslo_concurrency.lockutils [None req-66c4ebc7-bc39-4e68-8849-a7e18fcdbb64 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:21 np0005603621 nova_compute[247399]: 2026-01-31 08:34:21.163 247403 DEBUG nova.network.neutron [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Updated VIF entry in instance network info cache for port 9f6617a9-7a5b-49df-8f47-eb9e246413dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:34:21 np0005603621 nova_compute[247399]: 2026-01-31 08:34:21.165 247403 DEBUG nova.network.neutron [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Updating instance_info_cache with network_info: [{"id": "9f6617a9-7a5b-49df-8f47-eb9e246413dc", "address": "fa:16:3e:79:85:d5", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": null, "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap9f6617a9-7a", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:21.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:21 np0005603621 nova_compute[247399]: 2026-01-31 08:34:21.282 247403 DEBUG oslo_concurrency.lockutils [req-3f9934fc-888f-432f-8821-69d046da9056 req-c8492847-263b-4881-9dc1-d8e03aab9c79 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]: {
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:        "osd_id": 0,
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:        "type": "bluestore"
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]:    }
Jan 31 03:34:21 np0005603621 ecstatic_hawking[341939]: }
Jan 31 03:34:21 np0005603621 systemd[1]: libpod-590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb.scope: Deactivated successfully.
Jan 31 03:34:21 np0005603621 podman[341922]: 2026-01-31 08:34:21.550666613 +0000 UTC m=+0.915521617 container died 590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:34:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3a4b506005d47b2fa09286bcf6d74a9605fc53f754abea2577f068bfb3ff6dc8-merged.mount: Deactivated successfully.
Jan 31 03:34:21 np0005603621 podman[341922]: 2026-01-31 08:34:21.960139974 +0000 UTC m=+1.324994948 container remove 590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:34:21 np0005603621 systemd[1]: libpod-conmon-590ae76e8736fad62540763dcee648b1e3e736525e77a0bafd3ca8e6bbb38adb.scope: Deactivated successfully.
Jan 31 03:34:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:34:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:34:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:34:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:34:22 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 842790a8-ce0b-4da1-81a8-3f5eb06ad3f2 does not exist
Jan 31 03:34:22 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2e300451-20b5-4fc7-9513-f59dea4cc2ba does not exist
Jan 31 03:34:22 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 211155b0-4cff-4f7f-a423-7836932e9bad does not exist
Jan 31 03:34:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 512 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 221 op/s
Jan 31 03:34:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:34:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:34:23 np0005603621 nova_compute[247399]: 2026-01-31 08:34:23.063 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848448.0623884, 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:23 np0005603621 nova_compute[247399]: 2026-01-31 08:34:23.063 247403 INFO nova.compute.manager [-] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:34:23 np0005603621 nova_compute[247399]: 2026-01-31 08:34:23.082 247403 DEBUG nova.compute.manager [None req-6ebde4be-6d99-42c0-8439-5be63a827da2 - - - - - -] [instance: 176ef4eb-8ab8-4e87-a1a6-e099ccb51fcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:23.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:23.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:23 np0005603621 nova_compute[247399]: 2026-01-31 08:34:23.746 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603621 nova_compute[247399]: 2026-01-31 08:34:24.258 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 509 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.9 MiB/s wr, 112 op/s
Jan 31 03:34:24 np0005603621 nova_compute[247399]: 2026-01-31 08:34:24.891 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:25.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 570 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.3 MiB/s wr, 158 op/s
Jan 31 03:34:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:27.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:28.185 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:34:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 510 KiB/s rd, 4.7 MiB/s wr, 172 op/s
Jan 31 03:34:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:29.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:29 np0005603621 nova_compute[247399]: 2026-01-31 08:34:29.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:29.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 31 03:34:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 31 03:34:29 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 31 03:34:29 np0005603621 nova_compute[247399]: 2026-01-31 08:34:29.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 577 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 512 KiB/s rd, 4.7 MiB/s wr, 172 op/s
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.342685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470342714, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1418, "num_deletes": 259, "total_data_size": 2117034, "memory_usage": 2167872, "flush_reason": "Manual Compaction"}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470365041, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2079860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53830, "largest_seqno": 55247, "table_properties": {"data_size": 2073364, "index_size": 3571, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14618, "raw_average_key_size": 20, "raw_value_size": 2059856, "raw_average_value_size": 2837, "num_data_blocks": 157, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848360, "oldest_key_time": 1769848360, "file_creation_time": 1769848470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 22682 microseconds, and 4494 cpu microseconds.
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.365351) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2079860 bytes OK
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.365424) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.367683) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.367706) EVENT_LOG_v1 {"time_micros": 1769848470367699, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.367734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2110737, prev total WAL file size 2110737, number of live WAL files 2.
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.368699) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303132' seq:72057594037927935, type:22 .. '6C6F676D0032323634' seq:0, type:0; will stop at (end)
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2031KB)], [119(10MB)]
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470368784, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 13535988, "oldest_snapshot_seqno": -1}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 8399 keys, 13400326 bytes, temperature: kUnknown
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470496257, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 13400326, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13342963, "index_size": 35235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 217722, "raw_average_key_size": 25, "raw_value_size": 13192569, "raw_average_value_size": 1570, "num_data_blocks": 1387, "num_entries": 8399, "num_filter_entries": 8399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848470, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.496505) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 13400326 bytes
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.497673) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.1 rd, 105.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.9 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(13.0) write-amplify(6.4) OK, records in: 8936, records dropped: 537 output_compression: NoCompression
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.497691) EVENT_LOG_v1 {"time_micros": 1769848470497683, "job": 72, "event": "compaction_finished", "compaction_time_micros": 127543, "compaction_time_cpu_micros": 45737, "output_level": 6, "num_output_files": 1, "total_output_size": 13400326, "num_input_records": 8936, "num_output_records": 8399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470497973, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848470499027, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.368571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.499055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.499060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.499062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.499064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:34:30.499066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:30.515 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:30.517 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:30.517 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 31 03:34:30 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 31 03:34:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:31.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:31.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 31 03:34:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 31 03:34:31 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 31 03:34:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 637 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.2 MiB/s wr, 209 op/s
Jan 31 03:34:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:34 np0005603621 nova_compute[247399]: 2026-01-31 08:34:34.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 295 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 199 op/s
Jan 31 03:34:34 np0005603621 nova_compute[247399]: 2026-01-31 08:34:34.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 6.8 MiB/s wr, 303 op/s
Jan 31 03:34:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:37.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:37 np0005603621 nova_compute[247399]: 2026-01-31 08:34:37.508 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 5.9 MiB/s wr, 324 op/s
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:34:38
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'images', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:34:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:34:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:39.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:39 np0005603621 nova_compute[247399]: 2026-01-31 08:34:39.312 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:39.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:39 np0005603621 nova_compute[247399]: 2026-01-31 08:34:39.897 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:34:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 31 03:34:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 MiB/s rd, 4.8 MiB/s wr, 266 op/s
Jan 31 03:34:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 31 03:34:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 31 03:34:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:41.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.0 MiB/s wr, 206 op/s
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.057 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.057 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:34:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:43.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.333 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:34:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:43.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.489 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.490 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.499 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.500 247403 INFO nova.compute.claims [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:34:43 np0005603621 podman[342089]: 2026-01-31 08:34:43.546824231 +0000 UTC m=+0.095507219 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 03:34:43 np0005603621 podman[342088]: 2026-01-31 08:34:43.547476481 +0000 UTC m=+0.096899033 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 03:34:43 np0005603621 nova_compute[247399]: 2026-01-31 08:34:43.711 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3412303494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.176 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.181 247403 DEBUG nova.compute.provider_tree [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.314 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 638 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 15 KiB/s wr, 184 op/s
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.419 247403 DEBUG nova.scheduler.client.report [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.523 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.524 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.693 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.694 247403 DEBUG nova.network.neutron [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.774 247403 INFO nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.821 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.901 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.931 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.932 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.933 247403 INFO nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Creating image(s)#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.961 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:44 np0005603621 nova_compute[247399]: 2026-01-31 08:34:44.990 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.023 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.028 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.078 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.079 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.080 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.080 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.104 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.107 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2689354202' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:45.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:45.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.370 247403 DEBUG nova.policy [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1c6e7eff11b435a81429826a682b32f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bfe11bd9d694684b527666e2c378eed', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.688 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.764 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.954 247403 DEBUG nova.objects.instance [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.989 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.990 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Ensure instance console log exists: /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.990 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.990 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:45 np0005603621 nova_compute[247399]: 2026-01-31 08:34:45.991 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 618 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.1 MiB/s wr, 222 op/s
Jan 31 03:34:46 np0005603621 nova_compute[247399]: 2026-01-31 08:34:46.832 247403 DEBUG nova.network.neutron [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Successfully created port: 58956ac4-88cf-49c2-988a-8a3746f1e622 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:34:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:47.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:47.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:47 np0005603621 nova_compute[247399]: 2026-01-31 08:34:47.840 247403 DEBUG nova.network.neutron [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Successfully updated port: 58956ac4-88cf-49c2-988a-8a3746f1e622 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.007 247403 DEBUG nova.compute.manager [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-changed-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.007 247403 DEBUG nova.compute.manager [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Refreshing instance network info cache due to event network-changed-58956ac4-88cf-49c2-988a-8a3746f1e622. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.008 247403 DEBUG oslo_concurrency.lockutils [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.008 247403 DEBUG oslo_concurrency.lockutils [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.008 247403 DEBUG nova.network.neutron [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Refreshing network info cache for port 58956ac4-88cf-49c2-988a-8a3746f1e622 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.027 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.308 247403 DEBUG nova.network.neutron [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:34:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 639 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 201 op/s
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.647 247403 DEBUG nova.network.neutron [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.687 247403 DEBUG oslo_concurrency.lockutils [req-7c3454f9-907f-4546-bb05-9176060674f8 req-2c8a9cfc-f105-4716-9e65-86d4fe0fb172 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.688 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.688 247403 DEBUG nova.network.neutron [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.729 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.729 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.730 247403 INFO nova.compute.manager [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Unshelving#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.907 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.907 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.910 247403 DEBUG nova.network.neutron [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:34:48 np0005603621 nova_compute[247399]: 2026-01-31 08:34:48.913 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'pci_requests' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:49 np0005603621 nova_compute[247399]: 2026-01-31 08:34:49.173 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'numa_topology' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:49 np0005603621 nova_compute[247399]: 2026-01-31 08:34:49.247 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:34:49 np0005603621 nova_compute[247399]: 2026-01-31 08:34:49.247 247403 INFO nova.compute.claims [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:34:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:49.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:49 np0005603621 nova_compute[247399]: 2026-01-31 08:34:49.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:49.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009482816280476456 of space, bias 1.0, pg target 2.844844884142937 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.6448598606458217 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006047671808114647 of space, bias 1.0, pg target 1.802206198818165 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:34:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:34:49 np0005603621 nova_compute[247399]: 2026-01-31 08:34:49.903 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.020 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.079 247403 DEBUG nova.network.neutron [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating instance_info_cache with network_info: [{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:34:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 639 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 201 op/s
Jan 31 03:34:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945715210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.436 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.443 247403 DEBUG nova.compute.provider_tree [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.885 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.886 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.886 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.886 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:34:50 np0005603621 nova_compute[247399]: 2026-01-31 08:34:50.887 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.104 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.105 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance network_info: |[{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.110 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Start _get_guest_xml network_info=[{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.119 247403 WARNING nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.125 247403 DEBUG nova.virt.libvirt.host [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.126 247403 DEBUG nova.virt.libvirt.host [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.130 247403 DEBUG nova.virt.libvirt.host [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.131 247403 DEBUG nova.virt.libvirt.host [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.132 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.132 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.132 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.133 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.133 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.133 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.133 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.133 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.134 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.134 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.134 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.134 247403 DEBUG nova.virt.hardware [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.137 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:51.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.338 247403 DEBUG nova.scheduler.client.report [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:51.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:34:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001694261' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.575 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.599 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.602 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:51 np0005603621 nova_compute[247399]: 2026-01-31 08:34:51.778 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:34:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4293040172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.001 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.002 247403 DEBUG nova.virt.libvirt.vif [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1672201743',display_name='tempest-TestNetworkAdvancedServerOps-server-1672201743',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1672201743',id=141,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUoionOf1jsbgYnjxtSF8S5kbM7WrnC+AvzdWQ5Iv9NrHSu1YTmh7OvNKWVCt94tfduQMP4jFzkhpdFTOQdH6c769sX4vCZIDbSCuBl9lgkWTK5Ks3sTtkCsO2rA5PBWA==',key_name='tempest-TestNetworkAdvancedServerOps-2012991436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-4je44sr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:44Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=60462c66-f02d-4ca4-aa2a-b6ea91c8a6af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.003 247403 DEBUG nova.network.os_vif_util [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.004 247403 DEBUG nova.network.os_vif_util [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.005 247403 DEBUG nova.objects.instance [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.088 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <uuid>60462c66-f02d-4ca4-aa2a-b6ea91c8a6af</uuid>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <name>instance-0000008d</name>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1672201743</nova:name>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:34:51</nova:creationTime>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <nova:port uuid="58956ac4-88cf-49c2-988a-8a3746f1e622">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <entry name="serial">60462c66-f02d-4ca4-aa2a-b6ea91c8a6af</entry>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <entry name="uuid">60462c66-f02d-4ca4-aa2a-b6ea91c8a6af</entry>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk.config">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:75:ff:26"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <target dev="tap58956ac4-88"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/console.log" append="off"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:34:52 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:34:52 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:34:52 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:34:52 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.089 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Preparing to wait for external event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.089 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.089 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.090 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.090 247403 DEBUG nova.virt.libvirt.vif [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1672201743',display_name='tempest-TestNetworkAdvancedServerOps-server-1672201743',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1672201743',id=141,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUoionOf1jsbgYnjxtSF8S5kbM7WrnC+AvzdWQ5Iv9NrHSu1YTmh7OvNKWVCt94tfduQMP4jFzkhpdFTOQdH6c769sX4vCZIDbSCuBl9lgkWTK5Ks3sTtkCsO2rA5PBWA==',key_name='tempest-TestNetworkAdvancedServerOps-2012991436',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-4je44sr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:44Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=60462c66-f02d-4ca4-aa2a-b6ea91c8a6af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.091 247403 DEBUG nova.network.os_vif_util [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.091 247403 DEBUG nova.network.os_vif_util [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.092 247403 DEBUG os_vif [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.092 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.093 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.093 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.096 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.097 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58956ac4-88, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.097 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58956ac4-88, col_values=(('external_ids', {'iface-id': '58956ac4-88cf-49c2-988a-8a3746f1e622', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:ff:26', 'vm-uuid': '60462c66-f02d-4ca4-aa2a-b6ea91c8a6af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.099 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603621 NetworkManager[49013]: <info>  [1769848492.1001] manager: (tap58956ac4-88): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.104 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.105 247403 INFO os_vif [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88')#033[00m
Jan 31 03:34:52 np0005603621 nova_compute[247399]: 2026-01-31 08:34:52.329 247403 INFO nova.network.neutron [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updating port ae035cfb-a17b-4578-a506-e2581da09f74 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:34:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 655 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 239 op/s
Jan 31 03:34:53 np0005603621 nova_compute[247399]: 2026-01-31 08:34:53.091 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:34:53 np0005603621 nova_compute[247399]: 2026-01-31 08:34:53.091 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:34:53 np0005603621 nova_compute[247399]: 2026-01-31 08:34:53.092 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:75:ff:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:34:53 np0005603621 nova_compute[247399]: 2026-01-31 08:34:53.092 247403 INFO nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Using config drive#033[00m
Jan 31 03:34:53 np0005603621 nova_compute[247399]: 2026-01-31 08:34:53.120 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:34:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:53.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:34:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:53.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Jan 31 03:34:54 np0005603621 nova_compute[247399]: 2026-01-31 08:34:54.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:55.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:55.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:34:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Jan 31 03:34:56 np0005603621 nova_compute[247399]: 2026-01-31 08:34:56.448 247403 INFO nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Creating config drive at /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/disk.config#033[00m
Jan 31 03:34:56 np0005603621 nova_compute[247399]: 2026-01-31 08:34:56.453 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppsd5iuv2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:56 np0005603621 nova_compute[247399]: 2026-01-31 08:34:56.580 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmppsd5iuv2" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:56 np0005603621 nova_compute[247399]: 2026-01-31 08:34:56.613 247403 DEBUG nova.storage.rbd_utils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:34:56 np0005603621 nova_compute[247399]: 2026-01-31 08:34:56.618 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/disk.config 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.098 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.187 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:57.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.262 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updating instance_info_cache with network_info: [{"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": null, "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapae035cfb-a1", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:34:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:57.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.497 247403 DEBUG oslo_concurrency.processutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/disk.config 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.498 247403 INFO nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Deleting local config drive /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/disk.config because it was imported into RBD.#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.518 247403 DEBUG nova.compute.manager [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-changed-ae035cfb-a17b-4578-a506-e2581da09f74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.519 247403 DEBUG nova.compute.manager [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Refreshing instance network info cache due to event network-changed-ae035cfb-a17b-4578-a506-e2581da09f74. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.520 247403 DEBUG oslo_concurrency.lockutils [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.524 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.524 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.525 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquired lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.525 247403 DEBUG nova.network.neutron [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.526 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.526 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.526 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:34:57 np0005603621 kernel: tap58956ac4-88: entered promiscuous mode
Jan 31 03:34:57 np0005603621 NetworkManager[49013]: <info>  [1769848497.5376] manager: (tap58956ac4-88): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Jan 31 03:34:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:57Z|00542|binding|INFO|Claiming lport 58956ac4-88cf-49c2-988a-8a3746f1e622 for this chassis.
Jan 31 03:34:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:57Z|00543|binding|INFO|58956ac4-88cf-49c2-988a-8a3746f1e622: Claiming fa:16:3e:75:ff:26 10.100.0.7
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.538 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:57Z|00544|binding|INFO|Setting lport 58956ac4-88cf-49c2-988a-8a3746f1e622 ovn-installed in OVS
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:57 np0005603621 systemd-machined[212769]: New machine qemu-66-instance-0000008d.
Jan 31 03:34:57 np0005603621 systemd-udevd[342532]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:34:57 np0005603621 NetworkManager[49013]: <info>  [1769848497.5739] device (tap58956ac4-88): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:34:57 np0005603621 systemd[1]: Started Virtual Machine qemu-66-instance-0000008d.
Jan 31 03:34:57 np0005603621 NetworkManager[49013]: <info>  [1769848497.5744] device (tap58956ac4-88): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:34:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:57Z|00545|binding|INFO|Setting lport 58956ac4-88cf-49c2-988a-8a3746f1e622 up in Southbound
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.688 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:ff:26 10.100.0.7'], port_security=['fa:16:3e:75:ff:26 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '60462c66-f02d-4ca4-aa2a-b6ea91c8a6af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1bca5a82-b0f2-4237-92f5-d7d2dbf4afe9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e036da0b-b229-4d68-8cb9-77eeebb375fb, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=58956ac4-88cf-49c2-988a-8a3746f1e622) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.689 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 58956ac4-88cf-49c2-988a-8a3746f1e622 in datapath e45621cc-e984-4d02-a4f7-adf5b5457b33 bound to our chassis#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.691 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e45621cc-e984-4d02-a4f7-adf5b5457b33#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.698 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7d63f8-cefd-4d23-9132-343956619d2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.698 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape45621cc-e1 in ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.700 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape45621cc-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.700 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[54896ca5-ea00-43e2-9369-53ad8e60ff7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.701 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5317d3ec-36bb-4b22-84f5-f63d66e54359]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.708 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[109fbd92-bf6d-4326-87a3-dcd8420afe92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.717 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8b230600-866c-4858-a574-b598e78fa9c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.738 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[68cfd94a-33b9-4a6b-8790-e31225a621f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.744 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6159a3c0-d969-4522-a5c7-9f0acfc738dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 NetworkManager[49013]: <info>  [1769848497.7449] manager: (tape45621cc-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.751 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.752 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.752 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.752 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.752 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.777 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[426d465f-77df-42bf-b629-d7b7639ae60b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.781 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d65829a0-1f1e-4a13-87ce-0f2ec8d7af9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 NetworkManager[49013]: <info>  [1769848497.7995] device (tape45621cc-e0): carrier: link connected
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.805 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4a2f18-4647-49e1-b064-1d12e5a9af10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.818 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[213c7a1d-7e00-45cd-bfb6-197b55f23964]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape45621cc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:e8:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780331, 'reachable_time': 18104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342568, 'error': None, 'target': 'ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.827 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d2309c70-7eb7-4dc5-93e7-b789945e7ad3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:e862'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 780331, 'tstamp': 780331}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342569, 'error': None, 'target': 'ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.838 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[33c5fecb-aab6-4fcf-ace7-9130a4112796]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape45621cc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:e8:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780331, 'reachable_time': 18104, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342570, 'error': None, 'target': 'ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.854 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a4355603-24dc-4602-ad99-4cf05c37e17f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.894 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b52ef7b0-8195-4e4d-8b1c-69fe4bca129f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.896 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape45621cc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.896 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.897 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape45621cc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:57 np0005603621 kernel: tape45621cc-e0: entered promiscuous mode
Jan 31 03:34:57 np0005603621 NetworkManager[49013]: <info>  [1769848497.8997] manager: (tape45621cc-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.901 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape45621cc-e0, col_values=(('external_ids', {'iface-id': '98bdd03c-3803-4f50-b99f-a5baefc4ec8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:34:57 np0005603621 ovn_controller[149152]: 2026-01-31T08:34:57Z|00546|binding|INFO|Releasing lport 98bdd03c-3803-4f50-b99f-a5baefc4ec8a from this chassis (sb_readonly=0)
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.904 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e45621cc-e984-4d02-a4f7-adf5b5457b33.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e45621cc-e984-4d02-a4f7-adf5b5457b33.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.904 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2780ad-3306-4de1-a0e0-9aab3ee4ac60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.905 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-e45621cc-e984-4d02-a4f7-adf5b5457b33
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/e45621cc-e984-4d02-a4f7-adf5b5457b33.pid.haproxy
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID e45621cc-e984-4d02-a4f7-adf5b5457b33
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:34:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:34:57.907 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'env', 'PROCESS_TAG=haproxy-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e45621cc-e984-4d02-a4f7-adf5b5457b33.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:34:57 np0005603621 nova_compute[247399]: 2026-01-31 08:34:57.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532315806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.205 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:58 np0005603621 podman[342661]: 2026-01-31 08:34:58.262694251 +0000 UTC m=+0.048651083 container create 93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:34:58 np0005603621 systemd[1]: Started libpod-conmon-93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e.scope.
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.306 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848498.3061554, 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.307 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] VM Started (Lifecycle Event)#033[00m
Jan 31 03:34:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:34:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656011ea27537bb619f8fa9f3514c427d4e56e7dafc19967e8fb9b4f8faf9cf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:34:58 np0005603621 podman[342661]: 2026-01-31 08:34:58.325265975 +0000 UTC m=+0.111222837 container init 93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:34:58 np0005603621 podman[342661]: 2026-01-31 08:34:58.329671664 +0000 UTC m=+0.115628496 container start 93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:34:58 np0005603621 podman[342661]: 2026-01-31 08:34:58.2364602 +0000 UTC m=+0.022417042 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:34:58 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [NOTICE]   (342687) : New worker (342689) forked
Jan 31 03:34:58 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [NOTICE]   (342687) : Loading success.
Jan 31 03:34:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 136 op/s
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.457 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.458 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.458 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.461 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848498.3063269, 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.461 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.549 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.553 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.580 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.621 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.622 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4156MB free_disk=20.785491943359375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.623 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.623 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.873 247403 DEBUG nova.compute.manager [req-9b0b5b9a-185d-475e-a60b-632bfac353b4 req-c2992ac2-9e47-40bb-804b-66b4cc01bc73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.873 247403 DEBUG oslo_concurrency.lockutils [req-9b0b5b9a-185d-475e-a60b-632bfac353b4 req-c2992ac2-9e47-40bb-804b-66b4cc01bc73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.873 247403 DEBUG oslo_concurrency.lockutils [req-9b0b5b9a-185d-475e-a60b-632bfac353b4 req-c2992ac2-9e47-40bb-804b-66b4cc01bc73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.873 247403 DEBUG oslo_concurrency.lockutils [req-9b0b5b9a-185d-475e-a60b-632bfac353b4 req-c2992ac2-9e47-40bb-804b-66b4cc01bc73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.874 247403 DEBUG nova.compute.manager [req-9b0b5b9a-185d-475e-a60b-632bfac353b4 req-c2992ac2-9e47-40bb-804b-66b4cc01bc73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Processing event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.874 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.878 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848498.878626, 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.879 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.881 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.884 247403 INFO nova.virt.libvirt.driver [-] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance spawned successfully.#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.884 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.986 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.986 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance adfc4c25-9eb9-45cc-ac90-2029677bcb67 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.986 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:34:58 np0005603621 nova_compute[247399]: 2026-01-31 08:34:58.987 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.042 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.180 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.185 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.185 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.186 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.186 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.186 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.187 247403 DEBUG nova.virt.libvirt.driver [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.192 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:34:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:34:59.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.329 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:34:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:34:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:34:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:34:59.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:34:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:34:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/797628895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.468 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.471 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.783 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.819 247403 INFO nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Took 14.89 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.819 247403 DEBUG nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.906 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.918 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:34:59 np0005603621 nova_compute[247399]: 2026-01-31 08:34:59.918 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.125 247403 INFO nova.compute.manager [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Took 16.68 seconds to build instance.#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.182 247403 DEBUG oslo_concurrency.lockutils [None req-ebe14e2f-1682-435f-bf2a-6a83bc0bf8dc f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1006 KiB/s rd, 552 KiB/s wr, 112 op/s
Jan 31 03:35:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.457 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:00.456 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:35:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:00.457 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.590 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.591 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.591 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.592 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:00 np0005603621 nova_compute[247399]: 2026-01-31 08:35:00.774 247403 DEBUG nova.network.neutron [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updating instance_info_cache with network_info: [{"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.036 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Releasing lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.038 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.038 247403 INFO nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Creating image(s)#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.062 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.066 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'trusted_certs' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.068 247403 DEBUG oslo_concurrency.lockutils [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.068 247403 DEBUG nova.network.neutron [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Refreshing network info cache for port ae035cfb-a17b-4578-a506-e2581da09f74 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.140 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.163 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.167 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "e686904615f68fc13148acc88bceb1a27f9520cd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.168 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "e686904615f68fc13148acc88bceb1a27f9520cd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.218 247403 DEBUG nova.compute.manager [req-f281a3c4-7cc2-4010-be86-b4b9e432f1af req-dc6c9b96-5ca1-4731-91d1-abfbf660ee0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.219 247403 DEBUG oslo_concurrency.lockutils [req-f281a3c4-7cc2-4010-be86-b4b9e432f1af req-dc6c9b96-5ca1-4731-91d1-abfbf660ee0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.219 247403 DEBUG oslo_concurrency.lockutils [req-f281a3c4-7cc2-4010-be86-b4b9e432f1af req-dc6c9b96-5ca1-4731-91d1-abfbf660ee0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.219 247403 DEBUG oslo_concurrency.lockutils [req-f281a3c4-7cc2-4010-be86-b4b9e432f1af req-dc6c9b96-5ca1-4731-91d1-abfbf660ee0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.220 247403 DEBUG nova.compute.manager [req-f281a3c4-7cc2-4010-be86-b4b9e432f1af req-dc6c9b96-5ca1-4731-91d1-abfbf660ee0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] No waiting events found dispatching network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.220 247403 WARNING nova.compute.manager [req-f281a3c4-7cc2-4010-be86-b4b9e432f1af req-dc6c9b96-5ca1-4731-91d1-abfbf660ee0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received unexpected event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:35:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:01.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:01.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.512 247403 DEBUG nova.virt.libvirt.imagebackend [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/a445cf05-8653-452b-bc15-8061b7aa6a98/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/a445cf05-8653-452b-bc15-8061b7aa6a98/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.622 247403 DEBUG nova.virt.libvirt.imagebackend [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Selected location: {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/a445cf05-8653-452b-bc15-8061b7aa6a98/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.623 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] cloning images/a445cf05-8653-452b-bc15-8061b7aa6a98@snap to None/adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.734 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "e686904615f68fc13148acc88bceb1a27f9520cd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.864 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'migration_context' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:01 np0005603621 nova_compute[247399]: 2026-01-31 08:35:01.957 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] flattening vms/adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:35:02 np0005603621 nova_compute[247399]: 2026-01-31 08:35:02.100 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:02 np0005603621 nova_compute[247399]: 2026-01-31 08:35:02.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 656 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 566 KiB/s wr, 175 op/s
Jan 31 03:35:02 np0005603621 nova_compute[247399]: 2026-01-31 08:35:02.853 247403 DEBUG nova.network.neutron [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updated VIF entry in instance network info cache for port ae035cfb-a17b-4578-a506-e2581da09f74. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:35:02 np0005603621 nova_compute[247399]: 2026-01-31 08:35:02.854 247403 DEBUG nova.network.neutron [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updating instance_info_cache with network_info: [{"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:02 np0005603621 nova_compute[247399]: 2026-01-31 08:35:02.962 247403 DEBUG oslo_concurrency.lockutils [req-62bc9b73-caff-4fbb-9026-30513d21dfdf req-df17316e-edcb-4a68-a7a8-5deab6c4639f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-adfc4c25-9eb9-45cc-ac90-2029677bcb67" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:35:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:03.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.502 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Image rbd:vms/adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.503 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.503 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Ensure instance console log exists: /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.503 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.503 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.504 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.506 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Start _get_guest_xml network_info=[{"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:34:23Z,direct_url=<?>,disk_format='raw',id=a445cf05-8653-452b-bc15-8061b7aa6a98,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-966483760-shelved',owner='953a213fa5cb435ab3c04ad96152685f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:34:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.511 247403 WARNING nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.516 247403 DEBUG nova.virt.libvirt.host [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.517 247403 DEBUG nova.virt.libvirt.host [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.522 247403 DEBUG nova.virt.libvirt.host [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.523 247403 DEBUG nova.virt.libvirt.host [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.524 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.525 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:34:23Z,direct_url=<?>,disk_format='raw',id=a445cf05-8653-452b-bc15-8061b7aa6a98,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-966483760-shelved',owner='953a213fa5cb435ab3c04ad96152685f',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T08:34:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.525 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.526 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.526 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.526 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.526 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.527 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.527 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.527 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.527 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.528 247403 DEBUG nova.virt.hardware [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.528 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'vcpu_model' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:03 np0005603621 nova_compute[247399]: 2026-01-31 08:35:03.629 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:35:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780277949' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.066 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.100 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.105 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 685 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 154 op/s
Jan 31 03:35:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:35:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/902461004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.590 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.592 247403 DEBUG nova.virt.libvirt.vif [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:33:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-966483760',display_name='tempest-ServerActionsTestOtherB-server-966483760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-966483760',id=135,image_ref='a445cf05-8653-452b-bc15-8061b7aa6a98',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:33:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-3ojgffxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member',shelved_at='2026-01-31T08:34:34.691196',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='a445cf05-8653-452b-bc15-8061b7aa6a98'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=adfc4c25-9eb9-45cc-ac90-2029677bcb67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.593 247403 DEBUG nova.network.os_vif_util [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.593 247403 DEBUG nova.network.os_vif_util [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.595 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'pci_devices' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.704 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <uuid>adfc4c25-9eb9-45cc-ac90-2029677bcb67</uuid>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <name>instance-00000087</name>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerActionsTestOtherB-server-966483760</nova:name>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:35:03</nova:creationTime>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:user uuid="ef51681d234a4abc88ff433d0640b6e7">tempest-ServerActionsTestOtherB-1048458052-project-member</nova:user>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:project uuid="953a213fa5cb435ab3c04ad96152685f">tempest-ServerActionsTestOtherB-1048458052</nova:project>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="a445cf05-8653-452b-bc15-8061b7aa6a98"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <nova:port uuid="ae035cfb-a17b-4578-a506-e2581da09f74">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <entry name="serial">adfc4c25-9eb9-45cc-ac90-2029677bcb67</entry>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <entry name="uuid">adfc4c25-9eb9-45cc-ac90-2029677bcb67</entry>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk.config">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:30:5a:60"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <target dev="tapae035cfb-a1"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/console.log" append="off"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:35:04 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:35:04 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:35:04 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:35:04 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.706 247403 DEBUG nova.compute.manager [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Preparing to wait for external event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.706 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.706 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.707 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.707 247403 DEBUG nova.virt.libvirt.vif [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:33:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-966483760',display_name='tempest-ServerActionsTestOtherB-server-966483760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-966483760',id=135,image_ref='a445cf05-8653-452b-bc15-8061b7aa6a98',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:33:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-3ojgffxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member',shelved_at='2026-01-31T08:34:34.691196',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='a445cf05-8653-452b-bc15-8061b7aa6a98'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:34:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=adfc4c25-9eb9-45cc-ac90-2029677bcb67,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.708 247403 DEBUG nova.network.os_vif_util [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.708 247403 DEBUG nova.network.os_vif_util [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.709 247403 DEBUG os_vif [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.709 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.710 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.710 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.713 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.713 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae035cfb-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.714 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae035cfb-a1, col_values=(('external_ids', {'iface-id': 'ae035cfb-a17b-4578-a506-e2581da09f74', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:5a:60', 'vm-uuid': 'adfc4c25-9eb9-45cc-ac90-2029677bcb67'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.715 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:04 np0005603621 NetworkManager[49013]: <info>  [1769848504.7165] manager: (tapae035cfb-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.718 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.720 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.721 247403 INFO os_vif [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1')#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.955 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.956 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.956 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] No VIF found with MAC fa:16:3e:30:5a:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.957 247403 INFO nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Using config drive#033[00m
Jan 31 03:35:04 np0005603621 nova_compute[247399]: 2026-01-31 08:35:04.982 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.148 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'ec2_ids' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.281 247403 DEBUG nova.objects.instance [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'keypairs' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:05.460 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.723 247403 INFO nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Creating config drive at /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/disk.config#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.729 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvye2evth execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.794 247403 DEBUG nova.compute.manager [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-changed-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.794 247403 DEBUG nova.compute.manager [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Refreshing instance network info cache due to event network-changed-58956ac4-88cf-49c2-988a-8a3746f1e622. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.795 247403 DEBUG oslo_concurrency.lockutils [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.795 247403 DEBUG oslo_concurrency.lockutils [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.795 247403 DEBUG nova.network.neutron [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Refreshing network info cache for port 58956ac4-88cf-49c2-988a-8a3746f1e622 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.865 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvye2evth" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.892 247403 DEBUG nova.storage.rbd_utils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] rbd image adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:35:05 np0005603621 nova_compute[247399]: 2026-01-31 08:35:05.897 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/disk.config adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.254 247403 DEBUG oslo_concurrency.processutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/disk.config adfc4c25-9eb9-45cc-ac90-2029677bcb67_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.255 247403 INFO nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Deleting local config drive /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67/disk.config because it was imported into RBD.#033[00m
Jan 31 03:35:06 np0005603621 NetworkManager[49013]: <info>  [1769848506.2903] manager: (tapae035cfb-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Jan 31 03:35:06 np0005603621 kernel: tapae035cfb-a1: entered promiscuous mode
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.297 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:06Z|00547|binding|INFO|Claiming lport ae035cfb-a17b-4578-a506-e2581da09f74 for this chassis.
Jan 31 03:35:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:06Z|00548|binding|INFO|ae035cfb-a17b-4578-a506-e2581da09f74: Claiming fa:16:3e:30:5a:60 10.100.0.12
Jan 31 03:35:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:06Z|00549|binding|INFO|Setting lport ae035cfb-a17b-4578-a506-e2581da09f74 ovn-installed in OVS
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.313 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:06 np0005603621 systemd-machined[212769]: New machine qemu-67-instance-00000087.
Jan 31 03:35:06 np0005603621 systemd[1]: Started Virtual Machine qemu-67-instance-00000087.
Jan 31 03:35:06 np0005603621 systemd-udevd[343072]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:35:06 np0005603621 NetworkManager[49013]: <info>  [1769848506.3457] device (tapae035cfb-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:35:06 np0005603621 NetworkManager[49013]: <info>  [1769848506.3466] device (tapae035cfb-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:35:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 686 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.9 MiB/s wr, 247 op/s
Jan 31 03:35:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:06Z|00550|binding|INFO|Setting lport ae035cfb-a17b-4578-a506-e2581da09f74 up in Southbound
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.364 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:5a:60 10.100.0.12'], port_security=['fa:16:3e:30:5a:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'adfc4c25-9eb9-45cc-ac90-2029677bcb67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5b1bd8ad-0d2a-4d57-a00a-9a6b59df86e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ae035cfb-a17b-4578-a506-e2581da09f74) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.365 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ae035cfb-a17b-4578-a506-e2581da09f74 in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 bound to our chassis#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.367 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 44469d8b-ad30-4270-88fa-e67c568f3150#033[00m
Jan 31 03:35:06 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.376 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[327b34ff-1ba2-45b5-8c3c-73f38882dee4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.377 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap44469d8b-a1 in ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.380 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap44469d8b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.380 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba550bd-3d02-4a13-856d-4c698c0fc4a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.380 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bc54df-328a-4088-a754-68de8df43b6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.389 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6d49cdf1-d331-4317-bd63-f941f0996204]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.399 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b383e304-1665-4c4c-8ee3-af355a6c2ba7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.421 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6519c3bc-ac8b-4801-9ab2-f01b89535dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.426 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e0e692-a425-4316-b90c-9588c0d393ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 NetworkManager[49013]: <info>  [1769848506.4268] manager: (tap44469d8b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.463 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7f70ba-04cc-4981-92ac-aebfec94f52f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.468 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[f0738505-40bc-434b-a2b8-4c3ccf6602fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 NetworkManager[49013]: <info>  [1769848506.4977] device (tap44469d8b-a0): carrier: link connected
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.505 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2170e590-e3e2-4f7b-979d-833097ad2c0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.519 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e67dd0-f02e-4197-8f7d-6a1cbebee6fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781201, 'reachable_time': 19489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343106, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.532 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8985e8fd-8797-43f7-a2b2-381dab02ef50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:9820'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 781201, 'tstamp': 781201}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343107, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.546 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[85dcd53c-03ff-4953-9568-9dda2fde9f8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap44469d8b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:98:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781201, 'reachable_time': 19489, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343108, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.574 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8a1da0-b726-400b-b4f9-0e857d88345e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.627 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b00f5d4a-4dbe-43de-a851-32b9755351b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.629 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.630 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.630 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44469d8b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:06 np0005603621 NetworkManager[49013]: <info>  [1769848506.6323] manager: (tap44469d8b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Jan 31 03:35:06 np0005603621 kernel: tap44469d8b-a0: entered promiscuous mode
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.631 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.634 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap44469d8b-a0, col_values=(('external_ids', {'iface-id': '7e288124-e200-4c03-8a4a-baab3e3f3d7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:06Z|00551|binding|INFO|Releasing lport 7e288124-e200-4c03-8a4a-baab3e3f3d7a from this chassis (sb_readonly=0)
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.637 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.638 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[46b8182b-1db8-4748-af18-1343c4120594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.638 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/44469d8b-ad30-4270-88fa-e67c568f3150.pid.haproxy
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 44469d8b-ad30-4270-88fa-e67c568f3150
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:35:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:06.639 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'env', 'PROCESS_TAG=haproxy-44469d8b-ad30-4270-88fa-e67c568f3150', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/44469d8b-ad30-4270-88fa-e67c568f3150.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.641 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.875 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848506.874607, adfc4c25-9eb9-45cc-ac90-2029677bcb67 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:35:06 np0005603621 nova_compute[247399]: 2026-01-31 08:35:06.876 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] VM Started (Lifecycle Event)#033[00m
Jan 31 03:35:06 np0005603621 podman[343181]: 2026-01-31 08:35:06.956582467 +0000 UTC m=+0.050941987 container create 598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:35:06 np0005603621 systemd[1]: Started libpod-conmon-598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283.scope.
Jan 31 03:35:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2738803e62f4007904821f098e1e017717cd9bf3ba97bdd413a6f4c8b92e32da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:07 np0005603621 podman[343181]: 2026-01-31 08:35:07.018691986 +0000 UTC m=+0.113051506 container init 598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:35:07 np0005603621 podman[343181]: 2026-01-31 08:35:07.022648591 +0000 UTC m=+0.117008111 container start 598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:35:07 np0005603621 podman[343181]: 2026-01-31 08:35:06.930160039 +0000 UTC m=+0.024519589 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.035 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.040 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848506.8756592, adfc4c25-9eb9-45cc-ac90-2029677bcb67 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.040 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:35:07 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [NOTICE]   (343200) : New worker (343202) forked
Jan 31 03:35:07 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [NOTICE]   (343200) : Loading success.
Jan 31 03:35:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.316 247403 DEBUG nova.compute.manager [req-c9d742a6-1fe6-4a31-9e6d-500ef5720c6d req-7da2fccc-7341-4cc5-820b-a3c222086165 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.317 247403 DEBUG oslo_concurrency.lockutils [req-c9d742a6-1fe6-4a31-9e6d-500ef5720c6d req-7da2fccc-7341-4cc5-820b-a3c222086165 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.317 247403 DEBUG oslo_concurrency.lockutils [req-c9d742a6-1fe6-4a31-9e6d-500ef5720c6d req-7da2fccc-7341-4cc5-820b-a3c222086165 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.317 247403 DEBUG oslo_concurrency.lockutils [req-c9d742a6-1fe6-4a31-9e6d-500ef5720c6d req-7da2fccc-7341-4cc5-820b-a3c222086165 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.317 247403 DEBUG nova.compute.manager [req-c9d742a6-1fe6-4a31-9e6d-500ef5720c6d req-7da2fccc-7341-4cc5-820b-a3c222086165 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Processing event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.318 247403 DEBUG nova.compute.manager [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.321 247403 DEBUG nova.virt.libvirt.driver [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.324 247403 INFO nova.virt.libvirt.driver [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Instance spawned successfully.#033[00m
Jan 31 03:35:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:07.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.402 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.406 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848507.3208823, adfc4c25-9eb9-45cc-ac90-2029677bcb67 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.406 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.518 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:07 np0005603621 nova_compute[247399]: 2026-01-31 08:35:07.522 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:35:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 31 03:35:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 31 03:35:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 704 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.8 MiB/s wr, 264 op/s
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:08 np0005603621 nova_compute[247399]: 2026-01-31 08:35:08.740 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:35:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:09.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:09 np0005603621 nova_compute[247399]: 2026-01-31 08:35:09.715 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:09 np0005603621 nova_compute[247399]: 2026-01-31 08:35:09.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 704 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 6.8 MiB/s wr, 264 op/s
Jan 31 03:35:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.179 247403 DEBUG nova.compute.manager [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.210 247403 DEBUG nova.network.neutron [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updated VIF entry in instance network info cache for port 58956ac4-88cf-49c2-988a-8a3746f1e622. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.211 247403 DEBUG nova.network.neutron [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating instance_info_cache with network_info: [{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.227 247403 DEBUG nova.compute.manager [req-b49fb16f-49fd-458c-87cf-b86bf8d64921 req-36422f18-5dfa-4539-9851-05925cdf2482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.227 247403 DEBUG oslo_concurrency.lockutils [req-b49fb16f-49fd-458c-87cf-b86bf8d64921 req-36422f18-5dfa-4539-9851-05925cdf2482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.228 247403 DEBUG oslo_concurrency.lockutils [req-b49fb16f-49fd-458c-87cf-b86bf8d64921 req-36422f18-5dfa-4539-9851-05925cdf2482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.228 247403 DEBUG oslo_concurrency.lockutils [req-b49fb16f-49fd-458c-87cf-b86bf8d64921 req-36422f18-5dfa-4539-9851-05925cdf2482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.228 247403 DEBUG nova.compute.manager [req-b49fb16f-49fd-458c-87cf-b86bf8d64921 req-36422f18-5dfa-4539-9851-05925cdf2482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] No waiting events found dispatching network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.228 247403 WARNING nova.compute.manager [req-b49fb16f-49fd-458c-87cf-b86bf8d64921 req-36422f18-5dfa-4539-9851-05925cdf2482 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received unexpected event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Jan 31 03:35:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.302 247403 DEBUG oslo_concurrency.lockutils [req-7f0828c3-0c6d-42c7-8823-f9230444eda0 req-5a832fa9-06a2-4cfb-b7c0-9bd462871384 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:35:11 np0005603621 nova_compute[247399]: 2026-01-31 08:35:11.317 247403 DEBUG oslo_concurrency.lockutils [None req-389c95d3-4be7-4629-a66b-11e2f4ce90a6 ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 22.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:11.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:11Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:ff:26 10.100.0.7
Jan 31 03:35:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:11Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:ff:26 10.100.0.7
Jan 31 03:35:12 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 31 03:35:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 7.5 MiB/s wr, 326 op/s
Jan 31 03:35:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:13.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 31 03:35:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 31 03:35:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 31 03:35:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 643 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.6 MiB/s wr, 311 op/s
Jan 31 03:35:14 np0005603621 podman[343265]: 2026-01-31 08:35:14.489721531 +0000 UTC m=+0.051378830 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:35:14 np0005603621 podman[343266]: 2026-01-31 08:35:14.509896651 +0000 UTC m=+0.069570126 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:35:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:35:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514920322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:35:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:35:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1514920322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:35:14 np0005603621 nova_compute[247399]: 2026-01-31 08:35:14.718 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:14 np0005603621 nova_compute[247399]: 2026-01-31 08:35:14.912 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:15.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 31 03:35:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:15.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 31 03:35:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 31 03:35:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 597 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 5.3 MiB/s wr, 443 op/s
Jan 31 03:35:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 31 03:35:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 31 03:35:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 31 03:35:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:17.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:17.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:17 np0005603621 nova_compute[247399]: 2026-01-31 08:35:17.382 247403 INFO nova.compute.manager [None req-7669155d-5bf3-40c1-be61-d4ccadf29630 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Get console output#033[00m
Jan 31 03:35:17 np0005603621 nova_compute[247399]: 2026-01-31 08:35:17.388 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:35:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 31 03:35:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 31 03:35:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 31 03:35:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.1 MiB/s wr, 320 op/s
Jan 31 03:35:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:19.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.792 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.792 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.792 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.793 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.793 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.794 247403 INFO nova.compute.manager [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Terminating instance#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.795 247403 DEBUG nova.compute.manager [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.914 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:19 np0005603621 kernel: tapae035cfb-a1 (unregistering): left promiscuous mode
Jan 31 03:35:19 np0005603621 NetworkManager[49013]: <info>  [1769848519.9450] device (tapae035cfb-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:35:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:19Z|00552|binding|INFO|Releasing lport ae035cfb-a17b-4578-a506-e2581da09f74 from this chassis (sb_readonly=0)
Jan 31 03:35:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:19Z|00553|binding|INFO|Setting lport ae035cfb-a17b-4578-a506-e2581da09f74 down in Southbound
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.952 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:19Z|00554|binding|INFO|Removing iface tapae035cfb-a1 ovn-installed in OVS
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.954 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:19 np0005603621 nova_compute[247399]: 2026-01-31 08:35:19.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:19.974 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:5a:60 10.100.0.12'], port_security=['fa:16:3e:30:5a:60 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'adfc4c25-9eb9-45cc-ac90-2029677bcb67', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-44469d8b-ad30-4270-88fa-e67c568f3150', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '953a213fa5cb435ab3c04ad96152685f', 'neutron:revision_number': '9', 'neutron:security_group_ids': '5b1bd8ad-0d2a-4d57-a00a-9a6b59df86e5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.181', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d972fb9d-6d12-4c1c-b135-704d64887b72, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=ae035cfb-a17b-4578-a506-e2581da09f74) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:35:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:19.975 159734 INFO neutron.agent.ovn.metadata.agent [-] Port ae035cfb-a17b-4578-a506-e2581da09f74 in datapath 44469d8b-ad30-4270-88fa-e67c568f3150 unbound from our chassis#033[00m
Jan 31 03:35:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:19.977 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 44469d8b-ad30-4270-88fa-e67c568f3150, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:35:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:19.978 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[17fcc006-39cc-4282-bef4-6d1878ba324c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:19.978 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 namespace which is not needed anymore#033[00m
Jan 31 03:35:19 np0005603621 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 31 03:35:19 np0005603621 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000087.scope: Consumed 12.535s CPU time.
Jan 31 03:35:19 np0005603621 systemd-machined[212769]: Machine qemu-67-instance-00000087 terminated.
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.026 247403 INFO nova.virt.libvirt.driver [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Instance destroyed successfully.#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.027 247403 DEBUG nova.objects.instance [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lazy-loading 'resources' on Instance uuid adfc4c25-9eb9-45cc-ac90-2029677bcb67 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:35:20 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [NOTICE]   (343200) : haproxy version is 2.8.14-c23fe91
Jan 31 03:35:20 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [NOTICE]   (343200) : path to executable is /usr/sbin/haproxy
Jan 31 03:35:20 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [WARNING]  (343200) : Exiting Master process...
Jan 31 03:35:20 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [ALERT]    (343200) : Current worker (343202) exited with code 143 (Terminated)
Jan 31 03:35:20 np0005603621 neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150[343196]: [WARNING]  (343200) : All workers exited. Exiting... (0)
Jan 31 03:35:20 np0005603621 systemd[1]: libpod-598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283.scope: Deactivated successfully.
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.087 247403 DEBUG nova.virt.libvirt.vif [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:33:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-966483760',display_name='tempest-ServerActionsTestOtherB-server-966483760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-966483760',id=135,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDsFGTxapW26dXB/XvUTGcfGzb7/71yMMg1CszLzfnGOAhIU/1lACOYAdVBK40cFjy/2kY258v2iqF8U2lfGaG9JRRfAxw6pRph+THb2i3B9US4SfAm/pgAAiW0mmqeasA==',key_name='tempest-keypair-1440000372',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:35:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='953a213fa5cb435ab3c04ad96152685f',ramdisk_id='',reservation_id='r-3ojgffxo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1048458052',owner_user_name='tempest-ServerActionsTestOtherB-1048458052-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:35:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ef51681d234a4abc88ff433d0640b6e7',uuid=adfc4c25-9eb9-45cc-ac90-2029677bcb67,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.087 247403 DEBUG nova.network.os_vif_util [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converting VIF {"id": "ae035cfb-a17b-4578-a506-e2581da09f74", "address": "fa:16:3e:30:5a:60", "network": {"id": "44469d8b-ad30-4270-88fa-e67c568f3150", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-2130829654-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "953a213fa5cb435ab3c04ad96152685f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae035cfb-a1", "ovs_interfaceid": "ae035cfb-a17b-4578-a506-e2581da09f74", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.088 247403 DEBUG nova.network.os_vif_util [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.088 247403 DEBUG os_vif [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.091 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae035cfb-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.093 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:20 np0005603621 podman[343344]: 2026-01-31 08:35:20.093907819 +0000 UTC m=+0.044569934 container died 598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.094 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.096 247403 INFO os_vif [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:5a:60,bridge_name='br-int',has_traffic_filtering=True,id=ae035cfb-a17b-4578-a506-e2581da09f74,network=Network(44469d8b-ad30-4270-88fa-e67c568f3150),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae035cfb-a1')#033[00m
Jan 31 03:35:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283-userdata-shm.mount: Deactivated successfully.
Jan 31 03:35:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2738803e62f4007904821f098e1e017717cd9bf3ba97bdd413a6f4c8b92e32da-merged.mount: Deactivated successfully.
Jan 31 03:35:20 np0005603621 podman[343344]: 2026-01-31 08:35:20.145818016 +0000 UTC m=+0.096480141 container cleanup 598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:35:20 np0005603621 systemd[1]: libpod-conmon-598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283.scope: Deactivated successfully.
Jan 31 03:35:20 np0005603621 podman[343394]: 2026-01-31 08:35:20.209557686 +0000 UTC m=+0.049976015 container remove 598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.213 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f194d6f5-c9e8-48e9-a3c2-a9798f065065]: (4, ('Sat Jan 31 08:35:20 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283)\n598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283\nSat Jan 31 08:35:20 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 (598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283)\n598fc08f5c29ddf92deab762215e7157e3a3082c65c7460710b61b3a2bf8d283\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.215 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc01a97-fb6b-482a-80b2-56b51f1c2f82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.216 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44469d8b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:20 np0005603621 kernel: tap44469d8b-a0: left promiscuous mode
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.225 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8c9da9-6106-4262-8820-5c24f142594b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.239 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[deb6b24c-64b7-49c7-9189-272eceb02c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.240 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c6d2be-644e-4ac9-8f2e-79c0115399de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.254 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1ede88-7d74-42de-bcec-507b4b49e620]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 781193, 'reachable_time': 37891, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343409, 'error': None, 'target': 'ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.255 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-44469d8b-ad30-4270-88fa-e67c568f3150 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:35:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:20.255 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cf275a-0bac-4311-882e-bbc0020c495b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:20 np0005603621 systemd[1]: run-netns-ovnmeta\x2d44469d8b\x2dad30\x2d4270\x2d88fa\x2de67c568f3150.mount: Deactivated successfully.
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.340 247403 DEBUG nova.compute.manager [req-43728417-f18a-488a-a7da-3dd78ca39318 req-fdd8cbb7-7fbe-475f-b426-39327d3bcf54 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-vif-unplugged-ae035cfb-a17b-4578-a506-e2581da09f74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.341 247403 DEBUG oslo_concurrency.lockutils [req-43728417-f18a-488a-a7da-3dd78ca39318 req-fdd8cbb7-7fbe-475f-b426-39327d3bcf54 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.341 247403 DEBUG oslo_concurrency.lockutils [req-43728417-f18a-488a-a7da-3dd78ca39318 req-fdd8cbb7-7fbe-475f-b426-39327d3bcf54 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.341 247403 DEBUG oslo_concurrency.lockutils [req-43728417-f18a-488a-a7da-3dd78ca39318 req-fdd8cbb7-7fbe-475f-b426-39327d3bcf54 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.341 247403 DEBUG nova.compute.manager [req-43728417-f18a-488a-a7da-3dd78ca39318 req-fdd8cbb7-7fbe-475f-b426-39327d3bcf54 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] No waiting events found dispatching network-vif-unplugged-ae035cfb-a17b-4578-a506-e2581da09f74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.341 247403 DEBUG nova.compute.manager [req-43728417-f18a-488a-a7da-3dd78ca39318 req-fdd8cbb7-7fbe-475f-b426-39327d3bcf54 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-vif-unplugged-ae035cfb-a17b-4578-a506-e2581da09f74 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:35:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 579 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 247 op/s
Jan 31 03:35:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.617 247403 INFO nova.virt.libvirt.driver [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Deleting instance files /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67_del#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.617 247403 INFO nova.virt.libvirt.driver [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Deletion of /var/lib/nova/instances/adfc4c25-9eb9-45cc-ac90-2029677bcb67_del complete#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.663 247403 INFO nova.compute.manager [None req-ec4a5c6e-d7dd-4454-af98-5704ad4640d0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Get console output#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.667 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.942 247403 INFO nova.compute.manager [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.943 247403 DEBUG oslo.service.loopingcall [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.944 247403 DEBUG nova.compute.manager [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:35:20 np0005603621 nova_compute[247399]: 2026-01-31 08:35:20.944 247403 DEBUG nova.network.neutron [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:35:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:21.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:21.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 531 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 333 op/s
Jan 31 03:35:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:35:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:35:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:35:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:35:23 np0005603621 nova_compute[247399]: 2026-01-31 08:35:23.263 247403 DEBUG nova.compute.manager [req-c57a48cf-215a-4588-aac1-f67e1c07b9e1 req-05ba7e5e-f2fa-454b-9b44-fa721a338d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:23 np0005603621 nova_compute[247399]: 2026-01-31 08:35:23.264 247403 DEBUG oslo_concurrency.lockutils [req-c57a48cf-215a-4588-aac1-f67e1c07b9e1 req-05ba7e5e-f2fa-454b-9b44-fa721a338d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:23 np0005603621 nova_compute[247399]: 2026-01-31 08:35:23.264 247403 DEBUG oslo_concurrency.lockutils [req-c57a48cf-215a-4588-aac1-f67e1c07b9e1 req-05ba7e5e-f2fa-454b-9b44-fa721a338d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:23 np0005603621 nova_compute[247399]: 2026-01-31 08:35:23.264 247403 DEBUG oslo_concurrency.lockutils [req-c57a48cf-215a-4588-aac1-f67e1c07b9e1 req-05ba7e5e-f2fa-454b-9b44-fa721a338d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:23 np0005603621 nova_compute[247399]: 2026-01-31 08:35:23.264 247403 DEBUG nova.compute.manager [req-c57a48cf-215a-4588-aac1-f67e1c07b9e1 req-05ba7e5e-f2fa-454b-9b44-fa721a338d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] No waiting events found dispatching network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:23 np0005603621 nova_compute[247399]: 2026-01-31 08:35:23.265 247403 WARNING nova.compute.manager [req-c57a48cf-215a-4588-aac1-f67e1c07b9e1 req-05ba7e5e-f2fa-454b-9b44-fa721a338d3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received unexpected event network-vif-plugged-ae035cfb-a17b-4578-a506-e2581da09f74 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:35:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:23.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:35:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:23.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 500 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 207 op/s
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev aa35932b-b505-436b-bea6-91a35fce3e8f does not exist
Jan 31 03:35:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9ff1b751-9e97-4e4e-9f2b-996b01053976 does not exist
Jan 31 03:35:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0fc2b67a-1325-404f-9d1b-b9e64a7232ca does not exist
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:35:24 np0005603621 auditd[697]: Audit daemon rotating log files
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.878865884 +0000 UTC m=+0.037018334 container create 0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:35:24 np0005603621 systemd[1]: Started libpod-conmon-0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30.scope.
Jan 31 03:35:24 np0005603621 nova_compute[247399]: 2026-01-31 08:35:24.915 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:35:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.949468933 +0000 UTC m=+0.107621403 container init 0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.95537435 +0000 UTC m=+0.113526800 container start 0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.861019468 +0000 UTC m=+0.019171938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.959215052 +0000 UTC m=+0.117367502 container attach 0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:35:24 np0005603621 charming_margulis[343821]: 167 167
Jan 31 03:35:24 np0005603621 systemd[1]: libpod-0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30.scope: Deactivated successfully.
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.960569465 +0000 UTC m=+0.118721915 container died 0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:35:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e5d0d48beb18f69132b46e3aaf1a7a58c1ddbe6719f7dc519d3a907c0fc5fe22-merged.mount: Deactivated successfully.
Jan 31 03:35:24 np0005603621 podman[343805]: 2026-01-31 08:35:24.990009918 +0000 UTC m=+0.148162368 container remove 0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:35:24 np0005603621 systemd[1]: libpod-conmon-0c980f605a52b22e4b1c74ab892123762921564bc56719cc84e2d2fdcfd84c30.scope: Deactivated successfully.
Jan 31 03:35:25 np0005603621 podman[343844]: 2026-01-31 08:35:25.092588211 +0000 UTC m=+0.028802455 container create 79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.094 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:25 np0005603621 systemd[1]: Started libpod-conmon-79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463.scope.
Jan 31 03:35:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd577d242beaa1a71149477906aedef2b137b523d0775bab7f140be84cc7689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd577d242beaa1a71149477906aedef2b137b523d0775bab7f140be84cc7689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd577d242beaa1a71149477906aedef2b137b523d0775bab7f140be84cc7689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd577d242beaa1a71149477906aedef2b137b523d0775bab7f140be84cc7689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd577d242beaa1a71149477906aedef2b137b523d0775bab7f140be84cc7689/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:25 np0005603621 podman[343844]: 2026-01-31 08:35:25.168833597 +0000 UTC m=+0.105047841 container init 79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:35:25 np0005603621 podman[343844]: 2026-01-31 08:35:25.17429073 +0000 UTC m=+0.110504974 container start 79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:35:25 np0005603621 podman[343844]: 2026-01-31 08:35:25.080611311 +0000 UTC m=+0.016825575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:35:25 np0005603621 podman[343844]: 2026-01-31 08:35:25.209556159 +0000 UTC m=+0.145770393 container attach 79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.292 247403 DEBUG nova.network.neutron [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:35:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:25.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:35:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 31 03:35:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:25.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 31 03:35:25 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.489 247403 DEBUG nova.compute.manager [req-efa1d367-5344-4f95-9f70-b0aece496b0e req-753a57db-349a-498d-b6a8-9e2b3cbc5d7d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Received event network-vif-deleted-ae035cfb-a17b-4578-a506-e2581da09f74 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.489 247403 INFO nova.compute.manager [req-efa1d367-5344-4f95-9f70-b0aece496b0e req-753a57db-349a-498d-b6a8-9e2b3cbc5d7d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Neutron deleted interface ae035cfb-a17b-4578-a506-e2581da09f74; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.490 247403 DEBUG nova.network.neutron [req-efa1d367-5344-4f95-9f70-b0aece496b0e req-753a57db-349a-498d-b6a8-9e2b3cbc5d7d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:25 np0005603621 kind_tu[343860]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:35:25 np0005603621 kind_tu[343860]: --> relative data size: 1.0
Jan 31 03:35:25 np0005603621 kind_tu[343860]: --> All data devices are unavailable
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.956 247403 INFO nova.compute.manager [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Took 5.01 seconds to deallocate network for instance.#033[00m
Jan 31 03:35:25 np0005603621 systemd[1]: libpod-79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463.scope: Deactivated successfully.
Jan 31 03:35:25 np0005603621 podman[343844]: 2026-01-31 08:35:25.960038693 +0000 UTC m=+0.896252937 container died 79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:25 np0005603621 nova_compute[247399]: 2026-01-31 08:35:25.964 247403 DEBUG nova.compute.manager [req-efa1d367-5344-4f95-9f70-b0aece496b0e req-753a57db-349a-498d-b6a8-9e2b3cbc5d7d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Detach interface failed, port_id=ae035cfb-a17b-4578-a506-e2581da09f74, reason: Instance adfc4c25-9eb9-45cc-ac90-2029677bcb67 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:35:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1fd577d242beaa1a71149477906aedef2b137b523d0775bab7f140be84cc7689-merged.mount: Deactivated successfully.
Jan 31 03:35:26 np0005603621 podman[343844]: 2026-01-31 08:35:26.312341792 +0000 UTC m=+1.248556026 container remove 79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:35:26 np0005603621 systemd[1]: libpod-conmon-79abd6ed7f933ee0d58e6b0f3f98eb1eb48deb6986a977d956f02d31fc8a6463.scope: Deactivated successfully.
Jan 31 03:35:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 529 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 974 KiB/s rd, 3.0 MiB/s wr, 199 op/s
Jan 31 03:35:26 np0005603621 nova_compute[247399]: 2026-01-31 08:35:26.370 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:26 np0005603621 nova_compute[247399]: 2026-01-31 08:35:26.371 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.780684421 +0000 UTC m=+0.036847129 container create 4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 03:35:26 np0005603621 systemd[1]: Started libpod-conmon-4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125.scope.
Jan 31 03:35:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.847395886 +0000 UTC m=+0.103558624 container init 4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.852106565 +0000 UTC m=+0.108269273 container start 4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:26 np0005603621 gracious_chatelet[344044]: 167 167
Jan 31 03:35:26 np0005603621 systemd[1]: libpod-4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125.scope: Deactivated successfully.
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.761393359 +0000 UTC m=+0.017556097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.859698825 +0000 UTC m=+0.115861533 container attach 4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.860010805 +0000 UTC m=+0.116173533 container died 4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:35:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3bb0f20fd5a58fa2a5ad540b62bd77d52b76da689076c1939c95d3cc1689b898-merged.mount: Deactivated successfully.
Jan 31 03:35:26 np0005603621 podman[344028]: 2026-01-31 08:35:26.89329067 +0000 UTC m=+0.149453378 container remove 4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:35:26 np0005603621 systemd[1]: libpod-conmon-4ecfab6c39fdb5ab980dc57f45ff547d5a34458a6d7ccec4414568e5432d4125.scope: Deactivated successfully.
Jan 31 03:35:27 np0005603621 podman[344067]: 2026-01-31 08:35:27.007079919 +0000 UTC m=+0.032103410 container create 7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:27 np0005603621 systemd[1]: Started libpod-conmon-7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc.scope.
Jan 31 03:35:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed44fb76f2b096d26c835c33728fe465405425dfa289e72808529920ae54bc96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed44fb76f2b096d26c835c33728fe465405425dfa289e72808529920ae54bc96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed44fb76f2b096d26c835c33728fe465405425dfa289e72808529920ae54bc96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed44fb76f2b096d26c835c33728fe465405425dfa289e72808529920ae54bc96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:27 np0005603621 podman[344067]: 2026-01-31 08:35:27.086496606 +0000 UTC m=+0.111520117 container init 7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:27 np0005603621 podman[344067]: 2026-01-31 08:35:26.993666504 +0000 UTC m=+0.018690015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:35:27 np0005603621 podman[344067]: 2026-01-31 08:35:27.093023633 +0000 UTC m=+0.118047124 container start 7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:27 np0005603621 podman[344067]: 2026-01-31 08:35:27.096155633 +0000 UTC m=+0.121179124 container attach 7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:27.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:27.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]: {
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:    "0": [
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:        {
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "devices": [
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "/dev/loop3"
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            ],
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "lv_name": "ceph_lv0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "lv_size": "7511998464",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "name": "ceph_lv0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "tags": {
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.cluster_name": "ceph",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.crush_device_class": "",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.encrypted": "0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.osd_id": "0",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.type": "block",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:                "ceph.vdo": "0"
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            },
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "type": "block",
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:            "vg_name": "ceph_vg0"
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:        }
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]:    ]
Jan 31 03:35:27 np0005603621 eloquent_hertz[344083]: }
Jan 31 03:35:27 np0005603621 systemd[1]: libpod-7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc.scope: Deactivated successfully.
Jan 31 03:35:27 np0005603621 podman[344067]: 2026-01-31 08:35:27.839407957 +0000 UTC m=+0.864431458 container died 7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ed44fb76f2b096d26c835c33728fe465405425dfa289e72808529920ae54bc96-merged.mount: Deactivated successfully.
Jan 31 03:35:28 np0005603621 podman[344067]: 2026-01-31 08:35:28.031049623 +0000 UTC m=+1.056073114 container remove 7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:28 np0005603621 systemd[1]: libpod-conmon-7b3820d81bc36475db83c2ad7fb4fd54df8262b66f0988155e7199fd82ba8fcc.scope: Deactivated successfully.
Jan 31 03:35:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 898 KiB/s rd, 2.6 MiB/s wr, 178 op/s
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.603875974 +0000 UTC m=+0.048957893 container create dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 03:35:28 np0005603621 systemd[1]: Started libpod-conmon-dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e.scope.
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.583451217 +0000 UTC m=+0.028533136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:35:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.700113905 +0000 UTC m=+0.145195874 container init dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_aryabhata, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.706975193 +0000 UTC m=+0.152057152 container start dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.711360432 +0000 UTC m=+0.156442391 container attach dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:35:28 np0005603621 nice_aryabhata[344261]: 167 167
Jan 31 03:35:28 np0005603621 systemd[1]: libpod-dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e.scope: Deactivated successfully.
Jan 31 03:35:28 np0005603621 conmon[344261]: conmon dd6b4d04052f066452d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e.scope/container/memory.events
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.713481219 +0000 UTC m=+0.158563158 container died dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:35:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-64d876615a3f5c1cd77e60092ce24a7ca2fb45f345498c41e011a9ba77b5c317-merged.mount: Deactivated successfully.
Jan 31 03:35:28 np0005603621 podman[344245]: 2026-01-31 08:35:28.766989295 +0000 UTC m=+0.212071254 container remove dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_aryabhata, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:35:28 np0005603621 systemd[1]: libpod-conmon-dd6b4d04052f066452d9e5c0b143977d599fce90c3689de1d9f2ca417ef24f6e.scope: Deactivated successfully.
Jan 31 03:35:28 np0005603621 podman[344285]: 2026-01-31 08:35:28.961225594 +0000 UTC m=+0.059718065 container create 0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 03:35:28 np0005603621 nova_compute[247399]: 2026-01-31 08:35:28.960 247403 DEBUG oslo_concurrency.processutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:29 np0005603621 systemd[1]: Started libpod-conmon-0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769.scope.
Jan 31 03:35:29 np0005603621 podman[344285]: 2026-01-31 08:35:28.92765632 +0000 UTC m=+0.026148811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:35:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:35:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82849decea83c4982f5e64db0f29b598c6075c6ca387ee732063afbc1f3bf629/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82849decea83c4982f5e64db0f29b598c6075c6ca387ee732063afbc1f3bf629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82849decea83c4982f5e64db0f29b598c6075c6ca387ee732063afbc1f3bf629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82849decea83c4982f5e64db0f29b598c6075c6ca387ee732063afbc1f3bf629/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:35:29 np0005603621 podman[344285]: 2026-01-31 08:35:29.057307549 +0000 UTC m=+0.155800040 container init 0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 03:35:29 np0005603621 podman[344285]: 2026-01-31 08:35:29.064679524 +0000 UTC m=+0.163171995 container start 0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:35:29 np0005603621 podman[344285]: 2026-01-31 08:35:29.068470053 +0000 UTC m=+0.166962544 container attach 0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:35:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:29.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:29.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:35:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723394744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:35:29 np0005603621 nova_compute[247399]: 2026-01-31 08:35:29.418 247403 DEBUG oslo_concurrency.processutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:29 np0005603621 nova_compute[247399]: 2026-01-31 08:35:29.425 247403 DEBUG nova.compute.provider_tree [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:35:29 np0005603621 nova_compute[247399]: 2026-01-31 08:35:29.507 247403 DEBUG nova.scheduler.client.report [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:35:29 np0005603621 nova_compute[247399]: 2026-01-31 08:35:29.663 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:29 np0005603621 nova_compute[247399]: 2026-01-31 08:35:29.802 247403 INFO nova.scheduler.client.report [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Deleted allocations for instance adfc4c25-9eb9-45cc-ac90-2029677bcb67#033[00m
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]: {
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:        "osd_id": 0,
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:        "type": "bluestore"
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]:    }
Jan 31 03:35:29 np0005603621 compassionate_golick[344302]: }
Jan 31 03:35:29 np0005603621 systemd[1]: libpod-0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769.scope: Deactivated successfully.
Jan 31 03:35:29 np0005603621 podman[344344]: 2026-01-31 08:35:29.909943203 +0000 UTC m=+0.020143570 container died 0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:29 np0005603621 nova_compute[247399]: 2026-01-31 08:35:29.917 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-82849decea83c4982f5e64db0f29b598c6075c6ca387ee732063afbc1f3bf629-merged.mount: Deactivated successfully.
Jan 31 03:35:29 np0005603621 podman[344344]: 2026-01-31 08:35:29.954072191 +0000 UTC m=+0.064272538 container remove 0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:35:29 np0005603621 systemd[1]: libpod-conmon-0032c5724dfaa8c5922f1e0351631199b0702b9ad07e10869ee0c8da4d5eb769.scope: Deactivated successfully.
Jan 31 03:35:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:35:30 np0005603621 nova_compute[247399]: 2026-01-31 08:35:30.095 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:30 np0005603621 nova_compute[247399]: 2026-01-31 08:35:30.161 247403 DEBUG oslo_concurrency.lockutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquiring lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:35:30 np0005603621 nova_compute[247399]: 2026-01-31 08:35:30.162 247403 DEBUG oslo_concurrency.lockutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquired lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:35:30 np0005603621 nova_compute[247399]: 2026-01-31 08:35:30.163 247403 DEBUG nova.network.neutron [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:35:30 np0005603621 nova_compute[247399]: 2026-01-31 08:35:30.302 247403 DEBUG oslo_concurrency.lockutils [None req-3972e234-65c8-4d72-a5a9-9b16288e0e8b ef51681d234a4abc88ff433d0640b6e7 953a213fa5cb435ab3c04ad96152685f - - default default] Lock "adfc4c25-9eb9-45cc-ac90-2029677bcb67" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 898 KiB/s rd, 2.6 MiB/s wr, 178 op/s
Jan 31 03:35:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:35:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 95e04f2c-23a5-46fe-9912-a92a81094924 does not exist
Jan 31 03:35:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev abcf7fe0-7c0f-43fd-b7e6-898353e10647 does not exist
Jan 31 03:35:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 48118cde-f4c3-4d12-a3ad-c1e7e3d0a15d does not exist
Jan 31 03:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:30.517 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:30.517 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:30.518 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:35:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:31.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:31Z|00555|binding|INFO|Releasing lport 98bdd03c-3803-4f50-b99f-a5baefc4ec8a from this chassis (sb_readonly=0)
Jan 31 03:35:31 np0005603621 nova_compute[247399]: 2026-01-31 08:35:31.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:35:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/714576901' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:35:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:35:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/714576901' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:35:32 np0005603621 nova_compute[247399]: 2026-01-31 08:35:32.285 247403 DEBUG nova.network.neutron [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating instance_info_cache with network_info: [{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:32 np0005603621 nova_compute[247399]: 2026-01-31 08:35:32.341 247403 DEBUG oslo_concurrency.lockutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Releasing lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:35:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 531 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.140 247403 DEBUG nova.virt.libvirt.driver [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.140 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Creating file /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/48fecf4553c2431eb0334b76591a7d0a.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.141 247403 DEBUG oslo_concurrency.processutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/48fecf4553c2431eb0334b76591a7d0a.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:33.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.489 247403 DEBUG oslo_concurrency.processutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/48fecf4553c2431eb0334b76591a7d0a.tmp" returned: 1 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.490 247403 DEBUG oslo_concurrency.processutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af/48fecf4553c2431eb0334b76591a7d0a.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.490 247403 DEBUG nova.virt.libvirt.volume.remotefs [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Creating directory /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.491 247403 DEBUG oslo_concurrency.processutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.672 247403 DEBUG oslo_concurrency.processutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:33 np0005603621 nova_compute[247399]: 2026-01-31 08:35:33.678 247403 DEBUG nova.virt.libvirt.driver [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:35:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 240 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 31 03:35:34 np0005603621 nova_compute[247399]: 2026-01-31 08:35:34.919 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.025 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848520.0237076, adfc4c25-9eb9-45cc-ac90-2029677bcb67 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.026 247403 INFO nova.compute.manager [-] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.095 247403 DEBUG nova.compute.manager [None req-fb6f7312-caa5-49a6-a035-29f3f289f7e6 - - - - - -] [instance: adfc4c25-9eb9-45cc-ac90-2029677bcb67] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:35.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:35 np0005603621 kernel: tap58956ac4-88 (unregistering): left promiscuous mode
Jan 31 03:35:35 np0005603621 NetworkManager[49013]: <info>  [1769848535.9753] device (tap58956ac4-88): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:35:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:35Z|00556|binding|INFO|Releasing lport 58956ac4-88cf-49c2-988a-8a3746f1e622 from this chassis (sb_readonly=0)
Jan 31 03:35:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:35Z|00557|binding|INFO|Setting lport 58956ac4-88cf-49c2-988a-8a3746f1e622 down in Southbound
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.981 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:35:35Z|00558|binding|INFO|Removing iface tap58956ac4-88 ovn-installed in OVS
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.983 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:35 np0005603621 nova_compute[247399]: 2026-01-31 08:35:35.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.001 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:ff:26 10.100.0.7'], port_security=['fa:16:3e:75:ff:26 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '60462c66-f02d-4ca4-aa2a-b6ea91c8a6af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1bca5a82-b0f2-4237-92f5-d7d2dbf4afe9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e036da0b-b229-4d68-8cb9-77eeebb375fb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=58956ac4-88cf-49c2-988a-8a3746f1e622) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.002 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 58956ac4-88cf-49c2-988a-8a3746f1e622 in datapath e45621cc-e984-4d02-a4f7-adf5b5457b33 unbound from our chassis#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.004 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e45621cc-e984-4d02-a4f7-adf5b5457b33, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.006 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcc832c-d147-4eb3-8644-08854bb56106]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.006 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33 namespace which is not needed anymore#033[00m
Jan 31 03:35:36 np0005603621 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 31 03:35:36 np0005603621 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000008d.scope: Consumed 13.635s CPU time.
Jan 31 03:35:36 np0005603621 systemd-machined[212769]: Machine qemu-66-instance-0000008d terminated.
Jan 31 03:35:36 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [NOTICE]   (342687) : haproxy version is 2.8.14-c23fe91
Jan 31 03:35:36 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [NOTICE]   (342687) : path to executable is /usr/sbin/haproxy
Jan 31 03:35:36 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [WARNING]  (342687) : Exiting Master process...
Jan 31 03:35:36 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [ALERT]    (342687) : Current worker (342689) exited with code 143 (Terminated)
Jan 31 03:35:36 np0005603621 neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33[342683]: [WARNING]  (342687) : All workers exited. Exiting... (0)
Jan 31 03:35:36 np0005603621 systemd[1]: libpod-93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e.scope: Deactivated successfully.
Jan 31 03:35:36 np0005603621 podman[344489]: 2026-01-31 08:35:36.110491047 +0000 UTC m=+0.038466110 container died 93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:35:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-656011ea27537bb619f8fa9f3514c427d4e56e7dafc19967e8fb9b4f8faf9cf5-merged.mount: Deactivated successfully.
Jan 31 03:35:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:35:36 np0005603621 podman[344489]: 2026-01-31 08:35:36.146787738 +0000 UTC m=+0.074762801 container cleanup 93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:35:36 np0005603621 systemd[1]: libpod-conmon-93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e.scope: Deactivated successfully.
Jan 31 03:35:36 np0005603621 podman[344521]: 2026-01-31 08:35:36.197117114 +0000 UTC m=+0.036905681 container remove 93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.201 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[818ff183-bad9-464b-8828-d62d71cc2d24]: (4, ('Sat Jan 31 08:35:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33 (93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e)\n93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e\nSat Jan 31 08:35:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33 (93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e)\n93cd51453ae953eef407ec2986653012e5a7eb3a7d1833c871427c7dec9db15e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.203 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2dcfdf-ca9f-4006-88a8-5f6cc50930b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.204 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape45621cc-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.206 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:36 np0005603621 kernel: tape45621cc-e0: left promiscuous mode
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.218 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8710169d-10d2-46c4-81d7-2ac9b652f7ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.233 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bad1f32a-20ec-4737-a114-c376c9a01a2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.234 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39d1c280-5fe1-4434-b0c8-e6ce1baee997]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.245 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[965993a2-c467-4b4c-8099-7c55926cef7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 780324, 'reachable_time': 34219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344551, 'error': None, 'target': 'ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 systemd[1]: run-netns-ovnmeta\x2de45621cc\x2de984\x2d4d02\x2da4f7\x2dadf5b5457b33.mount: Deactivated successfully.
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.249 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e45621cc-e984-4d02-a4f7-adf5b5457b33 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:35:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:35:36.249 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[ad945e85-fa1f-4bbb-9415-13496117bf80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:35:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 235 KiB/s rd, 1.9 MiB/s wr, 83 op/s
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.697 247403 INFO nova.virt.libvirt.driver [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.702 247403 INFO nova.virt.libvirt.driver [-] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Instance destroyed successfully.#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.703 247403 DEBUG nova.virt.libvirt.vif [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1672201743',display_name='tempest-TestNetworkAdvancedServerOps-server-1672201743',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1672201743',id=141,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUoionOf1jsbgYnjxtSF8S5kbM7WrnC+AvzdWQ5Iv9NrHSu1YTmh7OvNKWVCt94tfduQMP4jFzkhpdFTOQdH6c769sX4vCZIDbSCuBl9lgkWTK5Ks3sTtkCsO2rA5PBWA==',key_name='tempest-TestNetworkAdvancedServerOps-2012991436',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:34:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-4je44sr0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:35:26Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=60462c66-f02d-4ca4-aa2a-b6ea91c8a6af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": 
"10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--147789550", "vif_mac": "fa:16:3e:75:ff:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.703 247403 DEBUG nova.network.os_vif_util [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Converting VIF {"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--147789550", "vif_mac": "fa:16:3e:75:ff:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.704 247403 DEBUG nova.network.os_vif_util [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.704 247403 DEBUG os_vif [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.706 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.706 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58956ac4-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.711 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.712 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.714 247403 INFO os_vif [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88')#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.718 247403 DEBUG nova.virt.libvirt.driver [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.718 247403 DEBUG nova.virt.libvirt.driver [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.806 247403 DEBUG nova.compute.manager [req-5a0bf9cc-61c1-4ee4-8e96-83804cf4228a req-0929bc51-b329-4b0b-92b9-2d5ea202a0ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-vif-unplugged-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.806 247403 DEBUG oslo_concurrency.lockutils [req-5a0bf9cc-61c1-4ee4-8e96-83804cf4228a req-0929bc51-b329-4b0b-92b9-2d5ea202a0ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.808 247403 DEBUG oslo_concurrency.lockutils [req-5a0bf9cc-61c1-4ee4-8e96-83804cf4228a req-0929bc51-b329-4b0b-92b9-2d5ea202a0ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.808 247403 DEBUG oslo_concurrency.lockutils [req-5a0bf9cc-61c1-4ee4-8e96-83804cf4228a req-0929bc51-b329-4b0b-92b9-2d5ea202a0ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.808 247403 DEBUG nova.compute.manager [req-5a0bf9cc-61c1-4ee4-8e96-83804cf4228a req-0929bc51-b329-4b0b-92b9-2d5ea202a0ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] No waiting events found dispatching network-vif-unplugged-58956ac4-88cf-49c2-988a-8a3746f1e622 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:36 np0005603621 nova_compute[247399]: 2026-01-31 08:35:36.809 247403 WARNING nova.compute.manager [req-5a0bf9cc-61c1-4ee4-8e96-83804cf4228a req-0929bc51-b329-4b0b-92b9-2d5ea202a0ec fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received unexpected event network-vif-unplugged-58956ac4-88cf-49c2-988a-8a3746f1e622 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:35:37 np0005603621 nova_compute[247399]: 2026-01-31 08:35:37.098 247403 DEBUG neutronclient.v2_0.client [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 58956ac4-88cf-49c2-988a-8a3746f1e622 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:35:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:37.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:37 np0005603621 nova_compute[247399]: 2026-01-31 08:35:37.356 247403 DEBUG oslo_concurrency.lockutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:37 np0005603621 nova_compute[247399]: 2026-01-31 08:35:37.357 247403 DEBUG oslo_concurrency.lockutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:37 np0005603621 nova_compute[247399]: 2026-01-31 08:35:37.357 247403 DEBUG oslo_concurrency.lockutils [None req-7fad474e-c634-4e22-8175-4b33c26a0248 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:37.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 54 KiB/s wr, 53 op/s
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:35:38
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root']
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:35:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:35:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.405 247403 DEBUG nova.compute.manager [req-2acd6282-deab-46c9-95f4-685f3238b758 req-28ef24f2-6b6f-47ac-bb9e-550568164d21 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.406 247403 DEBUG oslo_concurrency.lockutils [req-2acd6282-deab-46c9-95f4-685f3238b758 req-28ef24f2-6b6f-47ac-bb9e-550568164d21 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.406 247403 DEBUG oslo_concurrency.lockutils [req-2acd6282-deab-46c9-95f4-685f3238b758 req-28ef24f2-6b6f-47ac-bb9e-550568164d21 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.406 247403 DEBUG oslo_concurrency.lockutils [req-2acd6282-deab-46c9-95f4-685f3238b758 req-28ef24f2-6b6f-47ac-bb9e-550568164d21 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.406 247403 DEBUG nova.compute.manager [req-2acd6282-deab-46c9-95f4-685f3238b758 req-28ef24f2-6b6f-47ac-bb9e-550568164d21 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] No waiting events found dispatching network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.406 247403 WARNING nova.compute.manager [req-2acd6282-deab-46c9-95f4-685f3238b758 req-28ef24f2-6b6f-47ac-bb9e-550568164d21 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received unexpected event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:35:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3835187053
Jan 31 03:35:39 np0005603621 nova_compute[247399]: 2026-01-31 08:35:39.921 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 49 KiB/s wr, 49 op/s
Jan 31 03:35:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:41.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:41 np0005603621 nova_compute[247399]: 2026-01-31 08:35:41.712 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.260 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:35:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 58 KiB/s rd, 50 KiB/s wr, 79 op/s
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.500 247403 DEBUG nova.compute.manager [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-changed-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.500 247403 DEBUG nova.compute.manager [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Refreshing instance network info cache due to event network-changed-58956ac4-88cf-49c2-988a-8a3746f1e622. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.501 247403 DEBUG oslo_concurrency.lockutils [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.501 247403 DEBUG oslo_concurrency.lockutils [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:35:42 np0005603621 nova_compute[247399]: 2026-01-31 08:35:42.501 247403 DEBUG nova.network.neutron [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Refreshing network info cache for port 58956ac4-88cf-49c2-988a-8a3746f1e622 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.710025) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848542710157, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1068, "num_deletes": 255, "total_data_size": 1585443, "memory_usage": 1612768, "flush_reason": "Manual Compaction"}
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848542782678, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1556740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55248, "largest_seqno": 56315, "table_properties": {"data_size": 1551316, "index_size": 2820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12225, "raw_average_key_size": 20, "raw_value_size": 1540356, "raw_average_value_size": 2610, "num_data_blocks": 122, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848470, "oldest_key_time": 1769848470, "file_creation_time": 1769848542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 72753 microseconds, and 3613 cpu microseconds.
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.782782) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1556740 bytes OK
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.782809) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.847440) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.847496) EVENT_LOG_v1 {"time_micros": 1769848542847486, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.847528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1580364, prev total WAL file size 1580364, number of live WAL files 2.
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.848458) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1520KB)], [122(12MB)]
Jan 31 03:35:42 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848542848553, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 14957066, "oldest_snapshot_seqno": -1}
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 8460 keys, 12785256 bytes, temperature: kUnknown
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848543124611, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 12785256, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12728103, "index_size": 34897, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 219879, "raw_average_key_size": 25, "raw_value_size": 12577219, "raw_average_value_size": 1486, "num_data_blocks": 1366, "num_entries": 8460, "num_filter_entries": 8460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.124995) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12785256 bytes
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.158596) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.1 rd, 46.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 12.8 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(17.8) write-amplify(8.2) OK, records in: 8989, records dropped: 529 output_compression: NoCompression
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.158669) EVENT_LOG_v1 {"time_micros": 1769848543158624, "job": 74, "event": "compaction_finished", "compaction_time_micros": 276251, "compaction_time_cpu_micros": 23439, "output_level": 6, "num_output_files": 1, "total_output_size": 12785256, "num_input_records": 8989, "num_output_records": 8460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848543159037, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848543160317, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:42.848210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.160491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.160498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.160500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.160502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:35:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:35:43.160504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:35:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:43.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:43.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 48 KiB/s wr, 82 op/s
Jan 31 03:35:44 np0005603621 nova_compute[247399]: 2026-01-31 08:35:44.923 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:45.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:45.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:45 np0005603621 podman[344558]: 2026-01-31 08:35:45.505534983 +0000 UTC m=+0.052541436 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 03:35:45 np0005603621 podman[344559]: 2026-01-31 08:35:45.586950835 +0000 UTC m=+0.134100833 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:35:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 48 KiB/s wr, 72 op/s
Jan 31 03:35:46 np0005603621 nova_compute[247399]: 2026-01-31 08:35:46.752 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:47 np0005603621 nova_compute[247399]: 2026-01-31 08:35:47.260 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:47.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:47.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:47 np0005603621 nova_compute[247399]: 2026-01-31 08:35:47.730 247403 DEBUG nova.network.neutron [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updated VIF entry in instance network info cache for port 58956ac4-88cf-49c2-988a-8a3746f1e622. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:35:47 np0005603621 nova_compute[247399]: 2026-01-31 08:35:47.731 247403 DEBUG nova.network.neutron [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating instance_info_cache with network_info: [{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:35:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 16 KiB/s wr, 47 op/s
Jan 31 03:35:48 np0005603621 nova_compute[247399]: 2026-01-31 08:35:48.704 247403 DEBUG oslo_concurrency.lockutils [req-2ce5e9f0-e59e-4819-8633-5b0b462ca53d req-8679fbc9-e527-4a29-a342-b65491e48167 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:35:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:35:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:49.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006519140843104411 of space, bias 1.0, pg target 1.955742252931323 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6469151312116136 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:35:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:35:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:49.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Jan 31 03:35:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Jan 31 03:35:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Jan 31 03:35:49 np0005603621 nova_compute[247399]: 2026-01-31 08:35:49.925 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 40 op/s
Jan 31 03:35:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:51 np0005603621 nova_compute[247399]: 2026-01-31 08:35:51.211 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848536.2108827, 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:35:51 np0005603621 nova_compute[247399]: 2026-01-31 08:35:51.212 247403 INFO nova.compute.manager [-] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:35:51 np0005603621 nova_compute[247399]: 2026-01-31 08:35:51.302 247403 DEBUG nova.compute.manager [None req-fb9e6eed-2ee0-469d-8d39-c36f0efe456c - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:35:51 np0005603621 nova_compute[247399]: 2026-01-31 08:35:51.305 247403 DEBUG nova.compute.manager [None req-fb9e6eed-2ee0-469d-8d39-c36f0efe456c - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:35:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:51.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:51.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:51 np0005603621 nova_compute[247399]: 2026-01-31 08:35:51.431 247403 INFO nova.compute.manager [None req-fb9e6eed-2ee0-469d-8d39-c36f0efe456c - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 31 03:35:51 np0005603621 nova_compute[247399]: 2026-01-31 08:35:51.756 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:52 np0005603621 nova_compute[247399]: 2026-01-31 08:35:52.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:52 np0005603621 nova_compute[247399]: 2026-01-31 08:35:52.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:35:52 np0005603621 nova_compute[247399]: 2026-01-31 08:35:52.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:35:52 np0005603621 nova_compute[247399]: 2026-01-31 08:35:52.293 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:35:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 20 op/s
Jan 31 03:35:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:53.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:53.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:53 np0005603621 nova_compute[247399]: 2026-01-31 08:35:53.854 247403 DEBUG nova.compute.manager [req-e6c99750-a036-4e7c-9a2b-7c17e78bdc74 req-82077e0c-a972-42b1-8ca0-7cf663b143af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:53 np0005603621 nova_compute[247399]: 2026-01-31 08:35:53.855 247403 DEBUG oslo_concurrency.lockutils [req-e6c99750-a036-4e7c-9a2b-7c17e78bdc74 req-82077e0c-a972-42b1-8ca0-7cf663b143af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:53 np0005603621 nova_compute[247399]: 2026-01-31 08:35:53.855 247403 DEBUG oslo_concurrency.lockutils [req-e6c99750-a036-4e7c-9a2b-7c17e78bdc74 req-82077e0c-a972-42b1-8ca0-7cf663b143af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:53 np0005603621 nova_compute[247399]: 2026-01-31 08:35:53.855 247403 DEBUG oslo_concurrency.lockutils [req-e6c99750-a036-4e7c-9a2b-7c17e78bdc74 req-82077e0c-a972-42b1-8ca0-7cf663b143af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:53 np0005603621 nova_compute[247399]: 2026-01-31 08:35:53.855 247403 DEBUG nova.compute.manager [req-e6c99750-a036-4e7c-9a2b-7c17e78bdc74 req-82077e0c-a972-42b1-8ca0-7cf663b143af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] No waiting events found dispatching network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:53 np0005603621 nova_compute[247399]: 2026-01-31 08:35:53.855 247403 WARNING nova.compute.manager [req-e6c99750-a036-4e7c-9a2b-7c17e78bdc74 req-82077e0c-a972-42b1-8ca0-7cf663b143af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received unexpected event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:35:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 409 B/s wr, 18 op/s
Jan 31 03:35:54 np0005603621 nova_compute[247399]: 2026-01-31 08:35:54.927 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:55 np0005603621 nova_compute[247399]: 2026-01-31 08:35:55.288 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:55.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:55.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:35:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 KiB/s wr, 150 op/s
Jan 31 03:35:56 np0005603621 nova_compute[247399]: 2026-01-31 08:35:56.806 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:35:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:35:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:57.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.358 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.358 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.358 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.358 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.359 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:57.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.685 247403 DEBUG nova.compute.manager [req-9b7d1204-994a-405e-bd9e-62d20640772f req-bef38eb4-7af8-4ad5-9558-5ddd16cab4dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.686 247403 DEBUG oslo_concurrency.lockutils [req-9b7d1204-994a-405e-bd9e-62d20640772f req-bef38eb4-7af8-4ad5-9558-5ddd16cab4dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.686 247403 DEBUG oslo_concurrency.lockutils [req-9b7d1204-994a-405e-bd9e-62d20640772f req-bef38eb4-7af8-4ad5-9558-5ddd16cab4dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.686 247403 DEBUG oslo_concurrency.lockutils [req-9b7d1204-994a-405e-bd9e-62d20640772f req-bef38eb4-7af8-4ad5-9558-5ddd16cab4dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.687 247403 DEBUG nova.compute.manager [req-9b7d1204-994a-405e-bd9e-62d20640772f req-bef38eb4-7af8-4ad5-9558-5ddd16cab4dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] No waiting events found dispatching network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.687 247403 WARNING nova.compute.manager [req-9b7d1204-994a-405e-bd9e-62d20640772f req-bef38eb4-7af8-4ad5-9558-5ddd16cab4dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Received unexpected event network-vif-plugged-58956ac4-88cf-49c2-988a-8a3746f1e622 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:35:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:35:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732027251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.827 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.963 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:35:57 np0005603621 nova_compute[247399]: 2026-01-31 08:35:57.964 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000008d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.104 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.105 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4267MB free_disk=20.851451873779297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.105 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.106 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.214 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration for instance 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.246 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating resource usage from migration c515f698-9e01-4a1c-97de-ee3d9443f03e#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.247 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Starting to track outgoing migration c515f698-9e01-4a1c-97de-ee3d9443f03e with flavor a01eb4f0-fd80-416b-a750-75de320394d8 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.300 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration c515f698-9e01-4a1c-97de-ee3d9443f03e is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.300 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.301 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:35:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 461 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 616 KiB/s wr, 221 op/s
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.381 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:35:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:35:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586905555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.813 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:35:58 np0005603621 nova_compute[247399]: 2026-01-31 08:35:58.822 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:35:59 np0005603621 nova_compute[247399]: 2026-01-31 08:35:59.135 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:35:59 np0005603621 nova_compute[247399]: 2026-01-31 08:35:59.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:59 np0005603621 nova_compute[247399]: 2026-01-31 08:35:59.274 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:35:59 np0005603621 nova_compute[247399]: 2026-01-31 08:35:59.292 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:35:59 np0005603621 nova_compute[247399]: 2026-01-31 08:35:59.292 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:35:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:35:59.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:35:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:35:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:35:59.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:35:59 np0005603621 nova_compute[247399]: 2026-01-31 08:35:59.929 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:00 np0005603621 nova_compute[247399]: 2026-01-31 08:36:00.318 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:00 np0005603621 nova_compute[247399]: 2026-01-31 08:36:00.319 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:00 np0005603621 nova_compute[247399]: 2026-01-31 08:36:00.319 247403 DEBUG nova.compute.manager [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 31 03:36:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 461 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 585 KiB/s wr, 210 op/s
Jan 31 03:36:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:01 np0005603621 nova_compute[247399]: 2026-01-31 08:36:01.293 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:01 np0005603621 nova_compute[247399]: 2026-01-31 08:36:01.293 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:01 np0005603621 nova_compute[247399]: 2026-01-31 08:36:01.293 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:01.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:01.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:01 np0005603621 nova_compute[247399]: 2026-01-31 08:36:01.852 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:02 np0005603621 nova_compute[247399]: 2026-01-31 08:36:02.074 247403 DEBUG neutronclient.v2_0.client [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 58956ac4-88cf-49c2-988a-8a3746f1e622 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 31 03:36:02 np0005603621 nova_compute[247399]: 2026-01-31 08:36:02.074 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:02 np0005603621 nova_compute[247399]: 2026-01-31 08:36:02.074 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:02 np0005603621 nova_compute[247399]: 2026-01-31 08:36:02.075 247403 DEBUG nova.network.neutron [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:36:02 np0005603621 nova_compute[247399]: 2026-01-31 08:36:02.075 247403 DEBUG nova.objects.instance [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'info_cache' on Instance uuid 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 31 03:36:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:03.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:03.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 195 op/s
Jan 31 03:36:04 np0005603621 nova_compute[247399]: 2026-01-31 08:36:04.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:05.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:05.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:06 np0005603621 nova_compute[247399]: 2026-01-31 08:36:06.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Jan 31 03:36:06 np0005603621 nova_compute[247399]: 2026-01-31 08:36:06.853 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:07 np0005603621 nova_compute[247399]: 2026-01-31 08:36:07.168 247403 DEBUG nova.network.neutron [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af] Updating instance_info_cache with network_info: [{"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:07 np0005603621 nova_compute[247399]: 2026-01-31 08:36:07.267 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:07 np0005603621 nova_compute[247399]: 2026-01-31 08:36:07.267 247403 DEBUG nova.objects.instance [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid 60462c66-f02d-4ca4-aa2a-b6ea91c8a6af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:07.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:07.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:07 np0005603621 nova_compute[247399]: 2026-01-31 08:36:07.447 247403 DEBUG nova.storage.rbd_utils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] removing snapshot(nova-resize) on rbd image(60462c66-f02d-4ca4-aa2a-b6ea91c8a6af_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:36:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Jan 31 03:36:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Jan 31 03:36:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.185 247403 DEBUG nova.virt.libvirt.vif [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:34:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1672201743',display_name='tempest-TestNetworkAdvancedServerOps-server-1672201743',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1672201743',id=141,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPUoionOf1jsbgYnjxtSF8S5kbM7WrnC+AvzdWQ5Iv9NrHSu1YTmh7OvNKWVCt94tfduQMP4jFzkhpdFTOQdH6c769sX4vCZIDbSCuBl9lgkWTK5Ks3sTtkCsO2rA5PBWA==',key_name='tempest-TestNetworkAdvancedServerOps-2012991436',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:35:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-4je44sr0',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:35:54Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=60462c66-f02d-4ca4-aa2a-b6ea91c8a6af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.186 247403 DEBUG nova.network.os_vif_util [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "58956ac4-88cf-49c2-988a-8a3746f1e622", "address": "fa:16:3e:75:ff:26", "network": {"id": "e45621cc-e984-4d02-a4f7-adf5b5457b33", "bridge": "br-int", "label": "tempest-network-smoke--147789550", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58956ac4-88", "ovs_interfaceid": "58956ac4-88cf-49c2-988a-8a3746f1e622", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.186 247403 DEBUG nova.network.os_vif_util [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.187 247403 DEBUG os_vif [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.189 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58956ac4-88, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.189 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.191 247403 INFO os_vif [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:ff:26,bridge_name='br-int',has_traffic_filtering=True,id=58956ac4-88cf-49c2-988a-8a3746f1e622,network=Network(e45621cc-e984-4d02-a4f7-adf5b5457b33),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58956ac4-88')#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.191 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.191 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:08.243 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:36:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:08.244 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.244 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 1.6 MiB/s wr, 116 op/s
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.403 247403 DEBUG oslo_concurrency.processutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.574 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.575 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.740 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:36:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:36:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003036009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.810 247403 DEBUG oslo_concurrency.processutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.815 247403 DEBUG nova.compute.provider_tree [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.867 247403 DEBUG nova.scheduler.client.report [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:36:08 np0005603621 nova_compute[247399]: 2026-01-31 08:36:08.972 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.014 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.016 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.025 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.025 247403 INFO nova.compute.claims [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.256 247403 INFO nova.scheduler.client.report [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Deleted allocation for migration c515f698-9e01-4a1c-97de-ee3d9443f03e#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.340 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:09.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.363 247403 DEBUG oslo_concurrency.lockutils [None req-c7261529-22fc-4769-a0a7-4f656c94086a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "60462c66-f02d-4ca4-aa2a-b6ea91c8a6af" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:09.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:36:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272734369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.787 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.791 247403 DEBUG nova.compute.provider_tree [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.823 247403 DEBUG nova.scheduler.client.report [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.903 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.904 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:36:09 np0005603621 nova_compute[247399]: 2026-01-31 08:36:09.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.328 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.329 247403 DEBUG nova.network.neutron [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:36:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1019 KiB/s rd, 1.6 MiB/s wr, 116 op/s
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.413 247403 INFO nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:36:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.464 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.707 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.709 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.709 247403 INFO nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Creating image(s)#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.741 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.770 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.797 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.801 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.862 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.863 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.864 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.864 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.890 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:10 np0005603621 nova_compute[247399]: 2026-01-31 08:36:10.894 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c95caf87-5069-4b70-9023-d3c2d911e87d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.123 247403 DEBUG nova.policy [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6733330b634472ca8c21316f1ee5057', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1e29363ca464487b931af54fe14166b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:36:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:11.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.359 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c95caf87-5069-4b70-9023-d3c2d911e87d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.428 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] resizing rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:36:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:11.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.592 247403 DEBUG nova.objects.instance [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'migration_context' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.640 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.641 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Ensure instance console log exists: /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.641 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.642 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.642 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:11 np0005603621 nova_compute[247399]: 2026-01-31 08:36:11.855 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 57 KiB/s wr, 196 op/s
Jan 31 03:36:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:13.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:13.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:14 np0005603621 nova_compute[247399]: 2026-01-31 08:36:14.093 247403 DEBUG nova.network.neutron [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Successfully created port: 509c791a-f0a2-4105-a992-82e720b801e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:36:14 np0005603621 nova_compute[247399]: 2026-01-31 08:36:14.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:14 np0005603621 nova_compute[247399]: 2026-01-31 08:36:14.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:36:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 512 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 527 KiB/s wr, 223 op/s
Jan 31 03:36:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:36:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045644612' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:36:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:36:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4045644612' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:36:14 np0005603621 nova_compute[247399]: 2026-01-31 08:36:14.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:15.247 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Jan 31 03:36:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Jan 31 03:36:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Jan 31 03:36:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 185 op/s
Jan 31 03:36:16 np0005603621 podman[345008]: 2026-01-31 08:36:16.520781259 +0000 UTC m=+0.077842420 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:36:16 np0005603621 podman[345009]: 2026-01-31 08:36:16.520696386 +0000 UTC m=+0.076006991 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:36:16 np0005603621 nova_compute[247399]: 2026-01-31 08:36:16.857 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:17.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:17.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.092 247403 DEBUG nova.network.neutron [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Successfully updated port: 509c791a-f0a2-4105-a992-82e720b801e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.157 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.158 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.158 247403 DEBUG nova.network.neutron [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.308 247403 DEBUG nova.compute.manager [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-changed-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.308 247403 DEBUG nova.compute.manager [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Refreshing instance network info cache due to event network-changed-509c791a-f0a2-4105-a992-82e720b801e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.308 247403 DEBUG oslo_concurrency.lockutils [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Jan 31 03:36:18 np0005603621 nova_compute[247399]: 2026-01-31 08:36:18.464 247403 DEBUG nova.network.neutron [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:36:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:19.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:19 np0005603621 nova_compute[247399]: 2026-01-31 08:36:19.673 247403 DEBUG nova.network.neutron [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:19 np0005603621 nova_compute[247399]: 2026-01-31 08:36:19.936 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.113 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.114 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance network_info: |[{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.114 247403 DEBUG oslo_concurrency.lockutils [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.114 247403 DEBUG nova.network.neutron [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Refreshing network info cache for port 509c791a-f0a2-4105-a992-82e720b801e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.118 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Start _get_guest_xml network_info=[{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.122 247403 WARNING nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.133 247403 DEBUG nova.virt.libvirt.host [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.134 247403 DEBUG nova.virt.libvirt.host [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.139 247403 DEBUG nova.virt.libvirt.host [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.140 247403 DEBUG nova.virt.libvirt.host [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.141 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.141 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.142 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.142 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.142 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.142 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.142 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.143 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.143 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.143 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.143 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.144 247403 DEBUG nova.virt.hardware [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.146 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 154 op/s
Jan 31 03:36:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:36:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602042454' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.561 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.586 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:20 np0005603621 nova_compute[247399]: 2026-01-31 08:36:20.590 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:36:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204013378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.032 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.034 247403 DEBUG nova.virt.libvirt.vif [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:36:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-268360674',display_name='tempest-ServerStableDeviceRescueTest-server-268360674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-268360674',id=144,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-dmlghow8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:10Z,user_data=None,user_id='b6733330b634472ca8c21316f1ee5057',uuid=c95caf87-5069-4b70-9023-d3c2d911e87d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.035 247403 DEBUG nova.network.os_vif_util [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.035 247403 DEBUG nova.network.os_vif_util [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.036 247403 DEBUG nova.objects.instance [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.269 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <uuid>c95caf87-5069-4b70-9023-d3c2d911e87d</uuid>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <name>instance-00000090</name>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-268360674</nova:name>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:36:20</nova:creationTime>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:user uuid="b6733330b634472ca8c21316f1ee5057">tempest-ServerStableDeviceRescueTest-319343227-project-member</nova:user>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:project uuid="1e29363ca464487b931af54fe14166b1">tempest-ServerStableDeviceRescueTest-319343227</nova:project>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <nova:port uuid="509c791a-f0a2-4105-a992-82e720b801e8">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <entry name="serial">c95caf87-5069-4b70-9023-d3c2d911e87d</entry>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <entry name="uuid">c95caf87-5069-4b70-9023-d3c2d911e87d</entry>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c95caf87-5069-4b70-9023-d3c2d911e87d_disk">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:0a:93:70"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <target dev="tap509c791a-f0"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/console.log" append="off"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:36:21 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:36:21 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:36:21 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:36:21 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.269 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Preparing to wait for external event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.270 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.270 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.270 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.271 247403 DEBUG nova.virt.libvirt.vif [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:36:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-268360674',display_name='tempest-ServerStableDeviceRescueTest-server-268360674',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-268360674',id=144,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-dmlghow8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:10Z,user_data=None,user_id='b6733330b634472ca8c21316f1ee5057',uuid=c95caf87-5069-4b70-9023-d3c2d911e87d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.271 247403 DEBUG nova.network.os_vif_util [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.271 247403 DEBUG nova.network.os_vif_util [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.272 247403 DEBUG os_vif [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.273 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.273 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.275 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.275 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap509c791a-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.276 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap509c791a-f0, col_values=(('external_ids', {'iface-id': '509c791a-f0a2-4105-a992-82e720b801e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:93:70', 'vm-uuid': 'c95caf87-5069-4b70-9023-d3c2d911e87d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:36:21 np0005603621 NetworkManager[49013]: <info>  [1769848581.2790] manager: (tap509c791a-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.283 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.285 247403 INFO os_vif [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0')#033[00m
Jan 31 03:36:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:21.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.572 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.573 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.573 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No VIF found with MAC fa:16:3e:0a:93:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.573 247403 INFO nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Using config drive#033[00m
Jan 31 03:36:21 np0005603621 nova_compute[247399]: 2026-01-31 08:36:21.601 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:21 np0005603621 systemd[1]: Starting dnf makecache...
Jan 31 03:36:21 np0005603621 dnf[345135]: Metadata cache refreshed recently.
Jan 31 03:36:21 np0005603621 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 31 03:36:21 np0005603621 systemd[1]: Finished dnf makecache.
Jan 31 03:36:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 565 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 764 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.448 247403 INFO nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Creating config drive at /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.453 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxpaqoiyx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.581 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxpaqoiyx" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.605 247403 DEBUG nova.storage.rbd_utils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.609 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.742 247403 DEBUG oslo_concurrency.processutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.743 247403 INFO nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Deleting local config drive /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config because it was imported into RBD.#033[00m
Jan 31 03:36:22 np0005603621 kernel: tap509c791a-f0: entered promiscuous mode
Jan 31 03:36:22 np0005603621 NetworkManager[49013]: <info>  [1769848582.7771] manager: (tap509c791a-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.777 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:22Z|00559|binding|INFO|Claiming lport 509c791a-f0a2-4105-a992-82e720b801e8 for this chassis.
Jan 31 03:36:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:22Z|00560|binding|INFO|509c791a-f0a2-4105-a992-82e720b801e8: Claiming fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:22 np0005603621 systemd-udevd[345189]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:36:22 np0005603621 systemd-machined[212769]: New machine qemu-68-instance-00000090.
Jan 31 03:36:22 np0005603621 NetworkManager[49013]: <info>  [1769848582.8092] device (tap509c791a-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:36:22 np0005603621 NetworkManager[49013]: <info>  [1769848582.8098] device (tap509c791a-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:36:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:22Z|00561|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 ovn-installed in OVS
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.815 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:22 np0005603621 systemd[1]: Started Virtual Machine qemu-68-instance-00000090.
Jan 31 03:36:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:22Z|00562|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 up in Southbound
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.864 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:93:70 10.100.0.7'], port_security=['fa:16:3e:0a:93:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c95caf87-5069-4b70-9023-d3c2d911e87d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1c240f5-10ef-43c0-92c2-4688e636b197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=509c791a-f0a2-4105-a992-82e720b801e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.865 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 509c791a-f0a2-4105-a992-82e720b801e8 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 bound to our chassis#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.866 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.875 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29d2aef2-846f-4fd0-8133-d5c04912532e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.876 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31da00d3-01 in ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.878 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31da00d3-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.878 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b181f83f-4c97-4e40-b4ae-efec2ec02eba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.879 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8ba697-083a-442e-91eb-8f75a5b19c79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.886 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fdfb1e-1321-40a4-9fa0-d5ccc9a9b455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.894 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2f10af-924d-4942-86fa-2eddf9f2b0f9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.917 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f58b22-23e5-49cf-af15-06ecd6b92559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.926 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[011e25f6-ef2d-4e47-afea-585674b9d617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 NetworkManager[49013]: <info>  [1769848582.9272] manager: (tap31da00d3-00): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.949 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e045f042-15a3-4575-a1e3-ce21fa992cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.956 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b890b794-a425-40c8-8841-00c333648a26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.962 247403 DEBUG nova.network.neutron [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updated VIF entry in instance network info cache for port 509c791a-f0a2-4105-a992-82e720b801e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:36:22 np0005603621 nova_compute[247399]: 2026-01-31 08:36:22.962 247403 DEBUG nova.network.neutron [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:22 np0005603621 NetworkManager[49013]: <info>  [1769848582.9711] device (tap31da00d3-00): carrier: link connected
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.975 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[559b87b8-65db-47c0-b42d-99b7154e156e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.987 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[587110c5-3e43-456b-b398-9f25aa269023]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788848, 'reachable_time': 33213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345226, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:22.998 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcc0b67-b873-4afa-b317-36e83f4f7a96]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:4f2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 788848, 'tstamp': 788848}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345227, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.011 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a3095516-39b8-4cf6-a791-adce5f86ee6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788848, 'reachable_time': 33213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345228, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.032 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e5edbdff-1df7-4d44-b912-e0de5c34829a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.075 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39601da7-a6e2-496a-8ce9-a5fba975881b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.076 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.077 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.077 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:23 np0005603621 NetworkManager[49013]: <info>  [1769848583.0801] manager: (tap31da00d3-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Jan 31 03:36:23 np0005603621 kernel: tap31da00d3-00: entered promiscuous mode
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.083 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.084 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:23Z|00563|binding|INFO|Releasing lport 54969bc0-ee8d-420c-ac0c-dd4f9410e42c from this chassis (sb_readonly=1)
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.085 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.086 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.087 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[90692ea6-6592-4d98-9a0a-d7900b9665d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.088 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-31da00d3-077b-4620-a7d3-68186467ab47
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 31da00d3-077b-4620-a7d3-68186467ab47
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.090 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:23.090 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'env', 'PROCESS_TAG=haproxy-31da00d3-077b-4620-a7d3-68186467ab47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31da00d3-077b-4620-a7d3-68186467ab47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.223 247403 DEBUG oslo_concurrency.lockutils [req-67fff478-e2bd-4d04-8eae-ae8fc4d8f4ca req-75bc76de-8338-47f4-848f-d19aef9ba2f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.399 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848583.398799, c95caf87-5069-4b70-9023-d3c2d911e87d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.400 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Started (Lifecycle Event)#033[00m
Jan 31 03:36:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.454 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.460 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848583.3991287, c95caf87-5069-4b70-9023-d3c2d911e87d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.461 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:36:23 np0005603621 podman[345301]: 2026-01-31 08:36:23.419390206 +0000 UTC m=+0.024406075 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.515 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.519 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.563 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:36:23 np0005603621 podman[345301]: 2026-01-31 08:36:23.723135396 +0000 UTC m=+0.328151255 container create 2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:36:23 np0005603621 systemd[1]: Started libpod-conmon-2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416.scope.
Jan 31 03:36:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9df257af45daa6cf71d4aa63be01e94439ce714dd9d87622dc8cf688c7ac929f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:23 np0005603621 podman[345301]: 2026-01-31 08:36:23.821185215 +0000 UTC m=+0.426201084 container init 2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:36:23 np0005603621 podman[345301]: 2026-01-31 08:36:23.82608669 +0000 UTC m=+0.431102549 container start 2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:36:23 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [NOTICE]   (345321) : New worker (345323) forked
Jan 31 03:36:23 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [NOTICE]   (345321) : Loading success.
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.893 247403 DEBUG nova.compute.manager [req-a983598a-6e94-47ef-bcd6-59ea2612041b req-f547827c-d289-41c5-aa3f-be97d855d89b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.893 247403 DEBUG oslo_concurrency.lockutils [req-a983598a-6e94-47ef-bcd6-59ea2612041b req-f547827c-d289-41c5-aa3f-be97d855d89b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.894 247403 DEBUG oslo_concurrency.lockutils [req-a983598a-6e94-47ef-bcd6-59ea2612041b req-f547827c-d289-41c5-aa3f-be97d855d89b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.894 247403 DEBUG oslo_concurrency.lockutils [req-a983598a-6e94-47ef-bcd6-59ea2612041b req-f547827c-d289-41c5-aa3f-be97d855d89b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.894 247403 DEBUG nova.compute.manager [req-a983598a-6e94-47ef-bcd6-59ea2612041b req-f547827c-d289-41c5-aa3f-be97d855d89b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Processing event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.895 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.899 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848583.8992612, c95caf87-5069-4b70-9023-d3c2d911e87d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.900 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.903 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.906 247403 INFO nova.virt.libvirt.driver [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance spawned successfully.#033[00m
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.907 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.950 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.954 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.979 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.980 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.981 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.981 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.982 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:36:23 np0005603621 nova_compute[247399]: 2026-01-31 08:36:23.982 247403 DEBUG nova.virt.libvirt.driver [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:36:24 np0005603621 nova_compute[247399]: 2026-01-31 08:36:24.109 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:36:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 546 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 223 KiB/s rd, 4.1 MiB/s wr, 70 op/s
Jan 31 03:36:24 np0005603621 nova_compute[247399]: 2026-01-31 08:36:24.632 247403 INFO nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Took 13.92 seconds to spawn the instance on the hypervisor.
Jan 31 03:36:24 np0005603621 nova_compute[247399]: 2026-01-31 08:36:24.633 247403 DEBUG nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:36:24 np0005603621 nova_compute[247399]: 2026-01-31 08:36:24.937 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:36:24 np0005603621 nova_compute[247399]: 2026-01-31 08:36:24.940 247403 INFO nova.compute.manager [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Took 15.99 seconds to build instance.
Jan 31 03:36:24 np0005603621 nova_compute[247399]: 2026-01-31 08:36:24.996 247403 DEBUG oslo_concurrency.lockutils [None req-0ddebfc3-8b2a-44d8-98d3-67388ee4e06a b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:36:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:25.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:25.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.077 247403 DEBUG nova.compute.manager [req-a1212377-7d7f-4083-b933-49223d489d2c req-53768450-a981-41d6-ac98-9375776496fa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.078 247403 DEBUG oslo_concurrency.lockutils [req-a1212377-7d7f-4083-b933-49223d489d2c req-53768450-a981-41d6-ac98-9375776496fa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.078 247403 DEBUG oslo_concurrency.lockutils [req-a1212377-7d7f-4083-b933-49223d489d2c req-53768450-a981-41d6-ac98-9375776496fa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.078 247403 DEBUG oslo_concurrency.lockutils [req-a1212377-7d7f-4083-b933-49223d489d2c req-53768450-a981-41d6-ac98-9375776496fa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.078 247403 DEBUG nova.compute.manager [req-a1212377-7d7f-4083-b933-49223d489d2c req-53768450-a981-41d6-ac98-9375776496fa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.078 247403 WARNING nova.compute.manager [req-a1212377-7d7f-4083-b933-49223d489d2c req-53768450-a981-41d6-ac98-9375776496fa fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state active and task_state None.
Jan 31 03:36:26 np0005603621 nova_compute[247399]: 2026-01-31 08:36:26.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:36:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 31 03:36:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:27.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:27.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 168 op/s
Jan 31 03:36:28 np0005603621 nova_compute[247399]: 2026-01-31 08:36:28.535 247403 DEBUG nova.compute.manager [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:36:28 np0005603621 nova_compute[247399]: 2026-01-31 08:36:28.619 247403 INFO nova.compute.manager [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] instance snapshotting
Jan 31 03:36:28 np0005603621 nova_compute[247399]: 2026-01-31 08:36:28.929 247403 INFO nova.virt.libvirt.driver [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Beginning live snapshot process
Jan 31 03:36:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:29.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:29 np0005603621 nova_compute[247399]: 2026-01-31 08:36:29.939 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:36:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 168 op/s
Jan 31 03:36:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:30.518 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:30.518 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:30.519 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:36:31 np0005603621 nova_compute[247399]: 2026-01-31 08:36:31.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:36:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:31.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:31.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:36:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 096e7f99-1a5d-4efe-9c3c-1f529244de23 does not exist
Jan 31 03:36:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ca925b9e-653c-422e-8a59-0377287510f4 does not exist
Jan 31 03:36:31 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9fcbe8c2-77a5-4d17-b4a4-59228e3e37b2 does not exist
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:36:32 np0005603621 podman[345656]: 2026-01-31 08:36:32.049955465 +0000 UTC m=+0.038265435 container create dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:36:32 np0005603621 systemd[1]: Started libpod-conmon-dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886.scope.
Jan 31 03:36:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:32 np0005603621 podman[345656]: 2026-01-31 08:36:32.113417306 +0000 UTC m=+0.101727306 container init dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:36:32 np0005603621 podman[345656]: 2026-01-31 08:36:32.120202732 +0000 UTC m=+0.108512702 container start dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:36:32 np0005603621 podman[345656]: 2026-01-31 08:36:32.125143118 +0000 UTC m=+0.113453098 container attach dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:36:32 np0005603621 sleepy_tesla[345674]: 167 167
Jan 31 03:36:32 np0005603621 podman[345656]: 2026-01-31 08:36:32.03123368 +0000 UTC m=+0.019543680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:36:32 np0005603621 systemd[1]: libpod-dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886.scope: Deactivated successfully.
Jan 31 03:36:32 np0005603621 podman[345679]: 2026-01-31 08:36:32.166415257 +0000 UTC m=+0.026720448 container died dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4773759531e54308b6fe9c574abd39c50b0704967946b79f2ac1552d67f29c13-merged.mount: Deactivated successfully.
Jan 31 03:36:32 np0005603621 podman[345679]: 2026-01-31 08:36:32.213921063 +0000 UTC m=+0.074226244 container remove dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:36:32 np0005603621 systemd[1]: libpod-conmon-dae313dbc411731ca9ab23863de9533a8f7fc470240035014835d8903b1af886.scope: Deactivated successfully.
Jan 31 03:36:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:36:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:36:32 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:36:32 np0005603621 podman[345701]: 2026-01-31 08:36:32.3643044 +0000 UTC m=+0.040816194 container create ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:36:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 442 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 31 03:36:32 np0005603621 systemd[1]: Started libpod-conmon-ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de.scope.
Jan 31 03:36:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4249fcd0fbf5810f179654cfef8ce6a4265364ac91b12bc3c804bf7796df3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4249fcd0fbf5810f179654cfef8ce6a4265364ac91b12bc3c804bf7796df3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4249fcd0fbf5810f179654cfef8ce6a4265364ac91b12bc3c804bf7796df3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4249fcd0fbf5810f179654cfef8ce6a4265364ac91b12bc3c804bf7796df3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d4249fcd0fbf5810f179654cfef8ce6a4265364ac91b12bc3c804bf7796df3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:32 np0005603621 podman[345701]: 2026-01-31 08:36:32.348930793 +0000 UTC m=+0.025442507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:36:32 np0005603621 podman[345701]: 2026-01-31 08:36:32.450423641 +0000 UTC m=+0.126935345 container init ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:36:32 np0005603621 podman[345701]: 2026-01-31 08:36:32.455991658 +0000 UTC m=+0.132503342 container start ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:36:32 np0005603621 podman[345701]: 2026-01-31 08:36:32.462795213 +0000 UTC m=+0.139306917 container attach ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:36:33 np0005603621 determined_goldwasser[345717]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:36:33 np0005603621 determined_goldwasser[345717]: --> relative data size: 1.0
Jan 31 03:36:33 np0005603621 determined_goldwasser[345717]: --> All data devices are unavailable
Jan 31 03:36:33 np0005603621 systemd[1]: libpod-ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de.scope: Deactivated successfully.
Jan 31 03:36:33 np0005603621 podman[345701]: 2026-01-31 08:36:33.264876283 +0000 UTC m=+0.941387997 container died ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:36:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:33.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:33 np0005603621 nova_compute[247399]: 2026-01-31 08:36:33.394 247403 DEBUG nova.virt.libvirt.imagebackend [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 31 03:36:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:33.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-98d4249fcd0fbf5810f179654cfef8ce6a4265364ac91b12bc3c804bf7796df3-merged.mount: Deactivated successfully.
Jan 31 03:36:33 np0005603621 nova_compute[247399]: 2026-01-31 08:36:33.631 247403 DEBUG nova.storage.rbd_utils [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] creating snapshot(24eaf1c5565f4cce92702791603eec86) on rbd image(c95caf87-5069-4b70-9023-d3c2d911e87d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:36:33 np0005603621 podman[345701]: 2026-01-31 08:36:33.962013896 +0000 UTC m=+1.638525590 container remove ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_goldwasser, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:36:33 np0005603621 systemd[1]: libpod-conmon-ee346a84decd6fd94f3eb63f13440e77d0281c94945379343d212c79760f88de.scope: Deactivated successfully.
Jan 31 03:36:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 550 KiB/s wr, 168 op/s
Jan 31 03:36:34 np0005603621 podman[345936]: 2026-01-31 08:36:34.514118159 +0000 UTC m=+0.104518824 container create d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:34 np0005603621 podman[345936]: 2026-01-31 08:36:34.427463532 +0000 UTC m=+0.017864197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:36:34 np0005603621 systemd[1]: Started libpod-conmon-d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697.scope.
Jan 31 03:36:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Jan 31 03:36:34 np0005603621 podman[345936]: 2026-01-31 08:36:34.850623578 +0000 UTC m=+0.441024263 container init d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:36:34 np0005603621 podman[345936]: 2026-01-31 08:36:34.857058822 +0000 UTC m=+0.447459487 container start d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jennings, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:36:34 np0005603621 cool_jennings[345953]: 167 167
Jan 31 03:36:34 np0005603621 systemd[1]: libpod-d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697.scope: Deactivated successfully.
Jan 31 03:36:34 np0005603621 nova_compute[247399]: 2026-01-31 08:36:34.941 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:35 np0005603621 podman[345936]: 2026-01-31 08:36:35.017889611 +0000 UTC m=+0.608290276 container attach d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:36:35 np0005603621 podman[345936]: 2026-01-31 08:36:35.018233272 +0000 UTC m=+0.608633937 container died d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:36:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Jan 31 03:36:35 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Jan 31 03:36:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:35.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d5f94b3ed62eb2dc03df18e6a7e9fc4e67c3074a8f907f13adb357761bb6c0a7-merged.mount: Deactivated successfully.
Jan 31 03:36:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:35.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:35 np0005603621 podman[345936]: 2026-01-31 08:36:35.689138853 +0000 UTC m=+1.279539508 container remove d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:36:35 np0005603621 systemd[1]: libpod-conmon-d7cc6f53f4e2139694f6204d20ae4bc651c3f113d0bc8d68ee537b09e3f4a697.scope: Deactivated successfully.
Jan 31 03:36:35 np0005603621 podman[345978]: 2026-01-31 08:36:35.800600517 +0000 UTC m=+0.020360997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:36:35 np0005603621 podman[345978]: 2026-01-31 08:36:35.985239491 +0000 UTC m=+0.204999951 container create 75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 03:36:36 np0005603621 systemd[1]: Started libpod-conmon-75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846.scope.
Jan 31 03:36:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fa2826715e1e66fb0f90c68dc26db347497ea00ba73e4d7e5908a9f6ce0f24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fa2826715e1e66fb0f90c68dc26db347497ea00ba73e4d7e5908a9f6ce0f24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fa2826715e1e66fb0f90c68dc26db347497ea00ba73e4d7e5908a9f6ce0f24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61fa2826715e1e66fb0f90c68dc26db347497ea00ba73e4d7e5908a9f6ce0f24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:36 np0005603621 podman[345978]: 2026-01-31 08:36:36.266567689 +0000 UTC m=+0.486328169 container init 75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:36:36 np0005603621 podman[345978]: 2026-01-31 08:36:36.273356075 +0000 UTC m=+0.493116535 container start 75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:36:36 np0005603621 podman[345978]: 2026-01-31 08:36:36.278509848 +0000 UTC m=+0.498270388 container attach 75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:36:36 np0005603621 nova_compute[247399]: 2026-01-31 08:36:36.284 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:36 np0005603621 nova_compute[247399]: 2026-01-31 08:36:36.341 247403 DEBUG nova.storage.rbd_utils [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] cloning vms/c95caf87-5069-4b70-9023-d3c2d911e87d_disk@24eaf1c5565f4cce92702791603eec86 to images/287f9894-83db-40a1-9a7f-ba7bbf17dc1d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:36:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 428 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 542 KiB/s rd, 131 KiB/s wr, 76 op/s
Jan 31 03:36:36 np0005603621 nova_compute[247399]: 2026-01-31 08:36:36.739 247403 DEBUG nova.storage.rbd_utils [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] flattening images/287f9894-83db-40a1-9a7f-ba7bbf17dc1d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]: {
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:    "0": [
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:        {
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "devices": [
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "/dev/loop3"
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            ],
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "lv_name": "ceph_lv0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "lv_size": "7511998464",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "name": "ceph_lv0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "tags": {
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.cluster_name": "ceph",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.crush_device_class": "",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.encrypted": "0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.osd_id": "0",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.type": "block",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:                "ceph.vdo": "0"
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            },
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "type": "block",
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:            "vg_name": "ceph_vg0"
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:        }
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]:    ]
Jan 31 03:36:37 np0005603621 flamboyant_visvesvaraya[345996]: }
Jan 31 03:36:37 np0005603621 systemd[1]: libpod-75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846.scope: Deactivated successfully.
Jan 31 03:36:37 np0005603621 conmon[345996]: conmon 75d5623f535064ad7da5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846.scope/container/memory.events
Jan 31 03:36:37 np0005603621 podman[345978]: 2026-01-31 08:36:37.164272611 +0000 UTC m=+1.384033071 container died 75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:37 np0005603621 nova_compute[247399]: 2026-01-31 08:36:37.202 247403 DEBUG nova.storage.rbd_utils [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] removing snapshot(24eaf1c5565f4cce92702791603eec86) on rbd image(c95caf87-5069-4b70-9023-d3c2d911e87d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:36:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-61fa2826715e1e66fb0f90c68dc26db347497ea00ba73e4d7e5908a9f6ce0f24-merged.mount: Deactivated successfully.
Jan 31 03:36:37 np0005603621 podman[345978]: 2026-01-31 08:36:37.234321272 +0000 UTC m=+1.454081722 container remove 75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_visvesvaraya, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:36:37 np0005603621 systemd[1]: libpod-conmon-75d5623f535064ad7da5ab1238ad18a8e4cc92640abad5da6166e36ef3e0f846.scope: Deactivated successfully.
Jan 31 03:36:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:37.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:37Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:36:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:37Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.720110024 +0000 UTC m=+0.037974125 container create 690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:36:37 np0005603621 systemd[1]: Started libpod-conmon-690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e.scope.
Jan 31 03:36:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.702860787 +0000 UTC m=+0.020724908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.803170118 +0000 UTC m=+0.121034239 container init 690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.808470806 +0000 UTC m=+0.126334897 container start 690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.811678978 +0000 UTC m=+0.129543089 container attach 690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:36:37 np0005603621 determined_buck[346247]: 167 167
Jan 31 03:36:37 np0005603621 systemd[1]: libpod-690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e.scope: Deactivated successfully.
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.814025662 +0000 UTC m=+0.131889763 container died 690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:36:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e767eb6fec4a0d60ea917dbbc64ca85fc199890eb8ae4dacdb10b0561f5440ed-merged.mount: Deactivated successfully.
Jan 31 03:36:37 np0005603621 podman[346231]: 2026-01-31 08:36:37.848829055 +0000 UTC m=+0.166693156 container remove 690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:37 np0005603621 systemd[1]: libpod-conmon-690f9df307a1c62d40a4cdcec41ee97925e6e3bb51920bb204b72a0076f3259e.scope: Deactivated successfully.
Jan 31 03:36:37 np0005603621 podman[346271]: 2026-01-31 08:36:37.985430575 +0000 UTC m=+0.039343667 container create 66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_taussig, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:36:38 np0005603621 systemd[1]: Started libpod-conmon-66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1.scope.
Jan 31 03:36:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:36:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ae1b5eb5159e6119b8dc4477000c4dae056bb8ec6ab966955fc4658ff43a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ae1b5eb5159e6119b8dc4477000c4dae056bb8ec6ab966955fc4658ff43a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ae1b5eb5159e6119b8dc4477000c4dae056bb8ec6ab966955fc4658ff43a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200ae1b5eb5159e6119b8dc4477000c4dae056bb8ec6ab966955fc4658ff43a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:36:38 np0005603621 podman[346271]: 2026-01-31 08:36:38.053425061 +0000 UTC m=+0.107338173 container init 66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:36:38 np0005603621 podman[346271]: 2026-01-31 08:36:38.060534777 +0000 UTC m=+0.114447869 container start 66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_taussig, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:36:38 np0005603621 podman[346271]: 2026-01-31 08:36:37.967934961 +0000 UTC m=+0.021848083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:36:38 np0005603621 podman[346271]: 2026-01-31 08:36:38.064841143 +0000 UTC m=+0.118754235 container attach 66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_taussig, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Jan 31 03:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Jan 31 03:36:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Jan 31 03:36:38 np0005603621 nova_compute[247399]: 2026-01-31 08:36:38.278 247403 DEBUG nova.storage.rbd_utils [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] creating snapshot(snap) on rbd image(287f9894-83db-40a1-9a7f-ba7bbf17dc1d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 842 KiB/s rd, 2.2 MiB/s wr, 161 op/s
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:36:38
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups']
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]: {
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:        "osd_id": 0,
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:        "type": "bluestore"
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]:    }
Jan 31 03:36:38 np0005603621 flamboyant_taussig[346288]: }
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:36:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:36:38 np0005603621 systemd[1]: libpod-66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1.scope: Deactivated successfully.
Jan 31 03:36:38 np0005603621 conmon[346288]: conmon 66370159872f9772713b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1.scope/container/memory.events
Jan 31 03:36:38 np0005603621 podman[346271]: 2026-01-31 08:36:38.976665222 +0000 UTC m=+1.030578304 container died 66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:36:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-200ae1b5eb5159e6119b8dc4477000c4dae056bb8ec6ab966955fc4658ff43a8-merged.mount: Deactivated successfully.
Jan 31 03:36:39 np0005603621 podman[346271]: 2026-01-31 08:36:39.020333627 +0000 UTC m=+1.074246719 container remove 66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_taussig, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:36:39 np0005603621 systemd[1]: libpod-conmon-66370159872f9772713be328532378a1a20ec62b4cce81540646d5c348cb9cf1.scope: Deactivated successfully.
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:36:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2e41c64b-d641-4259-9d1b-595f2b4c92cc does not exist
Jan 31 03:36:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 602f9e33-a62d-4f33-a1b3-1be9bac5df91 does not exist
Jan 31 03:36:39 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4df969f2-1b31-4919-869e-cb2d5dcc3fed does not exist
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:36:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Jan 31 03:36:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:36:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:36:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:39 np0005603621 nova_compute[247399]: 2026-01-31 08:36:39.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.9 MiB/s wr, 160 op/s
Jan 31 03:36:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:41 np0005603621 nova_compute[247399]: 2026-01-31 08:36:41.288 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:41.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:41.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 546 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 9.3 MiB/s wr, 234 op/s
Jan 31 03:36:42 np0005603621 nova_compute[247399]: 2026-01-31 08:36:42.934 247403 INFO nova.virt.libvirt.driver [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Snapshot image upload complete#033[00m
Jan 31 03:36:42 np0005603621 nova_compute[247399]: 2026-01-31 08:36:42.935 247403 INFO nova.compute.manager [None req-27e82e5d-b642-44c1-b3bd-a1450faa40cb b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Took 14.31 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:36:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:43.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 8.4 MiB/s wr, 270 op/s
Jan 31 03:36:44 np0005603621 nova_compute[247399]: 2026-01-31 08:36:44.945 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Jan 31 03:36:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:45.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:46 np0005603621 nova_compute[247399]: 2026-01-31 08:36:46.291 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 235 op/s
Jan 31 03:36:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Jan 31 03:36:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Jan 31 03:36:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:47.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:47.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:47 np0005603621 podman[346397]: 2026-01-31 08:36:47.498795802 +0000 UTC m=+0.049769388 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:36:47 np0005603621 podman[346398]: 2026-01-31 08:36:47.523906538 +0000 UTC m=+0.075823025 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:36:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.6 MiB/s wr, 210 op/s
Jan 31 03:36:49 np0005603621 nova_compute[247399]: 2026-01-31 08:36:49.284 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:49.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007504064012190582 of space, bias 1.0, pg target 2.2512192036571745 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.644751535455053 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:36:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:36:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:49.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:49 np0005603621 nova_compute[247399]: 2026-01-31 08:36:49.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.1 MiB/s wr, 191 op/s
Jan 31 03:36:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:51 np0005603621 nova_compute[247399]: 2026-01-31 08:36:51.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:51.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:51 np0005603621 nova_compute[247399]: 2026-01-31 08:36:51.526 247403 INFO nova.compute.manager [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Rescuing#033[00m
Jan 31 03:36:51 np0005603621 nova_compute[247399]: 2026-01-31 08:36:51.527 247403 DEBUG oslo_concurrency.lockutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:51 np0005603621 nova_compute[247399]: 2026-01-31 08:36:51.527 247403 DEBUG oslo_concurrency.lockutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:51 np0005603621 nova_compute[247399]: 2026-01-31 08:36:51.527 247403 DEBUG nova.network.neutron [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:36:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 60 KiB/s wr, 120 op/s
Jan 31 03:36:53 np0005603621 nova_compute[247399]: 2026-01-31 08:36:53.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:53 np0005603621 nova_compute[247399]: 2026-01-31 08:36:53.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:36:53 np0005603621 nova_compute[247399]: 2026-01-31 08:36:53.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:36:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:53.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:53.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:53 np0005603621 nova_compute[247399]: 2026-01-31 08:36:53.501 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:36:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 536 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 15 KiB/s wr, 70 op/s
Jan 31 03:36:54 np0005603621 nova_compute[247399]: 2026-01-31 08:36:54.947 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:55 np0005603621 nova_compute[247399]: 2026-01-31 08:36:55.389 247403 DEBUG nova.network.neutron [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:55.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:36:55 np0005603621 nova_compute[247399]: 2026-01-31 08:36:55.862 247403 DEBUG oslo_concurrency.lockutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:55 np0005603621 nova_compute[247399]: 2026-01-31 08:36:55.865 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:36:55 np0005603621 nova_compute[247399]: 2026-01-31 08:36:55.866 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:36:55 np0005603621 nova_compute[247399]: 2026-01-31 08:36:55.866 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:36:56 np0005603621 nova_compute[247399]: 2026-01-31 08:36:56.296 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.1 KiB/s wr, 31 op/s
Jan 31 03:36:57 np0005603621 nova_compute[247399]: 2026-01-31 08:36:57.393 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:36:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:57.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 9.7 KiB/s wr, 37 op/s
Jan 31 03:36:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:36:59.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:36:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:36:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:36:59.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:36:59 np0005603621 kernel: tap509c791a-f0 (unregistering): left promiscuous mode
Jan 31 03:36:59 np0005603621 NetworkManager[49013]: <info>  [1769848619.7176] device (tap509c791a-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.723 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:59Z|00564|binding|INFO|Releasing lport 509c791a-f0a2-4105-a992-82e720b801e8 from this chassis (sb_readonly=0)
Jan 31 03:36:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:59Z|00565|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 down in Southbound
Jan 31 03:36:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:36:59Z|00566|binding|INFO|Removing iface tap509c791a-f0 ovn-installed in OVS
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.725 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.776 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:36:59 np0005603621 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 31 03:36:59 np0005603621 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000090.scope: Consumed 13.537s CPU time.
Jan 31 03:36:59 np0005603621 systemd-machined[212769]: Machine qemu-68-instance-00000090 terminated.
Jan 31 03:36:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:59.790 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:93:70 10.100.0.7'], port_security=['fa:16:3e:0a:93:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c95caf87-5069-4b70-9023-d3c2d911e87d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1c240f5-10ef-43c0-92c2-4688e636b197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=509c791a-f0a2-4105-a992-82e720b801e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:36:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:59.791 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 509c791a-f0a2-4105-a992-82e720b801e8 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:36:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:59.792 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31da00d3-077b-4620-a7d3-68186467ab47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:36:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:59.794 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b8ff832-3249-4ba0-a374-a78c37721550]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:36:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:36:59.794 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 namespace which is not needed anymore#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.867 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.867 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.868 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.868 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.869 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.869 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.938 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:36:59 np0005603621 nova_compute[247399]: 2026-01-31 08:36:59.949 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:00 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [NOTICE]   (345321) : haproxy version is 2.8.14-c23fe91
Jan 31 03:37:00 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [NOTICE]   (345321) : path to executable is /usr/sbin/haproxy
Jan 31 03:37:00 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [WARNING]  (345321) : Exiting Master process...
Jan 31 03:37:00 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [ALERT]    (345321) : Current worker (345323) exited with code 143 (Terminated)
Jan 31 03:37:00 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[345317]: [WARNING]  (345321) : All workers exited. Exiting... (0)
Jan 31 03:37:00 np0005603621 systemd[1]: libpod-2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416.scope: Deactivated successfully.
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.048 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.049 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.049 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.050 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.050 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:00 np0005603621 podman[346521]: 2026-01-31 08:37:00.051021316 +0000 UTC m=+0.193620310 container died 2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:37:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416-userdata-shm.mount: Deactivated successfully.
Jan 31 03:37:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9df257af45daa6cf71d4aa63be01e94439ce714dd9d87622dc8cf688c7ac929f-merged.mount: Deactivated successfully.
Jan 31 03:37:00 np0005603621 podman[346521]: 2026-01-31 08:37:00.101578499 +0000 UTC m=+0.244177503 container cleanup 2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:00 np0005603621 systemd[1]: libpod-conmon-2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416.scope: Deactivated successfully.
Jan 31 03:37:00 np0005603621 podman[346565]: 2026-01-31 08:37:00.163397559 +0000 UTC m=+0.044764321 container remove 2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.170 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a2844c-2c4d-4817-9e17-81c5be257349]: (4, ('Sat Jan 31 08:36:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 (2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416)\n2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416\nSat Jan 31 08:37:00 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 (2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416)\n2a48ced3b09ea319690854de8843b67060c42cc42f61e7161648264ba9cd4416\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.171 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[975b30fa-c2cf-4340-8449-445a5ad49c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.172 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.175 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:00 np0005603621 kernel: tap31da00d3-00: left promiscuous mode
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.185 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.188 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4a58bb94-b12f-4221-9b5b-e9a31463a688]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.202 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c01f5247-cb99-430a-8332-461606bf0ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.203 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11a51903-c958-48ff-926b-c2776bf9f274]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.213 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfa64e0-af24-4e59-92f1-b971deff3ddf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 788842, 'reachable_time': 43238, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346603, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 systemd[1]: run-netns-ovnmeta\x2d31da00d3\x2d077b\x2d4620\x2da7d3\x2d68186467ab47.mount: Deactivated successfully.
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.216 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:37:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:00.217 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c08c2bbd-d043-44df-86b2-e878859d260c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.2 KiB/s wr, 35 op/s
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.408 247403 INFO nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.415 247403 INFO nova.virt.libvirt.driver [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance destroyed successfully.#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.415 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'numa_topology' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179661808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.463 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.561 247403 INFO nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Attempting a stable device rescue#033[00m
Jan 31 03:37:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.982 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:00 np0005603621 nova_compute[247399]: 2026-01-31 08:37:00.982 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:37:01 np0005603621 nova_compute[247399]: 2026-01-31 08:37:01.123 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:37:01 np0005603621 nova_compute[247399]: 2026-01-31 08:37:01.124 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4258MB free_disk=20.851696014404297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:37:01 np0005603621 nova_compute[247399]: 2026-01-31 08:37:01.124 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:01 np0005603621 nova_compute[247399]: 2026-01-31 08:37:01.124 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:01 np0005603621 nova_compute[247399]: 2026-01-31 08:37:01.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:01.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:01.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.014 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.018 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.018 247403 INFO nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Creating image(s)#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.043 247403 DEBUG nova.storage.rbd_utils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.047 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.137 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c95caf87-5069-4b70-9023-d3c2d911e87d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.137 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.138 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.253 247403 DEBUG nova.compute.manager [req-de8a34b7-f43d-40c2-8190-fba8de2eb0ab req-3de1e678-53d7-4d90-bd5c-2c33a1df419d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.253 247403 DEBUG oslo_concurrency.lockutils [req-de8a34b7-f43d-40c2-8190-fba8de2eb0ab req-3de1e678-53d7-4d90-bd5c-2c33a1df419d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.253 247403 DEBUG oslo_concurrency.lockutils [req-de8a34b7-f43d-40c2-8190-fba8de2eb0ab req-3de1e678-53d7-4d90-bd5c-2c33a1df419d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.254 247403 DEBUG oslo_concurrency.lockutils [req-de8a34b7-f43d-40c2-8190-fba8de2eb0ab req-3de1e678-53d7-4d90-bd5c-2c33a1df419d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.254 247403 DEBUG nova.compute.manager [req-de8a34b7-f43d-40c2-8190-fba8de2eb0ab req-3de1e678-53d7-4d90-bd5c-2c33a1df419d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.254 247403 WARNING nova.compute.manager [req-de8a34b7-f43d-40c2-8190-fba8de2eb0ab req-3de1e678-53d7-4d90-bd5c-2c33a1df419d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.306 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.384 247403 DEBUG nova.storage.rbd_utils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 52 op/s
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.409 247403 DEBUG nova.storage.rbd_utils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.413 247403 DEBUG oslo_concurrency.lockutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "0ea69616b7a92ca13b4e5df8feefec40e9c017f1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.414 247403 DEBUG oslo_concurrency.lockutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "0ea69616b7a92ca13b4e5df8feefec40e9c017f1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:37:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2936833007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.725 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.730 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:37:02 np0005603621 nova_compute[247399]: 2026-01-31 08:37:02.747 247403 DEBUG nova.virt.libvirt.imagebackend [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/287f9894-83db-40a1-9a7f-ba7bbf17dc1d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/287f9894-83db-40a1-9a7f-ba7bbf17dc1d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.162 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.169 247403 DEBUG nova.virt.libvirt.imagebackend [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Selected location: {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/287f9894-83db-40a1-9a7f-ba7bbf17dc1d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.169 247403 DEBUG nova.storage.rbd_utils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] cloning images/287f9894-83db-40a1-9a7f-ba7bbf17dc1d@snap to None/c95caf87-5069-4b70-9023-d3c2d911e87d_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:37:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:03.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.515 247403 DEBUG oslo_concurrency.lockutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "0ea69616b7a92ca13b4e5df8feefec40e9c017f1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.565 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.566 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.570 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'migration_context' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.640 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.644 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Start _get_guest_xml network_info=[{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "vif_mac": "fa:16:3e:0a:93:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '287f9894-83db-40a1-9a7f-ba7bbf17dc1d', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:37:03 np0005603621 nova_compute[247399]: 2026-01-31 08:37:03.645 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'resources' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.025 247403 WARNING nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.046 247403 DEBUG nova.virt.libvirt.host [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.047 247403 DEBUG nova.virt.libvirt.host [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.051 247403 DEBUG nova.virt.libvirt.host [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.052 247403 DEBUG nova.virt.libvirt.host [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.054 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.054 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.055 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.056 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.056 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.056 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.056 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.057 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.057 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.057 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.058 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.058 247403 DEBUG nova.virt.hardware [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.058 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.260 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 524 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.0 MiB/s wr, 51 op/s
Jan 31 03:37:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:37:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/924137943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.688 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.726 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:04 np0005603621 nova_compute[247399]: 2026-01-31 08:37:04.950 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.076 247403 DEBUG nova.compute.manager [req-8eb9a8ba-a092-467d-9dbf-794d1abd777d req-92496cf0-8caa-4afd-a2e2-77e9043485b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.077 247403 DEBUG oslo_concurrency.lockutils [req-8eb9a8ba-a092-467d-9dbf-794d1abd777d req-92496cf0-8caa-4afd-a2e2-77e9043485b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.077 247403 DEBUG oslo_concurrency.lockutils [req-8eb9a8ba-a092-467d-9dbf-794d1abd777d req-92496cf0-8caa-4afd-a2e2-77e9043485b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.077 247403 DEBUG oslo_concurrency.lockutils [req-8eb9a8ba-a092-467d-9dbf-794d1abd777d req-92496cf0-8caa-4afd-a2e2-77e9043485b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.078 247403 DEBUG nova.compute.manager [req-8eb9a8ba-a092-467d-9dbf-794d1abd777d req-92496cf0-8caa-4afd-a2e2-77e9043485b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.078 247403 WARNING nova.compute.manager [req-8eb9a8ba-a092-467d-9dbf-794d1abd777d req-92496cf0-8caa-4afd-a2e2-77e9043485b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:37:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:37:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039465511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.175 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.176 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:05.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:37:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249599314' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.594 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.596 247403 DEBUG nova.virt.libvirt.vif [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:36:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-268360674',display_name='tempest-ServerStableDeviceRescueTest-server-268360674',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-268360674',id=144,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:36:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-dmlghow8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:36:43Z,user_data=None,user_id='b6733330b634472ca8c21316f1ee5057',uuid=c95caf87-5069-4b70-9023-d3c2d911e87d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "vif_mac": "fa:16:3e:0a:93:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.596 247403 DEBUG nova.network.os_vif_util [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "vif_mac": "fa:16:3e:0a:93:70"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.597 247403 DEBUG nova.network.os_vif_util [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.598 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.752 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <uuid>c95caf87-5069-4b70-9023-d3c2d911e87d</uuid>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <name>instance-00000090</name>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-268360674</nova:name>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:37:04</nova:creationTime>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:user uuid="b6733330b634472ca8c21316f1ee5057">tempest-ServerStableDeviceRescueTest-319343227-project-member</nova:user>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:project uuid="1e29363ca464487b931af54fe14166b1">tempest-ServerStableDeviceRescueTest-319343227</nova:project>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <nova:port uuid="509c791a-f0a2-4105-a992-82e720b801e8">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <entry name="serial">c95caf87-5069-4b70-9023-d3c2d911e87d</entry>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <entry name="uuid">c95caf87-5069-4b70-9023-d3c2d911e87d</entry>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c95caf87-5069-4b70-9023-d3c2d911e87d_disk">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c95caf87-5069-4b70-9023-d3c2d911e87d_disk.rescue">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <boot order="1"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:0a:93:70"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <target dev="tap509c791a-f0"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/console.log" append="off"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:37:05 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:37:05 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:37:05 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:37:05 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.758 247403 INFO nova.virt.libvirt.driver [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance destroyed successfully.#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.896 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:05 np0005603621 nova_compute[247399]: 2026-01-31 08:37:05.897 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.187 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.187 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.375 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.375 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.376 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.376 247403 DEBUG nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No VIF found with MAC fa:16:3e:0a:93:70, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.376 247403 INFO nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Using config drive#033[00m
Jan 31 03:37:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.401 247403 DEBUG nova.storage.rbd_utils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.530 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:06 np0005603621 nova_compute[247399]: 2026-01-31 08:37:06.841 247403 DEBUG nova.objects.instance [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'keypairs' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:07.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:07.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:07 np0005603621 nova_compute[247399]: 2026-01-31 08:37:07.767 247403 INFO nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Creating config drive at /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config.rescue#033[00m
Jan 31 03:37:07 np0005603621 nova_compute[247399]: 2026-01-31 08:37:07.770 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp605l524c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:07 np0005603621 nova_compute[247399]: 2026-01-31 08:37:07.895 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp605l524c" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:07 np0005603621 nova_compute[247399]: 2026-01-31 08:37:07.923 247403 DEBUG nova.storage.rbd_utils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:37:07 np0005603621 nova_compute[247399]: 2026-01-31 08:37:07.927 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config.rescue c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.177 247403 DEBUG oslo_concurrency.processutils [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config.rescue c95caf87-5069-4b70-9023-d3c2d911e87d_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.178 247403 INFO nova.virt.libvirt.driver [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Deleting local config drive /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:08 np0005603621 kernel: tap509c791a-f0: entered promiscuous mode
Jan 31 03:37:08 np0005603621 NetworkManager[49013]: <info>  [1769848628.2180] manager: (tap509c791a-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/256)
Jan 31 03:37:08 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:08Z|00567|binding|INFO|Claiming lport 509c791a-f0a2-4105-a992-82e720b801e8 for this chassis.
Jan 31 03:37:08 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:08Z|00568|binding|INFO|509c791a-f0a2-4105-a992-82e720b801e8: Claiming fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:08Z|00569|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 ovn-installed in OVS
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 systemd-udevd[346929]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:37:08 np0005603621 systemd-machined[212769]: New machine qemu-69-instance-00000090.
Jan 31 03:37:08 np0005603621 NetworkManager[49013]: <info>  [1769848628.2490] device (tap509c791a-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:37:08 np0005603621 NetworkManager[49013]: <info>  [1769848628.2496] device (tap509c791a-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:37:08 np0005603621 systemd[1]: Started Virtual Machine qemu-69-instance-00000090.
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:08 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:08Z|00570|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 up in Southbound
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.546 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:93:70 10.100.0.7'], port_security=['fa:16:3e:0a:93:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c95caf87-5069-4b70-9023-d3c2d911e87d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b1c240f5-10ef-43c0-92c2-4688e636b197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=509c791a-f0a2-4105-a992-82e720b801e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.547 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 509c791a-f0a2-4105-a992-82e720b801e8 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 bound to our chassis#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.548 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.557 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[20a95b69-7d2c-41bd-808f-362bdadd7787]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.558 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31da00d3-01 in ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.560 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31da00d3-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.560 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8c45f163-0ccd-4be4-9f3c-aeb04565c335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.561 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ce63eb71-c169-4791-b52f-948d2c97b400]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.574 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[9113e85f-fa36-40cb-a479-0006ee296f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.585 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[31543a10-049a-4471-b0fe-4ed999afddf2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.609 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[209a701e-d463-4ecc-87ed-6162212f73c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 systemd-udevd[346931]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:37:08 np0005603621 NetworkManager[49013]: <info>  [1769848628.6143] manager: (tap31da00d3-00): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.614 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d93b281-1281-4b85-9eaf-575453aacc58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.638 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[170b328e-50db-4995-aa02-03e2927639d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.641 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f1ff4a-0d90-4f8e-9d88-7ca02fd52f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 NetworkManager[49013]: <info>  [1769848628.6584] device (tap31da00d3-00): carrier: link connected
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.663 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba65ca3-f17f-48e8-a58e-2462c76d898e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.676 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6893eec2-e9bc-4921-8188-4da1540c045f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793417, 'reachable_time': 24309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346996, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.687 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6c51b15a-2c24-425b-a512-31c455d39627]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:4f2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 793417, 'tstamp': 793417}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346999, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.702 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb33634-5dc1-456a-a739-9b6077e78f01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793417, 'reachable_time': 24309, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347000, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.723 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2060a7-553e-454a-8a1c-0bed30608497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.761 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6617bda4-08fa-4dfa-9740-53e64417e7f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.763 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.763 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.764 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.765 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 NetworkManager[49013]: <info>  [1769848628.7662] manager: (tap31da00d3-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Jan 31 03:37:08 np0005603621 kernel: tap31da00d3-00: entered promiscuous mode
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.767 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.769 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:08 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:08Z|00571|binding|INFO|Releasing lport 54969bc0-ee8d-420c-ac0c-dd4f9410e42c from this chassis (sb_readonly=0)
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.774 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.775 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.776 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7a796b91-043d-4937-b308-d6c8e3a69421]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.777 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-31da00d3-077b-4620-a7d3-68186467ab47
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 31da00d3-077b-4620-a7d3-68186467ab47
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:37:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:08.777 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'env', 'PROCESS_TAG=haproxy-31da00d3-077b-4620-a7d3-68186467ab47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31da00d3-077b-4620-a7d3-68186467ab47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.917 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for c95caf87-5069-4b70-9023-d3c2d911e87d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.918 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848628.9165442, c95caf87-5069-4b70-9023-d3c2d911e87d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.918 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.922 247403 DEBUG nova.compute.manager [None req-39211525-ce03-4e3f-9d3b-bc479af64ea6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.984 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:08 np0005603621 nova_compute[247399]: 2026-01-31 08:37:08.988 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:37:09 np0005603621 nova_compute[247399]: 2026-01-31 08:37:09.047 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 31 03:37:09 np0005603621 nova_compute[247399]: 2026-01-31 08:37:09.048 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848628.9167745, c95caf87-5069-4b70-9023-d3c2d911e87d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:09 np0005603621 nova_compute[247399]: 2026-01-31 08:37:09.048 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Started (Lifecycle Event)#033[00m
Jan 31 03:37:09 np0005603621 nova_compute[247399]: 2026-01-31 08:37:09.076 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:09 np0005603621 nova_compute[247399]: 2026-01-31 08:37:09.080 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:37:09 np0005603621 podman[347056]: 2026-01-31 08:37:09.064771132 +0000 UTC m=+0.022610568 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:37:09 np0005603621 podman[347056]: 2026-01-31 08:37:09.288326829 +0000 UTC m=+0.246166245 container create 2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:37:09 np0005603621 systemd[1]: Started libpod-conmon-2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557.scope.
Jan 31 03:37:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dd7657facb48e38a1115843c5182ab90c2473f7b28f8996e9e971a441d152b0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:09.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:09 np0005603621 podman[347056]: 2026-01-31 08:37:09.441974631 +0000 UTC m=+0.399814047 container init 2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:37:09 np0005603621 podman[347056]: 2026-01-31 08:37:09.446516215 +0000 UTC m=+0.404355631 container start 2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:37:09 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [NOTICE]   (347075) : New worker (347077) forked
Jan 31 03:37:09 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [NOTICE]   (347075) : Loading success.
Jan 31 03:37:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:09 np0005603621 nova_compute[247399]: 2026-01-31 08:37:09.953 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 03:37:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:11 np0005603621 nova_compute[247399]: 2026-01-31 08:37:11.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:11.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:11.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 31 03:37:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:13.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:13 np0005603621 nova_compute[247399]: 2026-01-31 08:37:13.483 247403 DEBUG nova.compute.manager [req-93319014-f88c-4fed-861c-b30f192c97a1 req-1a66c411-4e33-427a-90fe-bb1248a0a050 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:13 np0005603621 nova_compute[247399]: 2026-01-31 08:37:13.484 247403 DEBUG oslo_concurrency.lockutils [req-93319014-f88c-4fed-861c-b30f192c97a1 req-1a66c411-4e33-427a-90fe-bb1248a0a050 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:13 np0005603621 nova_compute[247399]: 2026-01-31 08:37:13.485 247403 DEBUG oslo_concurrency.lockutils [req-93319014-f88c-4fed-861c-b30f192c97a1 req-1a66c411-4e33-427a-90fe-bb1248a0a050 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:13 np0005603621 nova_compute[247399]: 2026-01-31 08:37:13.485 247403 DEBUG oslo_concurrency.lockutils [req-93319014-f88c-4fed-861c-b30f192c97a1 req-1a66c411-4e33-427a-90fe-bb1248a0a050 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:13 np0005603621 nova_compute[247399]: 2026-01-31 08:37:13.485 247403 DEBUG nova.compute.manager [req-93319014-f88c-4fed-861c-b30f192c97a1 req-1a66c411-4e33-427a-90fe-bb1248a0a050 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:13 np0005603621 nova_compute[247399]: 2026-01-31 08:37:13.485 247403 WARNING nova.compute.manager [req-93319014-f88c-4fed-861c-b30f192c97a1 req-1a66c411-4e33-427a-90fe-bb1248a0a050 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:37:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:13.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 31 03:37:14 np0005603621 nova_compute[247399]: 2026-01-31 08:37:14.955 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:15.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:15.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:16 np0005603621 nova_compute[247399]: 2026-01-31 08:37:16.350 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 819 KiB/s wr, 115 op/s
Jan 31 03:37:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:17.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:17.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Jan 31 03:37:18 np0005603621 podman[347141]: 2026-01-31 08:37:18.485547133 +0000 UTC m=+0.044831902 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:37:18 np0005603621 podman[347142]: 2026-01-31 08:37:18.511537358 +0000 UTC m=+0.070999253 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.004 247403 DEBUG nova.compute.manager [req-f35d21c8-e523-4182-ab2e-cde28d47ed98 req-47b3ff26-8b68-4287-8472-0ab51437192b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.005 247403 DEBUG oslo_concurrency.lockutils [req-f35d21c8-e523-4182-ab2e-cde28d47ed98 req-47b3ff26-8b68-4287-8472-0ab51437192b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.005 247403 DEBUG oslo_concurrency.lockutils [req-f35d21c8-e523-4182-ab2e-cde28d47ed98 req-47b3ff26-8b68-4287-8472-0ab51437192b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.005 247403 DEBUG oslo_concurrency.lockutils [req-f35d21c8-e523-4182-ab2e-cde28d47ed98 req-47b3ff26-8b68-4287-8472-0ab51437192b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.006 247403 DEBUG nova.compute.manager [req-f35d21c8-e523-4182-ab2e-cde28d47ed98 req-47b3ff26-8b68-4287-8472-0ab51437192b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.006 247403 WARNING nova.compute.manager [req-f35d21c8-e523-4182-ab2e-cde28d47ed98 req-47b3ff26-8b68-4287-8472-0ab51437192b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.069 247403 INFO nova.compute.manager [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Unrescuing#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.069 247403 DEBUG oslo_concurrency.lockutils [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.069 247403 DEBUG oslo_concurrency.lockutils [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.070 247403 DEBUG nova.network.neutron [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:37:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:19.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:19.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:19 np0005603621 nova_compute[247399]: 2026-01-31 08:37:19.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 79 op/s
Jan 31 03:37:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:21 np0005603621 nova_compute[247399]: 2026-01-31 08:37:21.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:21.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:21.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:22 np0005603621 nova_compute[247399]: 2026-01-31 08:37:22.128 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:22.128 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:22.129 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:37:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 589 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 118 op/s
Jan 31 03:37:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:23Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:37:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:23Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:37:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:23.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:23.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:23 np0005603621 nova_compute[247399]: 2026-01-31 08:37:23.574 247403 DEBUG nova.network.neutron [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:37:23 np0005603621 nova_compute[247399]: 2026-01-31 08:37:23.645 247403 DEBUG oslo_concurrency.lockutils [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:37:23 np0005603621 nova_compute[247399]: 2026-01-31 08:37:23.646 247403 DEBUG nova.objects.instance [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'flavor' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:24 np0005603621 kernel: tap509c791a-f0 (unregistering): left promiscuous mode
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 NetworkManager[49013]: <info>  [1769848644.0481] device (tap509c791a-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.054 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00572|binding|INFO|Releasing lport 509c791a-f0a2-4105-a992-82e720b801e8 from this chassis (sb_readonly=0)
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00573|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 down in Southbound
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00574|binding|INFO|Removing iface tap509c791a-f0 ovn-installed in OVS
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 31 03:37:24 np0005603621 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000090.scope: Consumed 12.686s CPU time.
Jan 31 03:37:24 np0005603621 systemd-machined[212769]: Machine qemu-69-instance-00000090 terminated.
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.117 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:93:70 10.100.0.7'], port_security=['fa:16:3e:0a:93:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c95caf87-5069-4b70-9023-d3c2d911e87d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b1c240f5-10ef-43c0-92c2-4688e636b197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=509c791a-f0a2-4105-a992-82e720b801e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.118 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 509c791a-f0a2-4105-a992-82e720b801e8 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.120 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31da00d3-077b-4620-a7d3-68186467ab47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.121 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[27bc5cb1-93bd-4c50-992f-79fbed7389e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.121 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 namespace which is not needed anymore#033[00m
Jan 31 03:37:24 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [NOTICE]   (347075) : haproxy version is 2.8.14-c23fe91
Jan 31 03:37:24 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [NOTICE]   (347075) : path to executable is /usr/sbin/haproxy
Jan 31 03:37:24 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [WARNING]  (347075) : Exiting Master process...
Jan 31 03:37:24 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [ALERT]    (347075) : Current worker (347077) exited with code 143 (Terminated)
Jan 31 03:37:24 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347071]: [WARNING]  (347075) : All workers exited. Exiting... (0)
Jan 31 03:37:24 np0005603621 systemd[1]: libpod-2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557.scope: Deactivated successfully.
Jan 31 03:37:24 np0005603621 podman[347211]: 2026-01-31 08:37:24.265549775 +0000 UTC m=+0.083239070 container died 2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.309 247403 INFO nova.virt.libvirt.driver [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance destroyed successfully.#033[00m
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.310 247403 DEBUG nova.objects.instance [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'numa_topology' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:37:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557-userdata-shm.mount: Deactivated successfully.
Jan 31 03:37:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2dd7657facb48e38a1115843c5182ab90c2473f7b28f8996e9e971a441d152b0-merged.mount: Deactivated successfully.
Jan 31 03:37:24 np0005603621 podman[347211]: 2026-01-31 08:37:24.391767677 +0000 UTC m=+0.209456972 container cleanup 2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:37:24 np0005603621 systemd[1]: libpod-conmon-2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557.scope: Deactivated successfully.
Jan 31 03:37:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 589 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 39 op/s
Jan 31 03:37:24 np0005603621 kernel: tap509c791a-f0: entered promiscuous mode
Jan 31 03:37:24 np0005603621 systemd-udevd[347190]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:37:24 np0005603621 NetworkManager[49013]: <info>  [1769848644.6777] manager: (tap509c791a-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00575|binding|INFO|Claiming lport 509c791a-f0a2-4105-a992-82e720b801e8 for this chassis.
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00576|binding|INFO|509c791a-f0a2-4105-a992-82e720b801e8: Claiming fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:37:24 np0005603621 NetworkManager[49013]: <info>  [1769848644.6802] device (tap509c791a-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:37:24 np0005603621 NetworkManager[49013]: <info>  [1769848644.6807] device (tap509c791a-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.680 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00577|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 ovn-installed in OVS
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.687 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 systemd-machined[212769]: New machine qemu-70-instance-00000090.
Jan 31 03:37:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:24Z|00578|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 up in Southbound
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.714 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:93:70 10.100.0.7'], port_security=['fa:16:3e:0a:93:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c95caf87-5069-4b70-9023-d3c2d911e87d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b1c240f5-10ef-43c0-92c2-4688e636b197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=509c791a-f0a2-4105-a992-82e720b801e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:37:24 np0005603621 systemd[1]: Started Virtual Machine qemu-70-instance-00000090.
Jan 31 03:37:24 np0005603621 podman[347252]: 2026-01-31 08:37:24.739220743 +0000 UTC m=+0.331451559 container remove 2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.742 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8cad654e-bffe-4dc1-9213-2bb711ffa07c]: (4, ('Sat Jan 31 08:37:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 (2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557)\n2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557\nSat Jan 31 08:37:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 (2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557)\n2a58ed3ad4a0ff6a34c2b4fdadad5dbaf3a24a813436bb8cd248506e0043e557\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.743 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a63c5c-8188-46a3-b943-598ce4ab0900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.744 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.746 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 kernel: tap31da00d3-00: left promiscuous mode
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.751 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.753 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3c08e5c8-a2bb-4955-9bba-2adc185c5c06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.764 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a95aabb3-466b-4e98-a884-86405e67a83e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.765 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d73a61ea-99b2-4091-bdb7-1758b8b29232]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.776 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d15246c-1377-4677-afb3-30c14bd87ef1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 793412, 'reachable_time': 40571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347287, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 systemd[1]: run-netns-ovnmeta\x2d31da00d3\x2d077b\x2d4620\x2da7d3\x2d68186467ab47.mount: Deactivated successfully.
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.780 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.780 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f708ce96-da1f-4fc6-b9e1-70f6fb07e115]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.781 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 509c791a-f0a2-4105-a992-82e720b801e8 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.782 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.790 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1f509532-b29c-42a4-8291-d9a0ad6a9244]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.791 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap31da00d3-01 in ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.793 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap31da00d3-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.793 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a008a7-24e8-4845-95cf-459a8f44d357]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.794 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6065c3c1-a926-4af7-9c78-c553031b36d4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.802 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[58e987ba-84d3-48ea-af40-2d93a7a8d1bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.810 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[66210f03-3bda-47be-8cad-3c83163956b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.827 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b6520721-8e1c-4dcd-b7e1-cafa0251af99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.831 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9fed8910-0f1e-45b6-a324-bc9bd1f34fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 NetworkManager[49013]: <info>  [1769848644.8334] manager: (tap31da00d3-00): new Veth device (/org/freedesktop/NetworkManager/Devices/260)
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.857 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7d27b6c3-550e-4a07-b92b-d84f79ec9bf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.860 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a1354799-24bb-4bd3-8835-6f9b1fc20a6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 NetworkManager[49013]: <info>  [1769848644.8773] device (tap31da00d3-00): carrier: link connected
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.886 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[37872972-1d97-4cd4-a927-f13e17e68b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.904 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[de1926b3-b487-4e47-a1e1-150ba2271b68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347312, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.918 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[983f5939-e4bb-4dd5-8096-84c5b7185a76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:4f2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795039, 'tstamp': 795039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347313, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.934 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7e68af-9704-4686-aace-6fb109f0f697]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347314, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:24 np0005603621 nova_compute[247399]: 2026-01-31 08:37:24.958 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:24.964 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4b45f17d-6610-4ea7-a11a-375447a90634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.006 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5bea51e6-4427-4dc8-8253-0015539a1fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.008 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.009 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.009 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:25 np0005603621 NetworkManager[49013]: <info>  [1769848645.0121] manager: (tap31da00d3-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 31 03:37:25 np0005603621 kernel: tap31da00d3-00: entered promiscuous mode
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.016 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:25 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:25Z|00579|binding|INFO|Releasing lport 54969bc0-ee8d-420c-ac0c-dd4f9410e42c from this chassis (sb_readonly=0)
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.018 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.021 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.022 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.023 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[95fb2c19-3413-4078-9f61-84016517c2f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.024 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-31da00d3-077b-4620-a7d3-68186467ab47
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/31da00d3-077b-4620-a7d3-68186467ab47.pid.haproxy
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 31da00d3-077b-4620-a7d3-68186467ab47
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:37:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:25.024 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'env', 'PROCESS_TAG=haproxy-31da00d3-077b-4620-a7d3-68186467ab47', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/31da00d3-077b-4620-a7d3-68186467ab47.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.295 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for c95caf87-5069-4b70-9023-d3c2d911e87d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.295 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848645.2952201, c95caf87-5069-4b70-9023-d3c2d911e87d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.296 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.353 247403 DEBUG nova.compute.manager [req-4adcc71f-a4fc-4bfc-bb9e-2d8c50fe8a43 req-7ef1fe8b-4408-4e63-96c2-98d1c8ec600c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.353 247403 DEBUG oslo_concurrency.lockutils [req-4adcc71f-a4fc-4bfc-bb9e-2d8c50fe8a43 req-7ef1fe8b-4408-4e63-96c2-98d1c8ec600c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.353 247403 DEBUG oslo_concurrency.lockutils [req-4adcc71f-a4fc-4bfc-bb9e-2d8c50fe8a43 req-7ef1fe8b-4408-4e63-96c2-98d1c8ec600c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.354 247403 DEBUG oslo_concurrency.lockutils [req-4adcc71f-a4fc-4bfc-bb9e-2d8c50fe8a43 req-7ef1fe8b-4408-4e63-96c2-98d1c8ec600c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.354 247403 DEBUG nova.compute.manager [req-4adcc71f-a4fc-4bfc-bb9e-2d8c50fe8a43 req-7ef1fe8b-4408-4e63-96c2-98d1c8ec600c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.354 247403 WARNING nova.compute.manager [req-4adcc71f-a4fc-4bfc-bb9e-2d8c50fe8a43 req-7ef1fe8b-4408-4e63-96c2-98d1c8ec600c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.380 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.383 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:37:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:25.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:25 np0005603621 podman[347402]: 2026-01-31 08:37:25.344472432 +0000 UTC m=+0.021949876 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.442 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.442 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848645.2988524, c95caf87-5069-4b70-9023-d3c2d911e87d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.443 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Started (Lifecycle Event)#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.506 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.509 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:37:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:25.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:25 np0005603621 nova_compute[247399]: 2026-01-31 08:37:25.670 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:37:25 np0005603621 podman[347402]: 2026-01-31 08:37:25.710335302 +0000 UTC m=+0.387812736 container create 636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:37:25 np0005603621 systemd[1]: Started libpod-conmon-636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27.scope.
Jan 31 03:37:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/967c3c03f975694700c844b5f164ab769f79f3de449a753208c7cc1e50f96c15/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:25 np0005603621 podman[347402]: 2026-01-31 08:37:25.866885505 +0000 UTC m=+0.544362949 container init 636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:37:25 np0005603621 podman[347402]: 2026-01-31 08:37:25.871072098 +0000 UTC m=+0.548549522 container start 636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:37:25 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [NOTICE]   (347426) : New worker (347428) forked
Jan 31 03:37:25 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [NOTICE]   (347426) : Loading success.
Jan 31 03:37:26 np0005603621 nova_compute[247399]: 2026-01-31 08:37:26.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 611 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 91 op/s
Jan 31 03:37:27 np0005603621 nova_compute[247399]: 2026-01-31 08:37:27.330 247403 DEBUG nova.compute.manager [None req-3db8c1a6-021e-42cb-a6c7-2d4d4bb3e66f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:37:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:27.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 611 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.840 247403 DEBUG nova.compute.manager [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.841 247403 DEBUG oslo_concurrency.lockutils [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.841 247403 DEBUG oslo_concurrency.lockutils [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.842 247403 DEBUG oslo_concurrency.lockutils [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.842 247403 DEBUG nova.compute.manager [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.842 247403 WARNING nova.compute.manager [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.842 247403 DEBUG nova.compute.manager [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.843 247403 DEBUG oslo_concurrency.lockutils [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.843 247403 DEBUG oslo_concurrency.lockutils [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.843 247403 DEBUG oslo_concurrency.lockutils [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.843 247403 DEBUG nova.compute.manager [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:28 np0005603621 nova_compute[247399]: 2026-01-31 08:37:28.844 247403 WARNING nova.compute.manager [req-52cb22e0-4a6c-40b2-ae86-9b5870fb897e req-5747ccda-49b6-4d40-b240-e9562c4940cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:37:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:29.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:29 np0005603621 nova_compute[247399]: 2026-01-31 08:37:29.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 611 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Jan 31 03:37:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:30.519 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:30.520 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:30.520 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:37:31.131 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.358 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:31.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.933 247403 DEBUG nova.compute.manager [req-969f1d12-6f05-4b5f-87fd-6f1043674178 req-b6ee2745-43df-4d22-bb14-ffbbc4e7f928 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.934 247403 DEBUG oslo_concurrency.lockutils [req-969f1d12-6f05-4b5f-87fd-6f1043674178 req-b6ee2745-43df-4d22-bb14-ffbbc4e7f928 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.935 247403 DEBUG oslo_concurrency.lockutils [req-969f1d12-6f05-4b5f-87fd-6f1043674178 req-b6ee2745-43df-4d22-bb14-ffbbc4e7f928 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.935 247403 DEBUG oslo_concurrency.lockutils [req-969f1d12-6f05-4b5f-87fd-6f1043674178 req-b6ee2745-43df-4d22-bb14-ffbbc4e7f928 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.935 247403 DEBUG nova.compute.manager [req-969f1d12-6f05-4b5f-87fd-6f1043674178 req-b6ee2745-43df-4d22-bb14-ffbbc4e7f928 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:37:31 np0005603621 nova_compute[247399]: 2026-01-31 08:37:31.935 247403 WARNING nova.compute.manager [req-969f1d12-6f05-4b5f-87fd-6f1043674178 req-b6ee2745-43df-4d22-bb14-ffbbc4e7f928 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:37:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 183 op/s
Jan 31 03:37:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:33.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 988 KiB/s wr, 144 op/s
Jan 31 03:37:34 np0005603621 nova_compute[247399]: 2026-01-31 08:37:34.963 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:35.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:35.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:36 np0005603621 nova_compute[247399]: 2026-01-31 08:37:36.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 988 KiB/s wr, 148 op/s
Jan 31 03:37:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:37.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 70 KiB/s wr, 108 op/s
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:37:38
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', 'backups', 'vms', 'volumes']
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:37:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:37:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:37:39Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:93:70 10.100.0.7
Jan 31 03:37:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:39.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:39 np0005603621 nova_compute[247399]: 2026-01-31 08:37:39.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:37:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cc4db574-be83-4139-bafa-7aef4e32f8f6 does not exist
Jan 31 03:37:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 54941bcc-add2-498b-a0d9-41a4063fc199 does not exist
Jan 31 03:37:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6c7a90dc-ac68-41ae-97f8-c600ef6bddeb does not exist
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:37:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 24 KiB/s wr, 82 op/s
Jan 31 03:37:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:40 np0005603621 podman[347769]: 2026-01-31 08:37:40.730039395 +0000 UTC m=+0.102369426 container create 2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:37:40 np0005603621 podman[347769]: 2026-01-31 08:37:40.646192686 +0000 UTC m=+0.018522747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:37:40 np0005603621 systemd[1]: Started libpod-conmon-2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a.scope.
Jan 31 03:37:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:40 np0005603621 podman[347769]: 2026-01-31 08:37:40.824250182 +0000 UTC m=+0.196580233 container init 2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:37:40 np0005603621 podman[347769]: 2026-01-31 08:37:40.829721775 +0000 UTC m=+0.202051806 container start 2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:37:40 np0005603621 pensive_margulis[347785]: 167 167
Jan 31 03:37:40 np0005603621 systemd[1]: libpod-2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a.scope: Deactivated successfully.
Jan 31 03:37:40 np0005603621 conmon[347785]: conmon 2e350dedb7dc5a396504 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a.scope/container/memory.events
Jan 31 03:37:40 np0005603621 podman[347769]: 2026-01-31 08:37:40.854221412 +0000 UTC m=+0.226551443 container attach 2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:37:40 np0005603621 podman[347769]: 2026-01-31 08:37:40.855725689 +0000 UTC m=+0.228055710 container died 2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:37:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6300a1d3e41eadb4eb9380641f2d54d0b605e2578150f5ab50907a4f80ea0c35-merged.mount: Deactivated successfully.
Jan 31 03:37:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:37:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:37:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:37:41 np0005603621 podman[347769]: 2026-01-31 08:37:41.222400324 +0000 UTC m=+0.594730355 container remove 2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_margulis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:41 np0005603621 systemd[1]: libpod-conmon-2e350dedb7dc5a396504f3b0b0243a2ec01b58f7b85825f0acf5917a97ba189a.scope: Deactivated successfully.
Jan 31 03:37:41 np0005603621 nova_compute[247399]: 2026-01-31 08:37:41.364 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:41 np0005603621 podman[347811]: 2026-01-31 08:37:41.397222798 +0000 UTC m=+0.087823046 container create 56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:37:41 np0005603621 podman[347811]: 2026-01-31 08:37:41.328276882 +0000 UTC m=+0.018877150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:37:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:41.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:41 np0005603621 systemd[1]: Started libpod-conmon-56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c.scope.
Jan 31 03:37:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbb693b96cb2f197385925c03ee3f0fc38f4660e9b11bb9a984de606d44f6c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbb693b96cb2f197385925c03ee3f0fc38f4660e9b11bb9a984de606d44f6c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbb693b96cb2f197385925c03ee3f0fc38f4660e9b11bb9a984de606d44f6c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbb693b96cb2f197385925c03ee3f0fc38f4660e9b11bb9a984de606d44f6c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbb693b96cb2f197385925c03ee3f0fc38f4660e9b11bb9a984de606d44f6c8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:41 np0005603621 podman[347811]: 2026-01-31 08:37:41.571559195 +0000 UTC m=+0.262159473 container init 56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:41 np0005603621 podman[347811]: 2026-01-31 08:37:41.576798221 +0000 UTC m=+0.267398469 container start 56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:37:41 np0005603621 podman[347811]: 2026-01-31 08:37:41.592573661 +0000 UTC m=+0.283173929 container attach 56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:37:42 np0005603621 gallant_lamport[347828]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:37:42 np0005603621 gallant_lamport[347828]: --> relative data size: 1.0
Jan 31 03:37:42 np0005603621 gallant_lamport[347828]: --> All data devices are unavailable
Jan 31 03:37:42 np0005603621 systemd[1]: libpod-56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c.scope: Deactivated successfully.
Jan 31 03:37:42 np0005603621 podman[347811]: 2026-01-31 08:37:42.369124271 +0000 UTC m=+1.059724519 container died 56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:37:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 184 op/s
Jan 31 03:37:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fdbb693b96cb2f197385925c03ee3f0fc38f4660e9b11bb9a984de606d44f6c8-merged.mount: Deactivated successfully.
Jan 31 03:37:43 np0005603621 podman[347811]: 2026-01-31 08:37:43.445118975 +0000 UTC m=+2.135719233 container remove 56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:37:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:43.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:43 np0005603621 systemd[1]: libpod-conmon-56afd798ca821d0893749c85061c413ec76bd3668909dd65c31ccfbf8b3c4e3c.scope: Deactivated successfully.
Jan 31 03:37:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:37:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:37:43 np0005603621 podman[347999]: 2026-01-31 08:37:43.985046744 +0000 UTC m=+0.080217595 container create 52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:37:44 np0005603621 podman[347999]: 2026-01-31 08:37:43.922672146 +0000 UTC m=+0.017843027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:37:44 np0005603621 systemd[1]: Started libpod-conmon-52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439.scope.
Jan 31 03:37:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:44 np0005603621 podman[347999]: 2026-01-31 08:37:44.264146383 +0000 UTC m=+0.359317264 container init 52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_spence, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:37:44 np0005603621 podman[347999]: 2026-01-31 08:37:44.270310188 +0000 UTC m=+0.365481039 container start 52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_spence, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:37:44 np0005603621 systemd[1]: libpod-52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439.scope: Deactivated successfully.
Jan 31 03:37:44 np0005603621 sleepy_spence[348016]: 167 167
Jan 31 03:37:44 np0005603621 conmon[348016]: conmon 52cb767264c69c508c98 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439.scope/container/memory.events
Jan 31 03:37:44 np0005603621 podman[347999]: 2026-01-31 08:37:44.375613206 +0000 UTC m=+0.470784097 container attach 52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:37:44 np0005603621 podman[347999]: 2026-01-31 08:37:44.376544626 +0000 UTC m=+0.471715477 container died 52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_spence, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:37:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 27 KiB/s wr, 117 op/s
Jan 31 03:37:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-66406057952ebe2c9895522af5b0ac0d1de9f236d1ec16a7b060c7ddb2e13b65-merged.mount: Deactivated successfully.
Jan 31 03:37:44 np0005603621 nova_compute[247399]: 2026-01-31 08:37:44.969 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:45 np0005603621 podman[347999]: 2026-01-31 08:37:45.058929371 +0000 UTC m=+1.154100222 container remove 52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_spence, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:37:45 np0005603621 systemd[1]: libpod-conmon-52cb767264c69c508c9899dba6780a227bd7a4d270d96a5b1c022b3095598439.scope: Deactivated successfully.
Jan 31 03:37:45 np0005603621 podman[348041]: 2026-01-31 08:37:45.184683178 +0000 UTC m=+0.029340801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:37:45 np0005603621 podman[348041]: 2026-01-31 08:37:45.318366996 +0000 UTC m=+0.163024599 container create e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:45.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:45 np0005603621 systemd[1]: Started libpod-conmon-e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9.scope.
Jan 31 03:37:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83da2e94d5e7c9c66f04f74ff9498501020b73f879449ed8b03a2b1f2188075e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83da2e94d5e7c9c66f04f74ff9498501020b73f879449ed8b03a2b1f2188075e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83da2e94d5e7c9c66f04f74ff9498501020b73f879449ed8b03a2b1f2188075e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83da2e94d5e7c9c66f04f74ff9498501020b73f879449ed8b03a2b1f2188075e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:45 np0005603621 podman[348041]: 2026-01-31 08:37:45.529406837 +0000 UTC m=+0.374064470 container init e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:37:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:45 np0005603621 podman[348041]: 2026-01-31 08:37:45.535730318 +0000 UTC m=+0.380387931 container start e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:37:45 np0005603621 podman[348041]: 2026-01-31 08:37:45.56385477 +0000 UTC m=+0.408512423 container attach e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:37:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:46 np0005603621 competent_borg[348058]: {
Jan 31 03:37:46 np0005603621 competent_borg[348058]:    "0": [
Jan 31 03:37:46 np0005603621 competent_borg[348058]:        {
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "devices": [
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "/dev/loop3"
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            ],
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "lv_name": "ceph_lv0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "lv_size": "7511998464",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "name": "ceph_lv0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "tags": {
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.cluster_name": "ceph",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.crush_device_class": "",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.encrypted": "0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.osd_id": "0",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.type": "block",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:                "ceph.vdo": "0"
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            },
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "type": "block",
Jan 31 03:37:46 np0005603621 competent_borg[348058]:            "vg_name": "ceph_vg0"
Jan 31 03:37:46 np0005603621 competent_borg[348058]:        }
Jan 31 03:37:46 np0005603621 competent_borg[348058]:    ]
Jan 31 03:37:46 np0005603621 competent_borg[348058]: }
Jan 31 03:37:46 np0005603621 systemd[1]: libpod-e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9.scope: Deactivated successfully.
Jan 31 03:37:46 np0005603621 podman[348041]: 2026-01-31 08:37:46.30414653 +0000 UTC m=+1.148804143 container died e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:37:46 np0005603621 nova_compute[247399]: 2026-01-31 08:37:46.367 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 547 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 35 KiB/s wr, 118 op/s
Jan 31 03:37:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83da2e94d5e7c9c66f04f74ff9498501020b73f879449ed8b03a2b1f2188075e-merged.mount: Deactivated successfully.
Jan 31 03:37:46 np0005603621 podman[348041]: 2026-01-31 08:37:46.617348509 +0000 UTC m=+1.462006122 container remove e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:37:46 np0005603621 systemd[1]: libpod-conmon-e464398b36fa477284ca1a34bff710e95a3b2dff3fdeec880b4b601fa41c40e9.scope: Deactivated successfully.
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.133313228 +0000 UTC m=+0.060727436 container create c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:37:47 np0005603621 systemd[1]: Started libpod-conmon-c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94.scope.
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.092425722 +0000 UTC m=+0.019839960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:37:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.226444441 +0000 UTC m=+0.153858659 container init c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.231546863 +0000 UTC m=+0.158961061 container start c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 03:37:47 np0005603621 distracted_mcnulty[348237]: 167 167
Jan 31 03:37:47 np0005603621 systemd[1]: libpod-c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94.scope: Deactivated successfully.
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.259947963 +0000 UTC m=+0.187362191 container attach c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.260638145 +0000 UTC m=+0.188052343 container died c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:37:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-19add948a4fcfe5741addacc016d2995a1a8c35cf0969554627004ecc54c4a7a-merged.mount: Deactivated successfully.
Jan 31 03:37:47 np0005603621 podman[348221]: 2026-01-31 08:37:47.390007276 +0000 UTC m=+0.317421474 container remove c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mcnulty, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:37:47 np0005603621 systemd[1]: libpod-conmon-c6f5a2f2adde60636c20ec969eca9751e88531e8689d01cba625afc171cafb94.scope: Deactivated successfully.
Jan 31 03:37:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:47.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:47 np0005603621 podman[348262]: 2026-01-31 08:37:47.54434934 +0000 UTC m=+0.062905185 container create 6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:37:47 np0005603621 systemd[1]: Started libpod-conmon-6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac.scope.
Jan 31 03:37:47 np0005603621 podman[348262]: 2026-01-31 08:37:47.502833324 +0000 UTC m=+0.021389199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:37:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:37:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d233c04106cfa61aaccc03afa14570eb13ae077421d562fce7f80a76f1f49d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d233c04106cfa61aaccc03afa14570eb13ae077421d562fce7f80a76f1f49d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d233c04106cfa61aaccc03afa14570eb13ae077421d562fce7f80a76f1f49d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d233c04106cfa61aaccc03afa14570eb13ae077421d562fce7f80a76f1f49d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:37:47 np0005603621 podman[348262]: 2026-01-31 08:37:47.666237815 +0000 UTC m=+0.184793680 container init 6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kepler, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:47 np0005603621 podman[348262]: 2026-01-31 08:37:47.671991947 +0000 UTC m=+0.190547792 container start 6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:37:47 np0005603621 podman[348262]: 2026-01-31 08:37:47.70932947 +0000 UTC m=+0.227885335 container attach 6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kepler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 114 op/s
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]: {
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:        "osd_id": 0,
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:        "type": "bluestore"
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]:    }
Jan 31 03:37:48 np0005603621 intelligent_kepler[348278]: }
Jan 31 03:37:48 np0005603621 systemd[1]: libpod-6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac.scope: Deactivated successfully.
Jan 31 03:37:48 np0005603621 podman[348262]: 2026-01-31 08:37:48.470833283 +0000 UTC m=+0.989389138 container died 6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kepler, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:37:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-16d233c04106cfa61aaccc03afa14570eb13ae077421d562fce7f80a76f1f49d-merged.mount: Deactivated successfully.
Jan 31 03:37:48 np0005603621 podman[348262]: 2026-01-31 08:37:48.520141967 +0000 UTC m=+1.038697822 container remove 6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:37:48 np0005603621 systemd[1]: libpod-conmon-6e392ee865c5d68dfe0bd7476c8f717d4768c735473b5c94db5fc59877fc55ac.scope: Deactivated successfully.
Jan 31 03:37:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:37:48 np0005603621 podman[348308]: 2026-01-31 08:37:48.570508934 +0000 UTC m=+0.054800399 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:37:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:37:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:37:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:37:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev eb3701c4-6a79-4e57-b02e-77bc304310a0 does not exist
Jan 31 03:37:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d1798302-548f-416e-a22b-a29142ae9c94 does not exist
Jan 31 03:37:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a7d5a11a-005a-47f2-9388-714fc87b8756 does not exist
Jan 31 03:37:48 np0005603621 podman[348323]: 2026-01-31 08:37:48.61645225 +0000 UTC m=+0.064321970 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0065193225967801974 of space, bias 1.0, pg target 1.9557967790340591 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003152699260189838 of space, bias 1.0, pg target 0.9426570787967616 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4561024887167318 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:37:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:37:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:49.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:49.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:37:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:37:49 np0005603621 nova_compute[247399]: 2026-01-31 08:37:49.972 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:50 np0005603621 nova_compute[247399]: 2026-01-31 08:37:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 13 KiB/s wr, 101 op/s
Jan 31 03:37:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:51 np0005603621 nova_compute[247399]: 2026-01-31 08:37:51.370 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:51.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 582 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Jan 31 03:37:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:37:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:53.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:37:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 582 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 03:37:54 np0005603621 nova_compute[247399]: 2026-01-31 08:37:54.975 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:55 np0005603621 nova_compute[247399]: 2026-01-31 08:37:55.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:37:55 np0005603621 nova_compute[247399]: 2026-01-31 08:37:55.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:37:55 np0005603621 nova_compute[247399]: 2026-01-31 08:37:55.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:37:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:55.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:37:56 np0005603621 nova_compute[247399]: 2026-01-31 08:37:56.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:37:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 612 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.0 MiB/s wr, 91 op/s
Jan 31 03:37:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:57.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 03:37:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:37:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:37:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:37:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:37:59.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:37:59 np0005603621 nova_compute[247399]: 2026-01-31 08:37:59.977 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:00 np0005603621 nova_compute[247399]: 2026-01-31 08:38:00.084 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:00 np0005603621 nova_compute[247399]: 2026-01-31 08:38:00.084 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:00 np0005603621 nova_compute[247399]: 2026-01-31 08:38:00.085 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:38:00 np0005603621 nova_compute[247399]: 2026-01-31 08:38:00.085 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:38:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 03:38:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:01 np0005603621 nova_compute[247399]: 2026-01-31 08:38:01.375 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:01.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:01.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.081928) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848682082018, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1468, "num_deletes": 252, "total_data_size": 2476256, "memory_usage": 2516632, "flush_reason": "Manual Compaction"}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848682121324, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1489167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56317, "largest_seqno": 57783, "table_properties": {"data_size": 1483842, "index_size": 2593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14077, "raw_average_key_size": 21, "raw_value_size": 1472163, "raw_average_value_size": 2213, "num_data_blocks": 115, "num_entries": 665, "num_filter_entries": 665, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848543, "oldest_key_time": 1769848543, "file_creation_time": 1769848682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 39532 microseconds, and 5450 cpu microseconds.
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.121476) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1489167 bytes OK
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.121519) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.138325) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.138357) EVENT_LOG_v1 {"time_micros": 1769848682138347, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.138381) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 2469880, prev total WAL file size 2501809, number of live WAL files 2.
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.139192) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1454KB)], [125(12MB)]
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848682139231, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 14274423, "oldest_snapshot_seqno": -1}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 8667 keys, 11381267 bytes, temperature: kUnknown
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848682329841, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 11381267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11325466, "index_size": 33032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 224519, "raw_average_key_size": 25, "raw_value_size": 11173821, "raw_average_value_size": 1289, "num_data_blocks": 1291, "num_entries": 8667, "num_filter_entries": 8667, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848682, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.330109) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 11381267 bytes
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.331222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.9 rd, 59.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(17.2) write-amplify(7.6) OK, records in: 9125, records dropped: 458 output_compression: NoCompression
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.331239) EVENT_LOG_v1 {"time_micros": 1769848682331231, "job": 76, "event": "compaction_finished", "compaction_time_micros": 190672, "compaction_time_cpu_micros": 28052, "output_level": 6, "num_output_files": 1, "total_output_size": 11381267, "num_input_records": 9125, "num_output_records": 8667, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848682331503, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848682332958, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.139109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.333000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.333006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.333008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.333009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:02 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:02.333029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 03:38:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:03.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:38:04 np0005603621 nova_compute[247399]: 2026-01-31 08:38:04.979 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:05.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:06 np0005603621 nova_compute[247399]: 2026-01-31 08:38:06.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:38:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:07.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:07.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 960 KiB/s wr, 1 op/s
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:09.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:09.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:09 np0005603621 nova_compute[247399]: 2026-01-31 08:38:09.847 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:09 np0005603621 nova_compute[247399]: 2026-01-31 08:38:09.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.518 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.518 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.518 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.519 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.519 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.519 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.519 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.519 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.519 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.795 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.795 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.796 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.796 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:38:10 np0005603621 nova_compute[247399]: 2026-01-31 08:38:10.796 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576138272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.243 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.381 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:11.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.599 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.599 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.720 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.721 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4067MB free_disk=20.830692291259766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.721 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:11 np0005603621 nova_compute[247399]: 2026-01-31 08:38:11.721 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.1 KiB/s wr, 0 op/s
Jan 31 03:38:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:13.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:13.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 2.7 KiB/s wr, 0 op/s
Jan 31 03:38:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:38:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3123823271' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:38:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:38:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3123823271' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:38:14 np0005603621 nova_compute[247399]: 2026-01-31 08:38:14.933 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c95caf87-5069-4b70-9023-d3c2d911e87d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:38:14 np0005603621 nova_compute[247399]: 2026-01-31 08:38:14.984 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.005 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.006 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.006 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.323 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.324 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.480 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:38:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:15.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.524 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:38:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:15.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.660 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.660 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.694 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.769 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.787 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:38:15 np0005603621 nova_compute[247399]: 2026-01-31 08:38:15.930 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3179179158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.366 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.372 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.385 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.413 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 2.8 KiB/s wr, 0 op/s
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.463 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.463 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.463 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.469 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.469 247403 INFO nova.compute.claims [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:38:16 np0005603621 nova_compute[247399]: 2026-01-31 08:38:16.898 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016721153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.304 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.309 247403 DEBUG nova.compute.provider_tree [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.357 247403 DEBUG nova.scheduler.client.report [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.471 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.471 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:38:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:17.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.640 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.640 247403 DEBUG nova.network.neutron [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:38:17 np0005603621 nova_compute[247399]: 2026-01-31 08:38:17.966 247403 INFO nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.153 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:38:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 6.5 KiB/s wr, 1 op/s
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.626 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.628 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.628 247403 INFO nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Creating image(s)#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.661 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.686 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.722 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.726 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.750 247403 DEBUG nova.policy [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b6733330b634472ca8c21316f1ee5057', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1e29363ca464487b931af54fe14166b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.784 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.785 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.786 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.786 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.807 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:18 np0005603621 nova_compute[247399]: 2026-01-31 08:38:18.810 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:19 np0005603621 nova_compute[247399]: 2026-01-31 08:38:19.258 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:19 np0005603621 nova_compute[247399]: 2026-01-31 08:38:19.331 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] resizing rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:38:19 np0005603621 nova_compute[247399]: 2026-01-31 08:38:19.437 247403 DEBUG nova.objects.instance [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:38:19 np0005603621 podman[348754]: 2026-01-31 08:38:19.488810503 +0000 UTC m=+0.044250423 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:38:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:19.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:19 np0005603621 podman[348757]: 2026-01-31 08:38:19.542508046 +0000 UTC m=+0.096596734 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 03:38:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:19.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:19 np0005603621 nova_compute[247399]: 2026-01-31 08:38:19.986 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 6.5 KiB/s wr, 1 op/s
Jan 31 03:38:20 np0005603621 nova_compute[247399]: 2026-01-31 08:38:20.459 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:20 np0005603621 nova_compute[247399]: 2026-01-31 08:38:20.705 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:38:20 np0005603621 nova_compute[247399]: 2026-01-31 08:38:20.705 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Ensure instance console log exists: /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:38:20 np0005603621 nova_compute[247399]: 2026-01-31 08:38:20.706 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:20 np0005603621 nova_compute[247399]: 2026-01-31 08:38:20.706 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:20 np0005603621 nova_compute[247399]: 2026-01-31 08:38:20.706 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:21 np0005603621 nova_compute[247399]: 2026-01-31 08:38:21.388 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:21.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:21.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 651 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 712 KiB/s wr, 31 op/s
Jan 31 03:38:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:23.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:23.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 675 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 31 03:38:24 np0005603621 nova_compute[247399]: 2026-01-31 08:38:24.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:25.062 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:38:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:25.063 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:38:25 np0005603621 nova_compute[247399]: 2026-01-31 08:38:25.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:25.064 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:25.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:25.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:25 np0005603621 nova_compute[247399]: 2026-01-31 08:38:25.967 247403 DEBUG nova.network.neutron [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Successfully created port: e31170c4-45cd-451e-9d63-81bc41146ca1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:38:26 np0005603621 nova_compute[247399]: 2026-01-31 08:38:26.391 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 675 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 31 03:38:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:27.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:27.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 674 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 31 03:38:28 np0005603621 nova_compute[247399]: 2026-01-31 08:38:28.737 247403 DEBUG nova.network.neutron [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Successfully updated port: e31170c4-45cd-451e-9d63-81bc41146ca1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.137 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.137 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.137 247403 DEBUG nova.network.neutron [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.410 247403 DEBUG nova.compute.manager [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.410 247403 DEBUG nova.compute.manager [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing instance network info cache due to event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.410 247403 DEBUG oslo_concurrency.lockutils [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:29.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.562 247403 DEBUG nova.network.neutron [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:38:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:29.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:29 np0005603621 nova_compute[247399]: 2026-01-31 08:38:29.990 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 674 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 31 03:38:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:30.521 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:30.521 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:30.521 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:31 np0005603621 nova_compute[247399]: 2026-01-31 08:38:31.394 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:31.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:31.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.116 247403 DEBUG nova.network.neutron [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 698 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.6 MiB/s wr, 61 op/s
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.573 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.573 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance network_info: |[{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.574 247403 DEBUG oslo_concurrency.lockutils [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.574 247403 DEBUG nova.network.neutron [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.577 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Start _get_guest_xml network_info=[{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.582 247403 WARNING nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.592 247403 DEBUG nova.virt.libvirt.host [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.593 247403 DEBUG nova.virt.libvirt.host [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.598 247403 DEBUG nova.virt.libvirt.host [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.599 247403 DEBUG nova.virt.libvirt.host [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.600 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.600 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.600 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.600 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.601 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.601 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.601 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.601 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.601 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.601 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.602 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.602 247403 DEBUG nova.virt.hardware [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:38:32 np0005603621 nova_compute[247399]: 2026-01-31 08:38:32.604 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:38:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/109481479' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.046 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.075 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.079 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:38:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1098606066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.500 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.501 247403 DEBUG nova.virt.libvirt.vif [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-115398860',display_name='tempest-ServerStableDeviceRescueTest-server-115398860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-115398860',id=147,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwSJ/FWYSnlwbHs0Wzm91aUcuaXBYBlt6yIH3QV0xhWQdod7WEXhLlfK4Hd2zTjsdf/eKXM6/AA0TDEI9fUsNHKQsyPifodt6RnLsQxSTsB0zuFfxi198QzASAfXJdIAg==',key_name='tempest-keypair-557896509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-a4dec496',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b6733330b634472ca8c21316f1ee5057',uuid=92bd94ef-0031-409f-8c26-23d5f3d952e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.501 247403 DEBUG nova.network.os_vif_util [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.502 247403 DEBUG nova.network.os_vif_util [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.503 247403 DEBUG nova.objects.instance [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:38:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:33.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:33.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.672 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <uuid>92bd94ef-0031-409f-8c26-23d5f3d952e1</uuid>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <name>instance-00000093</name>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-115398860</nova:name>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:38:32</nova:creationTime>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:user uuid="b6733330b634472ca8c21316f1ee5057">tempest-ServerStableDeviceRescueTest-319343227-project-member</nova:user>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:project uuid="1e29363ca464487b931af54fe14166b1">tempest-ServerStableDeviceRescueTest-319343227</nova:project>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <nova:port uuid="e31170c4-45cd-451e-9d63-81bc41146ca1">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <entry name="serial">92bd94ef-0031-409f-8c26-23d5f3d952e1</entry>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <entry name="uuid">92bd94ef-0031-409f-8c26-23d5f3d952e1</entry>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:fc:98:9d"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <target dev="tape31170c4-45"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/console.log" append="off"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:38:33 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:38:33 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:38:33 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:38:33 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.672 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Preparing to wait for external event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.673 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.673 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.673 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.674 247403 DEBUG nova.virt.libvirt.vif [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-115398860',display_name='tempest-ServerStableDeviceRescueTest-server-115398860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-115398860',id=147,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwSJ/FWYSnlwbHs0Wzm91aUcuaXBYBlt6yIH3QV0xhWQdod7WEXhLlfK4Hd2zTjsdf/eKXM6/AA0TDEI9fUsNHKQsyPifodt6RnLsQxSTsB0zuFfxi198QzASAfXJdIAg==',key_name='tempest-keypair-557896509',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-a4dec496',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b6733330b634472ca8c21316f1ee5057',uuid=92bd94ef-0031-409f-8c26-23d5f3d952e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.674 247403 DEBUG nova.network.os_vif_util [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.674 247403 DEBUG nova.network.os_vif_util [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.675 247403 DEBUG os_vif [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.675 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.675 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.676 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.678 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.678 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape31170c4-45, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.679 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape31170c4-45, col_values=(('external_ids', {'iface-id': 'e31170c4-45cd-451e-9d63-81bc41146ca1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fc:98:9d', 'vm-uuid': '92bd94ef-0031-409f-8c26-23d5f3d952e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.680 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:33 np0005603621 NetworkManager[49013]: <info>  [1769848713.6809] manager: (tape31170c4-45): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.683 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.686 247403 INFO os_vif [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45')#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.985 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.985 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.985 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No VIF found with MAC fa:16:3e:fc:98:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:38:33 np0005603621 nova_compute[247399]: 2026-01-31 08:38:33.986 247403 INFO nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Using config drive#033[00m
Jan 31 03:38:34 np0005603621 nova_compute[247399]: 2026-01-31 08:38:34.012 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 717 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.5 MiB/s wr, 44 op/s
Jan 31 03:38:34 np0005603621 nova_compute[247399]: 2026-01-31 08:38:34.982 247403 INFO nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Creating config drive at /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config#033[00m
Jan 31 03:38:34 np0005603621 nova_compute[247399]: 2026-01-31 08:38:34.987 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp46_lleey execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603621 nova_compute[247399]: 2026-01-31 08:38:35.009 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:35 np0005603621 nova_compute[247399]: 2026-01-31 08:38:35.116 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp46_lleey" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:35 np0005603621 nova_compute[247399]: 2026-01-31 08:38:35.142 247403 DEBUG nova.storage.rbd_utils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:35 np0005603621 nova_compute[247399]: 2026-01-31 08:38:35.145 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.003000096s ======
Jan 31 03:38:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:35.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000096s
Jan 31 03:38:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:35.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.227 247403 DEBUG oslo_concurrency.processutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.227 247403 INFO nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Deleting local config drive /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config because it was imported into RBD.#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.262 247403 DEBUG nova.network.neutron [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updated VIF entry in instance network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.262 247403 DEBUG nova.network.neutron [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:36 np0005603621 NetworkManager[49013]: <info>  [1769848716.2688] manager: (tape31170c4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Jan 31 03:38:36 np0005603621 kernel: tape31170c4-45: entered promiscuous mode
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.271 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:36Z|00580|binding|INFO|Claiming lport e31170c4-45cd-451e-9d63-81bc41146ca1 for this chassis.
Jan 31 03:38:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:36Z|00581|binding|INFO|e31170c4-45cd-451e-9d63-81bc41146ca1: Claiming fa:16:3e:fc:98:9d 10.100.0.8
Jan 31 03:38:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:36Z|00582|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 ovn-installed in OVS
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.285 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:36 np0005603621 systemd-udevd[348993]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:38:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:36Z|00583|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 up in Southbound
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.298 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:98:9d 10.100.0.8'], port_security=['fa:16:3e:fc:98:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '47937952-48ef-482c-9abd-e199e46c505b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e31170c4-45cd-451e-9d63-81bc41146ca1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:38:36 np0005603621 systemd-machined[212769]: New machine qemu-71-instance-00000093.
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.299 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e31170c4-45cd-451e-9d63-81bc41146ca1 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 bound to our chassis#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.301 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:38:36 np0005603621 NetworkManager[49013]: <info>  [1769848716.3048] device (tape31170c4-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:38:36 np0005603621 NetworkManager[49013]: <info>  [1769848716.3064] device (tape31170c4-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:38:36 np0005603621 systemd[1]: Started Virtual Machine qemu-71-instance-00000093.
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.313 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e577ff90-c17c-4900-b6e0-429e112a77c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.325 247403 DEBUG oslo_concurrency.lockutils [req-c1419897-668e-4d22-8a5f-299270013eb1 req-30115bde-1a03-4f3b-92b3-6c2a4fc5ceae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.337 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d4313756-0299-4640-8efd-1a4a88f0b9b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.341 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[63d75ed0-2306-4ae5-823c-6eb4e9ff5083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.365 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a2724f-c905-43d9-9801-fceb433bc56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.378 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fd531255-5ca2-424a-af39-b7ddfd695728]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349007, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.393 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d78f7ca-2c3c-445a-9641-6ab367af0c5f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795049, 'tstamp': 795049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349008, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795051, 'tstamp': 795051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349008, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.395 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.398 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.398 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.398 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:38:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:38:36.398 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:38:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 721 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.671 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848716.6704013, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.671 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Started (Lifecycle Event)#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.784 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.788 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848716.670919, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.788 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.869 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.872 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:38:36 np0005603621 nova_compute[247399]: 2026-01-31 08:38:36.904 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:38:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:37.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:37.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 721 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.516 247403 DEBUG nova.compute.manager [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.516 247403 DEBUG oslo_concurrency.lockutils [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.516 247403 DEBUG oslo_concurrency.lockutils [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.517 247403 DEBUG oslo_concurrency.lockutils [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.517 247403 DEBUG nova.compute.manager [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Processing event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.517 247403 DEBUG nova.compute.manager [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.517 247403 DEBUG oslo_concurrency.lockutils [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.517 247403 DEBUG oslo_concurrency.lockutils [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.518 247403 DEBUG oslo_concurrency.lockutils [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.518 247403 DEBUG nova.compute.manager [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.518 247403 WARNING nova.compute.manager [req-1af83af2-63d7-4f9e-8276-ae72bf919b53 req-772e0c81-28b1-431c-b9bd-db7a99eaad6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.519 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.523 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848718.522853, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.523 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.525 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.528 247403 INFO nova.virt.libvirt.driver [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance spawned successfully.#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.529 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:38:38
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root']
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.611 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.616 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.617 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.617 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.618 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.618 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.618 247403 DEBUG nova.virt.libvirt.driver [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.622 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.681 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.684 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.827 247403 INFO nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Took 20.20 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:38:38 np0005603621 nova_compute[247399]: 2026-01-31 08:38:38.827 247403 DEBUG nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:38:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:38:39 np0005603621 nova_compute[247399]: 2026-01-31 08:38:39.052 247403 INFO nova.compute.manager [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Took 23.34 seconds to build instance.#033[00m
Jan 31 03:38:39 np0005603621 nova_compute[247399]: 2026-01-31 08:38:39.116 247403 DEBUG oslo_concurrency.lockutils [None req-f59d057b-9ffe-42f0-91e2-2fbc43d6e0ef b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:39.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:39.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:39 np0005603621 nova_compute[247399]: 2026-01-31 08:38:39.994 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 721 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Jan 31 03:38:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:41.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:41.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 653 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.2 MiB/s wr, 220 op/s
Jan 31 03:38:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:43.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:43.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:43 np0005603621 nova_compute[247399]: 2026-01-31 08:38:43.684 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 653 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.4 MiB/s wr, 202 op/s
Jan 31 03:38:44 np0005603621 nova_compute[247399]: 2026-01-31 08:38:44.995 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:45.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:45.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 674 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 2.2 MiB/s wr, 237 op/s
Jan 31 03:38:46 np0005603621 nova_compute[247399]: 2026-01-31 08:38:46.979 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603621 NetworkManager[49013]: <info>  [1769848726.9817] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/264)
Jan 31 03:38:46 np0005603621 NetworkManager[49013]: <info>  [1769848726.9825] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Jan 31 03:38:46 np0005603621 nova_compute[247399]: 2026-01-31 08:38:46.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:46 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:46Z|00584|binding|INFO|Releasing lport 54969bc0-ee8d-420c-ac0c-dd4f9410e42c from this chassis (sb_readonly=0)
Jan 31 03:38:47 np0005603621 nova_compute[247399]: 2026-01-31 08:38:47.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:38:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/974836420' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:38:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:47.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 710 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.8 MiB/s wr, 265 op/s
Jan 31 03:38:48 np0005603621 nova_compute[247399]: 2026-01-31 08:38:48.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:48 np0005603621 nova_compute[247399]: 2026-01-31 08:38:48.781 247403 DEBUG nova.compute.manager [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:38:48 np0005603621 nova_compute[247399]: 2026-01-31 08:38:48.781 247403 DEBUG nova.compute.manager [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing instance network info cache due to event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:38:48 np0005603621 nova_compute[247399]: 2026-01-31 08:38:48.781 247403 DEBUG oslo_concurrency.lockutils [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:48 np0005603621 nova_compute[247399]: 2026-01-31 08:38:48.782 247403 DEBUG oslo_concurrency.lockutils [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:48 np0005603621 nova_compute[247399]: 2026-01-31 08:38:48.782 247403 DEBUG nova.network.neutron [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0106091438093244 of space, bias 1.0, pg target 3.18274314279732 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003152699260189838 of space, bias 1.0, pg target 0.9363516802763819 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4463626727386936 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:38:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 03:38:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:49.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:49.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:50 np0005603621 nova_compute[247399]: 2026-01-31 08:38:49.998 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:50 np0005603621 nova_compute[247399]: 2026-01-31 08:38:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 710 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 203 op/s
Jan 31 03:38:50 np0005603621 podman[349190]: 2026-01-31 08:38:50.497651773 +0000 UTC m=+0.052587348 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 03:38:50 np0005603621 podman[349191]: 2026-01-31 08:38:50.522626736 +0000 UTC m=+0.076565189 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:38:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:38:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:51 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:51Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:98:9d 10.100.0.8
Jan 31 03:38:51 np0005603621 ovn_controller[149152]: 2026-01-31T08:38:51Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:98:9d 10.100.0.8
Jan 31 03:38:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:51.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:51.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b8a1106a-9400-4655-81f6-e5c8451d274a does not exist
Jan 31 03:38:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1086d4a9-abb4-404b-a93a-9d617475e2b8 does not exist
Jan 31 03:38:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b5a85bbb-b5d2-4bcd-9ebe-8b3956729bee does not exist
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.214780034 +0000 UTC m=+0.043441258 container create d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:38:52 np0005603621 nova_compute[247399]: 2026-01-31 08:38:52.240 247403 DEBUG nova.network.neutron [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updated VIF entry in instance network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:38:52 np0005603621 nova_compute[247399]: 2026-01-31 08:38:52.240 247403 DEBUG nova.network.neutron [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:38:52 np0005603621 systemd[1]: Started libpod-conmon-d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720.scope.
Jan 31 03:38:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.191772484 +0000 UTC m=+0.020433738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.293772239 +0000 UTC m=+0.122433493 container init d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.299972036 +0000 UTC m=+0.128633280 container start d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.303968722 +0000 UTC m=+0.132629966 container attach d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:52 np0005603621 bold_carson[349413]: 167 167
Jan 31 03:38:52 np0005603621 systemd[1]: libpod-d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720.scope: Deactivated successfully.
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.306184412 +0000 UTC m=+0.134845636 container died d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ffdef7b27a97e6557d48ec55aa0fbc1279b8cfa05ac0d312d167c1e9dc4d1da6-merged.mount: Deactivated successfully.
Jan 31 03:38:52 np0005603621 nova_compute[247399]: 2026-01-31 08:38:52.331 247403 DEBUG oslo_concurrency.lockutils [req-a4b8b6c8-29cf-4783-8005-ee1e2dc0e880 req-32b958c6-7992-4a2d-9db5-61b3a4e3a96c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:38:52 np0005603621 podman[349374]: 2026-01-31 08:38:52.355290439 +0000 UTC m=+0.183951663 container remove d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_carson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:52 np0005603621 systemd[1]: libpod-conmon-d0a28b425cd41b3ea03e259e43d3bf6445122c45f49c9faf6f72e1816cdb9720.scope: Deactivated successfully.
Jan 31 03:38:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 751 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.0 MiB/s wr, 296 op/s
Jan 31 03:38:52 np0005603621 podman[349464]: 2026-01-31 08:38:52.487533992 +0000 UTC m=+0.038551474 container create c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:38:52 np0005603621 systemd[1]: Started libpod-conmon-c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154.scope.
Jan 31 03:38:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:38:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c6956383a0054d70cef7e878d88832a99adb9a93301f7c745bdd3c32c5d857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c6956383a0054d70cef7e878d88832a99adb9a93301f7c745bdd3c32c5d857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c6956383a0054d70cef7e878d88832a99adb9a93301f7c745bdd3c32c5d857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c6956383a0054d70cef7e878d88832a99adb9a93301f7c745bdd3c32c5d857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8c6956383a0054d70cef7e878d88832a99adb9a93301f7c745bdd3c32c5d857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:52 np0005603621 podman[349464]: 2026-01-31 08:38:52.562262841 +0000 UTC m=+0.113280343 container init c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:38:52 np0005603621 podman[349464]: 2026-01-31 08:38:52.470092219 +0000 UTC m=+0.021109731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:52 np0005603621 podman[349464]: 2026-01-31 08:38:52.573532348 +0000 UTC m=+0.124549820 container start c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:38:52 np0005603621 podman[349464]: 2026-01-31 08:38:52.578730283 +0000 UTC m=+0.129747785 container attach c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:38:53 np0005603621 busy_sinoussi[349481]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:38:53 np0005603621 busy_sinoussi[349481]: --> relative data size: 1.0
Jan 31 03:38:53 np0005603621 busy_sinoussi[349481]: --> All data devices are unavailable
Jan 31 03:38:53 np0005603621 systemd[1]: libpod-c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154.scope: Deactivated successfully.
Jan 31 03:38:53 np0005603621 podman[349496]: 2026-01-31 08:38:53.406986212 +0000 UTC m=+0.026042996 container died c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:38:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c8c6956383a0054d70cef7e878d88832a99adb9a93301f7c745bdd3c32c5d857-merged.mount: Deactivated successfully.
Jan 31 03:38:53 np0005603621 podman[349496]: 2026-01-31 08:38:53.458209877 +0000 UTC m=+0.077266641 container remove c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:38:53 np0005603621 systemd[1]: libpod-conmon-c31aceac328bc0e8103b99c299e31aedccebdd249030b8a91311d19429903154.scope: Deactivated successfully.
Jan 31 03:38:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:53.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:38:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:53.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:38:53 np0005603621 nova_compute[247399]: 2026-01-31 08:38:53.689 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:53 np0005603621 nova_compute[247399]: 2026-01-31 08:38:53.793 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:53 np0005603621 nova_compute[247399]: 2026-01-31 08:38:53.793 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:53 np0005603621 nova_compute[247399]: 2026-01-31 08:38:53.900 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:38:53 np0005603621 podman[349650]: 2026-01-31 08:38:53.939066252 +0000 UTC m=+0.035296120 container create f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:38:53 np0005603621 systemd[1]: Started libpod-conmon-f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259.scope.
Jan 31 03:38:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:38:53 np0005603621 podman[349650]: 2026-01-31 08:38:53.998760735 +0000 UTC m=+0.094990623 container init f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_sammet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:38:54 np0005603621 podman[349650]: 2026-01-31 08:38:54.003187145 +0000 UTC m=+0.099417013 container start f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_sammet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:38:54 np0005603621 hungry_sammet[349667]: 167 167
Jan 31 03:38:54 np0005603621 podman[349650]: 2026-01-31 08:38:54.006871452 +0000 UTC m=+0.103101340 container attach f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:54 np0005603621 conmon[349667]: conmon f3b2be9bf3c11287fba1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259.scope/container/memory.events
Jan 31 03:38:54 np0005603621 systemd[1]: libpod-f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259.scope: Deactivated successfully.
Jan 31 03:38:54 np0005603621 podman[349650]: 2026-01-31 08:38:54.007918875 +0000 UTC m=+0.104148743 container died f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:38:54 np0005603621 podman[349650]: 2026-01-31 08:38:53.923107276 +0000 UTC m=+0.019337164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1a14b1ee908a6ed948fa7895af1e4478818f0990c3f0c8f1468efc570b93f924-merged.mount: Deactivated successfully.
Jan 31 03:38:54 np0005603621 podman[349650]: 2026-01-31 08:38:54.038287598 +0000 UTC m=+0.134517456 container remove f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_sammet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:54 np0005603621 systemd[1]: libpod-conmon-f3b2be9bf3c11287fba140d9bddc371f9becca39a25b0866d51b35de9c1ee259.scope: Deactivated successfully.
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.067 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.068 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.076 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.077 247403 INFO nova.compute.claims [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:38:54 np0005603621 podman[349694]: 2026-01-31 08:38:54.16516544 +0000 UTC m=+0.034710621 container create a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:38:54 np0005603621 systemd[1]: Started libpod-conmon-a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244.scope.
Jan 31 03:38:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:38:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebca1d0c1ff28e0f664bf5505d44e2ccf2cd4421df75523e82941097139e8be6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebca1d0c1ff28e0f664bf5505d44e2ccf2cd4421df75523e82941097139e8be6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebca1d0c1ff28e0f664bf5505d44e2ccf2cd4421df75523e82941097139e8be6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebca1d0c1ff28e0f664bf5505d44e2ccf2cd4421df75523e82941097139e8be6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:54 np0005603621 podman[349694]: 2026-01-31 08:38:54.148351557 +0000 UTC m=+0.017896458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:54 np0005603621 podman[349694]: 2026-01-31 08:38:54.246987704 +0000 UTC m=+0.116532615 container init a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:38:54 np0005603621 podman[349694]: 2026-01-31 08:38:54.253100918 +0000 UTC m=+0.122645799 container start a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:38:54 np0005603621 podman[349694]: 2026-01-31 08:38:54.256342391 +0000 UTC m=+0.125887292 container attach a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.342 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 751 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.6 MiB/s wr, 191 op/s
Jan 31 03:38:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:38:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928635231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.753 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.759 247403 DEBUG nova.compute.provider_tree [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.928 247403 DEBUG nova.scheduler.client.report [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]: {
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:    "0": [
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:        {
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "devices": [
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "/dev/loop3"
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            ],
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "lv_name": "ceph_lv0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "lv_size": "7511998464",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "name": "ceph_lv0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "tags": {
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.cluster_name": "ceph",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.crush_device_class": "",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.encrypted": "0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.osd_id": "0",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.type": "block",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:                "ceph.vdo": "0"
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            },
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "type": "block",
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:            "vg_name": "ceph_vg0"
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:        }
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]:    ]
Jan 31 03:38:54 np0005603621 silly_stonebraker[349710]: }
Jan 31 03:38:54 np0005603621 nova_compute[247399]: 2026-01-31 08:38:54.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:55 np0005603621 systemd[1]: libpod-a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244.scope: Deactivated successfully.
Jan 31 03:38:55 np0005603621 conmon[349710]: conmon a33303dde9335c98573b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244.scope/container/memory.events
Jan 31 03:38:55 np0005603621 podman[349694]: 2026-01-31 08:38:55.013679232 +0000 UTC m=+0.883224113 container died a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.026 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.027 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:38:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ebca1d0c1ff28e0f664bf5505d44e2ccf2cd4421df75523e82941097139e8be6-merged.mount: Deactivated successfully.
Jan 31 03:38:55 np0005603621 podman[349694]: 2026-01-31 08:38:55.052337208 +0000 UTC m=+0.921882089 container remove a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_stonebraker, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:38:55 np0005603621 systemd[1]: libpod-conmon-a33303dde9335c98573b01b2d297b3902ee8e0f92f0117c1bb6fffc1f98ac244.scope: Deactivated successfully.
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.119 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.120 247403 DEBUG nova.network.neutron [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.154 247403 INFO nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.278 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.552198506 +0000 UTC m=+0.043433789 container create 5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:55 np0005603621 systemd[1]: Started libpod-conmon-5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90.scope.
Jan 31 03:38:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:38:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:55.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.614323735 +0000 UTC m=+0.105559028 container init 5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.61952381 +0000 UTC m=+0.110759083 container start 5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:38:55 np0005603621 sweet_mccarthy[349911]: 167 167
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.622691901 +0000 UTC m=+0.113927204 container attach 5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mccarthy, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:38:55 np0005603621 systemd[1]: libpod-5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90.scope: Deactivated successfully.
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.624460986 +0000 UTC m=+0.115696259 container died 5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mccarthy, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.533937206 +0000 UTC m=+0.025172509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9c84933e7a9da8d07ec87fe53f596f9b5c92933bd28d304db88eed64987326cd-merged.mount: Deactivated successfully.
Jan 31 03:38:55 np0005603621 podman[349895]: 2026-01-31 08:38:55.663761533 +0000 UTC m=+0.154996806 container remove 5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_mccarthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:38:55 np0005603621 systemd[1]: libpod-conmon-5109dfd2b953f651fdfff1d94dd2293689d8bb9939bec78d187679cb7711cc90.scope: Deactivated successfully.
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.694 247403 DEBUG nova.policy [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '516e093a00a44667ba1308900be70d8d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '621c17d53cba46d386de8efb560a988e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.786546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848735786610, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 735, "num_deletes": 250, "total_data_size": 994791, "memory_usage": 1009512, "flush_reason": "Manual Compaction"}
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 31 03:38:55 np0005603621 podman[349934]: 2026-01-31 08:38:55.786402531 +0000 UTC m=+0.033826974 container create 45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848735802671, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 985412, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57784, "largest_seqno": 58518, "table_properties": {"data_size": 981599, "index_size": 1592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 7780, "raw_average_key_size": 17, "raw_value_size": 973989, "raw_average_value_size": 2178, "num_data_blocks": 71, "num_entries": 447, "num_filter_entries": 447, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848682, "oldest_key_time": 1769848682, "file_creation_time": 1769848735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 16186 microseconds, and 2290 cpu microseconds.
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.802729) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 985412 bytes OK
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.802759) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.806767) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.806812) EVENT_LOG_v1 {"time_micros": 1769848735806804, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.806836) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 991052, prev total WAL file size 991052, number of live WAL files 2.
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.807549) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(962KB)], [128(10MB)]
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848735807600, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 12366679, "oldest_snapshot_seqno": -1}
Jan 31 03:38:55 np0005603621 systemd[1]: Started libpod-conmon-45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22.scope.
Jan 31 03:38:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:38:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b9d5871d23693ead9991e72180db3df59d960dc8490e23a8f4c80c75a6229c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b9d5871d23693ead9991e72180db3df59d960dc8490e23a8f4c80c75a6229c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b9d5871d23693ead9991e72180db3df59d960dc8490e23a8f4c80c75a6229c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b9d5871d23693ead9991e72180db3df59d960dc8490e23a8f4c80c75a6229c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:38:55 np0005603621 podman[349934]: 2026-01-31 08:38:55.770446465 +0000 UTC m=+0.017870918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 8601 keys, 11294814 bytes, temperature: kUnknown
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848735955117, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11294814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11239541, "index_size": 32696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 224876, "raw_average_key_size": 26, "raw_value_size": 11088850, "raw_average_value_size": 1289, "num_data_blocks": 1260, "num_entries": 8601, "num_filter_entries": 8601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:38:55 np0005603621 podman[349934]: 2026-01-31 08:38:55.957982621 +0000 UTC m=+0.205407074 container init 45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:38:55 np0005603621 podman[349934]: 2026-01-31 08:38:55.962546855 +0000 UTC m=+0.209971288 container start 45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.955590) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11294814 bytes
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.963079) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.7 rd, 76.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(24.0) write-amplify(11.5) OK, records in: 9114, records dropped: 513 output_compression: NoCompression
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.963121) EVENT_LOG_v1 {"time_micros": 1769848735963097, "job": 78, "event": "compaction_finished", "compaction_time_micros": 147812, "compaction_time_cpu_micros": 21267, "output_level": 6, "num_output_files": 1, "total_output_size": 11294814, "num_input_records": 9114, "num_output_records": 8601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848735963365, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848735964682, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.964 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.807450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.964763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.964767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.964769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.964771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:55 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:38:55.964773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.965 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.965 247403 INFO nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Creating image(s)#033[00m
Jan 31 03:38:55 np0005603621 podman[349934]: 2026-01-31 08:38:55.977948584 +0000 UTC m=+0.225373017 container attach 45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:38:55 np0005603621 nova_compute[247399]: 2026-01-31 08:38:55.992 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.016 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.040 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.043 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.092 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.093 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.093 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.094 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.115 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.118 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 87faeef8-1f73-43cd-8813-6230f05dafd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:38:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 758 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.1 MiB/s wr, 231 op/s
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.457 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 87faeef8-1f73-43cd-8813-6230f05dafd6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.525 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] resizing rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.623 247403 DEBUG nova.objects.instance [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lazy-loading 'migration_context' on Instance uuid 87faeef8-1f73-43cd-8813-6230f05dafd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]: {
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:        "osd_id": 0,
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:        "type": "bluestore"
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]:    }
Jan 31 03:38:56 np0005603621 goofy_lovelace[349950]: }
Jan 31 03:38:56 np0005603621 systemd[1]: libpod-45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22.scope: Deactivated successfully.
Jan 31 03:38:56 np0005603621 podman[350138]: 2026-01-31 08:38:56.771878795 +0000 UTC m=+0.025165839 container died 45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.779 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.780 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Ensure instance console log exists: /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.781 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.781 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:38:56 np0005603621 nova_compute[247399]: 2026-01-31 08:38:56.781 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:38:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d3b9d5871d23693ead9991e72180db3df59d960dc8490e23a8f4c80c75a6229c-merged.mount: Deactivated successfully.
Jan 31 03:38:56 np0005603621 podman[350138]: 2026-01-31 08:38:56.814651711 +0000 UTC m=+0.067938705 container remove 45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lovelace, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:38:56 np0005603621 systemd[1]: libpod-conmon-45da4f638e5c43eb790f33448e19149dc8846ff188799ebcf51a6c5f8fb2ad22.scope: Deactivated successfully.
Jan 31 03:38:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:38:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:38:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8fa43338-b237-4de6-9348-8ef9fbc221de does not exist
Jan 31 03:38:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 641aa536-5339-4a59-aa06-908f03378be5 does not exist
Jan 31 03:38:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b35c800e-1753-4818-999b-f14393207eeb does not exist
Jan 31 03:38:57 np0005603621 nova_compute[247399]: 2026-01-31 08:38:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:38:57 np0005603621 nova_compute[247399]: 2026-01-31 08:38:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:38:57 np0005603621 nova_compute[247399]: 2026-01-31 08:38:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:38:57 np0005603621 nova_compute[247399]: 2026-01-31 08:38:57.218 247403 DEBUG nova.network.neutron [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Successfully created port: 77125c39-f9c0-43d7-a068-15b155a2121f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:38:57 np0005603621 nova_compute[247399]: 2026-01-31 08:38:57.321 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:38:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:57.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:57.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:38:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 816 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.5 MiB/s wr, 304 op/s
Jan 31 03:38:58 np0005603621 nova_compute[247399]: 2026-01-31 08:38:58.640 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:38:58 np0005603621 nova_compute[247399]: 2026-01-31 08:38:58.641 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:38:58 np0005603621 nova_compute[247399]: 2026-01-31 08:38:58.641 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:38:58 np0005603621 nova_compute[247399]: 2026-01-31 08:38:58.641 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:38:58 np0005603621 nova_compute[247399]: 2026-01-31 08:38:58.691 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:38:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:38:59.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:38:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:38:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:38:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:38:59.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.001 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 816 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.6 MiB/s wr, 254 op/s
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.614 247403 DEBUG nova.network.neutron [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Successfully updated port: 77125c39-f9c0-43d7-a068-15b155a2121f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:39:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.804 247403 DEBUG nova.compute.manager [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-changed-77125c39-f9c0-43d7-a068-15b155a2121f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.805 247403 DEBUG nova.compute.manager [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Refreshing instance network info cache due to event network-changed-77125c39-f9c0-43d7-a068-15b155a2121f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.805 247403 DEBUG oslo_concurrency.lockutils [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-87faeef8-1f73-43cd-8813-6230f05dafd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.805 247403 DEBUG oslo_concurrency.lockutils [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-87faeef8-1f73-43cd-8813-6230f05dafd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.805 247403 DEBUG nova.network.neutron [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Refreshing network info cache for port 77125c39-f9c0-43d7-a068-15b155a2121f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:39:00 np0005603621 nova_compute[247399]: 2026-01-31 08:39:00.889 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "refresh_cache-87faeef8-1f73-43cd-8813-6230f05dafd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.344 247403 DEBUG nova.network.neutron [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.496 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:01.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.591 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.591 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.592 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.592 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:01 np0005603621 nova_compute[247399]: 2026-01-31 08:39:01.592 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:39:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:01.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.060 247403 DEBUG nova.network.neutron [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.125 247403 DEBUG oslo_concurrency.lockutils [req-9f10639b-8bf5-4738-b424-083e0fb672df req-b87ac5ac-51ca-42c5-8c3d-852c55782cc0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-87faeef8-1f73-43cd-8813-6230f05dafd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.126 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquired lock "refresh_cache-87faeef8-1f73-43cd-8813-6230f05dafd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.126 247403 DEBUG nova.network.neutron [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.232 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.268 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.268 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.268 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.269 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.269 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 833 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.2 MiB/s wr, 274 op/s
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.645 247403 DEBUG nova.network.neutron [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:39:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:39:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111512863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.668 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.796 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.796 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.800 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.800 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.956 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.957 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3770MB free_disk=20.700511932373047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.957 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:02 np0005603621 nova_compute[247399]: 2026-01-31 08:39:02.957 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.123 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c95caf87-5069-4b70-9023-d3c2d911e87d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.123 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.124 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 87faeef8-1f73-43cd-8813-6230f05dafd6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.124 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.124 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.221 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:03.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:03.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.695 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:39:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660886543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.732 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.736 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.760 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.959 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:39:03 np0005603621 nova_compute[247399]: 2026-01-31 08:39:03.960 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.262 247403 DEBUG nova.network.neutron [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Updating instance_info_cache with network_info: [{"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.289 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Releasing lock "refresh_cache-87faeef8-1f73-43cd-8813-6230f05dafd6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.290 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Instance network_info: |[{"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.291 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Start _get_guest_xml network_info=[{"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.295 247403 WARNING nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.301 247403 DEBUG nova.virt.libvirt.host [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.301 247403 DEBUG nova.virt.libvirt.host [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.305 247403 DEBUG nova.virt.libvirt.host [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.305 247403 DEBUG nova.virt.libvirt.host [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.307 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.307 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.307 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.308 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.308 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.308 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.308 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.309 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.309 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.309 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.309 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.309 247403 DEBUG nova.virt.hardware [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.312 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 833 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 181 op/s
Jan 31 03:39:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411943611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.753 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.779 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.784 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:04 np0005603621 nova_compute[247399]: 2026-01-31 08:39:04.927 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.031 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.165 247403 DEBUG nova.compute.manager [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2101999761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.269 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.271 247403 DEBUG nova.virt.libvirt.vif [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1519507900',display_name='tempest-ServersNegativeTestJSON-server-1519507900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1519507900',id=149,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='621c17d53cba46d386de8efb560a988e',ramdisk_id='',reservation_id='r-x0aroiew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-183161027',owner_user_name='tempest-ServersNegativeTestJSON-183161027-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:55Z,user_data=None,user_id='516e093a00a44667ba1308900be70d8d',uuid=87faeef8-1f73-43cd-8813-6230f05dafd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.271 247403 DEBUG nova.network.os_vif_util [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Converting VIF {"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.272 247403 DEBUG nova.network.os_vif_util [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.273 247403 DEBUG nova.objects.instance [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lazy-loading 'pci_devices' on Instance uuid 87faeef8-1f73-43cd-8813-6230f05dafd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.288 247403 INFO nova.compute.manager [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] instance snapshotting#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.301 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <uuid>87faeef8-1f73-43cd-8813-6230f05dafd6</uuid>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <name>instance-00000095</name>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServersNegativeTestJSON-server-1519507900</nova:name>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:39:04</nova:creationTime>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:user uuid="516e093a00a44667ba1308900be70d8d">tempest-ServersNegativeTestJSON-183161027-project-member</nova:user>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:project uuid="621c17d53cba46d386de8efb560a988e">tempest-ServersNegativeTestJSON-183161027</nova:project>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <nova:port uuid="77125c39-f9c0-43d7-a068-15b155a2121f">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <entry name="serial">87faeef8-1f73-43cd-8813-6230f05dafd6</entry>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <entry name="uuid">87faeef8-1f73-43cd-8813-6230f05dafd6</entry>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/87faeef8-1f73-43cd-8813-6230f05dafd6_disk">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/87faeef8-1f73-43cd-8813-6230f05dafd6_disk.config">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:6e:50:09"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <target dev="tap77125c39-f9"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/console.log" append="off"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:39:05 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:39:05 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:39:05 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:39:05 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.302 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Preparing to wait for external event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.303 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.303 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.303 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.304 247403 DEBUG nova.virt.libvirt.vif [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:38:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1519507900',display_name='tempest-ServersNegativeTestJSON-server-1519507900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1519507900',id=149,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='621c17d53cba46d386de8efb560a988e',ramdisk_id='',reservation_id='r-x0aroiew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-183161027',owner_user_name='tempest-S
erversNegativeTestJSON-183161027-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:38:55Z,user_data=None,user_id='516e093a00a44667ba1308900be70d8d',uuid=87faeef8-1f73-43cd-8813-6230f05dafd6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.305 247403 DEBUG nova.network.os_vif_util [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Converting VIF {"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.305 247403 DEBUG nova.network.os_vif_util [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.306 247403 DEBUG os_vif [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.307 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.308 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.313 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.313 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77125c39-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.314 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77125c39-f9, col_values=(('external_ids', {'iface-id': '77125c39-f9c0-43d7-a068-15b155a2121f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:50:09', 'vm-uuid': '87faeef8-1f73-43cd-8813-6230f05dafd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:39:05 np0005603621 NetworkManager[49013]: <info>  [1769848745.3184] manager: (tap77125c39-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.321 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.322 247403 INFO os_vif [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9')#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.394 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.395 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.395 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] No VIF found with MAC fa:16:3e:6e:50:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.396 247403 INFO nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Using config drive#033[00m
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.420 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:05.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:05.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:05 np0005603621 nova_compute[247399]: 2026-01-31 08:39:05.769 247403 INFO nova.virt.libvirt.driver [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Beginning live snapshot process#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.020 247403 DEBUG nova.virt.libvirt.imagebackend [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.140 247403 INFO nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Creating config drive at /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/disk.config#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.143 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwlcmvxpc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.265 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpwlcmvxpc" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.328 247403 DEBUG nova.storage.rbd_utils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] rbd image 87faeef8-1f73-43cd-8813-6230f05dafd6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.331 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/disk.config 87faeef8-1f73-43cd-8813-6230f05dafd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 842 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.6 MiB/s wr, 199 op/s
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.574 247403 DEBUG nova.storage.rbd_utils [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] creating snapshot(91e7310158534beea575fe8cfadfa715) on rbd image(92bd94ef-0031-409f-8c26-23d5f3d952e1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.634 247403 DEBUG oslo_concurrency.processutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/disk.config 87faeef8-1f73-43cd-8813-6230f05dafd6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.635 247403 INFO nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Deleting local config drive /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6/disk.config because it was imported into RBD.#033[00m
Jan 31 03:39:06 np0005603621 kernel: tap77125c39-f9: entered promiscuous mode
Jan 31 03:39:06 np0005603621 NetworkManager[49013]: <info>  [1769848746.6750] manager: (tap77125c39-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Jan 31 03:39:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:06Z|00585|binding|INFO|Claiming lport 77125c39-f9c0-43d7-a068-15b155a2121f for this chassis.
Jan 31 03:39:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:06Z|00586|binding|INFO|77125c39-f9c0-43d7-a068-15b155a2121f: Claiming fa:16:3e:6e:50:09 10.100.0.8
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.676 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.679 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:06Z|00587|binding|INFO|Setting lport 77125c39-f9c0-43d7-a068-15b155a2121f ovn-installed in OVS
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.689 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.700 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:50:09 10.100.0.8'], port_security=['fa:16:3e:6e:50:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '87faeef8-1f73-43cd-8813-6230f05dafd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621c17d53cba46d386de8efb560a988e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1c8dcf47-c169-4871-843e-ae38c0fc69f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda2ce92-ce79-4f8b-b120-fd83adc645ef, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=77125c39-f9c0-43d7-a068-15b155a2121f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:06Z|00588|binding|INFO|Setting lport 77125c39-f9c0-43d7-a068-15b155a2121f up in Southbound
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.701 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 77125c39-f9c0-43d7-a068-15b155a2121f in datapath 550cf3a2-62ab-424d-afc0-3148a4a687ee bound to our chassis#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.703 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 550cf3a2-62ab-424d-afc0-3148a4a687ee#033[00m
Jan 31 03:39:06 np0005603621 systemd-udevd[350438]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:39:06 np0005603621 systemd-machined[212769]: New machine qemu-72-instance-00000095.
Jan 31 03:39:06 np0005603621 NetworkManager[49013]: <info>  [1769848746.7140] device (tap77125c39-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.712 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f38b4ed-f7e8-4e97-8bad-334d4385c4c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.713 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap550cf3a2-61 in ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:39:06 np0005603621 NetworkManager[49013]: <info>  [1769848746.7156] device (tap77125c39-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.716 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap550cf3a2-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.716 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8eb50a-5987-4ca8-99d2-d7ef7eb467dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.717 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9efa87f5-e15c-492b-8765-51e47254a19e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 systemd[1]: Started Virtual Machine qemu-72-instance-00000095.
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.727 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[791a7b4c-473b-4af1-b357-e46b62fb7b62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.736 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c75fdb50-2b9f-470b-a222-d6a5a7124825]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.758 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7f433fda-90cb-420e-969a-5f361dcdfc38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.763 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2a76b829-4192-418b-a1f9-21b67eaf2299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 NetworkManager[49013]: <info>  [1769848746.7638] manager: (tap550cf3a2-60): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.782 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a58ab00f-d22e-46f7-b93c-c5bb9b3891f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.785 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d573f121-317b-4a4d-a8ba-99cbcfdbf035]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 NetworkManager[49013]: <info>  [1769848746.8074] device (tap550cf3a2-60): carrier: link connected
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.811 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8eae4640-8734-469a-8d74-cd8c4bc565ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.823 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd03d31-b045-47c1-8e0e-4baa26b13229]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap550cf3a2-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:fc:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805232, 'reachable_time': 31209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350472, 'error': None, 'target': 'ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.835 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bf34d799-1df8-463d-bde9-67f6bf61d34f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:fc48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 805232, 'tstamp': 805232}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350473, 'error': None, 'target': 'ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.848 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[116833a8-5562-4991-afef-177feb06aeff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap550cf3a2-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:fc:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805232, 'reachable_time': 31209, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350474, 'error': None, 'target': 'ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.878 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1883ee3f-3b5e-4b63-913f-fe7350f0cb98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.929 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b62a1e4b-ff62-4b9b-967b-09980e45cfef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.930 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap550cf3a2-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.931 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.931 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap550cf3a2-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.932 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 kernel: tap550cf3a2-60: entered promiscuous mode
Jan 31 03:39:06 np0005603621 NetworkManager[49013]: <info>  [1769848746.9334] manager: (tap550cf3a2-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.935 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap550cf3a2-60, col_values=(('external_ids', {'iface-id': '9f1ac82b-bf6c-400f-a03c-b15ad5392890'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.936 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:06Z|00589|binding|INFO|Releasing lport 9f1ac82b-bf6c-400f-a03c-b15ad5392890 from this chassis (sb_readonly=0)
Jan 31 03:39:06 np0005603621 nova_compute[247399]: 2026-01-31 08:39:06.941 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.942 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/550cf3a2-62ab-424d-afc0-3148a4a687ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/550cf3a2-62ab-424d-afc0-3148a4a687ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.942 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4e247504-f2a2-47a2-a9ee-cafddc84da8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.943 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-550cf3a2-62ab-424d-afc0-3148a4a687ee
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/550cf3a2-62ab-424d-afc0-3148a4a687ee.pid.haproxy
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 550cf3a2-62ab-424d-afc0-3148a4a687ee
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:39:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:06.943 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'env', 'PROCESS_TAG=haproxy-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/550cf3a2-62ab-424d-afc0-3148a4a687ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.197 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848747.1970274, 87faeef8-1f73-43cd-8813-6230f05dafd6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.198 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] VM Started (Lifecycle Event)#033[00m
Jan 31 03:39:07 np0005603621 podman[350548]: 2026-01-31 08:39:07.26170817 +0000 UTC m=+0.046381941 container create 7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:39:07 np0005603621 systemd[1]: Started libpod-conmon-7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1.scope.
Jan 31 03:39:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:39:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31fcaf74c3811207e30d3eb61cb457029236c9e004ce20c45d330a2c99ec7d0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:39:07 np0005603621 podman[350548]: 2026-01-31 08:39:07.323580372 +0000 UTC m=+0.108254173 container init 7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:39:07 np0005603621 podman[350548]: 2026-01-31 08:39:07.328097666 +0000 UTC m=+0.112771437 container start 7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:39:07 np0005603621 podman[350548]: 2026-01-31 08:39:07.235960305 +0000 UTC m=+0.020634116 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:39:07 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [NOTICE]   (350567) : New worker (350569) forked
Jan 31 03:39:07 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [NOTICE]   (350567) : Loading success.
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.389 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.395 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848747.1971853, 87faeef8-1f73-43cd-8813-6230f05dafd6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.395 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:39:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Jan 31 03:39:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:07.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Jan 31 03:39:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Jan 31 03:39:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:07.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.706 247403 DEBUG nova.storage.rbd_utils [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] cloning vms/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk@91e7310158534beea575fe8cfadfa715 to images/9bc0d84f-a933-4fd8-8f17-38e2aca81cce clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.792 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:07 np0005603621 nova_compute[247399]: 2026-01-31 08:39:07.797 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.405 247403 DEBUG nova.storage.rbd_utils [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] flattening images/9bc0d84f-a933-4fd8-8f17-38e2aca81cce flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 862 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 547 KiB/s rd, 3.3 MiB/s wr, 112 op/s
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.837 247403 DEBUG nova.compute.manager [req-d92f97b4-5e71-448b-9bbc-4d33e4fed337 req-206ce2ca-6a16-4e0d-9a0c-34b807bb610c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.837 247403 DEBUG oslo_concurrency.lockutils [req-d92f97b4-5e71-448b-9bbc-4d33e4fed337 req-206ce2ca-6a16-4e0d-9a0c-34b807bb610c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.838 247403 DEBUG oslo_concurrency.lockutils [req-d92f97b4-5e71-448b-9bbc-4d33e4fed337 req-206ce2ca-6a16-4e0d-9a0c-34b807bb610c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.838 247403 DEBUG oslo_concurrency.lockutils [req-d92f97b4-5e71-448b-9bbc-4d33e4fed337 req-206ce2ca-6a16-4e0d-9a0c-34b807bb610c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.838 247403 DEBUG nova.compute.manager [req-d92f97b4-5e71-448b-9bbc-4d33e4fed337 req-206ce2ca-6a16-4e0d-9a0c-34b807bb610c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Processing event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.839 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.842 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.845 247403 INFO nova.virt.libvirt.driver [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Instance spawned successfully.#033[00m
Jan 31 03:39:08 np0005603621 nova_compute[247399]: 2026-01-31 08:39:08.846 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.134 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.134 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848748.842279, 87faeef8-1f73-43cd-8813-6230f05dafd6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.135 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:39:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.580 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.581 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.581 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.582 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.582 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.583 247403 DEBUG nova.virt.libvirt.driver [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:39:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:09.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.864 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:09 np0005603621 nova_compute[247399]: 2026-01-31 08:39:09.867 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.032 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.056 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.165 247403 INFO nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Took 14.20 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.166 247403 DEBUG nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.298 247403 INFO nova.compute.manager [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Took 16.29 seconds to build instance.#033[00m
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 862 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 547 KiB/s rd, 3.3 MiB/s wr, 112 op/s
Jan 31 03:39:10 np0005603621 nova_compute[247399]: 2026-01-31 08:39:10.573 247403 DEBUG oslo_concurrency.lockutils [None req-3130dc9c-ed8b-4ad0-b902-f3461102c4b2 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.281 247403 DEBUG nova.compute.manager [req-7ccc94b8-02a3-4932-9a6d-2c72e1109658 req-88ad180a-e919-4246-a9c6-a4e202a71bf6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.282 247403 DEBUG oslo_concurrency.lockutils [req-7ccc94b8-02a3-4932-9a6d-2c72e1109658 req-88ad180a-e919-4246-a9c6-a4e202a71bf6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.282 247403 DEBUG oslo_concurrency.lockutils [req-7ccc94b8-02a3-4932-9a6d-2c72e1109658 req-88ad180a-e919-4246-a9c6-a4e202a71bf6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.282 247403 DEBUG oslo_concurrency.lockutils [req-7ccc94b8-02a3-4932-9a6d-2c72e1109658 req-88ad180a-e919-4246-a9c6-a4e202a71bf6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.282 247403 DEBUG nova.compute.manager [req-7ccc94b8-02a3-4932-9a6d-2c72e1109658 req-88ad180a-e919-4246-a9c6-a4e202a71bf6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] No waiting events found dispatching network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.283 247403 WARNING nova.compute.manager [req-7ccc94b8-02a3-4932-9a6d-2c72e1109658 req-88ad180a-e919-4246-a9c6-a4e202a71bf6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received unexpected event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f for instance with vm_state active and task_state None.#033[00m
Jan 31 03:39:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:11.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:11 np0005603621 nova_compute[247399]: 2026-01-31 08:39:11.778 247403 DEBUG nova.storage.rbd_utils [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] removing snapshot(91e7310158534beea575fe8cfadfa715) on rbd image(92bd94ef-0031-409f-8c26-23d5f3d952e1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 03:39:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Jan 31 03:39:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Jan 31 03:39:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.446 247403 DEBUG nova.storage.rbd_utils [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] creating snapshot(snap) on rbd image(9bc0d84f-a933-4fd8-8f17-38e2aca81cce) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:39:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.3 MiB/s rd, 9.1 MiB/s wr, 319 op/s
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.746 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.746 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.746 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.746 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.747 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.748 247403 INFO nova.compute.manager [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Terminating instance#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.748 247403 DEBUG nova.compute.manager [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:39:12 np0005603621 kernel: tap77125c39-f9 (unregistering): left promiscuous mode
Jan 31 03:39:12 np0005603621 NetworkManager[49013]: <info>  [1769848752.7890] device (tap77125c39-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.794 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:12Z|00590|binding|INFO|Releasing lport 77125c39-f9c0-43d7-a068-15b155a2121f from this chassis (sb_readonly=0)
Jan 31 03:39:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:12Z|00591|binding|INFO|Setting lport 77125c39-f9c0-43d7-a068-15b155a2121f down in Southbound
Jan 31 03:39:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:12Z|00592|binding|INFO|Removing iface tap77125c39-f9 ovn-installed in OVS
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:12 np0005603621 nova_compute[247399]: 2026-01-31 08:39:12.800 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:12 np0005603621 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 31 03:39:12 np0005603621 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000095.scope: Consumed 4.376s CPU time.
Jan 31 03:39:12 np0005603621 systemd-machined[212769]: Machine qemu-72-instance-00000095 terminated.
Jan 31 03:39:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:12.836 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:50:09 10.100.0.8'], port_security=['fa:16:3e:6e:50:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '87faeef8-1f73-43cd-8813-6230f05dafd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621c17d53cba46d386de8efb560a988e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c8dcf47-c169-4871-843e-ae38c0fc69f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda2ce92-ce79-4f8b-b120-fd83adc645ef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=77125c39-f9c0-43d7-a068-15b155a2121f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:12.838 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 77125c39-f9c0-43d7-a068-15b155a2121f in datapath 550cf3a2-62ab-424d-afc0-3148a4a687ee unbound from our chassis#033[00m
Jan 31 03:39:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:12.839 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 550cf3a2-62ab-424d-afc0-3148a4a687ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:39:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:12.840 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e76c1443-f3bc-4730-9479-4c851c37b65e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:12.841 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee namespace which is not needed anymore#033[00m
Jan 31 03:39:12 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [NOTICE]   (350567) : haproxy version is 2.8.14-c23fe91
Jan 31 03:39:12 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [NOTICE]   (350567) : path to executable is /usr/sbin/haproxy
Jan 31 03:39:12 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [WARNING]  (350567) : Exiting Master process...
Jan 31 03:39:12 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [ALERT]    (350567) : Current worker (350569) exited with code 143 (Terminated)
Jan 31 03:39:12 np0005603621 neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee[350563]: [WARNING]  (350567) : All workers exited. Exiting... (0)
Jan 31 03:39:12 np0005603621 systemd[1]: libpod-7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1.scope: Deactivated successfully.
Jan 31 03:39:12 np0005603621 podman[350744]: 2026-01-31 08:39:12.945940677 +0000 UTC m=+0.037184180 container died 7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:39:12 np0005603621 kernel: tap77125c39-f9: entered promiscuous mode
Jan 31 03:39:12 np0005603621 NetworkManager[49013]: <info>  [1769848752.9617] manager: (tap77125c39-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/270)
Jan 31 03:39:12 np0005603621 systemd-udevd[350723]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:39:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:13Z|00593|binding|INFO|Claiming lport 77125c39-f9c0-43d7-a068-15b155a2121f for this chassis.
Jan 31 03:39:13 np0005603621 kernel: tap77125c39-f9 (unregistering): left promiscuous mode
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:13Z|00594|binding|INFO|77125c39-f9c0-43d7-a068-15b155a2121f: Claiming fa:16:3e:6e:50:09 10.100.0.8
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: hostname: compute-0
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:13Z|00595|binding|INFO|Setting lport 77125c39-f9c0-43d7-a068-15b155a2121f ovn-installed in OVS
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:13Z|00596|if_status|INFO|Dropped 2 log messages in last 748 seconds (most recently, 748 seconds ago) due to excessive rate
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:13Z|00597|if_status|INFO|Not setting lport 77125c39-f9c0-43d7-a068-15b155a2121f down as sb is readonly
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.018 247403 INFO nova.virt.libvirt.driver [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Instance destroyed successfully.#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.018 247403 DEBUG nova.objects.instance [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lazy-loading 'resources' on Instance uuid 87faeef8-1f73-43cd-8813-6230f05dafd6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 virtnodedevd[247723]: ethtool ioctl error on tap77125c39-f9: No such device
Jan 31 03:39:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:13Z|00598|binding|INFO|Releasing lport 77125c39-f9c0-43d7-a068-15b155a2121f from this chassis (sb_readonly=0)
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.082 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:50:09 10.100.0.8'], port_security=['fa:16:3e:6e:50:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '87faeef8-1f73-43cd-8813-6230f05dafd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621c17d53cba46d386de8efb560a988e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c8dcf47-c169-4871-843e-ae38c0fc69f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda2ce92-ce79-4f8b-b120-fd83adc645ef, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=77125c39-f9c0-43d7-a068-15b155a2121f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.087 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1-userdata-shm.mount: Deactivated successfully.
Jan 31 03:39:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-31fcaf74c3811207e30d3eb61cb457029236c9e004ce20c45d330a2c99ec7d0f-merged.mount: Deactivated successfully.
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.153 247403 DEBUG nova.virt.libvirt.vif [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1519507900',display_name='tempest-ServersNegativeTestJSON-server-1519507900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1519507900',id=149,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:39:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='621c17d53cba46d386de8efb560a988e',ramdisk_id='',reservation_id='r-x0aroiew',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-183161027',owner_user_name='tempest-ServersNegativeTestJSON-183161027-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:39:10Z,user_data=None,user_id='516e093a00a44667ba1308900be70d8d',uuid=87faeef8-1f73-43cd-8813-6230f05dafd6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.153 247403 DEBUG nova.network.os_vif_util [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Converting VIF {"id": "77125c39-f9c0-43d7-a068-15b155a2121f", "address": "fa:16:3e:6e:50:09", "network": {"id": "550cf3a2-62ab-424d-afc0-3148a4a687ee", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1062247136-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "621c17d53cba46d386de8efb560a988e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77125c39-f9", "ovs_interfaceid": "77125c39-f9c0-43d7-a068-15b155a2121f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.154 247403 DEBUG nova.network.os_vif_util [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.154 247403 DEBUG os_vif [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.156 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77125c39-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.158 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.160 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.162 247403 INFO os_vif [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:50:09,bridge_name='br-int',has_traffic_filtering=True,id=77125c39-f9c0-43d7-a068-15b155a2121f,network=Network(550cf3a2-62ab-424d-afc0-3148a4a687ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77125c39-f9')#033[00m
Jan 31 03:39:13 np0005603621 podman[350744]: 2026-01-31 08:39:13.16491683 +0000 UTC m=+0.256160333 container cleanup 7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.188 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:50:09 10.100.0.8'], port_security=['fa:16:3e:6e:50:09 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '87faeef8-1f73-43cd-8813-6230f05dafd6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '621c17d53cba46d386de8efb560a988e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1c8dcf47-c169-4871-843e-ae38c0fc69f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bda2ce92-ce79-4f8b-b120-fd83adc645ef, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=77125c39-f9c0-43d7-a068-15b155a2121f) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:13 np0005603621 podman[350801]: 2026-01-31 08:39:13.218194689 +0000 UTC m=+0.038163292 container remove 7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Jan 31 03:39:13 np0005603621 systemd[1]: libpod-conmon-7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1.scope: Deactivated successfully.
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.222 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[21498a42-0a39-4bd5-96da-39a79196e684]: (4, ('Sat Jan 31 08:39:12 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee (7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1)\n7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1\nSat Jan 31 08:39:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee (7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1)\n7173b76006109bf13cc0c773be0ab4ddf83956334369a137096dc74455e275d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.224 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[15974f3b-419c-443c-9a5b-bbb834a3f935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.225 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap550cf3a2-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.227 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 kernel: tap550cf3a2-60: left promiscuous mode
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.234 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.237 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[35c89f2d-fa2f-48f1-975a-02eb3eb9e825]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.253 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[14592813-0f5d-4f3c-82a5-9b916e067db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.255 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d669558c-e2e6-4d37-8a9c-4bc0b8d29194]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.267 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7d87b61a-15c4-48a9-82db-605b14e220e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 805226, 'reachable_time': 23663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350827, 'error': None, 'target': 'ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.270 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-550cf3a2-62ab-424d-afc0-3148a4a687ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.270 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b56cfab2-6747-480e-9fd3-e9b0da7dd013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.271 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 77125c39-f9c0-43d7-a068-15b155a2121f in datapath 550cf3a2-62ab-424d-afc0-3148a4a687ee unbound from our chassis#033[00m
Jan 31 03:39:13 np0005603621 systemd[1]: run-netns-ovnmeta\x2d550cf3a2\x2d62ab\x2d424d\x2dafc0\x2d3148a4a687ee.mount: Deactivated successfully.
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.272 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 550cf3a2-62ab-424d-afc0-3148a4a687ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.273 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[547ba3c3-d504-4d5c-b0dc-87d9749409ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.274 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 77125c39-f9c0-43d7-a068-15b155a2121f in datapath 550cf3a2-62ab-424d-afc0-3148a4a687ee unbound from our chassis#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.275 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 550cf3a2-62ab-424d-afc0-3148a4a687ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:39:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:13.276 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ca3e14-3dba-4114-ab9d-96c0241a3172]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.371701) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848753371777, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 464, "num_deletes": 251, "total_data_size": 414347, "memory_usage": 423072, "flush_reason": "Manual Compaction"}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848753374887, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 410188, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58520, "largest_seqno": 58982, "table_properties": {"data_size": 407517, "index_size": 706, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6695, "raw_average_key_size": 19, "raw_value_size": 402015, "raw_average_value_size": 1158, "num_data_blocks": 30, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848736, "oldest_key_time": 1769848736, "file_creation_time": 1769848753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 3248 microseconds, and 1403 cpu microseconds.
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.374947) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 410188 bytes OK
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.374966) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.376686) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.376711) EVENT_LOG_v1 {"time_micros": 1769848753376704, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.376734) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 411556, prev total WAL file size 411597, number of live WAL files 2.
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.377343) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(400KB)], [131(10MB)]
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848753377397, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11705002, "oldest_snapshot_seqno": -1}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.469 247403 DEBUG nova.compute.manager [req-0376d913-5399-4a16-8d4f-fff667b20b79 req-6cb180d8-baa0-4beb-90a9-bb4ba67fcfc8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-vif-unplugged-77125c39-f9c0-43d7-a068-15b155a2121f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.470 247403 DEBUG oslo_concurrency.lockutils [req-0376d913-5399-4a16-8d4f-fff667b20b79 req-6cb180d8-baa0-4beb-90a9-bb4ba67fcfc8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.470 247403 DEBUG oslo_concurrency.lockutils [req-0376d913-5399-4a16-8d4f-fff667b20b79 req-6cb180d8-baa0-4beb-90a9-bb4ba67fcfc8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.470 247403 DEBUG oslo_concurrency.lockutils [req-0376d913-5399-4a16-8d4f-fff667b20b79 req-6cb180d8-baa0-4beb-90a9-bb4ba67fcfc8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.470 247403 DEBUG nova.compute.manager [req-0376d913-5399-4a16-8d4f-fff667b20b79 req-6cb180d8-baa0-4beb-90a9-bb4ba67fcfc8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] No waiting events found dispatching network-vif-unplugged-77125c39-f9c0-43d7-a068-15b155a2121f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:39:13 np0005603621 nova_compute[247399]: 2026-01-31 08:39:13.470 247403 DEBUG nova.compute.manager [req-0376d913-5399-4a16-8d4f-fff667b20b79 req-6cb180d8-baa0-4beb-90a9-bb4ba67fcfc8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-vif-unplugged-77125c39-f9c0-43d7-a068-15b155a2121f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 8431 keys, 9792457 bytes, temperature: kUnknown
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848753498084, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9792457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9739550, "index_size": 30714, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 222076, "raw_average_key_size": 26, "raw_value_size": 9593009, "raw_average_value_size": 1137, "num_data_blocks": 1170, "num_entries": 8431, "num_filter_entries": 8431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848753, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.498332) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9792457 bytes
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.500061) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.9 rd, 81.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.8 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(52.4) write-amplify(23.9) OK, records in: 8948, records dropped: 517 output_compression: NoCompression
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.500082) EVENT_LOG_v1 {"time_micros": 1769848753500072, "job": 80, "event": "compaction_finished", "compaction_time_micros": 120751, "compaction_time_cpu_micros": 34207, "output_level": 6, "num_output_files": 1, "total_output_size": 9792457, "num_input_records": 8948, "num_output_records": 8431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848753500245, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848753501261, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.377283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.501366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.501372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.501374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.501375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:39:13.501377) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:39:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 946 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 MiB/s rd, 6.8 MiB/s wr, 242 op/s
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.034 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.994 247403 DEBUG nova.compute.manager [req-08cc7b9a-4c5e-436d-aca6-bacbb0500978 req-a0f3ab2c-0db8-4ca1-b2d2-cf12fbbf0ea5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.994 247403 DEBUG oslo_concurrency.lockutils [req-08cc7b9a-4c5e-436d-aca6-bacbb0500978 req-a0f3ab2c-0db8-4ca1-b2d2-cf12fbbf0ea5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.994 247403 DEBUG oslo_concurrency.lockutils [req-08cc7b9a-4c5e-436d-aca6-bacbb0500978 req-a0f3ab2c-0db8-4ca1-b2d2-cf12fbbf0ea5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.995 247403 DEBUG oslo_concurrency.lockutils [req-08cc7b9a-4c5e-436d-aca6-bacbb0500978 req-a0f3ab2c-0db8-4ca1-b2d2-cf12fbbf0ea5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.995 247403 DEBUG nova.compute.manager [req-08cc7b9a-4c5e-436d-aca6-bacbb0500978 req-a0f3ab2c-0db8-4ca1-b2d2-cf12fbbf0ea5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] No waiting events found dispatching network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:39:15 np0005603621 nova_compute[247399]: 2026-01-31 08:39:15.995 247403 WARNING nova.compute.manager [req-08cc7b9a-4c5e-436d-aca6-bacbb0500978 req-a0f3ab2c-0db8-4ca1-b2d2-cf12fbbf0ea5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received unexpected event network-vif-plugged-77125c39-f9c0-43d7-a068-15b155a2121f for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:39:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 940 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 243 op/s
Jan 31 03:39:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:17.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:17 np0005603621 nova_compute[247399]: 2026-01-31 08:39:17.648 247403 INFO nova.virt.libvirt.driver [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Deleting instance files /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6_del#033[00m
Jan 31 03:39:17 np0005603621 nova_compute[247399]: 2026-01-31 08:39:17.650 247403 INFO nova.virt.libvirt.driver [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Deletion of /var/lib/nova/instances/87faeef8-1f73-43cd-8813-6230f05dafd6_del complete#033[00m
Jan 31 03:39:17 np0005603621 nova_compute[247399]: 2026-01-31 08:39:17.923 247403 INFO nova.compute.manager [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Took 5.17 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:39:17 np0005603621 nova_compute[247399]: 2026-01-31 08:39:17.924 247403 DEBUG oslo.service.loopingcall [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:39:17 np0005603621 nova_compute[247399]: 2026-01-31 08:39:17.924 247403 DEBUG nova.compute.manager [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:39:17 np0005603621 nova_compute[247399]: 2026-01-31 08:39:17.925 247403 DEBUG nova.network.neutron [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:39:18 np0005603621 nova_compute[247399]: 2026-01-31 08:39:18.159 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 899 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.9 MiB/s wr, 289 op/s
Jan 31 03:39:19 np0005603621 nova_compute[247399]: 2026-01-31 08:39:19.521 247403 INFO nova.virt.libvirt.driver [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Snapshot image upload complete#033[00m
Jan 31 03:39:19 np0005603621 nova_compute[247399]: 2026-01-31 08:39:19.522 247403 INFO nova.compute.manager [None req-2d3208fc-6301-4a04-b4ec-2b44123f0ae2 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Took 14.23 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 03:39:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:19.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:20 np0005603621 nova_compute[247399]: 2026-01-31 08:39:20.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:20 np0005603621 nova_compute[247399]: 2026-01-31 08:39:20.110 247403 DEBUG nova.network.neutron [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:20 np0005603621 nova_compute[247399]: 2026-01-31 08:39:20.208 247403 INFO nova.compute.manager [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Took 2.28 seconds to deallocate network for instance.#033[00m
Jan 31 03:39:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 899 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.6 MiB/s wr, 138 op/s
Jan 31 03:39:20 np0005603621 nova_compute[247399]: 2026-01-31 08:39:20.542 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:20 np0005603621 nova_compute[247399]: 2026-01-31 08:39:20.542 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:20 np0005603621 nova_compute[247399]: 2026-01-31 08:39:20.657 247403 DEBUG oslo_concurrency.processutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Jan 31 03:39:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Jan 31 03:39:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Jan 31 03:39:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:39:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859631886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:39:21 np0005603621 nova_compute[247399]: 2026-01-31 08:39:21.087 247403 DEBUG oslo_concurrency.processutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:21 np0005603621 nova_compute[247399]: 2026-01-31 08:39:21.092 247403 DEBUG nova.compute.provider_tree [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:39:21 np0005603621 nova_compute[247399]: 2026-01-31 08:39:21.134 247403 DEBUG nova.scheduler.client.report [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:39:21 np0005603621 nova_compute[247399]: 2026-01-31 08:39:21.213 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:21 np0005603621 nova_compute[247399]: 2026-01-31 08:39:21.309 247403 INFO nova.scheduler.client.report [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Deleted allocations for instance 87faeef8-1f73-43cd-8813-6230f05dafd6#033[00m
Jan 31 03:39:21 np0005603621 podman[350855]: 2026-01-31 08:39:21.495847298 +0000 UTC m=+0.047849768 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:39:21 np0005603621 podman[350856]: 2026-01-31 08:39:21.511950588 +0000 UTC m=+0.063952688 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:39:21 np0005603621 nova_compute[247399]: 2026-01-31 08:39:21.518 247403 DEBUG oslo_concurrency.lockutils [None req-9870b0d1-b63f-4759-ab15-5dec6c480e9e 516e093a00a44667ba1308900be70d8d 621c17d53cba46d386de8efb560a988e - - default default] Lock "87faeef8-1f73-43cd-8813-6230f05dafd6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:21.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:21.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Jan 31 03:39:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Jan 31 03:39:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Jan 31 03:39:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 98 KiB/s rd, 6.7 KiB/s wr, 133 op/s
Jan 31 03:39:22 np0005603621 nova_compute[247399]: 2026-01-31 08:39:22.768 247403 DEBUG nova.compute.manager [req-ad16b387-c17d-4dbb-a2f8-680200f1d5b1 req-6732e8c5-6310-4f42-b28d-9866a129f7c9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Received event network-vif-deleted-77125c39-f9c0-43d7-a068-15b155a2121f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:23 np0005603621 nova_compute[247399]: 2026-01-31 08:39:23.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:23.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:23.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 70 KiB/s rd, 4.6 KiB/s wr, 98 op/s
Jan 31 03:39:25 np0005603621 nova_compute[247399]: 2026-01-31 08:39:25.038 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:25.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:25.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 1.6 KiB/s wr, 84 op/s
Jan 31 03:39:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:27.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:27.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:28 np0005603621 nova_compute[247399]: 2026-01-31 08:39:28.018 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848753.0162458, 87faeef8-1f73-43cd-8813-6230f05dafd6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:39:28 np0005603621 nova_compute[247399]: 2026-01-31 08:39:28.018 247403 INFO nova.compute.manager [-] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:39:28 np0005603621 nova_compute[247399]: 2026-01-31 08:39:28.169 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:28 np0005603621 nova_compute[247399]: 2026-01-31 08:39:28.446 247403 DEBUG nova.compute.manager [None req-f962641b-83f0-494e-bcd3-438bfbdf08fc - - - - - -] [instance: 87faeef8-1f73-43cd-8813-6230f05dafd6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:39:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.1 KiB/s wr, 170 op/s
Jan 31 03:39:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:29.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:29.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:30 np0005603621 nova_compute[247399]: 2026-01-31 08:39:30.040 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.3 KiB/s wr, 142 op/s
Jan 31 03:39:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:30.522 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:30.523 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:30.523 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:31 np0005603621 nova_compute[247399]: 2026-01-31 08:39:31.539 247403 DEBUG oslo_concurrency.lockutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:31 np0005603621 nova_compute[247399]: 2026-01-31 08:39:31.539 247403 DEBUG oslo_concurrency.lockutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:31.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:31.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:31 np0005603621 nova_compute[247399]: 2026-01-31 08:39:31.749 247403 DEBUG nova.objects.instance [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'flavor' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:32 np0005603621 nova_compute[247399]: 2026-01-31 08:39:32.456 247403 DEBUG oslo_concurrency.lockutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.3 KiB/s wr, 109 op/s
Jan 31 03:39:33 np0005603621 nova_compute[247399]: 2026-01-31 08:39:33.171 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:33.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:33.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.277 247403 DEBUG oslo_concurrency.lockutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.278 247403 DEBUG oslo_concurrency.lockutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.278 247403 INFO nova.compute.manager [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Attaching volume 3b63f0e3-6562-4449-962b-ff0c0228f219 to /dev/vdb#033[00m
Jan 31 03:39:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.9 KiB/s wr, 83 op/s
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.510 247403 DEBUG os_brick.utils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.511 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.520 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.520 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[3d2ad4b3-4301-4820-89b7-20e37ef6f044]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.521 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.526 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.526 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[50dedf9b-3062-466a-bc57-0a51cecc3eba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.527 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.532 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.532 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[c294e14d-c349-49ca-8eb2-93ac9ec671e1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.533 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[7026f9bb-e2b6-4e07-bb6b-c4608880a68a]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.533 247403 DEBUG oslo_concurrency.processutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.565 247403 DEBUG oslo_concurrency.processutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.568 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.569 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.569 247403 DEBUG os_brick.initiator.connectors.lightos [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.569 247403 DEBUG os_brick.utils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:39:34 np0005603621 nova_compute[247399]: 2026-01-31 08:39:34.570 247403 DEBUG nova.virt.block_device [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating existing volume attachment record: d494679e-1143-4c39-9b30-80982cf8b475 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:39:35 np0005603621 nova_compute[247399]: 2026-01-31 08:39:35.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:35.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:35.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 92 op/s
Jan 31 03:39:37 np0005603621 nova_compute[247399]: 2026-01-31 08:39:37.094 247403 DEBUG nova.objects.instance [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'flavor' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:37 np0005603621 nova_compute[247399]: 2026-01-31 08:39:37.228 247403 DEBUG nova.virt.libvirt.driver [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Attempting to attach volume 3b63f0e3-6562-4449-962b-ff0c0228f219 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:39:37 np0005603621 nova_compute[247399]: 2026-01-31 08:39:37.231 247403 DEBUG nova.virt.libvirt.guest [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-3b63f0e3-6562-4449-962b-ff0c0228f219">
Jan 31 03:39:37 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:39:37 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:39:37 np0005603621 nova_compute[247399]:  <serial>3b63f0e3-6562-4449-962b-ff0c0228f219</serial>
Jan 31 03:39:37 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:39:37 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:39:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:37.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:37.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:38 np0005603621 nova_compute[247399]: 2026-01-31 08:39:38.174 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 85 op/s
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:39:38
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms']
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:39:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:39:39 np0005603621 nova_compute[247399]: 2026-01-31 08:39:39.182 247403 DEBUG nova.virt.libvirt.driver [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:39 np0005603621 nova_compute[247399]: 2026-01-31 08:39:39.183 247403 DEBUG nova.virt.libvirt.driver [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:39 np0005603621 nova_compute[247399]: 2026-01-31 08:39:39.183 247403 DEBUG nova.virt.libvirt.driver [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:39:39 np0005603621 nova_compute[247399]: 2026-01-31 08:39:39.183 247403 DEBUG nova.virt.libvirt.driver [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No VIF found with MAC fa:16:3e:fc:98:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:39:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:39.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:39.774 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:39.775 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:39:39 np0005603621 nova_compute[247399]: 2026-01-31 08:39:39.775 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:40 np0005603621 nova_compute[247399]: 2026-01-31 08:39:40.044 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 354 KiB/s rd, 22 KiB/s wr, 27 op/s
Jan 31 03:39:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:40.776 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:41 np0005603621 nova_compute[247399]: 2026-01-31 08:39:41.160 247403 DEBUG oslo_concurrency.lockutils [None req-8458aee9-f853-4c97-97e0-e02cc3bb5efc b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 6.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:41 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:41Z|00599|binding|INFO|Releasing lport 54969bc0-ee8d-420c-ac0c-dd4f9410e42c from this chassis (sb_readonly=0)
Jan 31 03:39:41 np0005603621 nova_compute[247399]: 2026-01-31 08:39:41.451 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:41.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:41.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 671 KiB/s rd, 32 KiB/s wr, 58 op/s
Jan 31 03:39:43 np0005603621 nova_compute[247399]: 2026-01-31 08:39:43.177 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:43 np0005603621 nova_compute[247399]: 2026-01-31 08:39:43.377 247403 INFO nova.compute.manager [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Rescuing#033[00m
Jan 31 03:39:43 np0005603621 nova_compute[247399]: 2026-01-31 08:39:43.377 247403 DEBUG oslo_concurrency.lockutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:43 np0005603621 nova_compute[247399]: 2026-01-31 08:39:43.378 247403 DEBUG oslo_concurrency.lockutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:43 np0005603621 nova_compute[247399]: 2026-01-31 08:39:43.378 247403 DEBUG nova.network.neutron [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:39:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:43.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:43.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 534 KiB/s rd, 30 KiB/s wr, 53 op/s
Jan 31 03:39:45 np0005603621 nova_compute[247399]: 2026-01-31 08:39:45.046 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:45.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:45.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 534 KiB/s rd, 30 KiB/s wr, 54 op/s
Jan 31 03:39:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:47.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:47.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:48 np0005603621 nova_compute[247399]: 2026-01-31 08:39:48.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 514 KiB/s rd, 21 KiB/s wr, 46 op/s
Jan 31 03:39:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:39:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 13K writes, 59K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1612 writes, 7245 keys, 1610 commit groups, 1.0 writes per commit group, ingest: 10.68 MB, 0.02 MB/s#012Interval WAL: 1612 writes, 1610 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     24.0      3.30              0.19        40    0.083       0      0       0.0       0.0#012  L6      1/0    9.34 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.9     46.6     39.5      9.80              0.88        39    0.251    261K    21K       0.0       0.0#012 Sum      1/0    9.34 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.9     34.9     35.6     13.10              1.07        79    0.166    261K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2     47.4     47.1      1.62              0.21        12    0.135     53K   3080       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     46.6     39.5      9.80              0.88        39    0.251    261K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     24.0      3.30              0.19        39    0.085       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.077, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.46 GB write, 0.10 MB/s write, 0.45 GB read, 0.10 MB/s read, 13.1 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 1.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 50.62 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000466 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2924,48.67 MB,16.0109%) FilterBlock(80,736.23 KB,0.236506%) IndexBlock(80,1.23 MB,0.403158%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013030829785501581 of space, bias 1.0, pg target 3.9092489356504743 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021646862786152987 of space, bias 1.0, pg target 0.6429118247487438 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007032958484552392 of space, bias 1.0, pg target 2.0887886699120606 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:39:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 03:39:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:49.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:49.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:50 np0005603621 nova_compute[247399]: 2026-01-31 08:39:50.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 10 KiB/s wr, 31 op/s
Jan 31 03:39:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:51 np0005603621 nova_compute[247399]: 2026-01-31 08:39:51.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:51 np0005603621 nova_compute[247399]: 2026-01-31 08:39:51.594 247403 DEBUG nova.network.neutron [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:39:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:51.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:51.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:51 np0005603621 nova_compute[247399]: 2026-01-31 08:39:51.742 247403 DEBUG oslo_concurrency.lockutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:39:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 10 KiB/s wr, 31 op/s
Jan 31 03:39:52 np0005603621 podman[350994]: 2026-01-31 08:39:52.487627428 +0000 UTC m=+0.038576434 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:52 np0005603621 nova_compute[247399]: 2026-01-31 08:39:52.488 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:39:52 np0005603621 podman[350995]: 2026-01-31 08:39:52.515399628 +0000 UTC m=+0.068015127 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:39:53 np0005603621 nova_compute[247399]: 2026-01-31 08:39:53.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:53.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 31 03:39:54 np0005603621 kernel: tape31170c4-45 (unregistering): left promiscuous mode
Jan 31 03:39:54 np0005603621 NetworkManager[49013]: <info>  [1769848794.7375] device (tape31170c4-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:39:54 np0005603621 nova_compute[247399]: 2026-01-31 08:39:54.743 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:54Z|00600|binding|INFO|Releasing lport e31170c4-45cd-451e-9d63-81bc41146ca1 from this chassis (sb_readonly=0)
Jan 31 03:39:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:54Z|00601|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 down in Southbound
Jan 31 03:39:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:39:54Z|00602|binding|INFO|Removing iface tape31170c4-45 ovn-installed in OVS
Jan 31 03:39:54 np0005603621 nova_compute[247399]: 2026-01-31 08:39:54.746 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:54 np0005603621 nova_compute[247399]: 2026-01-31 08:39:54.752 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:54 np0005603621 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 31 03:39:54 np0005603621 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000093.scope: Consumed 14.837s CPU time.
Jan 31 03:39:54 np0005603621 systemd-machined[212769]: Machine qemu-71-instance-00000093 terminated.
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.050 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.105 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:98:9d 10.100.0.8'], port_security=['fa:16:3e:fc:98:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '47937952-48ef-482c-9abd-e199e46c505b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e31170c4-45cd-451e-9d63-81bc41146ca1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.107 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e31170c4-45cd-451e-9d63-81bc41146ca1 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.109 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.121 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[70668523-c30f-4ba5-a19d-8c085ad860c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.150 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[93fa1a4d-8dcd-4e4e-a401-1afcdab5864d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.154 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[76ab1ab2-5592-4562-a7da-64b3584b1e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.185 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e16827-1214-4af9-9412-eb4d9b3fab6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.202 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c8855c-eb55-4d66-ba60-de8badee95ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351108, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.214 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[acde6544-d8ba-409c-bf2a-e99c0aa786c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795049, 'tstamp': 795049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351109, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795051, 'tstamp': 795051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351109, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.215 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.220 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.221 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.221 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.221 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:39:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:39:55.222 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.507 247403 INFO nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.512 247403 INFO nova.virt.libvirt.driver [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance destroyed successfully.#033[00m
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.512 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:55.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:55.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:39:55 np0005603621 nova_compute[247399]: 2026-01-31 08:39:55.851 247403 INFO nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Attempting a stable device rescue#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.378 247403 DEBUG nova.compute.manager [req-65edc5fd-e48b-4eae-b350-832a3ca36acc req-fb588fb6-bf8c-421f-8796-4f72c900fb9c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.379 247403 DEBUG oslo_concurrency.lockutils [req-65edc5fd-e48b-4eae-b350-832a3ca36acc req-fb588fb6-bf8c-421f-8796-4f72c900fb9c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.379 247403 DEBUG oslo_concurrency.lockutils [req-65edc5fd-e48b-4eae-b350-832a3ca36acc req-fb588fb6-bf8c-421f-8796-4f72c900fb9c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.380 247403 DEBUG oslo_concurrency.lockutils [req-65edc5fd-e48b-4eae-b350-832a3ca36acc req-fb588fb6-bf8c-421f-8796-4f72c900fb9c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.380 247403 DEBUG nova.compute.manager [req-65edc5fd-e48b-4eae-b350-832a3ca36acc req-fb588fb6-bf8c-421f-8796-4f72c900fb9c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.380 247403 WARNING nova.compute.manager [req-65edc5fd-e48b-4eae-b350-832a3ca36acc req-fb588fb6-bf8c-421f-8796-4f72c900fb9c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:39:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 511 B/s rd, 3.4 KiB/s wr, 1 op/s
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.716 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.720 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.721 247403 INFO nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Creating image(s)#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.745 247403 DEBUG nova.storage.rbd_utils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.748 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:56 np0005603621 nova_compute[247399]: 2026-01-31 08:39:56.991 247403 DEBUG nova.storage.rbd_utils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.018 247403 DEBUG nova.storage.rbd_utils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.021 247403 DEBUG oslo_concurrency.lockutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "921df4ff6515ae0f972c32f5442cf280252566b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.022 247403 DEBUG oslo_concurrency.lockutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "921df4ff6515ae0f972c32f5442cf280252566b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.299 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.300 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.301 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:39:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:39:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:57.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:39:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:57.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.725 247403 DEBUG nova.virt.libvirt.imagebackend [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/9bc0d84f-a933-4fd8-8f17-38e2aca81cce/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/9bc0d84f-a933-4fd8-8f17-38e2aca81cce/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.777 247403 DEBUG nova.virt.libvirt.imagebackend [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Selected location: {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/9bc0d84f-a933-4fd8-8f17-38e2aca81cce/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.778 247403 DEBUG nova.storage.rbd_utils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] cloning images/9bc0d84f-a933-4fd8-8f17-38e2aca81cce@snap to None/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 03:39:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Jan 31 03:39:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Jan 31 03:39:57 np0005603621 podman[351350]: 2026-01-31 08:39:57.810981832 +0000 UTC m=+0.062848453 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:39:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Jan 31 03:39:57 np0005603621 podman[351350]: 2026-01-31 08:39:57.900204141 +0000 UTC m=+0.152070742 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.920 247403 DEBUG oslo_concurrency.lockutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "921df4ff6515ae0f972c32f5442cf280252566b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:57 np0005603621 nova_compute[247399]: 2026-01-31 08:39:57.976 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.121 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.123 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Start _get_guest_xml network_info=[{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "vif_mac": "fa:16:3e:fc:98:9d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '9bc0d84f-a933-4fd8-8f17-38e2aca81cce', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3b63f0e3-6562-4449-962b-ff0c0228f219', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3b63f0e3-6562-4449-962b-ff0c0228f219', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'attached_at': '', 'detached_at': '', 'volume_id': '3b63f0e3-6562-4449-962b-ff0c0228f219', 'serial': '3b63f0e3-6562-4449-962b-ff0c0228f219'}, 'device_type': 'disk', 'boot_index': None, 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'attachment_id': 'd494679e-1143-4c39-9b30-80982cf8b475', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.124 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'resources' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.226 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:39:58 np0005603621 podman[351575]: 2026-01-31 08:39:58.369351445 +0000 UTC m=+0.047875429 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.375 247403 WARNING nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.380 247403 DEBUG nova.virt.libvirt.host [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.380 247403 DEBUG nova.virt.libvirt.host [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:39:58 np0005603621 podman[351575]: 2026-01-31 08:39:58.382116899 +0000 UTC m=+0.060640863 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.383 247403 DEBUG nova.virt.libvirt.host [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.383 247403 DEBUG nova.virt.libvirt.host [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.384 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.385 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.385 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.385 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.385 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.385 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.virt.hardware [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.386 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:39:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 10 KiB/s wr, 2 op/s
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.552 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:58 np0005603621 podman[351642]: 2026-01-31 08:39:58.572194856 +0000 UTC m=+0.063606938 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, io.openshift.expose-services=, release=1793, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.647 247403 DEBUG nova.compute.manager [req-d04c453a-5efa-44af-b8a8-8d91a5805db9 req-08721f31-59ff-4796-8573-c3e6aa06823f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.647 247403 DEBUG oslo_concurrency.lockutils [req-d04c453a-5efa-44af-b8a8-8d91a5805db9 req-08721f31-59ff-4796-8573-c3e6aa06823f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.648 247403 DEBUG oslo_concurrency.lockutils [req-d04c453a-5efa-44af-b8a8-8d91a5805db9 req-08721f31-59ff-4796-8573-c3e6aa06823f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.648 247403 DEBUG oslo_concurrency.lockutils [req-d04c453a-5efa-44af-b8a8-8d91a5805db9 req-08721f31-59ff-4796-8573-c3e6aa06823f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.648 247403 DEBUG nova.compute.manager [req-d04c453a-5efa-44af-b8a8-8d91a5805db9 req-08721f31-59ff-4796-8573-c3e6aa06823f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.648 247403 WARNING nova.compute.manager [req-d04c453a-5efa-44af-b8a8-8d91a5805db9 req-08721f31-59ff-4796-8573-c3e6aa06823f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:39:58 np0005603621 podman[351664]: 2026-01-31 08:39:58.667432045 +0000 UTC m=+0.081707530 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, name=keepalived, vcs-type=git, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 31 03:39:58 np0005603621 podman[351642]: 2026-01-31 08:39:58.718181684 +0000 UTC m=+0.209593746 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.buildah.version=1.28.2, release=1793)
Jan 31 03:39:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:39:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:39:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:39:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529594449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:39:58 np0005603621 nova_compute[247399]: 2026-01-31 08:39:58.994 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:59 np0005603621 nova_compute[247399]: 2026-01-31 08:39:59.034 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2465396187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:39:59 np0005603621 nova_compute[247399]: 2026-01-31 08:39:59.464 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:39:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ba262693-2570-4be5-b686-2291f79fd5b2 does not exist
Jan 31 03:39:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c03eeb46-e55e-4e23-954d-33f0c7e93a46 does not exist
Jan 31 03:39:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5371a6bf-22ff-4045-af2e-25f0d1f3aff7 does not exist
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:39:59 np0005603621 nova_compute[247399]: 2026-01-31 08:39:59.618 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:39:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:39:59.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:39:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:39:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:39:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:39:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:40:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.019220354 +0000 UTC m=+0.034350260 container create 2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:40:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:40:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2207248805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:40:00 np0005603621 systemd[1]: Started libpod-conmon-2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297.scope.
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.052 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.055 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.056 247403 DEBUG nova.virt.libvirt.vif [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-115398860',display_name='tempest-ServerStableDeviceRescueTest-server-115398860',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-115398860',id=147,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwSJ/FWYSnlwbHs0Wzm91aUcuaXBYBlt6yIH3QV0xhWQdod7WEXhLlfK4Hd2zTjsdf/eKXM6/AA0TDEI9fUsNHKQsyPifodt6RnLsQxSTsB0zuFfxi198QzASAfXJdIAg==',key_name='tempest-keypair-557896509',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:38:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-a4dec496',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:39:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b6733330b634472ca8c21316f1ee5057',uuid=92bd94ef-0031-409f-8c26-23d5f3d952e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "vif_mac": "fa:16:3e:fc:98:9d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.056 247403 DEBUG nova.network.os_vif_util [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "vif_mac": "fa:16:3e:fc:98:9d"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.057 247403 DEBUG nova.network.os_vif_util [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.058 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.093441377 +0000 UTC m=+0.108571313 container init 2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.003132833 +0000 UTC m=+0.018262779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.100527041 +0000 UTC m=+0.115656947 container start 2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:40:00 np0005603621 quizzical_benz[352047]: 167 167
Jan 31 03:40:00 np0005603621 systemd[1]: libpod-2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297.scope: Deactivated successfully.
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.104260119 +0000 UTC m=+0.119390055 container attach 2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.105815409 +0000 UTC m=+0.120945345 container died 2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:40:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e5c363d7aa3eb4ffed1542feb1d3f521f5f29e526204e123248b82d122aed49e-merged.mount: Deactivated successfully.
Jan 31 03:40:00 np0005603621 podman[352029]: 2026-01-31 08:40:00.144276188 +0000 UTC m=+0.159406104 container remove 2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_benz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:40:00 np0005603621 systemd[1]: libpod-conmon-2a02c542c72d408c2817e9a97add9f53917a26e52aef81fc034e18d38234a297.scope: Deactivated successfully.
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.238 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <uuid>92bd94ef-0031-409f-8c26-23d5f3d952e1</uuid>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <name>instance-00000093</name>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-115398860</nova:name>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:39:58</nova:creationTime>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:user uuid="b6733330b634472ca8c21316f1ee5057">tempest-ServerStableDeviceRescueTest-319343227-project-member</nova:user>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:project uuid="1e29363ca464487b931af54fe14166b1">tempest-ServerStableDeviceRescueTest-319343227</nova:project>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <nova:port uuid="e31170c4-45cd-451e-9d63-81bc41146ca1">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <entry name="serial">92bd94ef-0031-409f-8c26-23d5f3d952e1</entry>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <entry name="uuid">92bd94ef-0031-409f-8c26-23d5f3d952e1</entry>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-3b63f0e3-6562-4449-962b-ff0c0228f219">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <serial>3b63f0e3-6562-4449-962b-ff0c0228f219</serial>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.rescue">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <target dev="vdc" bus="virtio"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <boot order="1"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:fc:98:9d"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <target dev="tape31170c4-45"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/console.log" append="off"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:40:00 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:40:00 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:40:00 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:40:00 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.244 247403 INFO nova.virt.libvirt.driver [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance destroyed successfully.#033[00m
Jan 31 03:40:00 np0005603621 podman[352071]: 2026-01-31 08:40:00.276139809 +0000 UTC m=+0.039936567 container create 5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:40:00 np0005603621 systemd[1]: Started libpod-conmon-5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014.scope.
Jan 31 03:40:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a90b393b21740671c3e9ce270c6428de3c150319ebe31c75edfcc20d4d1674/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a90b393b21740671c3e9ce270c6428de3c150319ebe31c75edfcc20d4d1674/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a90b393b21740671c3e9ce270c6428de3c150319ebe31c75edfcc20d4d1674/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a90b393b21740671c3e9ce270c6428de3c150319ebe31c75edfcc20d4d1674/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31a90b393b21740671c3e9ce270c6428de3c150319ebe31c75edfcc20d4d1674/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:00 np0005603621 podman[352071]: 2026-01-31 08:40:00.341228712 +0000 UTC m=+0.105025490 container init 5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:40:00 np0005603621 podman[352071]: 2026-01-31 08:40:00.348061789 +0000 UTC m=+0.111858557 container start 5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:40:00 np0005603621 podman[352071]: 2026-01-31 08:40:00.35156052 +0000 UTC m=+0.115357288 container attach 5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:40:00 np0005603621 podman[352071]: 2026-01-31 08:40:00.259227793 +0000 UTC m=+0.023024611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:40:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 10 KiB/s wr, 2 op/s
Jan 31 03:40:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.915 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.916 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.916 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.916 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.916 247403 DEBUG nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] No VIF found with MAC fa:16:3e:fc:98:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.918 247403 INFO nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Using config drive#033[00m
Jan 31 03:40:00 np0005603621 nova_compute[247399]: 2026-01-31 08:40:00.940 247403 DEBUG nova.storage.rbd_utils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:40:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 03:40:01 np0005603621 nova_compute[247399]: 2026-01-31 08:40:01.021 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:01 np0005603621 nervous_ritchie[352089]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:40:01 np0005603621 nervous_ritchie[352089]: --> relative data size: 1.0
Jan 31 03:40:01 np0005603621 nervous_ritchie[352089]: --> All data devices are unavailable
Jan 31 03:40:01 np0005603621 systemd[1]: libpod-5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014.scope: Deactivated successfully.
Jan 31 03:40:01 np0005603621 nova_compute[247399]: 2026-01-31 08:40:01.140 247403 DEBUG nova.objects.instance [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'keypairs' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:01 np0005603621 podman[352122]: 2026-01-31 08:40:01.163841983 +0000 UTC m=+0.023898489 container died 5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-31a90b393b21740671c3e9ce270c6428de3c150319ebe31c75edfcc20d4d1674-merged.mount: Deactivated successfully.
Jan 31 03:40:01 np0005603621 podman[352122]: 2026-01-31 08:40:01.216030228 +0000 UTC m=+0.076086714 container remove 5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:40:01 np0005603621 systemd[1]: libpod-conmon-5567f4fbb44fca6b587326160f16823224be87faab017501a0206386a254a014.scope: Deactivated successfully.
Jan 31 03:40:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:01.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:01.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.713486339 +0000 UTC m=+0.035048242 container create bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 03:40:01 np0005603621 systemd[1]: Started libpod-conmon-bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81.scope.
Jan 31 03:40:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.773305346 +0000 UTC m=+0.094867249 container init bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.777826289 +0000 UTC m=+0.099388182 container start bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.780771182 +0000 UTC m=+0.102333085 container attach bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:40:01 np0005603621 nervous_heisenberg[352294]: 167 167
Jan 31 03:40:01 np0005603621 systemd[1]: libpod-bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81.scope: Deactivated successfully.
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.782233089 +0000 UTC m=+0.103794982 container died bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.698517345 +0000 UTC m=+0.020079278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:40:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aed8b5631a21d84bee1e25b65275335b1bc7df2d5b0bdf0945897fe1886de912-merged.mount: Deactivated successfully.
Jan 31 03:40:01 np0005603621 podman[352278]: 2026-01-31 08:40:01.817425875 +0000 UTC m=+0.138987778 container remove bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:40:01 np0005603621 systemd[1]: libpod-conmon-bec46c78b475e3276f5485c9858b000088869acd6b8a6483f72093debe9aba81.scope: Deactivated successfully.
Jan 31 03:40:01 np0005603621 podman[352319]: 2026-01-31 08:40:01.95543931 +0000 UTC m=+0.041740914 container create aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_noether, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:01 np0005603621 systemd[1]: Started libpod-conmon-aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df.scope.
Jan 31 03:40:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:40:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3c9f152964d56047f281548819121d21e22ed90fef2e6b45525a717a63cccd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3c9f152964d56047f281548819121d21e22ed90fef2e6b45525a717a63cccd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3c9f152964d56047f281548819121d21e22ed90fef2e6b45525a717a63cccd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c3c9f152964d56047f281548819121d21e22ed90fef2e6b45525a717a63cccd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:02 np0005603621 podman[352319]: 2026-01-31 08:40:01.934250358 +0000 UTC m=+0.020551982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:40:02 np0005603621 podman[352319]: 2026-01-31 08:40:02.034003011 +0000 UTC m=+0.120304615 container init aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:40:02 np0005603621 podman[352319]: 2026-01-31 08:40:02.040633111 +0000 UTC m=+0.126934735 container start aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:40:02 np0005603621 podman[352319]: 2026-01-31 08:40:02.043400959 +0000 UTC m=+0.129702583 container attach aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.127 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.211 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.211 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.211 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.211 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.212 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.212 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.212 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.224 247403 INFO nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Creating config drive at /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config.rescue#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.228 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkpkqn9dp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.284 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.285 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.285 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.285 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.286 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.358 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpkpkqn9dp" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.385 247403 DEBUG nova.storage.rbd_utils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] rbd image 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.390 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config.rescue 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 12 KiB/s wr, 56 op/s
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.535 247403 DEBUG oslo_concurrency.processutils [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config.rescue 92bd94ef-0031-409f-8c26-23d5f3d952e1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:02 np0005603621 nova_compute[247399]: 2026-01-31 08:40:02.535 247403 INFO nova.virt.libvirt.driver [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Deleting local config drive /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1/disk.config.rescue because it was imported into RBD.#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:40:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515860974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.261 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.976s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:03 np0005603621 kernel: tape31170c4-45: entered promiscuous mode
Jan 31 03:40:03 np0005603621 NetworkManager[49013]: <info>  [1769848803.2769] manager: (tape31170c4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:03Z|00603|binding|INFO|Claiming lport e31170c4-45cd-451e-9d63-81bc41146ca1 for this chassis.
Jan 31 03:40:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:03Z|00604|binding|INFO|e31170c4-45cd-451e-9d63-81bc41146ca1: Claiming fa:16:3e:fc:98:9d 10.100.0.8
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.280 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:03Z|00605|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 ovn-installed in OVS
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603621 systemd-machined[212769]: New machine qemu-73-instance-00000093.
Jan 31 03:40:03 np0005603621 systemd-udevd[352418]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.303 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:98:9d 10.100.0.8'], port_security=['fa:16:3e:fc:98:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '47937952-48ef-482c-9abd-e199e46c505b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e31170c4-45cd-451e-9d63-81bc41146ca1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:03Z|00606|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 up in Southbound
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.304 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e31170c4-45cd-451e-9d63-81bc41146ca1 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 bound to our chassis#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.306 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:40:03 np0005603621 systemd[1]: Started Virtual Machine qemu-73-instance-00000093.
Jan 31 03:40:03 np0005603621 loving_noether[352333]: {
Jan 31 03:40:03 np0005603621 loving_noether[352333]:    "0": [
Jan 31 03:40:03 np0005603621 loving_noether[352333]:        {
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "devices": [
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "/dev/loop3"
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            ],
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "lv_name": "ceph_lv0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "lv_size": "7511998464",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "name": "ceph_lv0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "tags": {
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.cluster_name": "ceph",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.crush_device_class": "",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.encrypted": "0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.osd_id": "0",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.type": "block",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:                "ceph.vdo": "0"
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            },
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "type": "block",
Jan 31 03:40:03 np0005603621 loving_noether[352333]:            "vg_name": "ceph_vg0"
Jan 31 03:40:03 np0005603621 loving_noether[352333]:        }
Jan 31 03:40:03 np0005603621 loving_noether[352333]:    ]
Jan 31 03:40:03 np0005603621 loving_noether[352333]: }
Jan 31 03:40:03 np0005603621 NetworkManager[49013]: <info>  [1769848803.3169] device (tape31170c4-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:40:03 np0005603621 NetworkManager[49013]: <info>  [1769848803.3175] device (tape31170c4-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.323 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4fed4729-4796-4b68-8219-d265beb14526]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:03 np0005603621 systemd[1]: libpod-aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df.scope: Deactivated successfully.
Jan 31 03:40:03 np0005603621 conmon[352333]: conmon aa37edfc6b7d1f0e542b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df.scope/container/memory.events
Jan 31 03:40:03 np0005603621 podman[352319]: 2026-01-31 08:40:03.342971921 +0000 UTC m=+1.429273525 container died aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_noether, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.347 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a7aa0d73-151e-410c-8010-39366b400072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.351 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6fee0799-a244-4827-9714-4dd421d7a973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5c3c9f152964d56047f281548819121d21e22ed90fef2e6b45525a717a63cccd-merged.mount: Deactivated successfully.
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.372 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[07a71baf-6984-4d07-b6c7-a019e451fc3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.387 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7e9179-9bf3-4648-a20a-882b3c83f6ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352444, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.400 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3055bb6b-cd2a-426d-a02a-3ea398627dd4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795049, 'tstamp': 795049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352445, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795051, 'tstamp': 795051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352445, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.402 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:03 np0005603621 podman[352319]: 2026-01-31 08:40:03.403179251 +0000 UTC m=+1.489480855 container remove aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 03:40:03 np0005603621 systemd[1]: libpod-conmon-aa37edfc6b7d1f0e542b3e65aa9af32eb4c4bada8037277a9967c258de93d5df.scope: Deactivated successfully.
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.456 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.457 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.459 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.459 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.459 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:03.460 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:03.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.664 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.666 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.666 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.666 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.669 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.669 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:40:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:03.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.763 247403 DEBUG nova.compute.manager [req-d9881ed0-9781-4669-b056-2d775df94ed6 req-7ea11863-3a1f-4f8a-8ded-a04b8f951019 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.763 247403 DEBUG oslo_concurrency.lockutils [req-d9881ed0-9781-4669-b056-2d775df94ed6 req-7ea11863-3a1f-4f8a-8ded-a04b8f951019 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.764 247403 DEBUG oslo_concurrency.lockutils [req-d9881ed0-9781-4669-b056-2d775df94ed6 req-7ea11863-3a1f-4f8a-8ded-a04b8f951019 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.764 247403 DEBUG oslo_concurrency.lockutils [req-d9881ed0-9781-4669-b056-2d775df94ed6 req-7ea11863-3a1f-4f8a-8ded-a04b8f951019 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.764 247403 DEBUG nova.compute.manager [req-d9881ed0-9781-4669-b056-2d775df94ed6 req-7ea11863-3a1f-4f8a-8ded-a04b8f951019 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.764 247403 WARNING nova.compute.manager [req-d9881ed0-9781-4669-b056-2d775df94ed6 req-7ea11863-3a1f-4f8a-8ded-a04b8f951019 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state rescuing.#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.834 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.835 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3878MB free_disk=20.714733123779297GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.836 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:03 np0005603621 nova_compute[247399]: 2026-01-31 08:40:03.836 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:03 np0005603621 podman[352584]: 2026-01-31 08:40:03.942071895 +0000 UTC m=+0.032621374 container create 48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_raman, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:40:03 np0005603621 systemd[1]: Started libpod-conmon-48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca.scope.
Jan 31 03:40:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:40:04 np0005603621 podman[352584]: 2026-01-31 08:40:04.007967505 +0000 UTC m=+0.098517014 container init 48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:40:04 np0005603621 podman[352584]: 2026-01-31 08:40:04.014711218 +0000 UTC m=+0.105260697 container start 48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:40:04 np0005603621 interesting_raman[352637]: 167 167
Jan 31 03:40:04 np0005603621 podman[352584]: 2026-01-31 08:40:04.019227272 +0000 UTC m=+0.109776751 container attach 48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:40:04 np0005603621 conmon[352637]: conmon 48a13e3af22f2df8320a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca.scope/container/memory.events
Jan 31 03:40:04 np0005603621 systemd[1]: libpod-48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca.scope: Deactivated successfully.
Jan 31 03:40:04 np0005603621 podman[352584]: 2026-01-31 08:40:04.020173892 +0000 UTC m=+0.110723381 container died 48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:40:04 np0005603621 podman[352584]: 2026-01-31 08:40:03.929122725 +0000 UTC m=+0.019672224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:40:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f3e5ae6702ff03a012f7074b857b90ec5c0543fdaa17f5927f4423a0b14db85c-merged.mount: Deactivated successfully.
Jan 31 03:40:04 np0005603621 podman[352584]: 2026-01-31 08:40:04.057411642 +0000 UTC m=+0.147961111 container remove 48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:40:04 np0005603621 systemd[1]: libpod-conmon-48a13e3af22f2df8320a9ee83bd21ae3c52192fab14fb5b7acd97dd9b0f34dca.scope: Deactivated successfully.
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.162 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 92bd94ef-0031-409f-8c26-23d5f3d952e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.162 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848804.161945, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.162 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.167 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c95caf87-5069-4b70-9023-d3c2d911e87d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.167 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.168 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.168 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.174 247403 DEBUG nova.compute.manager [None req-04c89932-b132-44f4-910b-2a0073002e69 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:40:04 np0005603621 podman[352703]: 2026-01-31 08:40:04.185886936 +0000 UTC m=+0.042296562 container create 6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:40:04 np0005603621 systemd[1]: Started libpod-conmon-6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79.scope.
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.249 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:40:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83abe0da907e205b4f60350696f9ffbca7df16c537e0926f502e54ec29507150/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83abe0da907e205b4f60350696f9ffbca7df16c537e0926f502e54ec29507150/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83abe0da907e205b4f60350696f9ffbca7df16c537e0926f502e54ec29507150/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83abe0da907e205b4f60350696f9ffbca7df16c537e0926f502e54ec29507150/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:40:04 np0005603621 podman[352703]: 2026-01-31 08:40:04.166513811 +0000 UTC m=+0.022923457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:40:04 np0005603621 podman[352703]: 2026-01-31 08:40:04.269115364 +0000 UTC m=+0.125525010 container init 6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:40:04 np0005603621 podman[352703]: 2026-01-31 08:40:04.274622079 +0000 UTC m=+0.131031705 container start 6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:40:04 np0005603621 podman[352703]: 2026-01-31 08:40:04.283082927 +0000 UTC m=+0.139492573 container attach 6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.457 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.462 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:40:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 12 KiB/s wr, 56 op/s
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.631 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.632 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848804.1716244, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.632 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Started (Lifecycle Event)#033[00m
Jan 31 03:40:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:40:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1699329261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.700 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.704 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.807 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.811 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.818 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.998 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:40:04 np0005603621 nova_compute[247399]: 2026-01-31 08:40:04.999 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]: {
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:        "osd_id": 0,
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:        "type": "bluestore"
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]:    }
Jan 31 03:40:05 np0005603621 gracious_feistel[352720]: }
Jan 31 03:40:05 np0005603621 nova_compute[247399]: 2026-01-31 08:40:05.054 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:05 np0005603621 systemd[1]: libpod-6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79.scope: Deactivated successfully.
Jan 31 03:40:05 np0005603621 podman[352703]: 2026-01-31 08:40:05.062657123 +0000 UTC m=+0.919066749 container died 6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:40:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83abe0da907e205b4f60350696f9ffbca7df16c537e0926f502e54ec29507150-merged.mount: Deactivated successfully.
Jan 31 03:40:05 np0005603621 podman[352703]: 2026-01-31 08:40:05.121572671 +0000 UTC m=+0.977982297 container remove 6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:40:05 np0005603621 systemd[1]: libpod-conmon-6862fc397bd203194fea48ca0ca45e962e8ab5332e16118a75bc8d90c03eee79.scope: Deactivated successfully.
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:40:05 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6953829e-440b-4c1d-a9ba-d697742aa9f5 does not exist
Jan 31 03:40:05 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5b90467b-5de4-45af-b875-160934cb252d does not exist
Jan 31 03:40:05 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9b6aaad5-0ba9-4ac3-8763-325d96c29b5c does not exist
Jan 31 03:40:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:05.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:05.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Jan 31 03:40:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Jan 31 03:40:05 np0005603621 nova_compute[247399]: 2026-01-31 08:40:05.986 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:05 np0005603621 nova_compute[247399]: 2026-01-31 08:40:05.986 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.350 247403 DEBUG nova.compute.manager [req-9f200b69-2f69-40b7-9bdd-b4bfe7a8aff4 req-f8437b44-77fb-4f68-9373-c4b6d94a995f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.351 247403 DEBUG oslo_concurrency.lockutils [req-9f200b69-2f69-40b7-9bdd-b4bfe7a8aff4 req-f8437b44-77fb-4f68-9373-c4b6d94a995f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.351 247403 DEBUG oslo_concurrency.lockutils [req-9f200b69-2f69-40b7-9bdd-b4bfe7a8aff4 req-f8437b44-77fb-4f68-9373-c4b6d94a995f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.351 247403 DEBUG oslo_concurrency.lockutils [req-9f200b69-2f69-40b7-9bdd-b4bfe7a8aff4 req-f8437b44-77fb-4f68-9373-c4b6d94a995f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.351 247403 DEBUG nova.compute.manager [req-9f200b69-2f69-40b7-9bdd-b4bfe7a8aff4 req-f8437b44-77fb-4f68-9373-c4b6d94a995f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.351 247403 WARNING nova.compute.manager [req-9f200b69-2f69-40b7-9bdd-b4bfe7a8aff4 req-f8437b44-77fb-4f68-9373-c4b6d94a995f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state rescued and task_state None.#033[00m
Jan 31 03:40:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:40:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:40:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 KiB/s wr, 127 op/s
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.578 247403 INFO nova.compute.manager [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Unrescuing#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.579 247403 DEBUG oslo_concurrency.lockutils [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.579 247403 DEBUG oslo_concurrency.lockutils [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:40:06 np0005603621 nova_compute[247399]: 2026-01-31 08:40:06.579 247403 DEBUG nova.network.neutron [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:40:07 np0005603621 nova_compute[247399]: 2026-01-31 08:40:07.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:07.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:07.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:08 np0005603621 nova_compute[247399]: 2026-01-31 08:40:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:08 np0005603621 nova_compute[247399]: 2026-01-31 08:40:08.231 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 19 KiB/s wr, 221 op/s
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:09 np0005603621 nova_compute[247399]: 2026-01-31 08:40:09.306 247403 DEBUG nova.compute.manager [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:09 np0005603621 nova_compute[247399]: 2026-01-31 08:40:09.306 247403 DEBUG nova.compute.manager [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing instance network info cache due to event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:40:09 np0005603621 nova_compute[247399]: 2026-01-31 08:40:09.306 247403 DEBUG oslo_concurrency.lockutils [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:09 np0005603621 nova_compute[247399]: 2026-01-31 08:40:09.376 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:09.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:09.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:09 np0005603621 nova_compute[247399]: 2026-01-31 08:40:09.865 247403 DEBUG nova.network.neutron [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.317 247403 DEBUG oslo_concurrency.lockutils [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.318 247403 DEBUG nova.objects.instance [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'flavor' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.320 247403 DEBUG oslo_concurrency.lockutils [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.320 247403 DEBUG nova.network.neutron [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:40:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 19 KiB/s wr, 221 op/s
Jan 31 03:40:10 np0005603621 kernel: tape31170c4-45 (unregistering): left promiscuous mode
Jan 31 03:40:10 np0005603621 NetworkManager[49013]: <info>  [1769848810.5659] device (tape31170c4-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:40:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:10Z|00607|binding|INFO|Releasing lport e31170c4-45cd-451e-9d63-81bc41146ca1 from this chassis (sb_readonly=0)
Jan 31 03:40:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:10Z|00608|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 down in Southbound
Jan 31 03:40:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:10Z|00609|binding|INFO|Removing iface tape31170c4-45 ovn-installed in OVS
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.573 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:10 np0005603621 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 31 03:40:10 np0005603621 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000093.scope: Consumed 7.337s CPU time.
Jan 31 03:40:10 np0005603621 systemd-machined[212769]: Machine qemu-73-instance-00000093 terminated.
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.702 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.705 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.722 247403 INFO nova.virt.libvirt.driver [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance destroyed successfully.#033[00m
Jan 31 03:40:10 np0005603621 nova_compute[247399]: 2026-01-31 08:40:10.722 247403 DEBUG nova.objects.instance [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.060 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:98:9d 10.100.0.8'], port_security=['fa:16:3e:fc:98:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '47937952-48ef-482c-9abd-e199e46c505b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e31170c4-45cd-451e-9d63-81bc41146ca1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.062 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e31170c4-45cd-451e-9d63-81bc41146ca1 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.064 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.077 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[32199d89-ab92-4a69-9c23-18ef4d301f90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.100 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ea9ab3-ce3f-454d-a6bf-c4857b6d7a0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.103 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[53d33e76-ef69-4ad8-8ea3-5af856cb4f4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.128 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1bac6908-ae5d-48cf-9100-37c331b25616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.144 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3c89d13f-49c6-41ab-88c7-4d2b7c353f1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352854, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.157 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fad2ebc5-e753-40e2-a8e4-a0a0888d1512]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795049, 'tstamp': 795049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352855, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795051, 'tstamp': 795051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352855, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.159 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.160 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.164 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.164 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.165 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.165 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.165 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:11 np0005603621 kernel: tape31170c4-45: entered promiscuous mode
Jan 31 03:40:11 np0005603621 systemd-udevd[352835]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.177 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:11Z|00610|binding|INFO|Claiming lport e31170c4-45cd-451e-9d63-81bc41146ca1 for this chassis.
Jan 31 03:40:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:11Z|00611|binding|INFO|e31170c4-45cd-451e-9d63-81bc41146ca1: Claiming fa:16:3e:fc:98:9d 10.100.0.8
Jan 31 03:40:11 np0005603621 NetworkManager[49013]: <info>  [1769848811.1783] manager: (tape31170c4-45): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Jan 31 03:40:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:11Z|00612|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 ovn-installed in OVS
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.183 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:11 np0005603621 NetworkManager[49013]: <info>  [1769848811.1860] device (tape31170c4-45): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.186 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:11 np0005603621 NetworkManager[49013]: <info>  [1769848811.1874] device (tape31170c4-45): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:40:11 np0005603621 systemd-machined[212769]: New machine qemu-74-instance-00000093.
Jan 31 03:40:11 np0005603621 systemd[1]: Started Virtual Machine qemu-74-instance-00000093.
Jan 31 03:40:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:11.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.917 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 92bd94ef-0031-409f-8c26-23d5f3d952e1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.917 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848811.9167793, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:40:11 np0005603621 nova_compute[247399]: 2026-01-31 08:40:11.918 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:40:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:11Z|00613|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 up in Southbound
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.934 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:98:9d 10.100.0.8'], port_security=['fa:16:3e:fc:98:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '47937952-48ef-482c-9abd-e199e46c505b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e31170c4-45cd-451e-9d63-81bc41146ca1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.936 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e31170c4-45cd-451e-9d63-81bc41146ca1 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 bound to our chassis#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.938 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.946 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ea389923-a415-4bcd-9b00-e94c3c800cb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.964 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8114d3c6-bd49-41e1-8616-99911933080a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.967 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb55bd0-77a1-401f-99d2-4e3e84cfc066]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.982 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[82508f71-1160-4a81-a263-fc33a910e8eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:11.993 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dd28925a-f3ce-4713-9f0b-d2a581ea57e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352959, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:12.005 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8825fc13-bbb7-4511-ba1b-4c0ef897fb7e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795049, 'tstamp': 795049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352960, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795051, 'tstamp': 795051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352960, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:12.006 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.007 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.008 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:12.008 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:12.009 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:12.009 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:12.009 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.127 247403 DEBUG nova.compute.manager [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.127 247403 DEBUG nova.compute.manager [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing instance network info cache due to event network-changed-e31170c4-45cd-451e-9d63-81bc41146ca1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.127 247403 DEBUG oslo_concurrency.lockutils [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 17 KiB/s wr, 186 op/s
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.475 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.480 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.675 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.676 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848811.9195406, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.676 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Started (Lifecycle Event)#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.825 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.829 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:40:12 np0005603621 nova_compute[247399]: 2026-01-31 08:40:12.949 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.232 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:13.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.997 247403 DEBUG nova.compute.manager [req-b98e1062-6bb9-4636-bef3-1db5e605403f req-7eb78f48-51d4-4d85-857d-f3d8907cc8c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.997 247403 DEBUG oslo_concurrency.lockutils [req-b98e1062-6bb9-4636-bef3-1db5e605403f req-7eb78f48-51d4-4d85-857d-f3d8907cc8c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.997 247403 DEBUG oslo_concurrency.lockutils [req-b98e1062-6bb9-4636-bef3-1db5e605403f req-7eb78f48-51d4-4d85-857d-f3d8907cc8c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.997 247403 DEBUG oslo_concurrency.lockutils [req-b98e1062-6bb9-4636-bef3-1db5e605403f req-7eb78f48-51d4-4d85-857d-f3d8907cc8c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.997 247403 DEBUG nova.compute.manager [req-b98e1062-6bb9-4636-bef3-1db5e605403f req-7eb78f48-51d4-4d85-857d-f3d8907cc8c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:13 np0005603621 nova_compute[247399]: 2026-01-31 08:40:13.998 247403 WARNING nova.compute.manager [req-b98e1062-6bb9-4636-bef3-1db5e605403f req-7eb78f48-51d4-4d85-857d-f3d8907cc8c6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 31 03:40:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 17 KiB/s wr, 186 op/s
Jan 31 03:40:14 np0005603621 nova_compute[247399]: 2026-01-31 08:40:14.871 247403 DEBUG nova.network.neutron [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updated VIF entry in instance network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:40:14 np0005603621 nova_compute[247399]: 2026-01-31 08:40:14.872 247403 DEBUG nova.network.neutron [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:40:14 np0005603621 nova_compute[247399]: 2026-01-31 08:40:14.960 247403 DEBUG oslo_concurrency.lockutils [req-701efd60-8d6c-4fa9-bb40-afff241ef62a req-1b3c4136-048b-4cfc-9fc6-b3d3f5af01dd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:40:14 np0005603621 nova_compute[247399]: 2026-01-31 08:40:14.961 247403 DEBUG oslo_concurrency.lockutils [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:40:14 np0005603621 nova_compute[247399]: 2026-01-31 08:40:14.961 247403 DEBUG nova.network.neutron [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Refreshing network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:40:15 np0005603621 nova_compute[247399]: 2026-01-31 08:40:15.050 247403 DEBUG nova.compute.manager [None req-681d71c2-eaaa-46d7-90ac-0fe6cee175d6 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:40:15 np0005603621 nova_compute[247399]: 2026-01-31 08:40:15.058 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.312 247403 DEBUG nova.compute.manager [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.313 247403 DEBUG oslo_concurrency.lockutils [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.313 247403 DEBUG oslo_concurrency.lockutils [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.313 247403 DEBUG oslo_concurrency.lockutils [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.313 247403 DEBUG nova.compute.manager [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.314 247403 WARNING nova.compute.manager [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state None.
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.314 247403 DEBUG nova.compute.manager [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.314 247403 DEBUG oslo_concurrency.lockutils [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.314 247403 DEBUG oslo_concurrency.lockutils [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.314 247403 DEBUG oslo_concurrency.lockutils [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.315 247403 DEBUG nova.compute.manager [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:40:16 np0005603621 nova_compute[247399]: 2026-01-31 08:40:16.315 247403 WARNING nova.compute.manager [req-d7fc4456-1255-4ad0-a48c-e208bb505eb3 req-e67a2771-1de6-4300-bef5-98c1db541d8c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state None.
Jan 31 03:40:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 19 KiB/s wr, 248 op/s
Jan 31 03:40:17 np0005603621 nova_compute[247399]: 2026-01-31 08:40:17.174 247403 DEBUG nova.network.neutron [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updated VIF entry in instance network info cache for port e31170c4-45cd-451e-9d63-81bc41146ca1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:40:17 np0005603621 nova_compute[247399]: 2026-01-31 08:40:17.175 247403 DEBUG nova.network.neutron [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [{"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:40:17 np0005603621 nova_compute[247399]: 2026-01-31 08:40:17.247 247403 DEBUG oslo_concurrency.lockutils [req-7d8812c1-a94a-4aa8-898c-697fbf4a2c16 req-3be26af5-e3dc-4af6-b314-cc83ca0c0c20 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-92bd94ef-0031-409f-8c26-23d5f3d952e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:40:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:17.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.234 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 28 KiB/s wr, 233 op/s
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.580 247403 DEBUG nova.compute.manager [req-269286a3-144f-44ff-81a8-8df187bd3eaf req-6ea364c0-869f-4b8d-b76f-a515ae06ea94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.581 247403 DEBUG oslo_concurrency.lockutils [req-269286a3-144f-44ff-81a8-8df187bd3eaf req-6ea364c0-869f-4b8d-b76f-a515ae06ea94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.581 247403 DEBUG oslo_concurrency.lockutils [req-269286a3-144f-44ff-81a8-8df187bd3eaf req-6ea364c0-869f-4b8d-b76f-a515ae06ea94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.582 247403 DEBUG oslo_concurrency.lockutils [req-269286a3-144f-44ff-81a8-8df187bd3eaf req-6ea364c0-869f-4b8d-b76f-a515ae06ea94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.582 247403 DEBUG nova.compute.manager [req-269286a3-144f-44ff-81a8-8df187bd3eaf req-6ea364c0-869f-4b8d-b76f-a515ae06ea94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:40:18 np0005603621 nova_compute[247399]: 2026-01-31 08:40:18.582 247403 WARNING nova.compute.manager [req-269286a3-144f-44ff-81a8-8df187bd3eaf req-6ea364c0-869f-4b8d-b76f-a515ae06ea94 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state None.
Jan 31 03:40:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:19.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:20 np0005603621 nova_compute[247399]: 2026-01-31 08:40:20.101 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 820 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 13 KiB/s wr, 140 op/s
Jan 31 03:40:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:21 np0005603621 nova_compute[247399]: 2026-01-31 08:40:21.550 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:21.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 822 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 24 KiB/s wr, 141 op/s
Jan 31 03:40:23 np0005603621 nova_compute[247399]: 2026-01-31 08:40:23.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:23 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:23Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:98:9d 10.100.0.8
Jan 31 03:40:23 np0005603621 podman[353018]: 2026-01-31 08:40:23.509694059 +0000 UTC m=+0.054136128 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 31 03:40:23 np0005603621 podman[353019]: 2026-01-31 08:40:23.566362416 +0000 UTC m=+0.110802365 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 31 03:40:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:23.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 822 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 24 KiB/s wr, 126 op/s
Jan 31 03:40:25 np0005603621 nova_compute[247399]: 2026-01-31 08:40:25.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:25.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:25.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 792 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 34 KiB/s wr, 158 op/s
Jan 31 03:40:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:27.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:27.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:28 np0005603621 nova_compute[247399]: 2026-01-31 08:40:28.239 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 741 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 132 op/s
Jan 31 03:40:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:29.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:29.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:30 np0005603621 nova_compute[247399]: 2026-01-31 08:40:30.105 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 741 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 24 KiB/s wr, 74 op/s
Jan 31 03:40:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:30.523 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:40:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:30.524 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:40:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:30.524 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:40:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:31.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:31.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 742 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 35 KiB/s wr, 75 op/s
Jan 31 03:40:33 np0005603621 nova_compute[247399]: 2026-01-31 08:40:33.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:33.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 742 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 24 KiB/s wr, 73 op/s
Jan 31 03:40:35 np0005603621 nova_compute[247399]: 2026-01-31 08:40:35.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:40:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:35.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 754 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 269 KiB/s wr, 75 op/s
Jan 31 03:40:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:37.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:37.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:38 np0005603621 nova_compute[247399]: 2026-01-31 08:40:38.244 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 403 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:40:38
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'default.rgw.control']
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:40:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:40:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:39.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:39.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:40 np0005603621 nova_compute[247399]: 2026-01-31 08:40:40.111 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 03:40:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:41.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:41.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:42.289 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:42 np0005603621 nova_compute[247399]: 2026-01-31 08:40:42.290 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:42.290 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:40:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 03:40:42 np0005603621 nova_compute[247399]: 2026-01-31 08:40:42.816 247403 DEBUG oslo_concurrency.lockutils [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:42 np0005603621 nova_compute[247399]: 2026-01-31 08:40:42.817 247403 DEBUG oslo_concurrency.lockutils [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.078 247403 INFO nova.compute.manager [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Detaching volume 3b63f0e3-6562-4449-962b-ff0c0228f219#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.229 247403 INFO nova.virt.block_device [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Attempting to driver detach volume 3b63f0e3-6562-4449-962b-ff0c0228f219 from mountpoint /dev/vdb#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.240 247403 DEBUG nova.virt.libvirt.driver [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Attempting to detach device vdb from instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.241 247403 DEBUG nova.virt.libvirt.guest [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-3b63f0e3-6562-4449-962b-ff0c0228f219">
Jan 31 03:40:43 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <serial>3b63f0e3-6562-4449-962b-ff0c0228f219</serial>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:40:43 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.284 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.288 247403 INFO nova.virt.libvirt.driver [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Successfully detached device vdb from instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 from the persistent domain config.#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.288 247403 DEBUG nova.virt.libvirt.driver [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.289 247403 DEBUG nova.virt.libvirt.guest [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-3b63f0e3-6562-4449-962b-ff0c0228f219">
Jan 31 03:40:43 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <serial>3b63f0e3-6562-4449-962b-ff0c0228f219</serial>
Jan 31 03:40:43 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:40:43 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:40:43 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.381 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769848843.3810368, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.382 247403 DEBUG nova.virt.libvirt.driver [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:40:43 np0005603621 nova_compute[247399]: 2026-01-31 08:40:43.384 247403 INFO nova.virt.libvirt.driver [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Successfully detached device vdb from instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 from the live domain config.#033[00m
Jan 31 03:40:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:43.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:43.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:44 np0005603621 nova_compute[247399]: 2026-01-31 08:40:44.474 247403 DEBUG nova.objects.instance [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'flavor' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 789 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:40:45 np0005603621 nova_compute[247399]: 2026-01-31 08:40:45.112 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:45 np0005603621 nova_compute[247399]: 2026-01-31 08:40:45.116 247403 DEBUG oslo_concurrency.lockutils [None req-15038ea1-63b0-40b1-9bbd-641f6bb0d546 b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:45.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:45.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Jan 31 03:40:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Jan 31 03:40:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Jan 31 03:40:46 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:46.292 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 789 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 31 03:40:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:47.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:47.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:47Z|00614|binding|INFO|Releasing lport 54969bc0-ee8d-420c-ac0c-dd4f9410e42c from this chassis (sb_readonly=0)
Jan 31 03:40:47 np0005603621 nova_compute[247399]: 2026-01-31 08:40:47.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.289 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.289 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.290 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.290 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.290 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.291 247403 INFO nova.compute.manager [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Terminating instance#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.292 247403 DEBUG nova.compute.manager [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:40:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 746 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 2.6 KiB/s wr, 20 op/s
Jan 31 03:40:48 np0005603621 kernel: tape31170c4-45 (unregistering): left promiscuous mode
Jan 31 03:40:48 np0005603621 NetworkManager[49013]: <info>  [1769848848.5411] device (tape31170c4-45): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:40:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:48Z|00615|binding|INFO|Releasing lport e31170c4-45cd-451e-9d63-81bc41146ca1 from this chassis (sb_readonly=0)
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.546 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:48Z|00616|binding|INFO|Setting lport e31170c4-45cd-451e-9d63-81bc41146ca1 down in Southbound
Jan 31 03:40:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:40:48Z|00617|binding|INFO|Removing iface tape31170c4-45 ovn-installed in OVS
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.549 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.554 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 31 03:40:48 np0005603621 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000093.scope: Consumed 13.658s CPU time.
Jan 31 03:40:48 np0005603621 systemd-machined[212769]: Machine qemu-74-instance-00000093 terminated.
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.630 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:98:9d 10.100.0.8'], port_security=['fa:16:3e:fc:98:9d 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '92bd94ef-0031-409f-8c26-23d5f3d952e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '47937952-48ef-482c-9abd-e199e46c505b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.250', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e31170c4-45cd-451e-9d63-81bc41146ca1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.632 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e31170c4-45cd-451e-9d63-81bc41146ca1 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.633 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 31da00d3-077b-4620-a7d3-68186467ab47#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.643 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c15788bb-f9d7-42a3-9b97-0d43c1c966fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.663 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0a00424a-6ce4-4fb4-8786-fd55bc05474e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.665 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[02bc4d0a-36d5-4490-b43e-6d2b292cd861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.681 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ab90aa47-c503-4b17-9140-0bcf482c36d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.692 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[69f07d4c-15bd-47e9-829f-6077690807e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap31da00d3-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:4f:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795039, 'reachable_time': 18346, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353140, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.703 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9f1b2a-4904-40d0-af26-7bdd7577fede]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795049, 'tstamp': 795049}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353141, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap31da00d3-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 795051, 'tstamp': 795051}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353141, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.704 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.706 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.714 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.714 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31da00d3-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.715 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.715 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap31da00d3-00, col_values=(('external_ids', {'iface-id': '54969bc0-ee8d-420c-ac0c-dd4f9410e42c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:40:48.716 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.722 247403 INFO nova.virt.libvirt.driver [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Instance destroyed successfully.#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.723 247403 DEBUG nova.objects.instance [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'resources' on Instance uuid 92bd94ef-0031-409f-8c26-23d5f3d952e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.884 247403 DEBUG nova.virt.libvirt.vif [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-115398860',display_name='tempest-ServerStableDeviceRescueTest-server-115398860',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-115398860',id=147,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGwSJ/FWYSnlwbHs0Wzm91aUcuaXBYBlt6yIH3QV0xhWQdod7WEXhLlfK4Hd2zTjsdf/eKXM6/AA0TDEI9fUsNHKQsyPifodt6RnLsQxSTsB0zuFfxi198QzASAfXJdIAg==',key_name='tempest-keypair-557896509',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:40:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-a4dec496',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:40:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b6733330b634472ca8c21316f1ee5057',uuid=92bd94ef-0031-409f-8c26-23d5f3d952e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.884 247403 DEBUG nova.network.os_vif_util [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "e31170c4-45cd-451e-9d63-81bc41146ca1", "address": "fa:16:3e:fc:98:9d", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape31170c4-45", "ovs_interfaceid": "e31170c4-45cd-451e-9d63-81bc41146ca1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.885 247403 DEBUG nova.network.os_vif_util [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.885 247403 DEBUG os_vif [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.887 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape31170c4-45, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.888 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.890 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:48 np0005603621 nova_compute[247399]: 2026-01-31 08:40:48.893 247403 INFO os_vif [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:98:9d,bridge_name='br-int',has_traffic_filtering=True,id=e31170c4-45cd-451e-9d63-81bc41146ca1,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape31170c4-45')#033[00m
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011844523543644146 of space, bias 1.0, pg target 3.553357063093244 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021646862786152987 of space, bias 1.0, pg target 0.6429118247487438 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005887910327098456 of space, bias 1.0, pg target 1.7487093671482414 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:40:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 03:40:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:49.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:49.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:50 np0005603621 nova_compute[247399]: 2026-01-31 08:40:50.114 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 746 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 2.6 KiB/s wr, 20 op/s
Jan 31 03:40:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:51 np0005603621 nova_compute[247399]: 2026-01-31 08:40:51.371 247403 INFO nova.virt.libvirt.driver [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Deleting instance files /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1_del#033[00m
Jan 31 03:40:51 np0005603621 nova_compute[247399]: 2026-01-31 08:40:51.372 247403 INFO nova.virt.libvirt.driver [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Deletion of /var/lib/nova/instances/92bd94ef-0031-409f-8c26-23d5f3d952e1_del complete#033[00m
Jan 31 03:40:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:51.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:51.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:51 np0005603621 nova_compute[247399]: 2026-01-31 08:40:51.742 247403 INFO nova.compute.manager [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Took 3.45 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:40:51 np0005603621 nova_compute[247399]: 2026-01-31 08:40:51.743 247403 DEBUG oslo.service.loopingcall [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:40:51 np0005603621 nova_compute[247399]: 2026-01-31 08:40:51.743 247403 DEBUG nova.compute.manager [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:40:51 np0005603621 nova_compute[247399]: 2026-01-31 08:40:51.744 247403 DEBUG nova.network.neutron [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 629 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 64 op/s
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.840 247403 DEBUG nova.compute.manager [req-7bc50c19-f24a-4f7c-a1b7-a7cf9972bfa8 req-c96a9d6e-2ba9-4379-b163-44377a30c98f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.840 247403 DEBUG oslo_concurrency.lockutils [req-7bc50c19-f24a-4f7c-a1b7-a7cf9972bfa8 req-c96a9d6e-2ba9-4379-b163-44377a30c98f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.841 247403 DEBUG oslo_concurrency.lockutils [req-7bc50c19-f24a-4f7c-a1b7-a7cf9972bfa8 req-c96a9d6e-2ba9-4379-b163-44377a30c98f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.841 247403 DEBUG oslo_concurrency.lockutils [req-7bc50c19-f24a-4f7c-a1b7-a7cf9972bfa8 req-c96a9d6e-2ba9-4379-b163-44377a30c98f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.841 247403 DEBUG nova.compute.manager [req-7bc50c19-f24a-4f7c-a1b7-a7cf9972bfa8 req-c96a9d6e-2ba9-4379-b163-44377a30c98f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:52 np0005603621 nova_compute[247399]: 2026-01-31 08:40:52.841 247403 DEBUG nova.compute.manager [req-7bc50c19-f24a-4f7c-a1b7-a7cf9972bfa8 req-c96a9d6e-2ba9-4379-b163-44377a30c98f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-unplugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:40:53 np0005603621 nova_compute[247399]: 2026-01-31 08:40:53.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:53 np0005603621 nova_compute[247399]: 2026-01-31 08:40:53.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:40:53 np0005603621 nova_compute[247399]: 2026-01-31 08:40:53.329 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:40:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:53.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:53.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:53 np0005603621 nova_compute[247399]: 2026-01-31 08:40:53.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 629 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 18 KiB/s wr, 64 op/s
Jan 31 03:40:54 np0005603621 podman[353226]: 2026-01-31 08:40:54.502611324 +0000 UTC m=+0.054638522 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:40:54 np0005603621 podman[353227]: 2026-01-31 08:40:54.519586923 +0000 UTC m=+0.071194798 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.387 247403 DEBUG nova.compute.manager [req-db1d4dd6-ff26-43b0-9124-9f0a215e7e06 req-d22fcf1c-0135-4cb7-a065-1841e77ec042 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.387 247403 DEBUG oslo_concurrency.lockutils [req-db1d4dd6-ff26-43b0-9124-9f0a215e7e06 req-d22fcf1c-0135-4cb7-a065-1841e77ec042 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.388 247403 DEBUG oslo_concurrency.lockutils [req-db1d4dd6-ff26-43b0-9124-9f0a215e7e06 req-d22fcf1c-0135-4cb7-a065-1841e77ec042 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.388 247403 DEBUG oslo_concurrency.lockutils [req-db1d4dd6-ff26-43b0-9124-9f0a215e7e06 req-d22fcf1c-0135-4cb7-a065-1841e77ec042 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.388 247403 DEBUG nova.compute.manager [req-db1d4dd6-ff26-43b0-9124-9f0a215e7e06 req-d22fcf1c-0135-4cb7-a065-1841e77ec042 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] No waiting events found dispatching network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:40:55 np0005603621 nova_compute[247399]: 2026-01-31 08:40:55.388 247403 WARNING nova.compute.manager [req-db1d4dd6-ff26-43b0-9124-9f0a215e7e06 req-d22fcf1c-0135-4cb7-a065-1841e77ec042 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received unexpected event network-vif-plugged-e31170c4-45cd-451e-9d63-81bc41146ca1 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:40:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:40:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:55.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:40:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:55.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:40:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Jan 31 03:40:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Jan 31 03:40:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Jan 31 03:40:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 511 KiB/s rd, 18 KiB/s wr, 78 op/s
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.233 247403 DEBUG nova.network.neutron [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.280 247403 DEBUG nova.compute.manager [req-24511ea0-da8b-4684-8484-2b9c89a635a6 req-e1fe82be-41b5-44ed-b043-c8c65c7efe4d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Received event network-vif-deleted-e31170c4-45cd-451e-9d63-81bc41146ca1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.281 247403 INFO nova.compute.manager [req-24511ea0-da8b-4684-8484-2b9c89a635a6 req-e1fe82be-41b5-44ed-b043-c8c65c7efe4d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Neutron deleted interface e31170c4-45cd-451e-9d63-81bc41146ca1; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.281 247403 DEBUG nova.network.neutron [req-24511ea0-da8b-4684-8484-2b9c89a635a6 req-e1fe82be-41b5-44ed-b043-c8c65c7efe4d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.444 247403 DEBUG nova.compute.manager [req-24511ea0-da8b-4684-8484-2b9c89a635a6 req-e1fe82be-41b5-44ed-b043-c8c65c7efe4d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Detach interface failed, port_id=e31170c4-45cd-451e-9d63-81bc41146ca1, reason: Instance 92bd94ef-0031-409f-8c26-23d5f3d952e1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.448 247403 INFO nova.compute.manager [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Took 5.70 seconds to deallocate network for instance.#033[00m
Jan 31 03:40:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:57.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:57.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.926 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:40:57 np0005603621 nova_compute[247399]: 2026-01-31 08:40:57.926 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.011 247403 DEBUG oslo_concurrency.processutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.330 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.331 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.331 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:40:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:40:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2064757118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.477 247403 DEBUG oslo_concurrency.processutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.483 247403 DEBUG nova.compute.provider_tree [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:40:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 129 op/s
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.540 247403 DEBUG nova.scheduler.client.report [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
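The inventory dict logged above drives Placement scheduling: effective capacity for each resource class is derived from `total`, `reserved`, and `allocation_ratio` (effective = (total - reserved) * allocation_ratio). A minimal sketch using the exact figures from this log entry — the helper name is illustrative, not a Nova function:

```python
# Derive effective Placement capacity from the inventory data logged above.
# Assumed formula: effective = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
}

def effective_capacity(inv):
    """Map each resource class to its schedulable capacity."""
    return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
            for rc, v in inv.items()}

print(effective_capacity(inventory))
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB roughly 17.1
```

This explains why a host with 8 physical vCPUs can accept far more than 8 guest vCPUs here: the 4.0 allocation ratio quadruples the schedulable VCPU inventory, while disk is deliberately undercommitted at 0.9.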
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.606 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.700 247403 INFO nova.scheduler.client.report [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Deleted allocations for instance 92bd94ef-0031-409f-8c26-23d5f3d952e1#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.772 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.772 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.773 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.773 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.890 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:40:58 np0005603621 nova_compute[247399]: 2026-01-31 08:40:58.980 247403 DEBUG oslo_concurrency.lockutils [None req-38f70474-6797-4448-a3b5-03f262b848be b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "92bd94ef-0031-409f-8c26-23d5f3d952e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:40:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:40:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:40:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:40:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:40:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:40:59.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
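The recurring `radosgw` "beast" lines above are load-balancer health probes (`HEAD / HTTP/1.0` from 192.168.122.100/.102 every ~2 s). A sketch of how one might extract client, status, and latency from these lines for monitoring; the field layout is inferred from the samples in this log, not from radosgw documentation:

```python
import re

# Hypothetical parser for the radosgw "beast" access-log lines seen above.
# Field layout inferred from the log samples: req-ptr, client, "-", user,
# [timestamp], "request", status, bytes, then a latency=...s suffix.
BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of fields for a beast access-log line, or None."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None

sample = ('beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous '
          '[31/Jan/2026:08:40:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.000000000s')
rec = parse_beast(sample)
print(rec['client'], rec['status'], float(rec['latency']))
```

Filtering on `user == 'anonymous'` and `request.startswith('HEAD /')` would separate these health checks from real S3 traffic when summarizing the log.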
Jan 31 03:41:00 np0005603621 nova_compute[247399]: 2026-01-31 08:41:00.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 129 op/s
Jan 31 03:41:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:01 np0005603621 nova_compute[247399]: 2026-01-31 08:41:01.329 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [{"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:41:01 np0005603621 nova_compute[247399]: 2026-01-31 08:41:01.359 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c95caf87-5069-4b70-9023-d3c2d911e87d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:41:01 np0005603621 nova_compute[247399]: 2026-01-31 08:41:01.359 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:41:01 np0005603621 nova_compute[247399]: 2026-01-31 08:41:01.359 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:01 np0005603621 nova_compute[247399]: 2026-01-31 08:41:01.360 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:01 np0005603621 nova_compute[247399]: 2026-01-31 08:41:01.360 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:41:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:01.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:41:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:01.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:41:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 92 op/s
Jan 31 03:41:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:03.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:03 np0005603621 nova_compute[247399]: 2026-01-31 08:41:03.721 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848848.7205763, 92bd94ef-0031-409f-8c26-23d5f3d952e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:41:03 np0005603621 nova_compute[247399]: 2026-01-31 08:41:03.722 247403 INFO nova.compute.manager [-] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:41:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:03.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:03 np0005603621 nova_compute[247399]: 2026-01-31 08:41:03.892 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:03 np0005603621 nova_compute[247399]: 2026-01-31 08:41:03.957 247403 DEBUG nova.compute.manager [None req-9505df26-077d-451f-8ead-97da55e4c8e9 - - - - - -] [instance: 92bd94ef-0031-409f-8c26-23d5f3d952e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.247 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.248 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.248 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.248 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.248 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 628 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 92 op/s
Jan 31 03:41:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055599878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.674 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
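Each resource-audit pass above shells out to `ceph df --format=json` (visible on the ceph-mon side as the audited `{"prefix": "df", "format": "json"}` command) to size the RBD-backed disk pool. A sketch of consuming that JSON; the document shape used here is an assumption modelled on recent Ceph releases, not taken from this log:

```python
import json

# Sketch: consume `ceph df --format=json` output the way a caller such as
# nova-compute might. The top-level {'stats': {'total_bytes', ...}} shape
# is an assumption about the Ceph JSON format, not shown in this log.
def cluster_capacity_gb(raw):
    """Return total and available cluster capacity in GiB."""
    stats = json.loads(raw)['stats']
    return {
        'total_gb': stats['total_bytes'] / 2**30,
        'avail_gb': stats['total_avail_bytes'] / 2**30,
    }

# Fabricated sample matching the 21 GiB / 20 GiB avail figures that the
# ceph-mgr pgmap lines in this log report for the cluster.
sample = json.dumps({'stats': {'total_bytes': 21 * 2**30,
                               'total_avail_bytes': 20 * 2**30}})
print(cluster_capacity_gb(sample))
```

Note the audit trail: every such call lands in the mon log as an `audit` channel dispatch from `client.openstack`, which is how the paired ceph-mon lines above correlate with the nova_compute subprocess calls.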
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.785 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.785 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.902 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.903 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=20.785205841064453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.903 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:04 np0005603621 nova_compute[247399]: 2026-01-31 08:41:04.903 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.119 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.214 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c95caf87-5069-4b70-9023-d3c2d911e87d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.214 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.214 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.276 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467527916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:05.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.715 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.720 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:41:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:05.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.748 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:41:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.877 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:41:05 np0005603621 nova_compute[247399]: 2026-01-31 08:41:05.877 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:41:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 617 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 KiB/s wr, 95 op/s
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:41:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c0e0b61d-acca-465d-af79-2fea9b926e59 does not exist
Jan 31 03:41:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8a548066-7981-4ffe-ab08-5ad84cae0939 does not exist
Jan 31 03:41:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5cb80132-4233-4576-9dba-971b470e894b does not exist
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:41:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:41:06 np0005603621 podman[353613]: 2026-01-31 08:41:06.981453391 +0000 UTC m=+0.036530210 container create 64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:41:07 np0005603621 podman[353613]: 2026-01-31 08:41:06.963703988 +0000 UTC m=+0.018780827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:41:07 np0005603621 systemd[1]: Started libpod-conmon-64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d.scope.
Jan 31 03:41:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:41:07 np0005603621 podman[353613]: 2026-01-31 08:41:07.116200903 +0000 UTC m=+0.171277752 container init 64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:41:07 np0005603621 podman[353613]: 2026-01-31 08:41:07.122260774 +0000 UTC m=+0.177337593 container start 64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:41:07 np0005603621 zen_mcclintock[353630]: 167 167
Jan 31 03:41:07 np0005603621 systemd[1]: libpod-64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d.scope: Deactivated successfully.
Jan 31 03:41:07 np0005603621 podman[353613]: 2026-01-31 08:41:07.153557607 +0000 UTC m=+0.208634436 container attach 64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:41:07 np0005603621 podman[353613]: 2026-01-31 08:41:07.153957899 +0000 UTC m=+0.209034738 container died 64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:41:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-463293ceaae12a34a9dc474ba430d96819b29f3fb36792d5e7a77a91b75a81ba-merged.mount: Deactivated successfully.
Jan 31 03:41:07 np0005603621 podman[353613]: 2026-01-31 08:41:07.190201738 +0000 UTC m=+0.245278557 container remove 64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_mcclintock, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:41:07 np0005603621 systemd[1]: libpod-conmon-64a8c829bc80f02a106881bf257859486440a68121fc75d2e22d536cbbd82f0d.scope: Deactivated successfully.
Jan 31 03:41:07 np0005603621 podman[353656]: 2026-01-31 08:41:07.31294487 +0000 UTC m=+0.041458725 container create 43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:41:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:41:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:41:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:41:07 np0005603621 systemd[1]: Started libpod-conmon-43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5.scope.
Jan 31 03:41:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:41:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f47e940350c0bd15983f22a99ef0838b430b82a5937dcb4bd6f72cfd3da11d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f47e940350c0bd15983f22a99ef0838b430b82a5937dcb4bd6f72cfd3da11d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f47e940350c0bd15983f22a99ef0838b430b82a5937dcb4bd6f72cfd3da11d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f47e940350c0bd15983f22a99ef0838b430b82a5937dcb4bd6f72cfd3da11d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f47e940350c0bd15983f22a99ef0838b430b82a5937dcb4bd6f72cfd3da11d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:07 np0005603621 podman[353656]: 2026-01-31 08:41:07.295363803 +0000 UTC m=+0.023877678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:41:07 np0005603621 podman[353656]: 2026-01-31 08:41:07.397668746 +0000 UTC m=+0.126182621 container init 43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:41:07 np0005603621 podman[353656]: 2026-01-31 08:41:07.405515995 +0000 UTC m=+0.134029850 container start 43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:41:07 np0005603621 podman[353656]: 2026-01-31 08:41:07.40914743 +0000 UTC m=+0.137661305 container attach 43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:41:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Jan 31 03:41:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Jan 31 03:41:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Jan 31 03:41:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:07.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:07 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 31 03:41:08 np0005603621 cool_elbakyan[353673]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:41:08 np0005603621 cool_elbakyan[353673]: --> relative data size: 1.0
Jan 31 03:41:08 np0005603621 cool_elbakyan[353673]: --> All data devices are unavailable
Jan 31 03:41:08 np0005603621 systemd[1]: libpod-43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5.scope: Deactivated successfully.
Jan 31 03:41:08 np0005603621 podman[353656]: 2026-01-31 08:41:08.190871935 +0000 UTC m=+0.919385790 container died 43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 03:41:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-76f47e940350c0bd15983f22a99ef0838b430b82a5937dcb4bd6f72cfd3da11d-merged.mount: Deactivated successfully.
Jan 31 03:41:08 np0005603621 podman[353656]: 2026-01-31 08:41:08.237002067 +0000 UTC m=+0.965515922 container remove 43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_elbakyan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:41:08 np0005603621 systemd[1]: libpod-conmon-43e0e57467f6a70423c5aa0dd89949371b2020e2325dace7455bebdd594534c5.scope: Deactivated successfully.
Jan 31 03:41:08 np0005603621 nova_compute[247399]: 2026-01-31 08:41:08.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 582 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.700246134 +0000 UTC m=+0.032546223 container create a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:41:08 np0005603621 systemd[1]: Started libpod-conmon-a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a.scope.
Jan 31 03:41:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.77643909 +0000 UTC m=+0.108739199 container init a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.78117665 +0000 UTC m=+0.113476739 container start a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.686353974 +0000 UTC m=+0.018654073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.783762462 +0000 UTC m=+0.116062551 container attach a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:41:08 np0005603621 stupefied_kirch[353859]: 167 167
Jan 31 03:41:08 np0005603621 systemd[1]: libpod-a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a.scope: Deactivated successfully.
Jan 31 03:41:08 np0005603621 conmon[353859]: conmon a00b1230ce4d007d1a99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a.scope/container/memory.events
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.78813156 +0000 UTC m=+0.120431649 container died a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:41:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ff20681b4a07e55ebaaf74835061cda404297cf1a182c81974ccfcfc9344ceec-merged.mount: Deactivated successfully.
Jan 31 03:41:08 np0005603621 podman[353843]: 2026-01-31 08:41:08.822770898 +0000 UTC m=+0.155070987 container remove a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:41:08 np0005603621 systemd[1]: libpod-conmon-a00b1230ce4d007d1a99d0ad9eea89907311efb4d0cebea11c3eda8606d1003a.scope: Deactivated successfully.
Jan 31 03:41:08 np0005603621 nova_compute[247399]: 2026-01-31 08:41:08.873 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:08 np0005603621 nova_compute[247399]: 2026-01-31 08:41:08.894 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:08 np0005603621 podman[353883]: 2026-01-31 08:41:08.943704603 +0000 UTC m=+0.036869900 container create a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:41:08 np0005603621 nova_compute[247399]: 2026-01-31 08:41:08.943 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:08 np0005603621 systemd[1]: Started libpod-conmon-a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548.scope.
Jan 31 03:41:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:41:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62231ff48f2d5840008cfabc3df23327d7490a4d5cb5cc235b8b042988e0075a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62231ff48f2d5840008cfabc3df23327d7490a4d5cb5cc235b8b042988e0075a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62231ff48f2d5840008cfabc3df23327d7490a4d5cb5cc235b8b042988e0075a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62231ff48f2d5840008cfabc3df23327d7490a4d5cb5cc235b8b042988e0075a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.021 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.022 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.022 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.022 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.023 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:09 np0005603621 podman[353883]: 2026-01-31 08:41:08.927784168 +0000 UTC m=+0.020949475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.024 247403 INFO nova.compute.manager [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Terminating instance#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.025 247403 DEBUG nova.compute.manager [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:41:09 np0005603621 podman[353883]: 2026-01-31 08:41:09.029342958 +0000 UTC m=+0.122508255 container init a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:41:09 np0005603621 podman[353883]: 2026-01-31 08:41:09.03508081 +0000 UTC m=+0.128246097 container start a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:41:09 np0005603621 podman[353883]: 2026-01-31 08:41:09.040827422 +0000 UTC m=+0.133992729 container attach a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:09 np0005603621 kernel: tap509c791a-f0 (unregistering): left promiscuous mode
Jan 31 03:41:09 np0005603621 NetworkManager[49013]: <info>  [1769848869.4873] device (tap509c791a-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.496 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:09Z|00618|binding|INFO|Releasing lport 509c791a-f0a2-4105-a992-82e720b801e8 from this chassis (sb_readonly=0)
Jan 31 03:41:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:09Z|00619|binding|INFO|Setting lport 509c791a-f0a2-4105-a992-82e720b801e8 down in Southbound
Jan 31 03:41:09 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:09Z|00620|binding|INFO|Removing iface tap509c791a-f0 ovn-installed in OVS
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.511 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:09 np0005603621 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 31 03:41:09 np0005603621 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000090.scope: Consumed 19.764s CPU time.
Jan 31 03:41:09 np0005603621 systemd-machined[212769]: Machine qemu-70-instance-00000090 terminated.
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.657 247403 INFO nova.virt.libvirt.driver [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Instance destroyed successfully.#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.658 247403 DEBUG nova.objects.instance [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lazy-loading 'resources' on Instance uuid c95caf87-5069-4b70-9023-d3c2d911e87d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:41:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:09.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:09.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]: {
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:    "0": [
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:        {
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "devices": [
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "/dev/loop3"
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            ],
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "lv_name": "ceph_lv0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "lv_size": "7511998464",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "name": "ceph_lv0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "tags": {
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.cluster_name": "ceph",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.crush_device_class": "",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.encrypted": "0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.osd_id": "0",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.type": "block",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:                "ceph.vdo": "0"
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            },
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "type": "block",
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:            "vg_name": "ceph_vg0"
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:        }
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]:    ]
Jan 31 03:41:09 np0005603621 gracious_hodgkin[353900]: }
Jan 31 03:41:09 np0005603621 systemd[1]: libpod-a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548.scope: Deactivated successfully.
Jan 31 03:41:09 np0005603621 podman[353883]: 2026-01-31 08:41:09.777925551 +0000 UTC m=+0.871090838 container died a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:41:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:09.928 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:93:70 10.100.0.7'], port_security=['fa:16:3e:0a:93:70 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c95caf87-5069-4b70-9023-d3c2d911e87d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-31da00d3-077b-4620-a7d3-68186467ab47', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e29363ca464487b931af54fe14166b1', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b1c240f5-10ef-43c0-92c2-4688e636b197', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c184d7a-2b72-4f04-8956-830b1e8cd5e4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=509c791a-f0a2-4105-a992-82e720b801e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:41:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:09.930 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 509c791a-f0a2-4105-a992-82e720b801e8 in datapath 31da00d3-077b-4620-a7d3-68186467ab47 unbound from our chassis#033[00m
Jan 31 03:41:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:09.932 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 31da00d3-077b-4620-a7d3-68186467ab47, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:41:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:09.933 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2869b0-4d99-4337-80e6-440329976232]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:09.933 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 namespace which is not needed anymore#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.996 247403 DEBUG nova.virt.libvirt.vif [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:36:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-268360674',display_name='tempest-ServerStableDeviceRescueTest-server-268360674',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-268360674',id=144,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:37:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e29363ca464487b931af54fe14166b1',ramdisk_id='',reservation_id='r-dmlghow8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-319343227',owner_user_name='tempest-ServerStableDeviceRescueTest-319343227-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:37:28Z,user_data=None,user_id='b6733330b634472ca8c21316f1ee5057',uuid=c95caf87-5069-4b70-9023-d3c2d911e87d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.997 247403 DEBUG nova.network.os_vif_util [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converting VIF {"id": "509c791a-f0a2-4105-a992-82e720b801e8", "address": "fa:16:3e:0a:93:70", "network": {"id": "31da00d3-077b-4620-a7d3-68186467ab47", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1178144410-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e29363ca464487b931af54fe14166b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509c791a-f0", "ovs_interfaceid": "509c791a-f0a2-4105-a992-82e720b801e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:41:09 np0005603621 nova_compute[247399]: 2026-01-31 08:41:09.999 247403 DEBUG nova.network.os_vif_util [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.000 247403 DEBUG os_vif [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.003 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap509c791a-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.080 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.083 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.085 247403 INFO os_vif [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:93:70,bridge_name='br-int',has_traffic_filtering=True,id=509c791a-f0a2-4105-a992-82e720b801e8,network=Network(31da00d3-077b-4620-a7d3-68186467ab47),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509c791a-f0')#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-62231ff48f2d5840008cfabc3df23327d7490a4d5cb5cc235b8b042988e0075a-merged.mount: Deactivated successfully.
Jan 31 03:41:10 np0005603621 podman[353883]: 2026-01-31 08:41:10.45696971 +0000 UTC m=+1.550135017 container remove a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:41:10 np0005603621 systemd[1]: libpod-conmon-a49fb8860c2082e71d0bc15d781741ee470d936551127aaea55655eb272ea548.scope: Deactivated successfully.
Jan 31 03:41:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 582 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.0 KiB/s wr, 51 op/s
Jan 31 03:41:10 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [NOTICE]   (347426) : haproxy version is 2.8.14-c23fe91
Jan 31 03:41:10 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [NOTICE]   (347426) : path to executable is /usr/sbin/haproxy
Jan 31 03:41:10 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [WARNING]  (347426) : Exiting Master process...
Jan 31 03:41:10 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [WARNING]  (347426) : Exiting Master process...
Jan 31 03:41:10 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [ALERT]    (347426) : Current worker (347428) exited with code 143 (Terminated)
Jan 31 03:41:10 np0005603621 neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47[347422]: [WARNING]  (347426) : All workers exited. Exiting... (0)
Jan 31 03:41:10 np0005603621 systemd[1]: libpod-636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27.scope: Deactivated successfully.
Jan 31 03:41:10 np0005603621 podman[353958]: 2026-01-31 08:41:10.603688912 +0000 UTC m=+0.502599466 container died 636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:41:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-967c3c03f975694700c844b5f164ab769f79f3de449a753208c7cc1e50f96c15-merged.mount: Deactivated successfully.
Jan 31 03:41:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27-userdata-shm.mount: Deactivated successfully.
Jan 31 03:41:10 np0005603621 podman[353958]: 2026-01-31 08:41:10.725195005 +0000 UTC m=+0.624105549 container cleanup 636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.729 247403 DEBUG nova.compute.manager [req-b07d6725-0cfa-44aa-8855-aa623dae7dd3 req-59096cdc-7ab4-46e4-85c6-577e79d8ad80 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.732 247403 DEBUG oslo_concurrency.lockutils [req-b07d6725-0cfa-44aa-8855-aa623dae7dd3 req-59096cdc-7ab4-46e4-85c6-577e79d8ad80 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.732 247403 DEBUG oslo_concurrency.lockutils [req-b07d6725-0cfa-44aa-8855-aa623dae7dd3 req-59096cdc-7ab4-46e4-85c6-577e79d8ad80 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:10 np0005603621 systemd[1]: libpod-conmon-636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27.scope: Deactivated successfully.
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.733 247403 DEBUG oslo_concurrency.lockutils [req-b07d6725-0cfa-44aa-8855-aa623dae7dd3 req-59096cdc-7ab4-46e4-85c6-577e79d8ad80 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.733 247403 DEBUG nova.compute.manager [req-b07d6725-0cfa-44aa-8855-aa623dae7dd3 req-59096cdc-7ab4-46e4-85c6-577e79d8ad80 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.733 247403 DEBUG nova.compute.manager [req-b07d6725-0cfa-44aa-8855-aa623dae7dd3 req-59096cdc-7ab4-46e4-85c6-577e79d8ad80 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-unplugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:41:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:10 np0005603621 podman[354107]: 2026-01-31 08:41:10.884632389 +0000 UTC m=+0.140852286 container remove 636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.889 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc8fb2a-7dd9-44a2-802d-3e3725f1f3a2]: (4, ('Sat Jan 31 08:41:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 (636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27)\n636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27\nSat Jan 31 08:41:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 (636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27)\n636b817b6a972168719d1c4b5d7bff970c40d51434a2d581b6e3f1db3a208b27\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.891 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[febb6e9c-09e6-4969-8ddc-c98a6cf32d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.892 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31da00d3-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.894 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:10 np0005603621 kernel: tap31da00d3-00: left promiscuous mode
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.900 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:10 np0005603621 nova_compute[247399]: 2026-01-31 08:41:10.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.903 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d514000c-d2e9-4fbd-9b40-4904a57002aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.919 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[414b26ca-80c4-47f9-a6ab-efb31ae471c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.920 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3a338fd5-cee0-4717-8669-31ba2342882a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.930 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1153666e-2737-4bdd-a689-b04393ac7c8d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 795033, 'reachable_time': 34575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354149, 'error': None, 'target': 'ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.932 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-31da00d3-077b-4620-a7d3-68186467ab47 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:41:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:10.932 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[795f082f-87a6-459e-80bc-82cc63620ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:10 np0005603621 systemd[1]: run-netns-ovnmeta\x2d31da00d3\x2d077b\x2d4620\x2da7d3\x2d68186467ab47.mount: Deactivated successfully.
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.032768736 +0000 UTC m=+0.049413068 container create f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:41:11 np0005603621 systemd[1]: Started libpod-conmon-f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61.scope.
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.003440956 +0000 UTC m=+0.020085308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:41:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.116081997 +0000 UTC m=+0.132726339 container init f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.122010385 +0000 UTC m=+0.138654707 container start f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:41:11 np0005603621 charming_montalcini[354180]: 167 167
Jan 31 03:41:11 np0005603621 systemd[1]: libpod-f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61.scope: Deactivated successfully.
Jan 31 03:41:11 np0005603621 conmon[354180]: conmon f7425c3148a40a247697 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61.scope/container/memory.events
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.132846509 +0000 UTC m=+0.149490841 container attach f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.134012425 +0000 UTC m=+0.150656747 container died f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:41:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e6f8ecb2a38da51a2e51773e2ed4bef1a4576ed9d1c6a2b95c99fad8d3f54cc5-merged.mount: Deactivated successfully.
Jan 31 03:41:11 np0005603621 podman[354164]: 2026-01-31 08:41:11.225833067 +0000 UTC m=+0.242477389 container remove f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:41:11 np0005603621 systemd[1]: libpod-conmon-f7425c3148a40a2476974a996b7b2bcc06a60697a32fc364d680202ebd9eec61.scope: Deactivated successfully.
Jan 31 03:41:11 np0005603621 podman[354207]: 2026-01-31 08:41:11.348673541 +0000 UTC m=+0.037352515 container create 8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:41:11 np0005603621 systemd[1]: Started libpod-conmon-8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe.scope.
Jan 31 03:41:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:41:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5128fc9b086e823bd2b67d9e713f6e98e0d64195439cf87d5e6550e1a31f0c12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5128fc9b086e823bd2b67d9e713f6e98e0d64195439cf87d5e6550e1a31f0c12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5128fc9b086e823bd2b67d9e713f6e98e0d64195439cf87d5e6550e1a31f0c12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5128fc9b086e823bd2b67d9e713f6e98e0d64195439cf87d5e6550e1a31f0c12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:41:11 np0005603621 podman[354207]: 2026-01-31 08:41:11.331846978 +0000 UTC m=+0.020525982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:41:11 np0005603621 podman[354207]: 2026-01-31 08:41:11.427641935 +0000 UTC m=+0.116321109 container init 8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:41:11 np0005603621 podman[354207]: 2026-01-31 08:41:11.432424717 +0000 UTC m=+0.121103691 container start 8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:41:11 np0005603621 podman[354207]: 2026-01-31 08:41:11.437557919 +0000 UTC m=+0.126236903 container attach 8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:41:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:11.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]: {
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:        "osd_id": 0,
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:        "type": "bluestore"
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]:    }
Jan 31 03:41:12 np0005603621 trusting_swartz[354223]: }
Jan 31 03:41:12 np0005603621 systemd[1]: libpod-8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe.scope: Deactivated successfully.
Jan 31 03:41:12 np0005603621 conmon[354223]: conmon 8ecfcc3956469687bee5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe.scope/container/memory.events
Jan 31 03:41:12 np0005603621 podman[354207]: 2026-01-31 08:41:12.247152087 +0000 UTC m=+0.935831061 container died 8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:41:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5128fc9b086e823bd2b67d9e713f6e98e0d64195439cf87d5e6550e1a31f0c12-merged.mount: Deactivated successfully.
Jan 31 03:41:12 np0005603621 podman[354207]: 2026-01-31 08:41:12.311488597 +0000 UTC m=+1.000167571 container remove 8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_swartz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 03:41:12 np0005603621 systemd[1]: libpod-conmon-8ecfcc3956469687bee57c8dad70f6f1bd3368523a0a6b71b18acc5dbf131cbe.scope: Deactivated successfully.
Jan 31 03:41:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:41:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:41:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:41:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:41:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9b7bbabc-915b-4f4a-bc7b-cd540909b272 does not exist
Jan 31 03:41:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 92ef3af8-a1f4-498a-b4a3-f9e576b5ae9c does not exist
Jan 31 03:41:12 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5f51e45b-22ec-4e1e-b800-3ce369051e0b does not exist
Jan 31 03:41:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 502 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.1 KiB/s wr, 77 op/s
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.570 247403 INFO nova.virt.libvirt.driver [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Deleting instance files /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d_del#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.572 247403 INFO nova.virt.libvirt.driver [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Deletion of /var/lib/nova/instances/c95caf87-5069-4b70-9023-d3c2d911e87d_del complete#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.700 247403 INFO nova.compute.manager [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Took 3.67 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.701 247403 DEBUG oslo.service.loopingcall [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.702 247403 DEBUG nova.compute.manager [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.702 247403 DEBUG nova.network.neutron [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.952 247403 DEBUG nova.compute.manager [req-73a257c0-1f49-496c-a6c7-aeafe64c6988 req-96202f71-0209-46d1-b458-4a38a465383a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.954 247403 DEBUG oslo_concurrency.lockutils [req-73a257c0-1f49-496c-a6c7-aeafe64c6988 req-96202f71-0209-46d1-b458-4a38a465383a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.955 247403 DEBUG oslo_concurrency.lockutils [req-73a257c0-1f49-496c-a6c7-aeafe64c6988 req-96202f71-0209-46d1-b458-4a38a465383a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.955 247403 DEBUG oslo_concurrency.lockutils [req-73a257c0-1f49-496c-a6c7-aeafe64c6988 req-96202f71-0209-46d1-b458-4a38a465383a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.955 247403 DEBUG nova.compute.manager [req-73a257c0-1f49-496c-a6c7-aeafe64c6988 req-96202f71-0209-46d1-b458-4a38a465383a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] No waiting events found dispatching network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:41:12 np0005603621 nova_compute[247399]: 2026-01-31 08:41:12.955 247403 WARNING nova.compute.manager [req-73a257c0-1f49-496c-a6c7-aeafe64c6988 req-96202f71-0209-46d1-b458-4a38a465383a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received unexpected event network-vif-plugged-509c791a-f0a2-4105-a992-82e720b801e8 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:41:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:41:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:41:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:13.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:13.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 502 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.1 KiB/s wr, 77 op/s
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.121 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.605 247403 DEBUG nova.network.neutron [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.678 247403 DEBUG nova.compute.manager [req-e6b2e639-714a-4330-a03e-b630968ef2f4 req-3ff534a5-85df-4c0d-a80e-8b758ae7a9ea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Received event network-vif-deleted-509c791a-f0a2-4105-a992-82e720b801e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.679 247403 INFO nova.compute.manager [req-e6b2e639-714a-4330-a03e-b630968ef2f4 req-3ff534a5-85df-4c0d-a80e-8b758ae7a9ea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Neutron deleted interface 509c791a-f0a2-4105-a992-82e720b801e8; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.679 247403 DEBUG nova.network.neutron [req-e6b2e639-714a-4330-a03e-b630968ef2f4 req-3ff534a5-85df-4c0d-a80e-8b758ae7a9ea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.697 247403 INFO nova.compute.manager [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Took 3.00 seconds to deallocate network for instance.#033[00m
Jan 31 03:41:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:15.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:15.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:15 np0005603621 nova_compute[247399]: 2026-01-31 08:41:15.794 247403 DEBUG nova.compute.manager [req-e6b2e639-714a-4330-a03e-b630968ef2f4 req-3ff534a5-85df-4c0d-a80e-8b758ae7a9ea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Detach interface failed, port_id=509c791a-f0a2-4105-a992-82e720b801e8, reason: Instance c95caf87-5069-4b70-9023-d3c2d911e87d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.920480) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848875920531, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1430, "num_deletes": 258, "total_data_size": 2326353, "memory_usage": 2359984, "flush_reason": "Manual Compaction"}
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848875952030, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2277783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58984, "largest_seqno": 60412, "table_properties": {"data_size": 2271038, "index_size": 3815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14575, "raw_average_key_size": 20, "raw_value_size": 2257330, "raw_average_value_size": 3126, "num_data_blocks": 168, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848753, "oldest_key_time": 1769848753, "file_creation_time": 1769848875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 31709 microseconds, and 4782 cpu microseconds.
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.952184) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2277783 bytes OK
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.952217) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.958710) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.958793) EVENT_LOG_v1 {"time_micros": 1769848875958781, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.958822) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2320033, prev total WAL file size 2320033, number of live WAL files 2.
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.959519) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323633' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2224KB)], [134(9562KB)]
Jan 31 03:41:15 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848875959590, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12070240, "oldest_snapshot_seqno": -1}
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.089 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.089 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.190 247403 DEBUG oslo_concurrency.processutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8621 keys, 11926652 bytes, temperature: kUnknown
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848876194634, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11926652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11869991, "index_size": 34020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 227163, "raw_average_key_size": 26, "raw_value_size": 11717742, "raw_average_value_size": 1359, "num_data_blocks": 1307, "num_entries": 8621, "num_filter_entries": 8621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.194970) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11926652 bytes
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.198128) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.3 rd, 50.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.3 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 9153, records dropped: 532 output_compression: NoCompression
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.198162) EVENT_LOG_v1 {"time_micros": 1769848876198149, "job": 82, "event": "compaction_finished", "compaction_time_micros": 235163, "compaction_time_cpu_micros": 22939, "output_level": 6, "num_output_files": 1, "total_output_size": 11926652, "num_input_records": 9153, "num_output_records": 8621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848876198645, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848876199895, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:15.959378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.200056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.200062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.200064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.200066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:41:16.200068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.217 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.218 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:41:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 479 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.9 KiB/s wr, 46 op/s
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532739766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.655 247403 DEBUG oslo_concurrency.processutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.663 247403 DEBUG nova.compute.provider_tree [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:41:16 np0005603621 nova_compute[247399]: 2026-01-31 08:41:16.746 247403 DEBUG nova.scheduler.client.report [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:41:17 np0005603621 nova_compute[247399]: 2026-01-31 08:41:17.043 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:17 np0005603621 nova_compute[247399]: 2026-01-31 08:41:17.146 247403 INFO nova.scheduler.client.report [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Deleted allocations for instance c95caf87-5069-4b70-9023-d3c2d911e87d#033[00m
Jan 31 03:41:17 np0005603621 nova_compute[247399]: 2026-01-31 08:41:17.409 247403 DEBUG oslo_concurrency.lockutils [None req-21cf0b7f-a92c-48b9-bc8a-3c61aeca3d3f b6733330b634472ca8c21316f1ee5057 1e29363ca464487b931af54fe14166b1 - - default default] Lock "c95caf87-5069-4b70-9023-d3c2d911e87d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:17.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:17.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Jan 31 03:41:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Jan 31 03:41:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Jan 31 03:41:19 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Jan 31 03:41:19 np0005603621 nova_compute[247399]: 2026-01-31 08:41:19.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:19.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:20 np0005603621 nova_compute[247399]: 2026-01-31 08:41:20.081 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:20 np0005603621 nova_compute[247399]: 2026-01-31 08:41:20.123 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:20 np0005603621 nova_compute[247399]: 2026-01-31 08:41:20.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:20 np0005603621 nova_compute[247399]: 2026-01-31 08:41:20.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Jan 31 03:41:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:21.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:21.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 408 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.5 KiB/s wr, 60 op/s
Jan 31 03:41:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:23.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:23.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 408 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Jan 31 03:41:24 np0005603621 nova_compute[247399]: 2026-01-31 08:41:24.656 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848869.6552162, c95caf87-5069-4b70-9023-d3c2d911e87d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:41:24 np0005603621 nova_compute[247399]: 2026-01-31 08:41:24.657 247403 INFO nova.compute.manager [-] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:41:24 np0005603621 nova_compute[247399]: 2026-01-31 08:41:24.913 247403 DEBUG nova.compute.manager [None req-ee820cca-ec84-474c-922d-291e83688188 - - - - - -] [instance: c95caf87-5069-4b70-9023-d3c2d911e87d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:41:25 np0005603621 nova_compute[247399]: 2026-01-31 08:41:25.085 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:25 np0005603621 nova_compute[247399]: 2026-01-31 08:41:25.125 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:25 np0005603621 podman[354391]: 2026-01-31 08:41:25.524639034 +0000 UTC m=+0.077810788 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:41:25 np0005603621 podman[354390]: 2026-01-31 08:41:25.528549569 +0000 UTC m=+0.081497265 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 31 03:41:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:25.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:25.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Jan 31 03:41:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Jan 31 03:41:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Jan 31 03:41:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.1 KiB/s wr, 50 op/s
Jan 31 03:41:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:27.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:28 np0005603621 nova_compute[247399]: 2026-01-31 08:41:28.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.4 KiB/s wr, 62 op/s
Jan 31 03:41:29 np0005603621 nova_compute[247399]: 2026-01-31 08:41:29.589 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:29 np0005603621 nova_compute[247399]: 2026-01-31 08:41:29.590 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:29.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:29 np0005603621 nova_compute[247399]: 2026-01-31 08:41:29.799 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:41:30 np0005603621 nova_compute[247399]: 2026-01-31 08:41:30.089 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:30 np0005603621 nova_compute[247399]: 2026-01-31 08:41:30.127 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 KiB/s wr, 55 op/s
Jan 31 03:41:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:30.524 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:30.524 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:30.524 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:30 np0005603621 nova_compute[247399]: 2026-01-31 08:41:30.707 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:30 np0005603621 nova_compute[247399]: 2026-01-31 08:41:30.708 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:30 np0005603621 nova_compute[247399]: 2026-01-31 08:41:30.717 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:41:30 np0005603621 nova_compute[247399]: 2026-01-31 08:41:30.717 247403 INFO nova.compute.claims [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:41:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:31.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:31.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:31 np0005603621 nova_compute[247399]: 2026-01-31 08:41:31.905 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:41:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278338001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:41:32 np0005603621 nova_compute[247399]: 2026-01-31 08:41:32.341 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:32 np0005603621 nova_compute[247399]: 2026-01-31 08:41:32.347 247403 DEBUG nova.compute.provider_tree [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:41:32 np0005603621 nova_compute[247399]: 2026-01-31 08:41:32.484 247403 DEBUG nova.scheduler.client.report [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:41:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 31 03:41:32 np0005603621 nova_compute[247399]: 2026-01-31 08:41:32.815 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:32 np0005603621 nova_compute[247399]: 2026-01-31 08:41:32.816 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:41:33 np0005603621 nova_compute[247399]: 2026-01-31 08:41:33.592 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:41:33 np0005603621 nova_compute[247399]: 2026-01-31 08:41:33.593 247403 DEBUG nova.network.neutron [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:41:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:33.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:33.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:34 np0005603621 nova_compute[247399]: 2026-01-31 08:41:34.206 247403 DEBUG nova.policy [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1c6e7eff11b435a81429826a682b32f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bfe11bd9d694684b527666e2c378eed', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:41:34 np0005603621 nova_compute[247399]: 2026-01-31 08:41:34.467 247403 INFO nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:41:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 33 op/s
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.093 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.128 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.369 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:41:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:35.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:35.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.889 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.890 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.890 247403 INFO nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Creating image(s)#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.918 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.946 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.975 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:35 np0005603621 nova_compute[247399]: 2026-01-31 08:41:35.979 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:36 np0005603621 nova_compute[247399]: 2026-01-31 08:41:36.038 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:36 np0005603621 nova_compute[247399]: 2026-01-31 08:41:36.039 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:36 np0005603621 nova_compute[247399]: 2026-01-31 08:41:36.040 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:36 np0005603621 nova_compute[247399]: 2026-01-31 08:41:36.040 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:36 np0005603621 nova_compute[247399]: 2026-01-31 08:41:36.063 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:36 np0005603621 nova_compute[247399]: 2026-01-31 08:41:36.066 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e7694c5e-8d11-4f04-aec6-d1933f668d11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Jan 31 03:41:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:37.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 3.6 KiB/s wr, 29 op/s
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:41:38 np0005603621 nova_compute[247399]: 2026-01-31 08:41:38.576 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 e7694c5e-8d11-4f04-aec6-d1933f668d11_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:41:38
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:41:38 np0005603621 nova_compute[247399]: 2026-01-31 08:41:38.640 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:41:38 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:41:39 np0005603621 nova_compute[247399]: 2026-01-31 08:41:39.308 247403 DEBUG nova.objects.instance [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid e7694c5e-8d11-4f04-aec6-d1933f668d11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:41:39 np0005603621 nova_compute[247399]: 2026-01-31 08:41:39.467 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:41:39 np0005603621 nova_compute[247399]: 2026-01-31 08:41:39.468 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Ensure instance console log exists: /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:41:39 np0005603621 nova_compute[247399]: 2026-01-31 08:41:39.468 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:39 np0005603621 nova_compute[247399]: 2026-01-31 08:41:39.469 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:39 np0005603621 nova_compute[247399]: 2026-01-31 08:41:39.469 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:39.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:39.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:40 np0005603621 nova_compute[247399]: 2026-01-31 08:41:40.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:40 np0005603621 nova_compute[247399]: 2026-01-31 08:41:40.130 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 16 op/s
Jan 31 03:41:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:41 np0005603621 nova_compute[247399]: 2026-01-31 08:41:41.479 247403 DEBUG nova.network.neutron [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Successfully created port: b57f4c41-e254-4e29-be21-1899bdb779e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:41:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:41:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:41:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:41.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Jan 31 03:41:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Jan 31 03:41:41 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Jan 31 03:41:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Jan 31 03:41:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:43.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Jan 31 03:41:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:44.894 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:41:44 np0005603621 nova_compute[247399]: 2026-01-31 08:41:44.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:44.895 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:41:45 np0005603621 nova_compute[247399]: 2026-01-31 08:41:45.099 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:45 np0005603621 nova_compute[247399]: 2026-01-31 08:41:45.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:45 np0005603621 nova_compute[247399]: 2026-01-31 08:41:45.657 247403 DEBUG nova.network.neutron [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Successfully updated port: b57f4c41-e254-4e29-be21-1899bdb779e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:41:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:45 np0005603621 nova_compute[247399]: 2026-01-31 08:41:45.879 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:41:45 np0005603621 nova_compute[247399]: 2026-01-31 08:41:45.880 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:41:45 np0005603621 nova_compute[247399]: 2026-01-31 08:41:45.880 247403 DEBUG nova.network.neutron [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:41:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:45.898 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 353 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 31 03:41:46 np0005603621 nova_compute[247399]: 2026-01-31 08:41:46.797 247403 DEBUG nova.network.neutron [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:41:47 np0005603621 nova_compute[247399]: 2026-01-31 08:41:47.534 247403 DEBUG nova.compute.manager [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-changed-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:41:47 np0005603621 nova_compute[247399]: 2026-01-31 08:41:47.535 247403 DEBUG nova.compute.manager [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Refreshing instance network info cache due to event network-changed-b57f4c41-e254-4e29-be21-1899bdb779e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:41:47 np0005603621 nova_compute[247399]: 2026-01-31 08:41:47.535 247403 DEBUG oslo_concurrency.lockutils [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:41:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:41:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:47.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:41:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005325382700539736 of space, bias 1.0, pg target 1.5976148101619208 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.647132508607854 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:41:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:41:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:49.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:49 np0005603621 nova_compute[247399]: 2026-01-31 08:41:49.863 247403 DEBUG nova.network.neutron [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating instance_info_cache with network_info: [{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.074 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.074 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Instance network_info: |[{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.074 247403 DEBUG oslo_concurrency.lockutils [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.075 247403 DEBUG nova.network.neutron [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Refreshing network info cache for port b57f4c41-e254-4e29-be21-1899bdb779e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.078 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Start _get_guest_xml network_info=[{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.082 247403 WARNING nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.088 247403 DEBUG nova.virt.libvirt.host [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.089 247403 DEBUG nova.virt.libvirt.host [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.091 247403 DEBUG nova.virt.libvirt.host [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.092 247403 DEBUG nova.virt.libvirt.host [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.093 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.093 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.093 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.094 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.094 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.094 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.094 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.095 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.095 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.095 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.095 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.095 247403 DEBUG nova.virt.hardware [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.098 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 31 03:41:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:41:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054655891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.525 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.547 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:50 np0005603621 nova_compute[247399]: 2026-01-31 08:41:50.550 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:41:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505644863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:41:51 np0005603621 nova_compute[247399]: 2026-01-31 08:41:51.013 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:51 np0005603621 nova_compute[247399]: 2026-01-31 08:41:51.015 247403 DEBUG nova.virt.libvirt.vif [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:41:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-617880352',display_name='tempest-TestNetworkAdvancedServerOps-server-617880352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-617880352',id=151,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE9oW7hyNK/c0GmlhWHVsudW1EFOU1/778j2Zfzh7XKLIHLI+8KsqNzzhySs7L5TOC+KBq7HkFVRK05TmxJs9LQc4oVDYzV+eQ5EXf3bE6KOfId7bnJvpzjj70u8lMALWA==',key_name='tempest-TestNetworkAdvancedServerOps-530143631',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3ynh01gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:41:35Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e7694c5e-8d11-4f04-aec6-d1933f668d11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:41:51 np0005603621 nova_compute[247399]: 2026-01-31 08:41:51.016 247403 DEBUG nova.network.os_vif_util [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:41:51 np0005603621 nova_compute[247399]: 2026-01-31 08:41:51.017 247403 DEBUG nova.network.os_vif_util [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:41:51 np0005603621 nova_compute[247399]: 2026-01-31 08:41:51.018 247403 DEBUG nova.objects.instance [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid e7694c5e-8d11-4f04-aec6-d1933f668d11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:41:51 np0005603621 nova_compute[247399]: 2026-01-31 08:41:51.080 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Jan 31 03:41:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Jan 31 03:41:51 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Jan 31 03:41:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:51.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.164 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <uuid>e7694c5e-8d11-4f04-aec6-d1933f668d11</uuid>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <name>instance-00000097</name>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-617880352</nova:name>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:41:50</nova:creationTime>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <nova:port uuid="b57f4c41-e254-4e29-be21-1899bdb779e0">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <entry name="serial">e7694c5e-8d11-4f04-aec6-d1933f668d11</entry>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <entry name="uuid">e7694c5e-8d11-4f04-aec6-d1933f668d11</entry>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e7694c5e-8d11-4f04-aec6-d1933f668d11_disk">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e7694c5e-8d11-4f04-aec6-d1933f668d11_disk.config">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:26:78:9b"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <target dev="tapb57f4c41-e2"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/console.log" append="off"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:41:52 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:41:52 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:41:52 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:41:52 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.165 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Preparing to wait for external event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.166 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.166 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.166 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.167 247403 DEBUG nova.virt.libvirt.vif [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:41:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-617880352',display_name='tempest-TestNetworkAdvancedServerOps-server-617880352',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-617880352',id=151,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE9oW7hyNK/c0GmlhWHVsudW1EFOU1/778j2Zfzh7XKLIHLI+8KsqNzzhySs7L5TOC+KBq7HkFVRK05TmxJs9LQc4oVDYzV+eQ5EXf3bE6KOfId7bnJvpzjj70u8lMALWA==',key_name='tempest-TestNetworkAdvancedServerOps-530143631',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3ynh01gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:41:35Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e7694c5e-8d11-4f04-aec6-d1933f668d11,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.167 247403 DEBUG nova.network.os_vif_util [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.168 247403 DEBUG nova.network.os_vif_util [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.168 247403 DEBUG os_vif [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.169 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.169 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.169 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.172 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.173 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb57f4c41-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.173 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb57f4c41-e2, col_values=(('external_ids', {'iface-id': 'b57f4c41-e254-4e29-be21-1899bdb779e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:78:9b', 'vm-uuid': 'e7694c5e-8d11-4f04-aec6-d1933f668d11'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.174 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:52 np0005603621 NetworkManager[49013]: <info>  [1769848912.1755] manager: (tapb57f4c41-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.176 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.180 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.183 247403 INFO os_vif [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2')#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.184 247403 WARNING nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.185 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid e7694c5e-8d11-4f04-aec6-d1933f668d11 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.186 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.426 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.427 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.427 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:26:78:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.428 247403 INFO nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Using config drive#033[00m
Jan 31 03:41:52 np0005603621 nova_compute[247399]: 2026-01-31 08:41:52.453 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.4 KiB/s wr, 50 op/s
Jan 31 03:41:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:54 np0005603621 nova_compute[247399]: 2026-01-31 08:41:54.303 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.4 KiB/s wr, 50 op/s
Jan 31 03:41:55 np0005603621 nova_compute[247399]: 2026-01-31 08:41:55.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:41:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:41:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:55.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.048 247403 INFO nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Creating config drive at /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/disk.config#033[00m
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.052 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu4m_2yfl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.183 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpu4m_2yfl" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.215 247403 DEBUG nova.storage.rbd_utils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image e7694c5e-8d11-4f04-aec6-d1933f668d11_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.219 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/disk.config e7694c5e-8d11-4f04-aec6-d1933f668d11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.444 247403 DEBUG nova.network.neutron [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updated VIF entry in instance network info cache for port b57f4c41-e254-4e29-be21-1899bdb779e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.445 247403 DEBUG nova.network.neutron [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating instance_info_cache with network_info: [{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:41:56 np0005603621 podman[354858]: 2026-01-31 08:41:56.495030917 +0000 UTC m=+0.050655497 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:41:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.1 KiB/s wr, 44 op/s
Jan 31 03:41:56 np0005603621 podman[354859]: 2026-01-31 08:41:56.520567757 +0000 UTC m=+0.072243633 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20260127)
Jan 31 03:41:56 np0005603621 nova_compute[247399]: 2026-01-31 08:41:56.580 247403 DEBUG oslo_concurrency.lockutils [req-5a4308dd-e975-482f-afe2-234f2df6e4ff req-4c552d9e-d6db-4056-a3bc-a61200cfa806 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:41:57 np0005603621 nova_compute[247399]: 2026-01-31 08:41:57.175 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:57.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:57.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:58 np0005603621 nova_compute[247399]: 2026-01-31 08:41:58.383 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:41:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 12 KiB/s wr, 36 op/s
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.072 247403 DEBUG oslo_concurrency.processutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/disk.config e7694c5e-8d11-4f04-aec6-d1933f668d11_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.853s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.072 247403 INFO nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Deleting local config drive /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11/disk.config because it was imported into RBD.#033[00m
Jan 31 03:41:59 np0005603621 kernel: tapb57f4c41-e2: entered promiscuous mode
Jan 31 03:41:59 np0005603621 NetworkManager[49013]: <info>  [1769848919.1229] manager: (tapb57f4c41-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Jan 31 03:41:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:59Z|00621|binding|INFO|Claiming lport b57f4c41-e254-4e29-be21-1899bdb779e0 for this chassis.
Jan 31 03:41:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:59Z|00622|binding|INFO|b57f4c41-e254-4e29-be21-1899bdb779e0: Claiming fa:16:3e:26:78:9b 10.100.0.7
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.123 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.126 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 systemd-machined[212769]: New machine qemu-75-instance-00000097.
Jan 31 03:41:59 np0005603621 systemd-udevd[354918]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:41:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:59Z|00623|binding|INFO|Setting lport b57f4c41-e254-4e29-be21-1899bdb779e0 ovn-installed in OVS
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.153 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 NetworkManager[49013]: <info>  [1769848919.1585] device (tapb57f4c41-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:41:59 np0005603621 NetworkManager[49013]: <info>  [1769848919.1593] device (tapb57f4c41-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:41:59 np0005603621 systemd[1]: Started Virtual Machine qemu-75-instance-00000097.
Jan 31 03:41:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:59Z|00624|binding|INFO|Setting lport b57f4c41-e254-4e29-be21-1899bdb779e0 up in Southbound
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.245 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:78:9b 10.100.0.7'], port_security=['fa:16:3e:26:78:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e7694c5e-8d11-4f04-aec6-d1933f668d11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '15398c59-1164-4f8b-8737-a5ada60dadf3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afdea9bd-63de-451e-8b4c-572440598122, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b57f4c41-e254-4e29-be21-1899bdb779e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.246 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b57f4c41-e254-4e29-be21-1899bdb779e0 in datapath b000b527-ea00-4c0c-84c4-e93c10d4bae5 bound to our chassis#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.248 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b000b527-ea00-4c0c-84c4-e93c10d4bae5#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.256 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf51808-eda0-46c2-9ff5-40d9da1f0201]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.256 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb000b527-e1 in ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.258 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb000b527-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.258 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf02c07-c865-487e-a687-e4f85eeb8ea6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.259 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3e09da35-6acc-4c09-9471-233af2387fe3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.266 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[69cb3f6b-3548-41b4-9edd-dfab1fd50641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.277 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cd493313-2b41-44e7-804a-7295ba83b119]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.303 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fe3bf7-4f74-4242-9148-a0e596bc7b5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.308 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6a328aa8-4bf0-4824-bc3e-23f25ca167cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 NetworkManager[49013]: <info>  [1769848919.3093] manager: (tapb000b527-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/275)
Jan 31 03:41:59 np0005603621 systemd-udevd[354920]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.339 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2198470c-58ff-4bde-a979-185f136832df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.342 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8f23ac-633e-4f15-a70b-96b67f7a42d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 NetworkManager[49013]: <info>  [1769848919.3663] device (tapb000b527-e0): carrier: link connected
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.374 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9f98e81c-58ab-4f8b-980b-01657f191441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.390 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5117b1-8136-4abf-9d59-10e797a29626]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb000b527-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:b7:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822488, 'reachable_time': 43832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354951, 'error': None, 'target': 'ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.403 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a4635b76-791d-4730-a1ea-f9af272ccfcc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:b75a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822488, 'tstamp': 822488}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354952, 'error': None, 'target': 'ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.420 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c826506c-f5ab-40ef-a722-1ee52e444f9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb000b527-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:b7:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822488, 'reachable_time': 43832, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354953, 'error': None, 'target': 'ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.443 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9a72a440-9884-46ee-bba9-7dac29dc87d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.497 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[092f6c6e-05d3-4d21-a60f-8af67ac7f5d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.499 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb000b527-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.499 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.500 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb000b527-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:59 np0005603621 kernel: tapb000b527-e0: entered promiscuous mode
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.502 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 NetworkManager[49013]: <info>  [1769848919.5029] manager: (tapb000b527-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/276)
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.505 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.507 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb000b527-e0, col_values=(('external_ids', {'iface-id': '3a41031b-7ec9-4414-97e4-222c1c56b61b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.508 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 ovn_controller[149152]: 2026-01-31T08:41:59Z|00625|binding|INFO|Releasing lport 3a41031b-7ec9-4414-97e4-222c1c56b61b from this chassis (sb_readonly=1)
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.509 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.510 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b000b527-ea00-4c0c-84c4-e93c10d4bae5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b000b527-ea00-4c0c-84c4-e93c10d4bae5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.511 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e72470-cb32-436f-947b-da3aa5be264b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.511 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-b000b527-ea00-4c0c-84c4-e93c10d4bae5
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/b000b527-ea00-4c0c-84c4-e93c10d4bae5.pid.haproxy
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID b000b527-ea00-4c0c-84c4-e93c10d4bae5
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:41:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:41:59.512 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'env', 'PROCESS_TAG=haproxy-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b000b527-ea00-4c0c-84c4-e93c10d4bae5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.514 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:41:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:41:59.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:41:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:41:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:41:59.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.869 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848919.8688157, e7694c5e-8d11-4f04-aec6-d1933f668d11 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:41:59 np0005603621 nova_compute[247399]: 2026-01-31 08:41:59.869 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] VM Started (Lifecycle Event)#033[00m
Jan 31 03:41:59 np0005603621 podman[355021]: 2026-01-31 08:41:59.796672513 +0000 UTC m=+0.017606919 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.007 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.012 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848919.8695111, e7694c5e-8d11-4f04-aec6-d1933f668d11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.013 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.136 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.308 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.309 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.342 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.347 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.400 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:42:00 np0005603621 nova_compute[247399]: 2026-01-31 08:42:00.492 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:42:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Jan 31 03:42:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 12 KiB/s wr, 36 op/s
Jan 31 03:42:00 np0005603621 podman[355021]: 2026-01-31 08:42:00.542608403 +0000 UTC m=+0.763542799 container create 8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:42:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Jan 31 03:42:00 np0005603621 systemd[1]: Started libpod-conmon-8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811.scope.
Jan 31 03:42:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Jan 31 03:42:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61635c475f869cad157b08281144d3136f14935a6335a98caa9b3003d43451d8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:42:01 np0005603621 nova_compute[247399]: 2026-01-31 08:42:01.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 42K writes, 164K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 42K writes, 14K syncs, 2.90 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6049 writes, 24K keys, 6049 commit groups, 1.0 writes per commit group, ingest: 23.66 MB, 0.04 MB/s#012Interval WAL: 6048 writes, 2350 syncs, 2.57 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:42:01 np0005603621 nova_compute[247399]: 2026-01-31 08:42:01.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:01 np0005603621 nova_compute[247399]: 2026-01-31 08:42:01.201 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:42:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:01 np0005603621 podman[355021]: 2026-01-31 08:42:01.263118176 +0000 UTC m=+1.484052632 container init 8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:42:01 np0005603621 podman[355021]: 2026-01-31 08:42:01.267894878 +0000 UTC m=+1.488829294 container start 8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:42:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [NOTICE]   (355047) : New worker (355049) forked
Jan 31 03:42:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [NOTICE]   (355047) : Loading success.
Jan 31 03:42:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:01.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:01.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.349 247403 DEBUG nova.compute.manager [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.350 247403 DEBUG oslo_concurrency.lockutils [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.350 247403 DEBUG oslo_concurrency.lockutils [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.350 247403 DEBUG oslo_concurrency.lockutils [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.350 247403 DEBUG nova.compute.manager [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Processing event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.350 247403 DEBUG nova.compute.manager [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.351 247403 DEBUG oslo_concurrency.lockutils [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.351 247403 DEBUG oslo_concurrency.lockutils [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.351 247403 DEBUG oslo_concurrency.lockutils [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.351 247403 DEBUG nova.compute.manager [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.351 247403 WARNING nova.compute.manager [req-3229a71c-ab89-4377-89a1-6a544d14fa3e req-13cc32ec-1be0-4cdc-8a76-2a4b7087f691 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received unexpected event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.352 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.355 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848922.3553703, e7694c5e-8d11-4f04-aec6-d1933f668d11 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.355 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.357 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.361 247403 INFO nova.virt.libvirt.driver [-] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Instance spawned successfully.#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.361 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:42:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 23 op/s
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.607 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.611 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.612 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.612 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.612 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.613 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.613 247403 DEBUG nova.virt.libvirt.driver [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.618 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.680 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.909 247403 INFO nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Took 27.02 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:42:02 np0005603621 nova_compute[247399]: 2026-01-31 08:42:02.910 247403 DEBUG nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:42:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:03.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:03.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:04 np0005603621 nova_compute[247399]: 2026-01-31 08:42:04.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:04 np0005603621 nova_compute[247399]: 2026-01-31 08:42:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 25 KiB/s wr, 23 op/s
Jan 31 03:42:05 np0005603621 nova_compute[247399]: 2026-01-31 08:42:05.139 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:42:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:05.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:42:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.282 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.282 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.283 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.283 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.283 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.509 247403 INFO nova.compute.manager [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Took 35.85 seconds to build instance.#033[00m
Jan 31 03:42:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 271 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 31 03:42:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:42:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068635751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:42:06 np0005603621 nova_compute[247399]: 2026-01-31 08:42:06.710 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.182 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.653 247403 DEBUG oslo_concurrency.lockutils [None req-56f5df10-c229-434d-b381-1e4993293ad1 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 38.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.654 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 15.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.655 247403 INFO nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.655 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.743 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.743 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:42:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:07.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.882 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.883 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4069MB free_disk=20.921825408935547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.883 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:07 np0005603621 nova_compute[247399]: 2026-01-31 08:42:07.883 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.276 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e7694c5e-8d11-4f04-aec6-d1933f668d11 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.277 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.277 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.335 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 298 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 3.5 MiB/s wr, 146 op/s
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:42:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3954746163' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.740 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.745 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:42:08 np0005603621 nova_compute[247399]: 2026-01-31 08:42:08.839 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:42:09 np0005603621 nova_compute[247399]: 2026-01-31 08:42:09.183 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:42:09 np0005603621 nova_compute[247399]: 2026-01-31 08:42:09.184 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:09.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Jan 31 03:42:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Jan 31 03:42:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Jan 31 03:42:10 np0005603621 nova_compute[247399]: 2026-01-31 08:42:10.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 298 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 3.7 MiB/s wr, 152 op/s
Jan 31 03:42:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Jan 31 03:42:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Jan 31 03:42:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Jan 31 03:42:11 np0005603621 nova_compute[247399]: 2026-01-31 08:42:11.186 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:11 np0005603621 nova_compute[247399]: 2026-01-31 08:42:11.187 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:11 np0005603621 nova_compute[247399]: 2026-01-31 08:42:11.187 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 03:42:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:11.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:11.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:12 np0005603621 nova_compute[247399]: 2026-01-31 08:42:12.187 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 197 op/s
Jan 31 03:42:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:13.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.6 MiB/s wr, 113 op/s
Jan 31 03:42:14 np0005603621 NetworkManager[49013]: <info>  [1769848934.6472] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 31 03:42:14 np0005603621 nova_compute[247399]: 2026-01-31 08:42:14.646 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:14 np0005603621 NetworkManager[49013]: <info>  [1769848934.6490] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Jan 31 03:42:14 np0005603621 nova_compute[247399]: 2026-01-31 08:42:14.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:42:14Z|00626|binding|INFO|Releasing lport 3a41031b-7ec9-4414-97e4-222c1c56b61b from this chassis (sb_readonly=0)
Jan 31 03:42:14 np0005603621 nova_compute[247399]: 2026-01-31 08:42:14.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.055 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:42:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:42:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:42:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.957 247403 DEBUG nova.compute.manager [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-changed-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.957 247403 DEBUG nova.compute.manager [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Refreshing instance network info cache due to event network-changed-b57f4c41-e254-4e29-be21-1899bdb779e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.957 247403 DEBUG oslo_concurrency.lockutils [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.957 247403 DEBUG oslo_concurrency.lockutils [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:42:15 np0005603621 nova_compute[247399]: 2026-01-31 08:42:15.958 247403 DEBUG nova.network.neutron [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Refreshing network info cache for port b57f4c41-e254-4e29-be21-1899bdb779e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:42:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:42:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Jan 31 03:42:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 334 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.7 MiB/s wr, 72 op/s
Jan 31 03:42:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Jan 31 03:42:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Jan 31 03:42:17 np0005603621 nova_compute[247399]: 2026-01-31 08:42:17.191 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b64c7270-64e9-4107-8c44-5b651bbb0201 does not exist
Jan 31 03:42:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 27b7f149-124a-4c6d-b65a-8c7b9e9ab87f does not exist
Jan 31 03:42:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a5d71f7f-a58a-47cc-bb07-1466eda47a9c does not exist
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:42:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:42:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:17.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:17.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.133202865 +0000 UTC m=+0.038414929 container create 35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:42:18 np0005603621 systemd[1]: Started libpod-conmon-35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1.scope.
Jan 31 03:42:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.204781614 +0000 UTC m=+0.109993698 container init 35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.113908633 +0000 UTC m=+0.019120717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.210514866 +0000 UTC m=+0.115726930 container start 35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 03:42:18 np0005603621 fervent_dubinsky[355451]: 167 167
Jan 31 03:42:18 np0005603621 systemd[1]: libpod-35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1.scope: Deactivated successfully.
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.21761172 +0000 UTC m=+0.122823804 container attach 35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.218072225 +0000 UTC m=+0.123284289 container died 35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:42:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9684681791a152b2926087221798fd577e289d53c5b7d1002b17389e0cc09648-merged.mount: Deactivated successfully.
Jan 31 03:42:18 np0005603621 podman[355434]: 2026-01-31 08:42:18.263877178 +0000 UTC m=+0.169089242 container remove 35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:42:18 np0005603621 systemd[1]: libpod-conmon-35767ba27823baa14a72ce1821f3d954894858f2cbdd7b553f52bc54578697c1.scope: Deactivated successfully.
Jan 31 03:42:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:42:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:42:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:42:18Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:78:9b 10.100.0.7
Jan 31 03:42:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:42:18Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:78:9b 10.100.0.7
Jan 31 03:42:18 np0005603621 podman[355474]: 2026-01-31 08:42:18.394003803 +0000 UTC m=+0.041767305 container create 3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:42:18 np0005603621 systemd[1]: Started libpod-conmon-3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733.scope.
Jan 31 03:42:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46204abc2e99b049f76809d0fc04c12c32ea59d5d6a40faf9687c5453df7a68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46204abc2e99b049f76809d0fc04c12c32ea59d5d6a40faf9687c5453df7a68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46204abc2e99b049f76809d0fc04c12c32ea59d5d6a40faf9687c5453df7a68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46204abc2e99b049f76809d0fc04c12c32ea59d5d6a40faf9687c5453df7a68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f46204abc2e99b049f76809d0fc04c12c32ea59d5d6a40faf9687c5453df7a68/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:18 np0005603621 podman[355474]: 2026-01-31 08:42:18.376487438 +0000 UTC m=+0.024250950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:42:18 np0005603621 podman[355474]: 2026-01-31 08:42:18.486603329 +0000 UTC m=+0.134366841 container init 3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:42:18 np0005603621 podman[355474]: 2026-01-31 08:42:18.492206366 +0000 UTC m=+0.139969858 container start 3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:42:18 np0005603621 podman[355474]: 2026-01-31 08:42:18.49703495 +0000 UTC m=+0.144798542 container attach 3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:42:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 343 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 4.2 MiB/s wr, 98 op/s
Jan 31 03:42:19 np0005603621 eloquent_bose[355491]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:42:19 np0005603621 eloquent_bose[355491]: --> relative data size: 1.0
Jan 31 03:42:19 np0005603621 eloquent_bose[355491]: --> All data devices are unavailable
Jan 31 03:42:19 np0005603621 systemd[1]: libpod-3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733.scope: Deactivated successfully.
Jan 31 03:42:19 np0005603621 podman[355474]: 2026-01-31 08:42:19.318090601 +0000 UTC m=+0.965854093 container died 3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:42:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f46204abc2e99b049f76809d0fc04c12c32ea59d5d6a40faf9687c5453df7a68-merged.mount: Deactivated successfully.
Jan 31 03:42:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:19.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:19 np0005603621 podman[355474]: 2026-01-31 08:42:19.816585116 +0000 UTC m=+1.464348608 container remove 3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:42:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:19 np0005603621 systemd[1]: libpod-conmon-3e2df758c1aad74ed1e70225edefea49980947dfa6f9d5fd694a29b82af03733.scope: Deactivated successfully.
Jan 31 03:42:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:19.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:20 np0005603621 nova_compute[247399]: 2026-01-31 08:42:20.093 247403 DEBUG nova.network.neutron [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updated VIF entry in instance network info cache for port b57f4c41-e254-4e29-be21-1899bdb779e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:42:20 np0005603621 nova_compute[247399]: 2026-01-31 08:42:20.093 247403 DEBUG nova.network.neutron [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating instance_info_cache with network_info: [{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:42:20 np0005603621 nova_compute[247399]: 2026-01-31 08:42:20.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:42:20 np0005603621 nova_compute[247399]: 2026-01-31 08:42:20.334 247403 DEBUG oslo_concurrency.lockutils [req-e3061fd3-0590-49d5-9196-e4a96ef7d4a1 req-4ec3bf68-0255-4333-bc78-f4d7389d049c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.367696688 +0000 UTC m=+0.089356804 container create 643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bouman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.298156034 +0000 UTC m=+0.019816180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:42:20 np0005603621 systemd[1]: Started libpod-conmon-643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347.scope.
Jan 31 03:42:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 343 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 74 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.553702566 +0000 UTC m=+0.275362722 container init 643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bouman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.559700656 +0000 UTC m=+0.281360772 container start 643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:42:20 np0005603621 silly_bouman[355677]: 167 167
Jan 31 03:42:20 np0005603621 systemd[1]: libpod-643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347.scope: Deactivated successfully.
Jan 31 03:42:20 np0005603621 conmon[355677]: conmon 643ddcb8969733ac2b6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347.scope/container/memory.events
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.581978313 +0000 UTC m=+0.303638459 container attach 643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.58316694 +0000 UTC m=+0.304827066 container died 643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:42:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-47fdf6fd9b6652817ecbdb196339c5db5abb222c1d0099b0bdd135fb97b92c22-merged.mount: Deactivated successfully.
Jan 31 03:42:20 np0005603621 podman[355661]: 2026-01-31 08:42:20.664211209 +0000 UTC m=+0.385871325 container remove 643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:42:20 np0005603621 systemd[1]: libpod-conmon-643ddcb8969733ac2b6d06df5d55ee6070e721af38c92dbfc41ab62d476cf347.scope: Deactivated successfully.
Jan 31 03:42:20 np0005603621 podman[355703]: 2026-01-31 08:42:20.790241405 +0000 UTC m=+0.044519262 container create 63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:42:20 np0005603621 systemd[1]: Started libpod-conmon-63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c.scope.
Jan 31 03:42:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a222d3c1a3624da35d22900c74d29e976da9329f2b6e508ec26f398372455d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a222d3c1a3624da35d22900c74d29e976da9329f2b6e508ec26f398372455d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a222d3c1a3624da35d22900c74d29e976da9329f2b6e508ec26f398372455d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8a222d3c1a3624da35d22900c74d29e976da9329f2b6e508ec26f398372455d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:20 np0005603621 podman[355703]: 2026-01-31 08:42:20.76483517 +0000 UTC m=+0.019113057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:42:20 np0005603621 podman[355703]: 2026-01-31 08:42:20.890590037 +0000 UTC m=+0.144867904 container init 63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:42:20 np0005603621 podman[355703]: 2026-01-31 08:42:20.899619933 +0000 UTC m=+0.153897790 container start 63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lewin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:42:21 np0005603621 podman[355703]: 2026-01-31 08:42:21.006028426 +0000 UTC m=+0.260306283 container attach 63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lewin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:42:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:21 np0005603621 kind_lewin[355719]: {
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:    "0": [
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:        {
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "devices": [
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "/dev/loop3"
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            ],
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "lv_name": "ceph_lv0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "lv_size": "7511998464",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "name": "ceph_lv0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "tags": {
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.cluster_name": "ceph",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.crush_device_class": "",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.encrypted": "0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.osd_id": "0",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.type": "block",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:                "ceph.vdo": "0"
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            },
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "type": "block",
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:            "vg_name": "ceph_vg0"
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:        }
Jan 31 03:42:21 np0005603621 kind_lewin[355719]:    ]
Jan 31 03:42:21 np0005603621 kind_lewin[355719]: }
Jan 31 03:42:21 np0005603621 systemd[1]: libpod-63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c.scope: Deactivated successfully.
Jan 31 03:42:21 np0005603621 podman[355703]: 2026-01-31 08:42:21.626021004 +0000 UTC m=+0.880298861 container died 63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lewin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:42:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:21.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:21.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a8a222d3c1a3624da35d22900c74d29e976da9329f2b6e508ec26f398372455d-merged.mount: Deactivated successfully.
Jan 31 03:42:22 np0005603621 nova_compute[247399]: 2026-01-31 08:42:22.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 2.6 MiB/s wr, 88 op/s
Jan 31 03:42:22 np0005603621 podman[355703]: 2026-01-31 08:42:22.559032105 +0000 UTC m=+1.813309962 container remove 63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:42:22 np0005603621 systemd[1]: libpod-conmon-63174cc06cb95918a58366d6302d01642b7723c98c7f60ebf7e09c50d8754b3c.scope: Deactivated successfully.
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.063592761 +0000 UTC m=+0.035459345 container create b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:42:23 np0005603621 systemd[1]: Started libpod-conmon-b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f.scope.
Jan 31 03:42:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.140699926 +0000 UTC m=+0.112566530 container init b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.046007304 +0000 UTC m=+0.017873918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.145396845 +0000 UTC m=+0.117263419 container start b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:42:23 np0005603621 elated_bouman[355900]: 167 167
Jan 31 03:42:23 np0005603621 systemd[1]: libpod-b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f.scope: Deactivated successfully.
Jan 31 03:42:23 np0005603621 conmon[355900]: conmon b8b16429a38022f6379f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f.scope/container/memory.events
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.151881141 +0000 UTC m=+0.123747755 container attach b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.152191231 +0000 UTC m=+0.124057845 container died b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:42:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f07e8a1b4fca192cb446301496f0180b71448ec7975d01d7911ba9835be4a25c-merged.mount: Deactivated successfully.
Jan 31 03:42:23 np0005603621 podman[355884]: 2026-01-31 08:42:23.205033306 +0000 UTC m=+0.176899890 container remove b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 03:42:23 np0005603621 systemd[1]: libpod-conmon-b8b16429a38022f6379f26504149674e5d9cdef5661315b9727c5dc13762be5f.scope: Deactivated successfully.
Jan 31 03:42:23 np0005603621 podman[355923]: 2026-01-31 08:42:23.323985566 +0000 UTC m=+0.035428363 container create 92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:23 np0005603621 systemd[1]: Started libpod-conmon-92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16.scope.
Jan 31 03:42:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:42:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7ba7d9e43526c04567d10430c85b80951ad16d6fbb3743111dc61c424730f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7ba7d9e43526c04567d10430c85b80951ad16d6fbb3743111dc61c424730f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7ba7d9e43526c04567d10430c85b80951ad16d6fbb3743111dc61c424730f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7ba7d9e43526c04567d10430c85b80951ad16d6fbb3743111dc61c424730f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:42:23 np0005603621 podman[355923]: 2026-01-31 08:42:23.30577503 +0000 UTC m=+0.017217847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:42:23 np0005603621 podman[355923]: 2026-01-31 08:42:23.405129909 +0000 UTC m=+0.116572716 container init 92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:42:23 np0005603621 podman[355923]: 2026-01-31 08:42:23.410378326 +0000 UTC m=+0.121821113 container start 92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:42:23 np0005603621 podman[355923]: 2026-01-31 08:42:23.422635575 +0000 UTC m=+0.134078392 container attach 92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:42:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:23.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:23.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]: {
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:        "osd_id": 0,
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:        "type": "bluestore"
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]:    }
Jan 31 03:42:24 np0005603621 mystifying_khorana[355939]: }
Jan 31 03:42:24 np0005603621 systemd[1]: libpod-92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16.scope: Deactivated successfully.
Jan 31 03:42:24 np0005603621 podman[355923]: 2026-01-31 08:42:24.179911594 +0000 UTC m=+0.891354401 container died 92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:42:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a9d7ba7d9e43526c04567d10430c85b80951ad16d6fbb3743111dc61c424730f-merged.mount: Deactivated successfully.
Jan 31 03:42:24 np0005603621 podman[355923]: 2026-01-31 08:42:24.468960048 +0000 UTC m=+1.180402845 container remove 92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_khorana, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:42:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:42:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 2.6 MiB/s wr, 88 op/s
Jan 31 03:42:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:42:24 np0005603621 systemd[1]: libpod-conmon-92f61095afa327c3b2466b8d6092196595908050d5e9fa7c667f13ec00191b16.scope: Deactivated successfully.
Jan 31 03:42:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 425c7701-4e5d-420c-8e2f-64f0a2e78e92 does not exist
Jan 31 03:42:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 38e136b3-d81d-4306-8e39-b3aa4a1a1718 does not exist
Jan 31 03:42:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3e7387e2-eb7d-40ce-b7e2-15b25947fc04 does not exist
Jan 31 03:42:25 np0005603621 nova_compute[247399]: 2026-01-31 08:42:25.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:42:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:25.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:25.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 1.6 MiB/s wr, 63 op/s
Jan 31 03:42:27 np0005603621 nova_compute[247399]: 2026-01-31 08:42:27.198 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:27 np0005603621 podman[356024]: 2026-01-31 08:42:27.494045697 +0000 UTC m=+0.045126032 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:42:27 np0005603621 podman[356025]: 2026-01-31 08:42:27.519412131 +0000 UTC m=+0.072369925 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:42:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:27.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 225 KiB/s rd, 1.1 MiB/s wr, 51 op/s
Jan 31 03:42:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:29.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:29.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:30 np0005603621 nova_compute[247399]: 2026-01-31 08:42:30.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:30 np0005603621 nova_compute[247399]: 2026-01-31 08:42:30.372 247403 INFO nova.compute.manager [None req-fbb0d786-0533-4df2-8acd-a779b206a6ea f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Get console output#033[00m
Jan 31 03:42:30 np0005603621 nova_compute[247399]: 2026-01-31 08:42:30.378 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:42:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:42:30.525 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:42:30.525 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:42:30.526 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 202 KiB/s rd, 296 KiB/s wr, 36 op/s
Jan 31 03:42:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:31.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:31.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:32 np0005603621 nova_compute[247399]: 2026-01-31 08:42:32.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 221 KiB/s rd, 298 KiB/s wr, 63 op/s
Jan 31 03:42:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:33.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:42:34Z|00627|binding|INFO|Releasing lport 3a41031b-7ec9-4414-97e4-222c1c56b61b from this chassis (sb_readonly=0)
Jan 31 03:42:34 np0005603621 nova_compute[247399]: 2026-01-31 08:42:34.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:34 np0005603621 nova_compute[247399]: 2026-01-31 08:42:34.482 247403 INFO nova.compute.manager [None req-e57e10a8-9b06-4abc-847e-59d3b64e9154 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Get console output#033[00m
Jan 31 03:42:34 np0005603621 nova_compute[247399]: 2026-01-31 08:42:34.486 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:42:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 27 op/s
Jan 31 03:42:35 np0005603621 nova_compute[247399]: 2026-01-31 08:42:35.152 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:35.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 12 KiB/s wr, 27 op/s
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.687879) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848956687915, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1036, "num_deletes": 256, "total_data_size": 1588704, "memory_usage": 1614536, "flush_reason": "Manual Compaction"}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848956712350, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 1571194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60414, "largest_seqno": 61448, "table_properties": {"data_size": 1565959, "index_size": 2694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11602, "raw_average_key_size": 20, "raw_value_size": 1555436, "raw_average_value_size": 2748, "num_data_blocks": 117, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848876, "oldest_key_time": 1769848876, "file_creation_time": 1769848956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 24522 microseconds, and 3667 cpu microseconds.
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.712396) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 1571194 bytes OK
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.712416) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.724426) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.724470) EVENT_LOG_v1 {"time_micros": 1769848956724462, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.724493) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1583836, prev total WAL file size 1584545, number of live WAL files 2.
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.725037) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(1534KB)], [137(11MB)]
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848956725118, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 13497846, "oldest_snapshot_seqno": -1}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8657 keys, 11616769 bytes, temperature: kUnknown
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848956820353, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 11616769, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11560073, "index_size": 33927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 228664, "raw_average_key_size": 26, "raw_value_size": 11407372, "raw_average_value_size": 1317, "num_data_blocks": 1297, "num_entries": 8657, "num_filter_entries": 8657, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769848956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.820597) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 11616769 bytes
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.823602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.6 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.4 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(16.0) write-amplify(7.4) OK, records in: 9187, records dropped: 530 output_compression: NoCompression
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.823619) EVENT_LOG_v1 {"time_micros": 1769848956823610, "job": 84, "event": "compaction_finished", "compaction_time_micros": 95319, "compaction_time_cpu_micros": 21211, "output_level": 6, "num_output_files": 1, "total_output_size": 11616769, "num_input_records": 9187, "num_output_records": 8657, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848956823934, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769848956824969, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.724968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.825024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.825030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.825032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.825033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:42:36.825035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:42:37 np0005603621 nova_compute[247399]: 2026-01-31 08:42:37.205 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:37.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:37.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 27 op/s
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:42:38
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', '.rgw.root', '.mgr', 'backups', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images']
Jan 31 03:42:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:42:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:42:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:39.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:40 np0005603621 nova_compute[247399]: 2026-01-31 08:42:40.154 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 27 op/s
Jan 31 03:42:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:41 np0005603621 nova_compute[247399]: 2026-01-31 08:42:41.641 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:42:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:41.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:42:42 np0005603621 nova_compute[247399]: 2026-01-31 08:42:42.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 27 op/s
Jan 31 03:42:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 03:42:45 np0005603621 nova_compute[247399]: 2026-01-31 08:42:45.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:45.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:45.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 03:42:46 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:42:46.637 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:42:46 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:42:46.638 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:42:46 np0005603621 nova_compute[247399]: 2026-01-31 08:42:46.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:47 np0005603621 nova_compute[247399]: 2026-01-31 08:42:47.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:47 np0005603621 nova_compute[247399]: 2026-01-31 08:42:47.809 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Check if temp file /var/lib/nova/instances/tmpxoswf42f exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 31 03:42:47 np0005603621 nova_compute[247399]: 2026-01-31 08:42:47.810 247403 DEBUG nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxoswf42f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e7694c5e-8d11-4f04-aec6-d1933f668d11',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 31 03:42:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:47.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002170138888888889 of space, bias 1.0, pg target 0.6510416666666667 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.6492968313791178 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.2200215487158013 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:42:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:42:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:50 np0005603621 nova_compute[247399]: 2026-01-31 08:42:50.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail
Jan 31 03:42:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:51.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:52 np0005603621 nova_compute[247399]: 2026-01-31 08:42:52.255 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Jan 31 03:42:53 np0005603621 nova_compute[247399]: 2026-01-31 08:42:53.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:42:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:42:53.640 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:42:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:53.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Jan 31 03:42:55 np0005603621 nova_compute[247399]: 2026-01-31 08:42:55.158 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:55.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:55.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:42:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Jan 31 03:42:57 np0005603621 nova_compute[247399]: 2026-01-31 08:42:57.258 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:57.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:57.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.082 247403 DEBUG nova.compute.manager [req-26dcc5d6-b0de-4131-b117-e1754c64e7b9 req-cc36bf89-e312-4af1-b163-285b770347f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.083 247403 DEBUG oslo_concurrency.lockutils [req-26dcc5d6-b0de-4131-b117-e1754c64e7b9 req-cc36bf89-e312-4af1-b163-285b770347f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.083 247403 DEBUG oslo_concurrency.lockutils [req-26dcc5d6-b0de-4131-b117-e1754c64e7b9 req-cc36bf89-e312-4af1-b163-285b770347f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.083 247403 DEBUG oslo_concurrency.lockutils [req-26dcc5d6-b0de-4131-b117-e1754c64e7b9 req-cc36bf89-e312-4af1-b163-285b770347f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.083 247403 DEBUG nova.compute.manager [req-26dcc5d6-b0de-4131-b117-e1754c64e7b9 req-cc36bf89-e312-4af1-b163-285b770347f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.084 247403 DEBUG nova.compute.manager [req-26dcc5d6-b0de-4131-b117-e1754c64e7b9 req-cc36bf89-e312-4af1-b163-285b770347f5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:42:58 np0005603621 podman[356188]: 2026-01-31 08:42:58.533610252 +0000 UTC m=+0.095077945 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:42:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Jan 31 03:42:58 np0005603621 podman[356189]: 2026-01-31 08:42:58.545612862 +0000 UTC m=+0.105523396 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:58 np0005603621 nova_compute[247399]: 2026-01-31 08:42:58.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.428 247403 INFO nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Took 9.18 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.428 247403 DEBUG nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.560 247403 DEBUG nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=18432,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpxoswf42f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e7694c5e-8d11-4f04-aec6-d1933f668d11',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(3a883d64-0212-4213-b49a-7b9716133539),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.564 247403 DEBUG nova.objects.instance [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lazy-loading 'migration_context' on Instance uuid e7694c5e-8d11-4f04-aec6-d1933f668d11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.565 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.566 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.567 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.621 247403 DEBUG nova.virt.libvirt.vif [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-617880352',display_name='tempest-TestNetworkAdvancedServerOps-server-617880352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-617880352',id=151,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE9oW7hyNK/c0GmlhWHVsudW1EFOU1/778j2Zfzh7XKLIHLI+8KsqNzzhySs7L5TOC+KBq7HkFVRK05TmxJs9LQc4oVDYzV+eQ5EXf3bE6KOfId7bnJvpzjj70u8lMALWA==',key_name='tempest-TestNetworkAdvancedServerOps-530143631',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:42:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3ynh01gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:42:05Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e7694c5e-8d11-4f04-aec6-d1933f668d11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.621 247403 DEBUG nova.network.os_vif_util [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Converting VIF {"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.622 247403 DEBUG nova.network.os_vif_util [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.623 247403 DEBUG nova.virt.libvirt.migration [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating guest XML with vif config: <interface type="ethernet">
Jan 31 03:42:59 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:26:78:9b"/>
Jan 31 03:42:59 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:42:59 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:42:59 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:42:59 np0005603621 nova_compute[247399]:  <target dev="tapb57f4c41-e2"/>
Jan 31 03:42:59 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:42:59 np0005603621 nova_compute[247399]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 31 03:42:59 np0005603621 nova_compute[247399]: 2026-01-31 08:42:59.623 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 31 03:42:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:42:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:42:59.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:42:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:42:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:42:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:42:59.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.069 247403 DEBUG nova.virt.libvirt.migration [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.069 247403 INFO nova.virt.libvirt.migration [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.159 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.413 247403 INFO nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.464 247403 DEBUG nova.compute.manager [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.465 247403 DEBUG oslo_concurrency.lockutils [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.465 247403 DEBUG oslo_concurrency.lockutils [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.465 247403 DEBUG oslo_concurrency.lockutils [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 DEBUG nova.compute.manager [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 WARNING nova.compute.manager [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received unexpected event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 DEBUG nova.compute.manager [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-changed-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 DEBUG nova.compute.manager [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Refreshing instance network info cache due to event network-changed-b57f4c41-e254-4e29-be21-1899bdb779e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 DEBUG oslo_concurrency.lockutils [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 DEBUG oslo_concurrency.lockutils [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.466 247403 DEBUG nova.network.neutron [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Refreshing network info cache for port b57f4c41-e254-4e29-be21-1899bdb779e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:43:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 2.1 KiB/s wr, 0 op/s
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.918 247403 DEBUG nova.virt.libvirt.migration [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 31 03:43:00 np0005603621 nova_compute[247399]: 2026-01-31 08:43:00.919 247403 DEBUG nova.virt.libvirt.migration [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.006 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769848981.0058022, e7694c5e-8d11-4f04-aec6-d1933f668d11 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.006 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.132 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:01 np0005603621 kernel: tapb57f4c41-e2 (unregistering): left promiscuous mode
Jan 31 03:43:01 np0005603621 NetworkManager[49013]: <info>  [1769848981.1694] device (tapb57f4c41-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:43:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:01Z|00628|binding|INFO|Releasing lport b57f4c41-e254-4e29-be21-1899bdb779e0 from this chassis (sb_readonly=0)
Jan 31 03:43:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:01Z|00629|binding|INFO|Setting lport b57f4c41-e254-4e29-be21-1899bdb779e0 down in Southbound
Jan 31 03:43:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:01Z|00630|binding|INFO|Removing iface tapb57f4c41-e2 ovn-installed in OVS
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.180 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.189 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:43:01 np0005603621 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 31 03:43:01 np0005603621 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000097.scope: Consumed 14.684s CPU time.
Jan 31 03:43:01 np0005603621 systemd-machined[212769]: Machine qemu-75-instance-00000097 terminated.
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.286 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:43:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.303 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:78:9b 10.100.0.7'], port_security=['fa:16:3e:26:78:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '7ec8bf38-9571-4400-a85c-6bd5ac54bdf3'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e7694c5e-8d11-4f04-aec6-d1933f668d11', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '8', 'neutron:security_group_ids': '15398c59-1164-4f8b-8737-a5ada60dadf3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afdea9bd-63de-451e-8b4c-572440598122, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b57f4c41-e254-4e29-be21-1899bdb779e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.304 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b57f4c41-e254-4e29-be21-1899bdb779e0 in datapath b000b527-ea00-4c0c-84c4-e93c10d4bae5 unbound from our chassis#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.306 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b000b527-ea00-4c0c-84c4-e93c10d4bae5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.308 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11947f44-f262-4339-9242-d0b1e65397e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.308 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5 namespace which is not needed anymore#033[00m
Jan 31 03:43:01 np0005603621 virtqemud[247123]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/e7694c5e-8d11-4f04-aec6-d1933f668d11_disk: No such file or directory
Jan 31 03:43:01 np0005603621 virtqemud[247123]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/e7694c5e-8d11-4f04-aec6-d1933f668d11_disk: No such file or directory
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.354 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.354 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.355 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 31 03:43:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [NOTICE]   (355047) : haproxy version is 2.8.14-c23fe91
Jan 31 03:43:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [NOTICE]   (355047) : path to executable is /usr/sbin/haproxy
Jan 31 03:43:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [WARNING]  (355047) : Exiting Master process...
Jan 31 03:43:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [WARNING]  (355047) : Exiting Master process...
Jan 31 03:43:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [ALERT]    (355047) : Current worker (355049) exited with code 143 (Terminated)
Jan 31 03:43:01 np0005603621 neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5[355043]: [WARNING]  (355047) : All workers exited. Exiting... (0)
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.421 247403 DEBUG nova.virt.libvirt.guest [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'e7694c5e-8d11-4f04-aec6-d1933f668d11' (instance-00000097) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.422 247403 INFO nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migration operation has completed#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.423 247403 INFO nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] _post_live_migration() is started..#033[00m
Jan 31 03:43:01 np0005603621 systemd[1]: libpod-8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811.scope: Deactivated successfully.
Jan 31 03:43:01 np0005603621 podman[356267]: 2026-01-31 08:43:01.431940351 +0000 UTC m=+0.042817987 container died 8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:43:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811-userdata-shm.mount: Deactivated successfully.
Jan 31 03:43:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-61635c475f869cad157b08281144d3136f14935a6335a98caa9b3003d43451d8-merged.mount: Deactivated successfully.
Jan 31 03:43:01 np0005603621 podman[356267]: 2026-01-31 08:43:01.466599731 +0000 UTC m=+0.077477357 container cleanup 8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:43:01 np0005603621 systemd[1]: libpod-conmon-8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811.scope: Deactivated successfully.
Jan 31 03:43:01 np0005603621 podman[356298]: 2026-01-31 08:43:01.519282241 +0000 UTC m=+0.037875412 container remove 8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.522 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f6bff1c0-b650-44d5-913d-bea910044ebb]: (4, ('Sat Jan 31 08:43:01 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5 (8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811)\n8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811\nSat Jan 31 08:43:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5 (8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811)\n8a1e3b282d94cc7a3ce67da8414abeef5a7e455c4b50fe3c3d19d9b61a6d9811\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.524 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7757f12b-72a6-431c-b342-e6ef8f5744ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.525 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb000b527-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:01 np0005603621 kernel: tapb000b527-e0: left promiscuous mode
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603621 nova_compute[247399]: 2026-01-31 08:43:01.534 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.537 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6b561e7c-edc8-4fe8-bd86-3e8be9aafa39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.559 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[69cf0d9b-56ff-4c4f-833a-ca88a0d8dfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.561 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d6495ffa-1dc4-4a8c-a2ed-59da13e2f72e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.573 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[516816b1-2ee1-47bc-977d-63ed0e7fe889]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822481, 'reachable_time': 16560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356317, 'error': None, 'target': 'ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.576 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b000b527-ea00-4c0c-84c4-e93c10d4bae5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:43:01 np0005603621 systemd[1]: run-netns-ovnmeta\x2db000b527\x2dea00\x2d4c0c\x2d84c4\x2de93c10d4bae5.mount: Deactivated successfully.
Jan 31 03:43:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:01.576 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4573bb85-185f-4760-bb69-67e430d1e82c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:01.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:01.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:02 np0005603621 nova_compute[247399]: 2026-01-31 08:43:02.262 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 2.2 KiB/s wr, 6 op/s
Jan 31 03:43:03 np0005603621 nova_compute[247399]: 2026-01-31 08:43:03.322 247403 DEBUG nova.compute.manager [req-d2deaa95-9bd8-4b65-ba7f-fbb02b8d1d50 req-b5b21b05-fd6a-4e38-b07a-d6d557b376ba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:03 np0005603621 nova_compute[247399]: 2026-01-31 08:43:03.322 247403 DEBUG oslo_concurrency.lockutils [req-d2deaa95-9bd8-4b65-ba7f-fbb02b8d1d50 req-b5b21b05-fd6a-4e38-b07a-d6d557b376ba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:03 np0005603621 nova_compute[247399]: 2026-01-31 08:43:03.322 247403 DEBUG oslo_concurrency.lockutils [req-d2deaa95-9bd8-4b65-ba7f-fbb02b8d1d50 req-b5b21b05-fd6a-4e38-b07a-d6d557b376ba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:03 np0005603621 nova_compute[247399]: 2026-01-31 08:43:03.322 247403 DEBUG oslo_concurrency.lockutils [req-d2deaa95-9bd8-4b65-ba7f-fbb02b8d1d50 req-b5b21b05-fd6a-4e38-b07a-d6d557b376ba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:03 np0005603621 nova_compute[247399]: 2026-01-31 08:43:03.322 247403 DEBUG nova.compute.manager [req-d2deaa95-9bd8-4b65-ba7f-fbb02b8d1d50 req-b5b21b05-fd6a-4e38-b07a-d6d557b376ba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:03 np0005603621 nova_compute[247399]: 2026-01-31 08:43:03.323 247403 DEBUG nova.compute.manager [req-d2deaa95-9bd8-4b65-ba7f-fbb02b8d1d50 req-b5b21b05-fd6a-4e38-b07a-d6d557b376ba fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:43:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:03.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.272 247403 DEBUG nova.network.neutron [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Activated binding for port b57f4c41-e254-4e29-be21-1899bdb779e0 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.273 247403 DEBUG nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.274 247403 DEBUG nova.virt.libvirt.vif [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:41:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-617880352',display_name='tempest-TestNetworkAdvancedServerOps-server-617880352',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-617880352',id=151,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE9oW7hyNK/c0GmlhWHVsudW1EFOU1/778j2Zfzh7XKLIHLI+8KsqNzzhySs7L5TOC+KBq7HkFVRK05TmxJs9LQc4oVDYzV+eQ5EXf3bE6KOfId7bnJvpzjj70u8lMALWA==',key_name='tempest-TestNetworkAdvancedServerOps-530143631',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:42:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3ynh01gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:42:37Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e7694c5e-8d11-4f04-aec6-d1933f668d11,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.274 247403 DEBUG nova.network.os_vif_util [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Converting VIF {"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.275 247403 DEBUG nova.network.os_vif_util [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.275 247403 DEBUG os_vif [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.277 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb57f4c41-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.283 247403 INFO os_vif [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:78:9b,bridge_name='br-int',has_traffic_filtering=True,id=b57f4c41-e254-4e29-be21-1899bdb779e0,network=Network(b000b527-ea00-4c0c-84c4-e93c10d4bae5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb57f4c41-e2')#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.284 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.284 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.285 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.285 247403 DEBUG nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.285 247403 INFO nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Deleting instance files /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11_del#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.286 247403 INFO nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Deletion of /var/lib/nova/instances/e7694c5e-8d11-4f04-aec6-d1933f668d11_del complete#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.488 247403 DEBUG nova.compute.manager [req-cdf3ac95-67e6-4b5c-858a-b8badee4314e req-cf788c5f-ca15-42e4-96a9-e321c0ea4cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.489 247403 DEBUG oslo_concurrency.lockutils [req-cdf3ac95-67e6-4b5c-858a-b8badee4314e req-cf788c5f-ca15-42e4-96a9-e321c0ea4cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.489 247403 DEBUG oslo_concurrency.lockutils [req-cdf3ac95-67e6-4b5c-858a-b8badee4314e req-cf788c5f-ca15-42e4-96a9-e321c0ea4cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.489 247403 DEBUG oslo_concurrency.lockutils [req-cdf3ac95-67e6-4b5c-858a-b8badee4314e req-cf788c5f-ca15-42e4-96a9-e321c0ea4cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.490 247403 DEBUG nova.compute.manager [req-cdf3ac95-67e6-4b5c-858a-b8badee4314e req-cf788c5f-ca15-42e4-96a9-e321c0ea4cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:04 np0005603621 nova_compute[247399]: 2026-01-31 08:43:04.490 247403 DEBUG nova.compute.manager [req-cdf3ac95-67e6-4b5c-858a-b8badee4314e req-cf788c5f-ca15-42e4-96a9-e321c0ea4cce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-unplugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:43:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 511 B/s wr, 5 op/s
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.185 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.615 247403 DEBUG nova.compute.manager [req-f3f60c00-3b72-4848-a87e-9f12df594aef req-7cf1f08c-8e5f-4c2f-956b-bbfd6e25eb35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.615 247403 DEBUG oslo_concurrency.lockutils [req-f3f60c00-3b72-4848-a87e-9f12df594aef req-7cf1f08c-8e5f-4c2f-956b-bbfd6e25eb35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.616 247403 DEBUG oslo_concurrency.lockutils [req-f3f60c00-3b72-4848-a87e-9f12df594aef req-7cf1f08c-8e5f-4c2f-956b-bbfd6e25eb35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.616 247403 DEBUG oslo_concurrency.lockutils [req-f3f60c00-3b72-4848-a87e-9f12df594aef req-7cf1f08c-8e5f-4c2f-956b-bbfd6e25eb35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.616 247403 DEBUG nova.compute.manager [req-f3f60c00-3b72-4848-a87e-9f12df594aef req-7cf1f08c-8e5f-4c2f-956b-bbfd6e25eb35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:05 np0005603621 nova_compute[247399]: 2026-01-31 08:43:05.616 247403 WARNING nova.compute.manager [req-f3f60c00-3b72-4848-a87e-9f12df594aef req-7cf1f08c-8e5f-4c2f-956b-bbfd6e25eb35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received unexpected event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:43:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:43:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:43:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:05.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:06 np0005603621 nova_compute[247399]: 2026-01-31 08:43:06.393 247403 DEBUG nova.network.neutron [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updated VIF entry in instance network info cache for port b57f4c41-e254-4e29-be21-1899bdb779e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:43:06 np0005603621 nova_compute[247399]: 2026-01-31 08:43:06.394 247403 DEBUG nova.network.neutron [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating instance_info_cache with network_info: [{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:06 np0005603621 nova_compute[247399]: 2026-01-31 08:43:06.493 247403 DEBUG oslo_concurrency.lockutils [req-ce63ea53-c813-4a99-8968-44431c60e3dd req-42b6c16d-5b44-483e-adb2-173c7971a206 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:06 np0005603621 nova_compute[247399]: 2026-01-31 08:43:06.494 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:43:06 np0005603621 nova_compute[247399]: 2026-01-31 08:43:06.494 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:43:06 np0005603621 nova_compute[247399]: 2026-01-31 08:43:06.495 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e7694c5e-8d11-4f04-aec6-d1933f668d11 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:43:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 307 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 33 op/s
Jan 31 03:43:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:07.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.444 247403 DEBUG nova.compute.manager [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.445 247403 DEBUG oslo_concurrency.lockutils [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.445 247403 DEBUG oslo_concurrency.lockutils [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.445 247403 DEBUG oslo_concurrency.lockutils [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.446 247403 DEBUG nova.compute.manager [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.446 247403 WARNING nova.compute.manager [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received unexpected event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.446 247403 DEBUG nova.compute.manager [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.446 247403 DEBUG oslo_concurrency.lockutils [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.446 247403 DEBUG oslo_concurrency.lockutils [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.446 247403 DEBUG oslo_concurrency.lockutils [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.447 247403 DEBUG nova.compute.manager [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] No waiting events found dispatching network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:43:08 np0005603621 nova_compute[247399]: 2026-01-31 08:43:08.447 247403 WARNING nova.compute.manager [req-5b584e42-8e9f-4687-8e16-2a1b06dbf086 req-26f59cdb-c15b-4bd6-a770-a11ce72ab3ab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Received unexpected event network-vif-plugged-b57f4c41-e254-4e29-be21-1899bdb779e0 for instance with vm_state active and task_state migrating.#033[00m
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 82 op/s
Jan 31 03:43:09 np0005603621 nova_compute[247399]: 2026-01-31 08:43:09.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:09 np0005603621 nova_compute[247399]: 2026-01-31 08:43:09.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:09.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:10 np0005603621 nova_compute[247399]: 2026-01-31 08:43:10.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 82 op/s
Jan 31 03:43:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:43:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.281 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating instance_info_cache with network_info: [{"id": "b57f4c41-e254-4e29-be21-1899bdb779e0", "address": "fa:16:3e:26:78:9b", "network": {"id": "b000b527-ea00-4c0c-84c4-e93c10d4bae5", "bridge": "br-int", "label": "tempest-network-smoke--1061452383", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb57f4c41-e2", "ovs_interfaceid": "b57f4c41-e254-4e29-be21-1899bdb779e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.386 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-e7694c5e-8d11-4f04-aec6-d1933f668d11" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.387 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.387 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.388 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.388 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.388 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.389 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.389 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.390 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.470 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.470 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.470 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.470 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.471 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 03:43:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165858469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:12 np0005603621 nova_compute[247399]: 2026-01-31 08:43:12.965 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.140 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.141 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4212MB free_disk=20.897178649902344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.142 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.142 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.365 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Updating resource usage from migration 3a883d64-0212-4213-b49a-7b9716133539#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.424 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Migration 3a883d64-0212-4213-b49a-7b9716133539 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.424 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.425 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:43:13 np0005603621 nova_compute[247399]: 2026-01-31 08:43:13.675 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:13.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578008369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:14 np0005603621 nova_compute[247399]: 2026-01-31 08:43:14.119 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:14 np0005603621 nova_compute[247399]: 2026-01-31 08:43:14.125 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:14 np0005603621 nova_compute[247399]: 2026-01-31 08:43:14.247 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:14 np0005603621 nova_compute[247399]: 2026-01-31 08:43:14.284 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:14 np0005603621 nova_compute[247399]: 2026-01-31 08:43:14.500 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:43:14 np0005603621 nova_compute[247399]: 2026-01-31 08:43:14.500 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 86 op/s
Jan 31 03:43:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Jan 31 03:43:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Jan 31 03:43:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Jan 31 03:43:15 np0005603621 nova_compute[247399]: 2026-01-31 08:43:15.192 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:43:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2396719273' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:43:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:43:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2396719273' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:43:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:15.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:15.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:16 np0005603621 nova_compute[247399]: 2026-01-31 08:43:16.353 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769848981.3517168, e7694c5e-8d11-4f04-aec6-d1933f668d11 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:16 np0005603621 nova_compute[247399]: 2026-01-31 08:43:16.354 247403 INFO nova.compute.manager [-] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:43:16 np0005603621 nova_compute[247399]: 2026-01-31 08:43:16.501 247403 DEBUG nova.compute.manager [None req-bb1093a8-cd52-4491-b307-36c2898828c3 - - - - - -] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 108 op/s
Jan 31 03:43:17 np0005603621 nova_compute[247399]: 2026-01-31 08:43:17.496 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:17 np0005603621 nova_compute[247399]: 2026-01-31 08:43:17.496 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:17.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 139 op/s
Jan 31 03:43:18 np0005603621 nova_compute[247399]: 2026-01-31 08:43:18.983 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquiring lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:18 np0005603621 nova_compute[247399]: 2026-01-31 08:43:18.984 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:18 np0005603621 nova_compute[247399]: 2026-01-31 08:43:18.984 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "e7694c5e-8d11-4f04-aec6-d1933f668d11-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.044 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.045 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.045 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.045 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.045 247403 DEBUG oslo_concurrency.processutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139343816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.458 247403 DEBUG oslo_concurrency.processutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.577 247403 WARNING nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.578 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4218MB free_disk=20.882118225097656GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": 
"0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.579 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.579 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.660 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Migration for instance e7694c5e-8d11-4f04-aec6-d1933f668d11 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 31 03:43:19 np0005603621 nova_compute[247399]: 2026-01-31 08:43:19.702 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 31 03:43:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:43:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:19.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:43:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:19.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.000 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Migration 3a883d64-0212-4213-b49a-7b9716133539 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.001 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.001 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.203 247403 DEBUG nova.scheduler.client.report [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:43:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 373 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 139 op/s
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.738 247403 DEBUG nova.scheduler.client.report [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.738 247403 DEBUG nova.compute.provider_tree [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.772 247403 DEBUG nova.scheduler.client.report [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.832 247403 DEBUG nova.scheduler.client.report [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:43:20 np0005603621 nova_compute[247399]: 2026-01-31 08:43:20.923 247403 DEBUG oslo_concurrency.processutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Jan 31 03:43:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Jan 31 03:43:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Jan 31 03:43:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/186254741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.336 247403 DEBUG oslo_concurrency.processutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.340 247403 DEBUG nova.compute.provider_tree [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.383 247403 DEBUG nova.scheduler.client.report [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.554 247403 DEBUG nova.compute.resource_tracker [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.554 247403 DEBUG oslo_concurrency.lockutils [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.560 247403 INFO nova.compute.manager [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Jan 31 03:43:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:21.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.931 247403 INFO nova.scheduler.client.report [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] Deleted allocation for migration 3a883d64-0212-4213-b49a-7b9716133539#033[00m
Jan 31 03:43:21 np0005603621 nova_compute[247399]: 2026-01-31 08:43:21.932 247403 DEBUG nova.virt.libvirt.driver [None req-648dcc49-c1b4-4494-b3c2-3f1aac78e4e2 f40dad094c6e43daab6e48d01b1df1ff bf9fd1d29e534bd99b47eb8854374663 - - default default] [instance: e7694c5e-8d11-4f04-aec6-d1933f668d11] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 31 03:43:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Jan 31 03:43:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:23.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:24 np0005603621 nova_compute[247399]: 2026-01-31 08:43:24.290 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 141 op/s
Jan 31 03:43:25 np0005603621 nova_compute[247399]: 2026-01-31 08:43:25.195 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:43:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 02a3da65-5bf6-47bb-8770-51250debd835 does not exist
Jan 31 03:43:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6448707c-159c-412f-88f5-718bb6127170 does not exist
Jan 31 03:43:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0f2be830-01d3-4e3d-9957-e2a24feee4f5 does not exist
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:43:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:43:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:25.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:25.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.221239187 +0000 UTC m=+0.034014290 container create d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:43:26 np0005603621 systemd[1]: Started libpod-conmon-d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996.scope.
Jan 31 03:43:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.301251913 +0000 UTC m=+0.114027016 container init d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:43:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.20745065 +0000 UTC m=+0.020225773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.310077353 +0000 UTC m=+0.122852456 container start d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:43:26 np0005603621 friendly_sanderson[356755]: 167 167
Jan 31 03:43:26 np0005603621 systemd[1]: libpod-d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996.scope: Deactivated successfully.
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.327767774 +0000 UTC m=+0.140542877 container attach d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.328130185 +0000 UTC m=+0.140905288 container died d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:43:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9e163014f789c3c1cd38f7679334461247d25ff66599a100371e3da19f29dc6c-merged.mount: Deactivated successfully.
Jan 31 03:43:26 np0005603621 podman[356739]: 2026-01-31 08:43:26.375250989 +0000 UTC m=+0.188026092 container remove d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:43:26 np0005603621 systemd[1]: libpod-conmon-d84e1f65d3cc3c1499843d43e2c94121628cc60c8a2c616d466a4106e60c7996.scope: Deactivated successfully.
Jan 31 03:43:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 31 03:43:26 np0005603621 podman[356781]: 2026-01-31 08:43:26.471554102 +0000 UTC m=+0.019876791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:43:26 np0005603621 podman[356781]: 2026-01-31 08:43:26.638398312 +0000 UTC m=+0.186720981 container create ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:43:26 np0005603621 systemd[1]: Started libpod-conmon-ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d.scope.
Jan 31 03:43:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07293c7475f3de7837a8a587599c491d4dd6c3f864787a71a9abf89e782698c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07293c7475f3de7837a8a587599c491d4dd6c3f864787a71a9abf89e782698c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07293c7475f3de7837a8a587599c491d4dd6c3f864787a71a9abf89e782698c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07293c7475f3de7837a8a587599c491d4dd6c3f864787a71a9abf89e782698c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07293c7475f3de7837a8a587599c491d4dd6c3f864787a71a9abf89e782698c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:26 np0005603621 podman[356781]: 2026-01-31 08:43:26.907943658 +0000 UTC m=+0.456266357 container init ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:43:26 np0005603621 podman[356781]: 2026-01-31 08:43:26.913734712 +0000 UTC m=+0.462057391 container start ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:43:26 np0005603621 podman[356781]: 2026-01-31 08:43:26.975259902 +0000 UTC m=+0.523582571 container attach ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:43:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:27.008 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:43:27 np0005603621 nova_compute[247399]: 2026-01-31 08:43:27.008 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:27.011 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:43:27 np0005603621 intelligent_volhard[356798]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:43:27 np0005603621 intelligent_volhard[356798]: --> relative data size: 1.0
Jan 31 03:43:27 np0005603621 intelligent_volhard[356798]: --> All data devices are unavailable
Jan 31 03:43:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:27.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:27.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:27 np0005603621 systemd[1]: libpod-ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d.scope: Deactivated successfully.
Jan 31 03:43:27 np0005603621 podman[356781]: 2026-01-31 08:43:27.904253376 +0000 UTC m=+1.452576045 container died ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 03:43:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-07293c7475f3de7837a8a587599c491d4dd6c3f864787a71a9abf89e782698c0-merged.mount: Deactivated successfully.
Jan 31 03:43:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 621 KiB/s wr, 62 op/s
Jan 31 03:43:28 np0005603621 podman[356781]: 2026-01-31 08:43:28.552177138 +0000 UTC m=+2.100499807 container remove ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_volhard, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:43:28 np0005603621 systemd[1]: libpod-conmon-ddf273c757f1c291c7ae3a514a869bae4007cf698a337393ac5c2a0d0caaa08d.scope: Deactivated successfully.
Jan 31 03:43:28 np0005603621 podman[356828]: 2026-01-31 08:43:28.672798152 +0000 UTC m=+0.079953375 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:43:28 np0005603621 podman[356829]: 2026-01-31 08:43:28.673014959 +0000 UTC m=+0.078694096 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.090856877 +0000 UTC m=+0.075937609 container create 719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.038389853 +0000 UTC m=+0.023470605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:43:29 np0005603621 systemd[1]: Started libpod-conmon-719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463.scope.
Jan 31 03:43:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:29 np0005603621 nova_compute[247399]: 2026-01-31 08:43:29.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.315131847 +0000 UTC m=+0.300212599 container init 719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.326217048 +0000 UTC m=+0.311297810 container start 719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:43:29 np0005603621 condescending_hofstadter[357027]: 167 167
Jan 31 03:43:29 np0005603621 systemd[1]: libpod-719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463.scope: Deactivated successfully.
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.452633776 +0000 UTC m=+0.437714538 container attach 719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.453115201 +0000 UTC m=+0.438195933 container died 719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:43:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3d84f3a0bcdf2309c5d2727ff48e085a37b3cd5f1fd9ff83eea4337b06659e08-merged.mount: Deactivated successfully.
Jan 31 03:43:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:43:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:29.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:43:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:29.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:29 np0005603621 podman[357011]: 2026-01-31 08:43:29.938120248 +0000 UTC m=+0.923200980 container remove 719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:43:29 np0005603621 systemd[1]: libpod-conmon-719efcfa6827c70b78282031204fd4d1f695b37f74a53ffcff9ca373b7650463.scope: Deactivated successfully.
Jan 31 03:43:30 np0005603621 podman[357052]: 2026-01-31 08:43:30.051932097 +0000 UTC m=+0.023829587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:43:30 np0005603621 nova_compute[247399]: 2026-01-31 08:43:30.234 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:30 np0005603621 podman[357052]: 2026-01-31 08:43:30.252375322 +0000 UTC m=+0.224272832 container create 1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:43:30 np0005603621 systemd[1]: Started libpod-conmon-1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f.scope.
Jan 31 03:43:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31d7e311e9e43ff1ad491940f16391aeecf8f6c1def2fad96b3a0918b35c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31d7e311e9e43ff1ad491940f16391aeecf8f6c1def2fad96b3a0918b35c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31d7e311e9e43ff1ad491940f16391aeecf8f6c1def2fad96b3a0918b35c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c31d7e311e9e43ff1ad491940f16391aeecf8f6c1def2fad96b3a0918b35c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:30.525 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:30.527 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:30.527 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:30 np0005603621 podman[357052]: 2026-01-31 08:43:30.541501428 +0000 UTC m=+0.513398918 container init 1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:43:30 np0005603621 podman[357052]: 2026-01-31 08:43:30.548832601 +0000 UTC m=+0.520730071 container start 1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:43:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 621 KiB/s wr, 62 op/s
Jan 31 03:43:30 np0005603621 podman[357052]: 2026-01-31 08:43:30.752309312 +0000 UTC m=+0.724206832 container attach 1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]: {
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:    "0": [
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:        {
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "devices": [
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "/dev/loop3"
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            ],
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "lv_name": "ceph_lv0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "lv_size": "7511998464",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "name": "ceph_lv0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "tags": {
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.cluster_name": "ceph",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.crush_device_class": "",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.encrypted": "0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.osd_id": "0",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.type": "block",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:                "ceph.vdo": "0"
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            },
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "type": "block",
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:            "vg_name": "ceph_vg0"
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:        }
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]:    ]
Jan 31 03:43:31 np0005603621 relaxed_noyce[357070]: }
Jan 31 03:43:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:31 np0005603621 systemd[1]: libpod-1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f.scope: Deactivated successfully.
Jan 31 03:43:31 np0005603621 podman[357052]: 2026-01-31 08:43:31.320859497 +0000 UTC m=+1.292756967 container died 1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:43:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9c31d7e311e9e43ff1ad491940f16391aeecf8f6c1def2fad96b3a0918b35c73-merged.mount: Deactivated successfully.
Jan 31 03:43:31 np0005603621 podman[357052]: 2026-01-31 08:43:31.389087661 +0000 UTC m=+1.360985121 container remove 1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_noyce, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:43:31 np0005603621 systemd[1]: libpod-conmon-1e6d5176aad404ef8949ee1ca2d721bb374680700e22f39a54a5979a8242786f.scope: Deactivated successfully.
Jan 31 03:43:31 np0005603621 radosgw[94351]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 31 03:43:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:31.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:31.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:31 np0005603621 podman[357231]: 2026-01-31 08:43:31.930549697 +0000 UTC m=+0.039564235 container create c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:43:31 np0005603621 systemd[1]: Started libpod-conmon-c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42.scope.
Jan 31 03:43:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:32 np0005603621 podman[357231]: 2026-01-31 08:43:31.913727494 +0000 UTC m=+0.022742062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:43:32 np0005603621 podman[357231]: 2026-01-31 08:43:32.015667456 +0000 UTC m=+0.124682024 container init c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:43:32 np0005603621 podman[357231]: 2026-01-31 08:43:32.021229683 +0000 UTC m=+0.130244221 container start c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 03:43:32 np0005603621 systemd[1]: libpod-c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42.scope: Deactivated successfully.
Jan 31 03:43:32 np0005603621 admiring_moore[357247]: 167 167
Jan 31 03:43:32 np0005603621 conmon[357247]: conmon c378d2194c05b1d1edec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42.scope/container/memory.events
Jan 31 03:43:32 np0005603621 podman[357231]: 2026-01-31 08:43:32.030536198 +0000 UTC m=+0.139550756 container attach c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:43:32 np0005603621 podman[357231]: 2026-01-31 08:43:32.031151757 +0000 UTC m=+0.140166295 container died c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moore, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:43:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6594bf917551c6f0879ee338fb080e90f4361975780d8eb912f354a4d493ce36-merged.mount: Deactivated successfully.
Jan 31 03:43:32 np0005603621 podman[357231]: 2026-01-31 08:43:32.099533675 +0000 UTC m=+0.208548263 container remove c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_moore, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:43:32 np0005603621 systemd[1]: libpod-conmon-c378d2194c05b1d1edec3a081311075f9d6ae94139b5b78e60faf418b7419f42.scope: Deactivated successfully.
Jan 31 03:43:32 np0005603621 podman[357271]: 2026-01-31 08:43:32.207050334 +0000 UTC m=+0.019896472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:43:32 np0005603621 podman[357271]: 2026-01-31 08:43:32.372894062 +0000 UTC m=+0.185740180 container create 24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ramanujan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:43:32 np0005603621 systemd[1]: Started libpod-conmon-24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f.scope.
Jan 31 03:43:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e3557c8dbe9039a619d2ef273f504101f8b4c982b1be933eecfff48aa87cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e3557c8dbe9039a619d2ef273f504101f8b4c982b1be933eecfff48aa87cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e3557c8dbe9039a619d2ef273f504101f8b4c982b1be933eecfff48aa87cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e3557c8dbe9039a619d2ef273f504101f8b4c982b1be933eecfff48aa87cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 45 KiB/s wr, 117 op/s
Jan 31 03:43:32 np0005603621 podman[357271]: 2026-01-31 08:43:32.578894683 +0000 UTC m=+0.391740831 container init 24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ramanujan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:43:32 np0005603621 podman[357271]: 2026-01-31 08:43:32.585519973 +0000 UTC m=+0.398366091 container start 24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:43:32 np0005603621 podman[357271]: 2026-01-31 08:43:32.622718752 +0000 UTC m=+0.435564890 container attach 24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:43:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:33.013 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:33 np0005603621 nova_compute[247399]: 2026-01-31 08:43:33.208 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:33 np0005603621 nova_compute[247399]: 2026-01-31 08:43:33.209 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]: {
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:        "osd_id": 0,
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:        "type": "bluestore"
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]:    }
Jan 31 03:43:33 np0005603621 elated_ramanujan[357288]: }
Jan 31 03:43:33 np0005603621 systemd[1]: libpod-24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f.scope: Deactivated successfully.
Jan 31 03:43:33 np0005603621 podman[357271]: 2026-01-31 08:43:33.448316507 +0000 UTC m=+1.261162625 container died 24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:43:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a4e3557c8dbe9039a619d2ef273f504101f8b4c982b1be933eecfff48aa87cad-merged.mount: Deactivated successfully.
Jan 31 03:43:33 np0005603621 podman[357271]: 2026-01-31 08:43:33.490822485 +0000 UTC m=+1.303668603 container remove 24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:43:33 np0005603621 systemd[1]: libpod-conmon-24a6ede5a2ed61624a3460bbc07efe5f6554f50aa461844489f1b53df8b5096f.scope: Deactivated successfully.
Jan 31 03:43:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:43:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:43:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:43:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:43:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fa7db512-959d-4089-a44b-5d4140a8555b does not exist
Jan 31 03:43:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3336b7fa-df6f-47b4-82b0-47566fb38ba5 does not exist
Jan 31 03:43:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 564512c3-70b1-4471-9f56-c29289b869af does not exist
Jan 31 03:43:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:33.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:33.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:34 np0005603621 nova_compute[247399]: 2026-01-31 08:43:34.199 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:43:34 np0005603621 nova_compute[247399]: 2026-01-31 08:43:34.297 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:43:34 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:43:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 570 KiB/s rd, 42 KiB/s wr, 107 op/s
Jan 31 03:43:35 np0005603621 nova_compute[247399]: 2026-01-31 08:43:35.057 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:35 np0005603621 nova_compute[247399]: 2026-01-31 08:43:35.057 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:35 np0005603621 nova_compute[247399]: 2026-01-31 08:43:35.072 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:43:35 np0005603621 nova_compute[247399]: 2026-01-31 08:43:35.072 247403 INFO nova.compute.claims [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:43:35 np0005603621 nova_compute[247399]: 2026-01-31 08:43:35.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:35 np0005603621 nova_compute[247399]: 2026-01-31 08:43:35.720 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:35.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:35.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:43:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541974174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:43:36 np0005603621 nova_compute[247399]: 2026-01-31 08:43:36.175 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:36 np0005603621 nova_compute[247399]: 2026-01-31 08:43:36.182 247403 DEBUG nova.compute.provider_tree [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:43:36 np0005603621 nova_compute[247399]: 2026-01-31 08:43:36.260 247403 DEBUG nova.scheduler.client.report [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:43:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 43 KiB/s wr, 169 op/s
Jan 31 03:43:36 np0005603621 nova_compute[247399]: 2026-01-31 08:43:36.729 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:36 np0005603621 nova_compute[247399]: 2026-01-31 08:43:36.902 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "67c3e485-27de-4178-b21b-834d47c99d3a" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:36 np0005603621 nova_compute[247399]: 2026-01-31 08:43:36.903 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "67c3e485-27de-4178-b21b-834d47c99d3a" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:37 np0005603621 nova_compute[247399]: 2026-01-31 08:43:37.124 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "67c3e485-27de-4178-b21b-834d47c99d3a" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:37 np0005603621 nova_compute[247399]: 2026-01-31 08:43:37.125 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:43:37 np0005603621 nova_compute[247399]: 2026-01-31 08:43:37.499 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:43:37 np0005603621 nova_compute[247399]: 2026-01-31 08:43:37.499 247403 DEBUG nova.network.neutron [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:43:37 np0005603621 nova_compute[247399]: 2026-01-31 08:43:37.721 247403 INFO nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:43:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:38 np0005603621 nova_compute[247399]: 2026-01-31 08:43:38.298 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:43:38 np0005603621 nova_compute[247399]: 2026-01-31 08:43:38.334 247403 DEBUG nova.policy [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2e417cf7927412d9555b79aae71bb54', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ac1759c892d49069e58e75323dece87', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 288 op/s
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:43:38
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 31 03:43:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:43:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.735 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.736 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.737 247403 INFO nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Creating image(s)#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.767 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.799 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.833 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.837 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:39.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.907 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.909 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.910 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:39.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.910 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.941 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:39 np0005603621 nova_compute[247399]: 2026-01-31 08:43:39.944 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f7a3c847-b21e-4590-9666-f14efe505115_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.239 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.243 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f7a3c847-b21e-4590-9666-f14efe505115_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.313 247403 DEBUG nova.network.neutron [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Successfully created port: d961323c-850a-4878-8b51-65ed386d2942 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.322 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] resizing rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.494 247403 DEBUG nova.objects.instance [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lazy-loading 'migration_context' on Instance uuid f7a3c847-b21e-4590-9666-f14efe505115 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:43:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 23 KiB/s wr, 254 op/s
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.583 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.583 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Ensure instance console log exists: /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.584 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.584 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:40 np0005603621 nova_compute[247399]: 2026-01-31 08:43:40.584 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:41.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:41.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 294 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 281 op/s
Jan 31 03:43:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:43.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:44 np0005603621 nova_compute[247399]: 2026-01-31 08:43:44.305 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:44 np0005603621 nova_compute[247399]: 2026-01-31 08:43:44.441 247403 DEBUG nova.network.neutron [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Successfully updated port: d961323c-850a-4878-8b51-65ed386d2942 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:43:44 np0005603621 nova_compute[247399]: 2026-01-31 08:43:44.517 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:43:44 np0005603621 nova_compute[247399]: 2026-01-31 08:43:44.518 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquired lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:43:44 np0005603621 nova_compute[247399]: 2026-01-31 08:43:44.518 247403 DEBUG nova.network.neutron [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:43:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 294 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Jan 31 03:43:45 np0005603621 nova_compute[247399]: 2026-01-31 08:43:45.174 247403 DEBUG nova.network.neutron [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:43:45 np0005603621 nova_compute[247399]: 2026-01-31 08:43:45.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:45 np0005603621 nova_compute[247399]: 2026-01-31 08:43:45.640 247403 DEBUG nova.compute.manager [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-changed-d961323c-850a-4878-8b51-65ed386d2942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:45 np0005603621 nova_compute[247399]: 2026-01-31 08:43:45.640 247403 DEBUG nova.compute.manager [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Refreshing instance network info cache due to event network-changed-d961323c-850a-4878-8b51-65ed386d2942. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:43:45 np0005603621 nova_compute[247399]: 2026-01-31 08:43:45.640 247403 DEBUG oslo_concurrency.lockutils [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:43:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:45.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:45.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 229 op/s
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.513 247403 DEBUG nova.network.neutron [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updating instance_info_cache with network_info: [{"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:47.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:47.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.959 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Releasing lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.960 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Instance network_info: |[{"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.960 247403 DEBUG oslo_concurrency.lockutils [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.961 247403 DEBUG nova.network.neutron [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Refreshing network info cache for port d961323c-850a-4878-8b51-65ed386d2942 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.964 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Start _get_guest_xml network_info=[{"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.968 247403 WARNING nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.977 247403 DEBUG nova.virt.libvirt.host [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.978 247403 DEBUG nova.virt.libvirt.host [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.985 247403 DEBUG nova.virt.libvirt.host [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.985 247403 DEBUG nova.virt.libvirt.host [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.987 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.987 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.987 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.988 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.988 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.988 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.988 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.988 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.989 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.989 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.989 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.989 247403 DEBUG nova.virt.hardware [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:43:47 np0005603621 nova_compute[247399]: 2026-01-31 08:43:47.992 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:43:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287619176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:43:48 np0005603621 nova_compute[247399]: 2026-01-31 08:43:48.457 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:48 np0005603621 nova_compute[247399]: 2026-01-31 08:43:48.484 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:48 np0005603621 nova_compute[247399]: 2026-01-31 08:43:48.491 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 MiB/s wr, 225 op/s
Jan 31 03:43:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:43:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3740954343' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.041 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.042 247403 DEBUG nova.virt.libvirt.vif [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:43:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-59852738',display_name='tempest-ServerGroupTestJSON-server-59852738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-59852738',id=153,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ac1759c892d49069e58e75323dece87',ramdisk_id='',reservation_id='r-7itwrejj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-58001083',owner_user_name='tempest-ServerGroupTestJSON-58001083-proje
ct-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:43:38Z,user_data=None,user_id='c2e417cf7927412d9555b79aae71bb54',uuid=f7a3c847-b21e-4590-9666-f14efe505115,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.043 247403 DEBUG nova.network.os_vif_util [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Converting VIF {"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.044 247403 DEBUG nova.network.os_vif_util [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.044 247403 DEBUG nova.objects.instance [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lazy-loading 'pci_devices' on Instance uuid f7a3c847-b21e-4590-9666-f14efe505115 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.301 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <uuid>f7a3c847-b21e-4590-9666-f14efe505115</uuid>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <name>instance-00000099</name>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:name>tempest-ServerGroupTestJSON-server-59852738</nova:name>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:43:47</nova:creationTime>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:user uuid="c2e417cf7927412d9555b79aae71bb54">tempest-ServerGroupTestJSON-58001083-project-member</nova:user>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:project uuid="4ac1759c892d49069e58e75323dece87">tempest-ServerGroupTestJSON-58001083</nova:project>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <nova:port uuid="d961323c-850a-4878-8b51-65ed386d2942">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <entry name="serial">f7a3c847-b21e-4590-9666-f14efe505115</entry>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <entry name="uuid">f7a3c847-b21e-4590-9666-f14efe505115</entry>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f7a3c847-b21e-4590-9666-f14efe505115_disk">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f7a3c847-b21e-4590-9666-f14efe505115_disk.config">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:77:d4:9a"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <target dev="tapd961323c-85"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/console.log" append="off"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:43:49 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:43:49 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:43:49 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:43:49 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.302 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Preparing to wait for external event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.303 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.303 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.303 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.304 247403 DEBUG nova.virt.libvirt.vif [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:43:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-59852738',display_name='tempest-ServerGroupTestJSON-server-59852738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-59852738',id=153,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ac1759c892d49069e58e75323dece87',ramdisk_id='',reservation_id='r-7itwrejj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-58001083',owner_user_name='tempest-ServerGroupTestJSON-5800
1083-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:43:38Z,user_data=None,user_id='c2e417cf7927412d9555b79aae71bb54',uuid=f7a3c847-b21e-4590-9666-f14efe505115,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.304 247403 DEBUG nova.network.os_vif_util [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Converting VIF {"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.305 247403 DEBUG nova.network.os_vif_util [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.305 247403 DEBUG os_vif [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.307 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.307 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.312 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.312 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd961323c-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.313 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd961323c-85, col_values=(('external_ids', {'iface-id': 'd961323c-850a-4878-8b51-65ed386d2942', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:d4:9a', 'vm-uuid': 'f7a3c847-b21e-4590-9666-f14efe505115'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:49 np0005603621 NetworkManager[49013]: <info>  [1769849029.3157] manager: (tapd961323c-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.314 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.321 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.322 247403 INFO os_vif [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85')#033[00m
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0053206571049692905 of space, bias 1.0, pg target 1.5961971314907872 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164322771263726 of space, bias 1.0, pg target 0.647132508607854 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:43:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.908 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.909 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.909 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] No VIF found with MAC fa:16:3e:77:d4:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.909 247403 INFO nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Using config drive#033[00m
Jan 31 03:43:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:49.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:49 np0005603621 nova_compute[247399]: 2026-01-31 08:43:49.942 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:50 np0005603621 nova_compute[247399]: 2026-01-31 08:43:50.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:50 np0005603621 nova_compute[247399]: 2026-01-31 08:43:50.414 247403 DEBUG nova.network.neutron [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updated VIF entry in instance network info cache for port d961323c-850a-4878-8b51-65ed386d2942. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:43:50 np0005603621 nova_compute[247399]: 2026-01-31 08:43:50.414 247403 DEBUG nova.network.neutron [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updating instance_info_cache with network_info: [{"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:43:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 327 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 95 op/s
Jan 31 03:43:50 np0005603621 nova_compute[247399]: 2026-01-31 08:43:50.898 247403 DEBUG oslo_concurrency.lockutils [req-e2bc8fae-12d5-4652-98f9-39146555b7e7 req-8e43054a-7ff4-4076-93ee-14eacb97988a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:43:51 np0005603621 nova_compute[247399]: 2026-01-31 08:43:51.153 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:51 np0005603621 nova_compute[247399]: 2026-01-31 08:43:51.224 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:51.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:51 np0005603621 nova_compute[247399]: 2026-01-31 08:43:51.925 247403 INFO nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Creating config drive at /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/disk.config#033[00m
Jan 31 03:43:51 np0005603621 nova_compute[247399]: 2026-01-31 08:43:51.929 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyvp_qpxq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.055 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpyvp_qpxq" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.085 247403 DEBUG nova.storage.rbd_utils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] rbd image f7a3c847-b21e-4590-9666-f14efe505115_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.088 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/disk.config f7a3c847-b21e-4590-9666-f14efe505115_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:43:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 372 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.643 247403 DEBUG oslo_concurrency.processutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/disk.config f7a3c847-b21e-4590-9666-f14efe505115_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.644 247403 INFO nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Deleting local config drive /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115/disk.config because it was imported into RBD.#033[00m
Jan 31 03:43:52 np0005603621 kernel: tapd961323c-85: entered promiscuous mode
Jan 31 03:43:52 np0005603621 NetworkManager[49013]: <info>  [1769849032.6928] manager: (tapd961323c-85): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Jan 31 03:43:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:52Z|00631|binding|INFO|Claiming lport d961323c-850a-4878-8b51-65ed386d2942 for this chassis.
Jan 31 03:43:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:52Z|00632|binding|INFO|d961323c-850a-4878-8b51-65ed386d2942: Claiming fa:16:3e:77:d4:9a 10.100.0.3
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.694 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.714 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.717 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 systemd-udevd[357756]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:43:52 np0005603621 systemd-machined[212769]: New machine qemu-76-instance-00000099.
Jan 31 03:43:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:52Z|00633|binding|INFO|Setting lport d961323c-850a-4878-8b51-65ed386d2942 ovn-installed in OVS
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 NetworkManager[49013]: <info>  [1769849032.7311] device (tapd961323c-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:43:52 np0005603621 NetworkManager[49013]: <info>  [1769849032.7317] device (tapd961323c-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:43:52 np0005603621 systemd[1]: Started Virtual Machine qemu-76-instance-00000099.
Jan 31 03:43:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:52Z|00634|binding|INFO|Setting lport d961323c-850a-4878-8b51-65ed386d2942 up in Southbound
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.752 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:d4:9a 10.100.0.3'], port_security=['fa:16:3e:77:d4:9a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f7a3c847-b21e-4590-9666-f14efe505115', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac1759c892d49069e58e75323dece87', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc80ec04-df4b-4ec9-a99e-d2f92ba7ce0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de1e1e6-4766-4f0f-adc3-dc68f589636c, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d961323c-850a-4878-8b51-65ed386d2942) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.754 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d961323c-850a-4878-8b51-65ed386d2942 in datapath b1f7fb92-e452-440f-a0c2-1a4bb42b44cb bound to our chassis#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.755 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1f7fb92-e452-440f-a0c2-1a4bb42b44cb#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.764 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4493fa67-9778-446f-aa33-ed1ddd7beca1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.765 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1f7fb92-e1 in ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.767 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1f7fb92-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.768 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2c3cfa9c-14aa-47ec-b5b4-b6505e1c592f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.768 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[665a81b2-a76e-41d4-a5c2-fda888aa07a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.776 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d2371429-c6ad-4f1e-a301-0c9f3a749c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.787 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0822e92-3984-42fc-bc69-46060c30f266]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.816 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2cf33e-fd05-44b7-8d8e-6b5fa7585db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 NetworkManager[49013]: <info>  [1769849032.8239] manager: (tapb1f7fb92-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.823 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[672e8995-a10c-46bd-b7aa-9c7950b16be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.852 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[959e81e9-d64c-48b2-a9fd-34efe24f42f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.855 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[41eadda9-445f-491d-885a-6324237b0e80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 NetworkManager[49013]: <info>  [1769849032.8726] device (tapb1f7fb92-e0): carrier: link connected
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.875 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[65b13c86-5ef7-4f9e-902b-7b9b212fd869]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.889 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29ff09eb-2d23-4663-8c72-8ff6e3d20209]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1f7fb92-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:f6:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833838, 'reachable_time': 43396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357790, 'error': None, 'target': 'ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.902 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[849364a9-3f95-4b61-b74c-c80c46576f26]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb4:f617'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 833838, 'tstamp': 833838}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357791, 'error': None, 'target': 'ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.918 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2c1e7aa7-366c-4044-b6be-a869d5b6676c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1f7fb92-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b4:f6:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833838, 'reachable_time': 43396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357792, 'error': None, 'target': 'ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.942 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e04b15-c10f-4efd-87ae-4366643db409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.981 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8a51fd90-5a95-435d-8b03-c99626efb532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.983 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f7fb92-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.983 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.983 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1f7fb92-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 kernel: tapb1f7fb92-e0: entered promiscuous mode
Jan 31 03:43:52 np0005603621 NetworkManager[49013]: <info>  [1769849032.9856] manager: (tapb1f7fb92-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.988 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1f7fb92-e0, col_values=(('external_ids', {'iface-id': '1950ecfb-1434-411c-ad01-f294e294c2f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:43:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:43:52Z|00635|binding|INFO|Releasing lport 1950ecfb-1434-411c-ad01-f294e294c2f4 from this chassis (sb_readonly=0)
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 nova_compute[247399]: 2026-01-31 08:43:52.995 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.995 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1f7fb92-e452-440f-a0c2-1a4bb42b44cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1f7fb92-e452-440f-a0c2-1a4bb42b44cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.996 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[694ce182-a72a-4737-84ac-2d85e84f1f86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.997 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/b1f7fb92-e452-440f-a0c2-1a4bb42b44cb.pid.haproxy
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID b1f7fb92-e452-440f-a0c2-1a4bb42b44cb
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:43:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:43:52.999 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'env', 'PROCESS_TAG=haproxy-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1f7fb92-e452-440f-a0c2-1a4bb42b44cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:43:53 np0005603621 podman[357824]: 2026-01-31 08:43:53.328034045 +0000 UTC m=+0.046942609 container create e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:43:53 np0005603621 systemd[1]: Started libpod-conmon-e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179.scope.
Jan 31 03:43:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:43:53 np0005603621 podman[357824]: 2026-01-31 08:43:53.304362665 +0000 UTC m=+0.023271259 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:43:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d452a5d5f64782a9a4ed96d01de0f811c56261016a2cd34e04b46b60b8949f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:43:53 np0005603621 podman[357824]: 2026-01-31 08:43:53.415215899 +0000 UTC m=+0.134124493 container init e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:43:53 np0005603621 podman[357824]: 2026-01-31 08:43:53.423094239 +0000 UTC m=+0.142002843 container start e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:43:53 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [NOTICE]   (357858) : New worker (357863) forked
Jan 31 03:43:53 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [NOTICE]   (357858) : Loading success.
Jan 31 03:43:53 np0005603621 nova_compute[247399]: 2026-01-31 08:43:53.720 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849033.7203453, f7a3c847-b21e-4590-9666-f14efe505115 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:53 np0005603621 nova_compute[247399]: 2026-01-31 08:43:53.722 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] VM Started (Lifecycle Event)#033[00m
Jan 31 03:43:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:53.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:54 np0005603621 nova_compute[247399]: 2026-01-31 08:43:54.103 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:54 np0005603621 nova_compute[247399]: 2026-01-31 08:43:54.107 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849033.7214184, f7a3c847-b21e-4590-9666-f14efe505115 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:54 np0005603621 nova_compute[247399]: 2026-01-31 08:43:54.108 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:43:54 np0005603621 nova_compute[247399]: 2026-01-31 08:43:54.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 31 03:43:54 np0005603621 nova_compute[247399]: 2026-01-31 08:43:54.709 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:54 np0005603621 nova_compute[247399]: 2026-01-31 08:43:54.713 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.238 247403 DEBUG nova.compute.manager [req-3f82521a-804c-431f-ae34-70983dd80323 req-2ad4c7f1-e7a5-467f-83ab-5c7cf4d9e6e3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.239 247403 DEBUG oslo_concurrency.lockutils [req-3f82521a-804c-431f-ae34-70983dd80323 req-2ad4c7f1-e7a5-467f-83ab-5c7cf4d9e6e3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.239 247403 DEBUG oslo_concurrency.lockutils [req-3f82521a-804c-431f-ae34-70983dd80323 req-2ad4c7f1-e7a5-467f-83ab-5c7cf4d9e6e3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.240 247403 DEBUG oslo_concurrency.lockutils [req-3f82521a-804c-431f-ae34-70983dd80323 req-2ad4c7f1-e7a5-467f-83ab-5c7cf4d9e6e3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.240 247403 DEBUG nova.compute.manager [req-3f82521a-804c-431f-ae34-70983dd80323 req-2ad4c7f1-e7a5-467f-83ab-5c7cf4d9e6e3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Processing event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.241 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.244 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.247 247403 INFO nova.virt.libvirt.driver [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Instance spawned successfully.#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.247 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.715 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.715 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849035.2441282, f7a3c847-b21e-4590-9666-f14efe505115 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.716 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.765 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.766 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.766 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.766 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.767 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.767 247403 DEBUG nova.virt.libvirt.driver [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.849 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:55 np0005603621 nova_compute[247399]: 2026-01-31 08:43:55.852 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:43:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:55.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:56 np0005603621 nova_compute[247399]: 2026-01-31 08:43:56.094 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:43:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:43:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 31 03:43:56 np0005603621 nova_compute[247399]: 2026-01-31 08:43:56.680 247403 INFO nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Took 16.94 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:43:56 np0005603621 nova_compute[247399]: 2026-01-31 08:43:56.681 247403 DEBUG nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:43:57 np0005603621 nova_compute[247399]: 2026-01-31 08:43:57.736 247403 INFO nova.compute.manager [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Took 22.71 seconds to build instance.#033[00m
Jan 31 03:43:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:57.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:57.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 141 op/s
Jan 31 03:43:59 np0005603621 nova_compute[247399]: 2026-01-31 08:43:59.142 247403 DEBUG oslo_concurrency.lockutils [None req-a8912c74-b708-4009-ba9c-6a2c2e85b1b0 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:43:59 np0005603621 nova_compute[247399]: 2026-01-31 08:43:59.319 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:43:59 np0005603621 podman[357949]: 2026-01-31 08:43:59.506452679 +0000 UTC m=+0.057881886 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:43:59 np0005603621 podman[357950]: 2026-01-31 08:43:59.520910618 +0000 UTC m=+0.072583113 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:43:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:43:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:43:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:43:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:43:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:43:59.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:00 np0005603621 nova_compute[247399]: 2026-01-31 08:44:00.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 17 KiB/s wr, 84 op/s
Jan 31 03:44:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:01.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 102 op/s
Jan 31 03:44:02 np0005603621 nova_compute[247399]: 2026-01-31 08:44:02.714 247403 DEBUG nova.compute.manager [req-22989770-86ba-4b84-a5a4-d7c7bb7a5f89 req-a8120fdc-5e2a-4a06-994d-ed5c9b0483e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:44:02 np0005603621 nova_compute[247399]: 2026-01-31 08:44:02.716 247403 DEBUG oslo_concurrency.lockutils [req-22989770-86ba-4b84-a5a4-d7c7bb7a5f89 req-a8120fdc-5e2a-4a06-994d-ed5c9b0483e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:02 np0005603621 nova_compute[247399]: 2026-01-31 08:44:02.716 247403 DEBUG oslo_concurrency.lockutils [req-22989770-86ba-4b84-a5a4-d7c7bb7a5f89 req-a8120fdc-5e2a-4a06-994d-ed5c9b0483e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:02 np0005603621 nova_compute[247399]: 2026-01-31 08:44:02.716 247403 DEBUG oslo_concurrency.lockutils [req-22989770-86ba-4b84-a5a4-d7c7bb7a5f89 req-a8120fdc-5e2a-4a06-994d-ed5c9b0483e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:02 np0005603621 nova_compute[247399]: 2026-01-31 08:44:02.716 247403 DEBUG nova.compute.manager [req-22989770-86ba-4b84-a5a4-d7c7bb7a5f89 req-a8120fdc-5e2a-4a06-994d-ed5c9b0483e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] No waiting events found dispatching network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:44:02 np0005603621 nova_compute[247399]: 2026-01-31 08:44:02.717 247403 WARNING nova.compute.manager [req-22989770-86ba-4b84-a5a4-d7c7bb7a5f89 req-a8120fdc-5e2a-4a06-994d-ed5c9b0483e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received unexpected event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:44:03 np0005603621 nova_compute[247399]: 2026-01-31 08:44:03.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:03 np0005603621 nova_compute[247399]: 2026-01-31 08:44:03.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:44:03 np0005603621 nova_compute[247399]: 2026-01-31 08:44:03.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:44:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:03.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:04 np0005603621 nova_compute[247399]: 2026-01-31 08:44:04.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:44:04 np0005603621 nova_compute[247399]: 2026-01-31 08:44:04.233 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:44:04 np0005603621 nova_compute[247399]: 2026-01-31 08:44:04.233 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:44:04 np0005603621 nova_compute[247399]: 2026-01-31 08:44:04.233 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f7a3c847-b21e-4590-9666-f14efe505115 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:44:04 np0005603621 nova_compute[247399]: 2026-01-31 08:44:04.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 71 op/s
Jan 31 03:44:04 np0005603621 nova_compute[247399]: 2026-01-31 08:44:04.649 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:04.649 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:44:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:04.651 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.291 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:05.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.970 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.970 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.971 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.971 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.971 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.972 247403 INFO nova.compute.manager [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Terminating instance#033[00m
Jan 31 03:44:05 np0005603621 nova_compute[247399]: 2026-01-31 08:44:05.973 247403 DEBUG nova.compute.manager [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:44:06 np0005603621 kernel: tapd961323c-85 (unregistering): left promiscuous mode
Jan 31 03:44:06 np0005603621 NetworkManager[49013]: <info>  [1769849046.0259] device (tapd961323c-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:44:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:44:06Z|00636|binding|INFO|Releasing lport d961323c-850a-4878-8b51-65ed386d2942 from this chassis (sb_readonly=0)
Jan 31 03:44:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:44:06Z|00637|binding|INFO|Setting lport d961323c-850a-4878-8b51-65ed386d2942 down in Southbound
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.032 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:06 np0005603621 ovn_controller[149152]: 2026-01-31T08:44:06Z|00638|binding|INFO|Removing iface tapd961323c-85 ovn-installed in OVS
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:06 np0005603621 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 31 03:44:06 np0005603621 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000099.scope: Consumed 11.796s CPU time.
Jan 31 03:44:06 np0005603621 systemd-machined[212769]: Machine qemu-76-instance-00000099 terminated.
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.135 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:d4:9a 10.100.0.3'], port_security=['fa:16:3e:77:d4:9a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f7a3c847-b21e-4590-9666-f14efe505115', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac1759c892d49069e58e75323dece87', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc80ec04-df4b-4ec9-a99e-d2f92ba7ce0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6de1e1e6-4766-4f0f-adc3-dc68f589636c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d961323c-850a-4878-8b51-65ed386d2942) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.136 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d961323c-850a-4878-8b51-65ed386d2942 in datapath b1f7fb92-e452-440f-a0c2-1a4bb42b44cb unbound from our chassis#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.138 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.139 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a581f9ef-4d9b-4c3b-9b05-ad92a528c2da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.139 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb namespace which is not needed anymore#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.205 247403 INFO nova.virt.libvirt.driver [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Instance destroyed successfully.#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.205 247403 DEBUG nova.objects.instance [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lazy-loading 'resources' on Instance uuid f7a3c847-b21e-4590-9666-f14efe505115 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:44:06 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [NOTICE]   (357858) : haproxy version is 2.8.14-c23fe91
Jan 31 03:44:06 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [NOTICE]   (357858) : path to executable is /usr/sbin/haproxy
Jan 31 03:44:06 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [WARNING]  (357858) : Exiting Master process...
Jan 31 03:44:06 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [WARNING]  (357858) : Exiting Master process...
Jan 31 03:44:06 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [ALERT]    (357858) : Current worker (357863) exited with code 143 (Terminated)
Jan 31 03:44:06 np0005603621 neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb[357839]: [WARNING]  (357858) : All workers exited. Exiting... (0)
Jan 31 03:44:06 np0005603621 systemd[1]: libpod-e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179.scope: Deactivated successfully.
Jan 31 03:44:06 np0005603621 podman[358025]: 2026-01-31 08:44:06.253368807 +0000 UTC m=+0.045178524 container died e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.269 247403 DEBUG nova.virt.libvirt.vif [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:43:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-59852738',display_name='tempest-ServerGroupTestJSON-server-59852738',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-59852738',id=153,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:43:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ac1759c892d49069e58e75323dece87',ramdisk_id='',reservation_id='r-7itwrejj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-58001083',owner_user_name='tempest-ServerGroupTestJSON-58001083-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:43:57Z,user_data=None,user_id='c2e417cf7927412d9555b79aae71bb54',uuid=f7a3c847-b21e-4590-9666-f14efe505115,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.270 247403 DEBUG nova.network.os_vif_util [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Converting VIF {"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.271 247403 DEBUG nova.network.os_vif_util [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.272 247403 DEBUG os_vif [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.273 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.274 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd961323c-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.275 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.280 247403 INFO os_vif [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:d4:9a,bridge_name='br-int',has_traffic_filtering=True,id=d961323c-850a-4878-8b51-65ed386d2942,network=Network(b1f7fb92-e452-440f-a0c2-1a4bb42b44cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd961323c-85')#033[00m
Jan 31 03:44:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179-userdata-shm.mount: Deactivated successfully.
Jan 31 03:44:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5d452a5d5f64782a9a4ed96d01de0f811c56261016a2cd34e04b46b60b8949f8-merged.mount: Deactivated successfully.
Jan 31 03:44:06 np0005603621 podman[358025]: 2026-01-31 08:44:06.298065264 +0000 UTC m=+0.089874981 container cleanup e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:44:06 np0005603621 systemd[1]: libpod-conmon-e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179.scope: Deactivated successfully.
Jan 31 03:44:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:06 np0005603621 podman[358070]: 2026-01-31 08:44:06.364791639 +0000 UTC m=+0.050633545 container remove e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.371 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74f15997-4c65-4184-aefb-2c0413e623b8]: (4, ('Sat Jan 31 08:44:06 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb (e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179)\ne907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179\nSat Jan 31 08:44:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb (e907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179)\ne907311a8bd6eb7a5d967ddfdebf2324635b89d01fb0309349c5729ca2075179\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.373 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[721c3fd6-8928-455a-b36d-523e9ff3d7c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.374 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1f7fb92-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:44:06 np0005603621 kernel: tapb1f7fb92-e0: left promiscuous mode
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.420 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.427 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe45eec-98e0-42f6-8684-395ff9c46cd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.440 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[75e795c7-6865-4ba2-aab1-cdb131cb0d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.441 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[397b7be3-44ec-419d-8119-87b4469fd9e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.452 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb9ac59-4937-499b-a3c1-ab047c2eff76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 833832, 'reachable_time': 27263, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358088, 'error': None, 'target': 'ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.454 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1f7fb92-e452-440f-a0c2-1a4bb42b44cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:44:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:06.454 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[a98a760b-4ac1-44f8-b640-a86582e1a580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:44:06 np0005603621 systemd[1]: run-netns-ovnmeta\x2db1f7fb92\x2de452\x2d440f\x2da0c2\x2d1a4bb42b44cb.mount: Deactivated successfully.
Jan 31 03:44:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 248 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 71 op/s
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.964 247403 INFO nova.virt.libvirt.driver [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Deleting instance files /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115_del#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.965 247403 INFO nova.virt.libvirt.driver [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Deletion of /var/lib/nova/instances/f7a3c847-b21e-4590-9666-f14efe505115_del complete#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.975 247403 DEBUG nova.compute.manager [req-ca9011e8-5994-4c4e-a320-f7f24d660724 req-ce5e6c47-7e8a-4bc9-9748-beb8e79c6796 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-vif-unplugged-d961323c-850a-4878-8b51-65ed386d2942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.975 247403 DEBUG oslo_concurrency.lockutils [req-ca9011e8-5994-4c4e-a320-f7f24d660724 req-ce5e6c47-7e8a-4bc9-9748-beb8e79c6796 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.976 247403 DEBUG oslo_concurrency.lockutils [req-ca9011e8-5994-4c4e-a320-f7f24d660724 req-ce5e6c47-7e8a-4bc9-9748-beb8e79c6796 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.976 247403 DEBUG oslo_concurrency.lockutils [req-ca9011e8-5994-4c4e-a320-f7f24d660724 req-ce5e6c47-7e8a-4bc9-9748-beb8e79c6796 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.976 247403 DEBUG nova.compute.manager [req-ca9011e8-5994-4c4e-a320-f7f24d660724 req-ce5e6c47-7e8a-4bc9-9748-beb8e79c6796 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] No waiting events found dispatching network-vif-unplugged-d961323c-850a-4878-8b51-65ed386d2942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:44:06 np0005603621 nova_compute[247399]: 2026-01-31 08:44:06.976 247403 DEBUG nova.compute.manager [req-ca9011e8-5994-4c4e-a320-f7f24d660724 req-ce5e6c47-7e8a-4bc9-9748-beb8e79c6796 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-vif-unplugged-d961323c-850a-4878-8b51-65ed386d2942 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.606 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updating instance_info_cache with network_info: [{"id": "d961323c-850a-4878-8b51-65ed386d2942", "address": "fa:16:3e:77:d4:9a", "network": {"id": "b1f7fb92-e452-440f-a0c2-1a4bb42b44cb", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1348141125-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ac1759c892d49069e58e75323dece87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd961323c-85", "ovs_interfaceid": "d961323c-850a-4878-8b51-65ed386d2942", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.637 247403 INFO nova.compute.manager [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Took 1.66 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.638 247403 DEBUG oslo.service.loopingcall [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.638 247403 DEBUG nova.compute.manager [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.639 247403 DEBUG nova.network.neutron [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.722 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-f7a3c847-b21e-4590-9666-f14efe505115" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.723 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.723 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.724 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:07 np0005603621 nova_compute[247399]: 2026-01-31 08:44:07.724 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:44:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.360 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.361 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.361 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.361 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.361 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 225 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 54 op/s
Jan 31 03:44:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:44:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698539741' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.816 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.985 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.986 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4219MB free_disk=20.928455352783203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.987 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:08 np0005603621 nova_compute[247399]: 2026-01-31 08:44:08.987 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.382 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f7a3c847-b21e-4590-9666-f14efe505115 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.383 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.383 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.485 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.862 247403 DEBUG nova.network.neutron [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.870 247403 DEBUG nova.compute.manager [req-31dd67f5-94fc-458a-acd5-6cff328f2e4c req-1b158028-520e-4324-88d2-c0ae14f604e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-vif-deleted-d961323c-850a-4878-8b51-65ed386d2942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.870 247403 INFO nova.compute.manager [req-31dd67f5-94fc-458a-acd5-6cff328f2e4c req-1b158028-520e-4324-88d2-c0ae14f604e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Neutron deleted interface d961323c-850a-4878-8b51-65ed386d2942; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.870 247403 DEBUG nova.network.neutron [req-31dd67f5-94fc-458a-acd5-6cff328f2e4c req-1b158028-520e-4324-88d2-c0ae14f604e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.873 247403 DEBUG nova.compute.manager [req-40322492-54e9-43e4-a41f-021851573a33 req-b43ad0d5-4817-443e-8ab3-a550d8454345 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.873 247403 DEBUG oslo_concurrency.lockutils [req-40322492-54e9-43e4-a41f-021851573a33 req-b43ad0d5-4817-443e-8ab3-a550d8454345 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f7a3c847-b21e-4590-9666-f14efe505115-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.873 247403 DEBUG oslo_concurrency.lockutils [req-40322492-54e9-43e4-a41f-021851573a33 req-b43ad0d5-4817-443e-8ab3-a550d8454345 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.873 247403 DEBUG oslo_concurrency.lockutils [req-40322492-54e9-43e4-a41f-021851573a33 req-b43ad0d5-4817-443e-8ab3-a550d8454345 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.874 247403 DEBUG nova.compute.manager [req-40322492-54e9-43e4-a41f-021851573a33 req-b43ad0d5-4817-443e-8ab3-a550d8454345 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] No waiting events found dispatching network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.874 247403 WARNING nova.compute.manager [req-40322492-54e9-43e4-a41f-021851573a33 req-b43ad0d5-4817-443e-8ab3-a550d8454345 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Received unexpected event network-vif-plugged-d961323c-850a-4878-8b51-65ed386d2942 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:44:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:44:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680461639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.897 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.902 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:44:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:09.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:09 np0005603621 nova_compute[247399]: 2026-01-31 08:44:09.979 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.065 247403 INFO nova.compute.manager [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Took 2.43 seconds to deallocate network for instance.#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.075 247403 DEBUG nova.compute.manager [req-31dd67f5-94fc-458a-acd5-6cff328f2e4c req-1b158028-520e-4324-88d2-c0ae14f604e1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Detach interface failed, port_id=d961323c-850a-4878-8b51-65ed386d2942, reason: Instance f7a3c847-b21e-4590-9666-f14efe505115 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.367 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.368 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.396 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.397 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.488 247403 DEBUG oslo_concurrency.processutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:44:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 225 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 578 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 31 03:44:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:44:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3348851704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.929 247403 DEBUG oslo_concurrency.processutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:44:10 np0005603621 nova_compute[247399]: 2026-01-31 08:44:10.935 247403 DEBUG nova.compute.provider_tree [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.010 247403 DEBUG nova.scheduler.client.report [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.264 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.867s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.276 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.366 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.368 247403 INFO nova.scheduler.client.report [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Deleted allocations for instance f7a3c847-b21e-4590-9666-f14efe505115#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.369 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.370 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:11.653 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:44:11 np0005603621 nova_compute[247399]: 2026-01-31 08:44:11.800 247403 DEBUG oslo_concurrency.lockutils [None req-a26e8cf3-abe2-4bba-965d-d2ae91721c75 c2e417cf7927412d9555b79aae71bb54 4ac1759c892d49069e58e75323dece87 - - default default] Lock "f7a3c847-b21e-4590-9666-f14efe505115" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:44:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 588 KiB/s rd, 4.6 KiB/s wr, 45 op/s
Jan 31 03:44:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:44:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:44:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:13.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Jan 31 03:44:15 np0005603621 nova_compute[247399]: 2026-01-31 08:44:15.295 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:15.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:15.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:16 np0005603621 nova_compute[247399]: 2026-01-31 08:44:16.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Jan 31 03:44:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:17.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:17.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Jan 31 03:44:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:19.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:20 np0005603621 nova_compute[247399]: 2026-01-31 08:44:20.295 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 03:44:21 np0005603621 nova_compute[247399]: 2026-01-31 08:44:21.204 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849046.2033148, f7a3c847-b21e-4590-9666-f14efe505115 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:44:21 np0005603621 nova_compute[247399]: 2026-01-31 08:44:21.205 247403 INFO nova.compute.manager [-] [instance: f7a3c847-b21e-4590-9666-f14efe505115] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:44:21 np0005603621 nova_compute[247399]: 2026-01-31 08:44:21.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:21 np0005603621 nova_compute[247399]: 2026-01-31 08:44:21.288 247403 DEBUG nova.compute.manager [None req-14e82e27-7261-4d20-bcf0-bd32952dbe1b - - - - - -] [instance: f7a3c847-b21e-4590-9666-f14efe505115] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:44:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:21.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 3.2 KiB/s wr, 13 op/s
Jan 31 03:44:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 31 03:44:25 np0005603621 nova_compute[247399]: 2026-01-31 08:44:25.302 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:44:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:25.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:44:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:25.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:26 np0005603621 nova_compute[247399]: 2026-01-31 08:44:26.283 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.6 KiB/s rd, 255 B/s wr, 2 op/s
Jan 31 03:44:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:27.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:28 np0005603621 nova_compute[247399]: 2026-01-31 08:44:28.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 31 03:44:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:29.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:30 np0005603621 podman[358218]: 2026-01-31 08:44:30.064579301 +0000 UTC m=+0.075219245 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 03:44:30 np0005603621 podman[358219]: 2026-01-31 08:44:30.086786815 +0000 UTC m=+0.095679394 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:44:30 np0005603621 nova_compute[247399]: 2026-01-31 08:44:30.303 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:44:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:30.527 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:44:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:30.527 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:44:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:30.527 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:44:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 31 03:44:31 np0005603621 nova_compute[247399]: 2026-01-31 08:44:31.286 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:44:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 31 03:44:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:44:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0c10ea26-07f1-4d6f-b9b2-082b343beba1 does not exist
Jan 31 03:44:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3b160f33-9d52-46e3-9236-0f3d6ee8764d does not exist
Jan 31 03:44:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5609d700-ca95-44bf-9d36-40812d7e290a does not exist
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:44:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.141944047 +0000 UTC m=+0.058991931 container create 3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:44:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:44:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:44:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.106659619 +0000 UTC m=+0.023707523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:44:35 np0005603621 systemd[1]: Started libpod-conmon-3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee.scope.
Jan 31 03:44:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:44:35 np0005603621 nova_compute[247399]: 2026-01-31 08:44:35.305 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.332403905 +0000 UTC m=+0.249451789 container init 3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.338431157 +0000 UTC m=+0.255479031 container start 3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:44:35 np0005603621 keen_brattain[358598]: 167 167
Jan 31 03:44:35 np0005603621 systemd[1]: libpod-3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee.scope: Deactivated successfully.
Jan 31 03:44:35 np0005603621 conmon[358598]: conmon 3156d58efbd7077ce84e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee.scope/container/memory.events
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.346498642 +0000 UTC m=+0.263546546 container attach 3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.346872124 +0000 UTC m=+0.263920008 container died 3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:44:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-632d8053547a6479dd1ef6c96855cb95a184203d31fa8c46ad9949acd8112f17-merged.mount: Deactivated successfully.
Jan 31 03:44:35 np0005603621 podman[358582]: 2026-01-31 08:44:35.468631695 +0000 UTC m=+0.385679579 container remove 3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:44:35 np0005603621 systemd[1]: libpod-conmon-3156d58efbd7077ce84eb3a8c35fd7635ec48a2651f195c6388cfeefaaee3fee.scope: Deactivated successfully.
Jan 31 03:44:35 np0005603621 podman[358623]: 2026-01-31 08:44:35.584281711 +0000 UTC m=+0.036771216 container create 99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:44:35 np0005603621 systemd[1]: Started libpod-conmon-99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861.scope.
Jan 31 03:44:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:44:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ae195b0e40a7141309cd5d8807293dea181674cf3fba9a5c5ec60343fd387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ae195b0e40a7141309cd5d8807293dea181674cf3fba9a5c5ec60343fd387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ae195b0e40a7141309cd5d8807293dea181674cf3fba9a5c5ec60343fd387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ae195b0e40a7141309cd5d8807293dea181674cf3fba9a5c5ec60343fd387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1ae195b0e40a7141309cd5d8807293dea181674cf3fba9a5c5ec60343fd387/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:35 np0005603621 podman[358623]: 2026-01-31 08:44:35.565879788 +0000 UTC m=+0.018369323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:44:35 np0005603621 podman[358623]: 2026-01-31 08:44:35.705411142 +0000 UTC m=+0.157900667 container init 99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:44:35 np0005603621 podman[358623]: 2026-01-31 08:44:35.71165892 +0000 UTC m=+0.164148425 container start 99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:44:35 np0005603621 podman[358623]: 2026-01-31 08:44:35.756010706 +0000 UTC m=+0.208500241 container attach 99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:36 np0005603621 nova_compute[247399]: 2026-01-31 08:44:36.288 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:44:36 np0005603621 admiring_lehmann[358639]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:44:36 np0005603621 admiring_lehmann[358639]: --> relative data size: 1.0
Jan 31 03:44:36 np0005603621 admiring_lehmann[358639]: --> All data devices are unavailable
Jan 31 03:44:36 np0005603621 systemd[1]: libpod-99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861.scope: Deactivated successfully.
Jan 31 03:44:36 np0005603621 podman[358623]: 2026-01-31 08:44:36.508607607 +0000 UTC m=+0.961097122 container died 99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:44:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 31 03:44:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5b1ae195b0e40a7141309cd5d8807293dea181674cf3fba9a5c5ec60343fd387-merged.mount: Deactivated successfully.
Jan 31 03:44:37 np0005603621 podman[358623]: 2026-01-31 08:44:37.068060084 +0000 UTC m=+1.520549589 container remove 99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:44:37 np0005603621 systemd[1]: libpod-conmon-99c3d2ca319dd6f72718e1cc29ff495b015542fd18dc56cad688d7493c4d7861.scope: Deactivated successfully.
Jan 31 03:44:37 np0005603621 podman[358808]: 2026-01-31 08:44:37.544289212 +0000 UTC m=+0.057125941 container create bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:44:37 np0005603621 podman[358808]: 2026-01-31 08:44:37.509129268 +0000 UTC m=+0.021966027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:44:37 np0005603621 systemd[1]: Started libpod-conmon-bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e.scope.
Jan 31 03:44:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:44:37 np0005603621 podman[358808]: 2026-01-31 08:44:37.779069416 +0000 UTC m=+0.291906165 container init bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:44:37 np0005603621 podman[358808]: 2026-01-31 08:44:37.783477906 +0000 UTC m=+0.296314635 container start bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:44:37 np0005603621 serene_mclean[358824]: 167 167
Jan 31 03:44:37 np0005603621 systemd[1]: libpod-bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e.scope: Deactivated successfully.
Jan 31 03:44:37 np0005603621 podman[358808]: 2026-01-31 08:44:37.828004028 +0000 UTC m=+0.340840777 container attach bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:37 np0005603621 podman[358808]: 2026-01-31 08:44:37.828417841 +0000 UTC m=+0.341254570 container died bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:37.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-486b922c4f81b585db9ee0798f1210ff21a021f90badd7cd2d8e92b49f1a2ec8-merged.mount: Deactivated successfully.
Jan 31 03:44:38 np0005603621 podman[358808]: 2026-01-31 08:44:38.203500783 +0000 UTC m=+0.716337512 container remove bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:38 np0005603621 systemd[1]: libpod-conmon-bc16100164534e11b5934eb53b186e0694a33634df184fa0e38b6cdf5d3f356e.scope: Deactivated successfully.
Jan 31 03:44:38 np0005603621 podman[358847]: 2026-01-31 08:44:38.394165538 +0000 UTC m=+0.110157564 container create 920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:44:38 np0005603621 podman[358847]: 2026-01-31 08:44:38.306301242 +0000 UTC m=+0.022293288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:44:38 np0005603621 systemd[1]: Started libpod-conmon-920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232.scope.
Jan 31 03:44:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:44:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435296a87db5154684487e88b3f7cf7ec57b0275bad9dc85b246e2d1d89b25ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435296a87db5154684487e88b3f7cf7ec57b0275bad9dc85b246e2d1d89b25ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435296a87db5154684487e88b3f7cf7ec57b0275bad9dc85b246e2d1d89b25ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435296a87db5154684487e88b3f7cf7ec57b0275bad9dc85b246e2d1d89b25ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:44:38 np0005603621 podman[358847]: 2026-01-31 08:44:38.57905569 +0000 UTC m=+0.295047736 container init 920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 255 B/s wr, 24 op/s
Jan 31 03:44:38 np0005603621 podman[358847]: 2026-01-31 08:44:38.58541012 +0000 UTC m=+0.301402146 container start 920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:44:38
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', 'images', '.mgr']
Jan 31 03:44:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:44:38 np0005603621 podman[358847]: 2026-01-31 08:44:38.632107272 +0000 UTC m=+0.348099318 container attach 920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:44:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:44:39 np0005603621 happy_meitner[358864]: {
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:    "0": [
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:        {
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "devices": [
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "/dev/loop3"
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            ],
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "lv_name": "ceph_lv0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "lv_size": "7511998464",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "name": "ceph_lv0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "tags": {
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.cluster_name": "ceph",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.crush_device_class": "",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.encrypted": "0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.osd_id": "0",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.type": "block",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:                "ceph.vdo": "0"
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            },
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "type": "block",
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:            "vg_name": "ceph_vg0"
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:        }
Jan 31 03:44:39 np0005603621 happy_meitner[358864]:    ]
Jan 31 03:44:39 np0005603621 happy_meitner[358864]: }
Jan 31 03:44:39 np0005603621 systemd[1]: libpod-920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232.scope: Deactivated successfully.
Jan 31 03:44:39 np0005603621 podman[358847]: 2026-01-31 08:44:39.350115675 +0000 UTC m=+1.066107701 container died 920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:44:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-435296a87db5154684487e88b3f7cf7ec57b0275bad9dc85b246e2d1d89b25ee-merged.mount: Deactivated successfully.
Jan 31 03:44:39 np0005603621 podman[358847]: 2026-01-31 08:44:39.772017032 +0000 UTC m=+1.488009058 container remove 920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:44:39 np0005603621 systemd[1]: libpod-conmon-920f0bf25a3264b4adb54a07700aa158bae512debd4d9fc9f25053a797033232.scope: Deactivated successfully.
Jan 31 03:44:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:39.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:39.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:40 np0005603621 nova_compute[247399]: 2026-01-31 08:44:40.307 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:44:40 np0005603621 podman[359028]: 2026-01-31 08:44:40.222074421 +0000 UTC m=+0.020416019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:44:40 np0005603621 podman[359028]: 2026-01-31 08:44:40.400974422 +0000 UTC m=+0.199316020 container create 36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:44:40 np0005603621 systemd[1]: Started libpod-conmon-36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a.scope.
Jan 31 03:44:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:44:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 202 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 31 03:44:40 np0005603621 podman[359028]: 2026-01-31 08:44:40.580586907 +0000 UTC m=+0.378928525 container init 36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:44:40 np0005603621 podman[359028]: 2026-01-31 08:44:40.586311929 +0000 UTC m=+0.384653527 container start 36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:44:40 np0005603621 inspiring_spence[359045]: 167 167
Jan 31 03:44:40 np0005603621 systemd[1]: libpod-36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a.scope: Deactivated successfully.
Jan 31 03:44:40 np0005603621 podman[359028]: 2026-01-31 08:44:40.77372828 +0000 UTC m=+0.572069898 container attach 36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:44:40 np0005603621 podman[359028]: 2026-01-31 08:44:40.775402823 +0000 UTC m=+0.573744421 container died 36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:44:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-daae2e541ad9036ecf50b9b0935423f23d00144ca47b7827418136f114389ddf-merged.mount: Deactivated successfully.
Jan 31 03:44:41 np0005603621 nova_compute[247399]: 2026-01-31 08:44:41.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:44:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:41 np0005603621 podman[359028]: 2026-01-31 08:44:41.675596364 +0000 UTC m=+1.473937962 container remove 36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:44:41 np0005603621 systemd[1]: libpod-conmon-36049221b609cfc25e41dec2f25fafd9798c83b8219291f669b9ff24c97ae11a.scope: Deactivated successfully.
Jan 31 03:44:41 np0005603621 podman[359072]: 2026-01-31 08:44:41.768352994 +0000 UTC m=+0.018251699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:44:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:44:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:41.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:41.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:42 np0005603621 podman[359072]: 2026-01-31 08:44:42.038703636 +0000 UTC m=+0.288602321 container create 28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:44:42 np0005603621 systemd[1]: Started libpod-conmon-28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9.scope.
Jan 31 03:44:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:44:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010fe9b300d8d1ff73ab01a8024e4fc752513791b3fb774497dcbdc6978aec79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010fe9b300d8d1ff73ab01a8024e4fc752513791b3fb774497dcbdc6978aec79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010fe9b300d8d1ff73ab01a8024e4fc752513791b3fb774497dcbdc6978aec79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/010fe9b300d8d1ff73ab01a8024e4fc752513791b3fb774497dcbdc6978aec79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:44:42 np0005603621 podman[359072]: 2026-01-31 08:44:42.288577478 +0000 UTC m=+0.538476173 container init 28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:44:42 np0005603621 podman[359072]: 2026-01-31 08:44:42.293705161 +0000 UTC m=+0.543603846 container start 28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:44:42 np0005603621 podman[359072]: 2026-01-31 08:44:42.368782451 +0000 UTC m=+0.618681136 container attach 28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:44:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 125 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 682 B/s wr, 27 op/s
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]: {
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:        "osd_id": 0,
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:        "type": "bluestore"
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]:    }
Jan 31 03:44:43 np0005603621 mystifying_lewin[359091]: }
Jan 31 03:44:43 np0005603621 systemd[1]: libpod-28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9.scope: Deactivated successfully.
Jan 31 03:44:43 np0005603621 podman[359072]: 2026-01-31 08:44:43.076963193 +0000 UTC m=+1.326861878 container died 28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:44:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-010fe9b300d8d1ff73ab01a8024e4fc752513791b3fb774497dcbdc6978aec79-merged.mount: Deactivated successfully.
Jan 31 03:44:43 np0005603621 podman[359072]: 2026-01-31 08:44:43.767344361 +0000 UTC m=+2.017243046 container remove 28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_lewin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:44:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:44:43 np0005603621 systemd[1]: libpod-conmon-28846f64c816e0de4e608de960b424b814db7c54de587784cdaf05b797ee75d9.scope: Deactivated successfully.
Jan 31 03:44:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:44:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:44:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:43.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:44:44 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev efaaddfd-9f4f-464b-b721-9b9a7e1f4022 does not exist
Jan 31 03:44:44 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a73c1d36-ad99-4b86-afa2-2ab075ee41c8 does not exist
Jan 31 03:44:44 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5b7faab4-ea8d-4956-bd4f-1c006409e553 does not exist
Jan 31 03:44:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:44.578 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:44:44 np0005603621 nova_compute[247399]: 2026-01-31 08:44:44.578 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:44.580 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:44:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 125 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 426 B/s wr, 13 op/s
Jan 31 03:44:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:44:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:44:45 np0005603621 nova_compute[247399]: 2026-01-31 08:44:45.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:45.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:46 np0005603621 nova_compute[247399]: 2026-01-31 08:44:46.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 682 B/s wr, 22 op/s
Jan 31 03:44:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:44:47.582 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:44:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:47.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:47.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:44:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:44:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:49.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:49.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:50 np0005603621 nova_compute[247399]: 2026-01-31 08:44:50.311 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 03:44:51 np0005603621 nova_compute[247399]: 2026-01-31 08:44:51.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:51.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:51.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 03:44:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:53.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:53.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 767 B/s wr, 13 op/s
Jan 31 03:44:55 np0005603621 nova_compute[247399]: 2026-01-31 08:44:55.352 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:55.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:55.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:56 np0005603621 nova_compute[247399]: 2026-01-31 08:44:56.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:44:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:44:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.9 KiB/s rd, 767 B/s wr, 13 op/s
Jan 31 03:44:57 np0005603621 nova_compute[247399]: 2026-01-31 08:44:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:44:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:44:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:57.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:44:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:57.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 KiB/s rd, 511 B/s wr, 4 op/s
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.574 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.574 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.680 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.984 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.985 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:44:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.997 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:44:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:44:59.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:44:59 np0005603621 nova_compute[247399]: 2026-01-31 08:44:59.997 247403 INFO nova.compute.claims [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:44:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:44:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:44:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:44:59.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:00 np0005603621 nova_compute[247399]: 2026-01-31 08:45:00.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:00 np0005603621 podman[359234]: 2026-01-31 08:45:00.512751058 +0000 UTC m=+0.067588644 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 31 03:45:00 np0005603621 podman[359233]: 2026-01-31 08:45:00.513037517 +0000 UTC m=+0.067987006 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:45:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 31 03:45:01 np0005603621 nova_compute[247399]: 2026-01-31 08:45:01.302 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 31 03:45:03 np0005603621 nova_compute[247399]: 2026-01-31 08:45:03.809 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:03.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:04.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.314 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.315 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.315 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.316 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:45:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:45:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002259160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.335 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.341 247403 DEBUG nova.compute.provider_tree [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:45:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.632 247403 DEBUG nova.scheduler.client.report [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.944 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 4.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:04 np0005603621 nova_compute[247399]: 2026-01-31 08:45:04.944 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.132 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.133 247403 DEBUG nova.network.neutron [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.357 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.430 247403 INFO nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.781 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:45:05 np0005603621 nova_compute[247399]: 2026-01-31 08:45:05.859 247403 DEBUG nova.policy [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1c6e7eff11b435a81429826a682b32f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bfe11bd9d694684b527666e2c378eed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:45:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:06.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.310 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.311 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.312 247403 INFO nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Creating image(s)#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.362 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.394 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.424 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.428 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.450 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.484 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.485 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.486 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.486 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.514 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:06 np0005603621 nova_compute[247399]: 2026-01-31 08:45:06.520 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f3c33ab5-ba7c-4982-b706-525d0bbead40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 120 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 31 03:45:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:08.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:08.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.096 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f3c33ab5-ba7c-4982-b706-525d0bbead40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.240 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.554 247403 DEBUG nova.network.neutron [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Successfully created port: 444d01ec-ea25-476c-a3da-ecffec3d3863 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:45:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 136 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 565 KiB/s wr, 3 op/s
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.745 247403 DEBUG nova.objects.instance [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid f3c33ab5-ba7c-4982-b706-525d0bbead40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.894 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.895 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Ensure instance console log exists: /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.896 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.896 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:08 np0005603621 nova_compute[247399]: 2026-01-31 08:45:08.897 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:10.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:10.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.359 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.419 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.419 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.420 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.420 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.420 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 136 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 565 KiB/s wr, 3 op/s
Jan 31 03:45:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:45:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2211173927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.836 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.958 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.959 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4170MB free_disk=20.980300903320312GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.959 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:10 np0005603621 nova_compute[247399]: 2026-01-31 08:45:10.960 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.209 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f3c33ab5-ba7c-4982-b706-525d0bbead40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.211 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.211 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.334 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.453 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:45:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220520359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.752 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.757 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:45:11 np0005603621 nova_compute[247399]: 2026-01-31 08:45:11.908 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:45:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:12.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:12.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:12 np0005603621 nova_compute[247399]: 2026-01-31 08:45:12.068 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:45:12 np0005603621 nova_compute[247399]: 2026-01-31 08:45:12.069 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:45:13 np0005603621 nova_compute[247399]: 2026-01-31 08:45:13.070 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:13 np0005603621 nova_compute[247399]: 2026-01-31 08:45:13.070 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:13 np0005603621 nova_compute[247399]: 2026-01-31 08:45:13.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:45:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:14.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:45:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:14.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.167 247403 DEBUG nova.network.neutron [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Successfully updated port: 444d01ec-ea25-476c-a3da-ecffec3d3863 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.255 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.256 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.256 247403 DEBUG nova.network.neutron [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:45:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.655 247403 DEBUG nova.compute.manager [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-changed-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.656 247403 DEBUG nova.compute.manager [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Refreshing instance network info cache due to event network-changed-444d01ec-ea25-476c-a3da-ecffec3d3863. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:45:14 np0005603621 nova_compute[247399]: 2026-01-31 08:45:14.656 247403 DEBUG oslo_concurrency.lockutils [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:45:15 np0005603621 nova_compute[247399]: 2026-01-31 08:45:15.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:15 np0005603621 nova_compute[247399]: 2026-01-31 08:45:15.407 247403 DEBUG nova.network.neutron [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:45:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:16.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:16.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:16 np0005603621 nova_compute[247399]: 2026-01-31 08:45:16.457 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:45:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:18.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:18.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.058 247403 DEBUG nova.network.neutron [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updating instance_info_cache with network_info: [{"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.121 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.122 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Instance network_info: |[{"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.122 247403 DEBUG oslo_concurrency.lockutils [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.122 247403 DEBUG nova.network.neutron [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Refreshing network info cache for port 444d01ec-ea25-476c-a3da-ecffec3d3863 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.125 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Start _get_guest_xml network_info=[{"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.130 247403 WARNING nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.134 247403 DEBUG nova.virt.libvirt.host [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.135 247403 DEBUG nova.virt.libvirt.host [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.138 247403 DEBUG nova.virt.libvirt.host [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.139 247403 DEBUG nova.virt.libvirt.host [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.139 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.140 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.140 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.140 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.141 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.141 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.141 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.141 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.141 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.141 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.142 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.142 247403 DEBUG nova.virt.hardware [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.145 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:45:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386266519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.544 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.566 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.570 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:45:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2454052953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.962 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.965 247403 DEBUG nova.virt.libvirt.vif [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1347972004',display_name='tempest-TestNetworkAdvancedServerOps-server-1347972004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1347972004',id=154,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+kOg3MmPVJwauNjDmypRbLLandlUlpzTQii+iWKYcs5Wr2mmgLhyx+j8qhUMVQhENZlTPlciAr00wc/NRRCC3Zn3OrZptTwDzkDZ8wBXbvhwqaJDIvKwOglYmsS/IEXA==',key_name='tempest-TestNetworkAdvancedServerOps-130362912',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3fgj2v9p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:45:06Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=f3c33ab5-ba7c-4982-b706-525d0bbead40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.965 247403 DEBUG nova.network.os_vif_util [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.966 247403 DEBUG nova.network.os_vif_util [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.967 247403 DEBUG nova.objects.instance [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid f3c33ab5-ba7c-4982-b706-525d0bbead40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:45:19 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.998 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:45:19 np0005603621 nova_compute[247399]:  <uuid>f3c33ab5-ba7c-4982-b706-525d0bbead40</uuid>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:  <name>instance-0000009a</name>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1347972004</nova:name>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:45:19</nova:creationTime>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:45:19 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <nova:port uuid="444d01ec-ea25-476c-a3da-ecffec3d3863">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <entry name="serial">f3c33ab5-ba7c-4982-b706-525d0bbead40</entry>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <entry name="uuid">f3c33ab5-ba7c-4982-b706-525d0bbead40</entry>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f3c33ab5-ba7c-4982-b706-525d0bbead40_disk">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f3c33ab5-ba7c-4982-b706-525d0bbead40_disk.config">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:44:6e:3f"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <target dev="tap444d01ec-ea"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/console.log" append="off"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:45:20 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:45:20 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:45:20 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:45:20 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.999 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Preparing to wait for external event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.999 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:19.999 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.000 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.000 247403 DEBUG nova.virt.libvirt.vif [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1347972004',display_name='tempest-TestNetworkAdvancedServerOps-server-1347972004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1347972004',id=154,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+kOg3MmPVJwauNjDmypRbLLandlUlpzTQii+iWKYcs5Wr2mmgLhyx+j8qhUMVQhENZlTPlciAr00wc/NRRCC3Zn3OrZptTwDzkDZ8wBXbvhwqaJDIvKwOglYmsS/IEXA==',key_name='tempest-TestNetworkAdvancedServerOps-130362912',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3fgj2v9p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:45:06Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=f3c33ab5-ba7c-4982-b706-525d0bbead40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.001 247403 DEBUG nova.network.os_vif_util [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.001 247403 DEBUG nova.network.os_vif_util [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.001 247403 DEBUG os_vif [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.002 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.003 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.005 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.005 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap444d01ec-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.005 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap444d01ec-ea, col_values=(('external_ids', {'iface-id': '444d01ec-ea25-476c-a3da-ecffec3d3863', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:6e:3f', 'vm-uuid': 'f3c33ab5-ba7c-4982-b706-525d0bbead40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.007 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:20 np0005603621 NetworkManager[49013]: <info>  [1769849120.0078] manager: (tap444d01ec-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.012 247403 INFO os_vif [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea')#033[00m
Jan 31 03:45:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.180 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.181 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.181 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:44:6e:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.181 247403 INFO nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Using config drive#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.203 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:20 np0005603621 nova_compute[247399]: 2026-01-31 08:45:20.363 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 23 op/s
Jan 31 03:45:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:21 np0005603621 nova_compute[247399]: 2026-01-31 08:45:21.976 247403 INFO nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Creating config drive at /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/disk.config#033[00m
Jan 31 03:45:21 np0005603621 nova_compute[247399]: 2026-01-31 08:45:21.980 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1bwdoojh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:22.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.106 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1bwdoojh" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.131 247403 DEBUG nova.storage.rbd_utils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image f3c33ab5-ba7c-4982-b706-525d0bbead40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.134 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/disk.config f3c33ab5-ba7c-4982-b706-525d0bbead40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.424 247403 DEBUG oslo_concurrency.processutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/disk.config f3c33ab5-ba7c-4982-b706-525d0bbead40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.425 247403 INFO nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Deleting local config drive /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40/disk.config because it was imported into RBD.#033[00m
Jan 31 03:45:22 np0005603621 kernel: tap444d01ec-ea: entered promiscuous mode
Jan 31 03:45:22 np0005603621 NetworkManager[49013]: <info>  [1769849122.4649] manager: (tap444d01ec-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Jan 31 03:45:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:22Z|00639|binding|INFO|Claiming lport 444d01ec-ea25-476c-a3da-ecffec3d3863 for this chassis.
Jan 31 03:45:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:22Z|00640|binding|INFO|444d01ec-ea25-476c-a3da-ecffec3d3863: Claiming fa:16:3e:44:6e:3f 10.100.0.13
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.465 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.467 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.472 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 systemd-udevd[359705]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.494 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 systemd-machined[212769]: New machine qemu-77-instance-0000009a.
Jan 31 03:45:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:22Z|00641|binding|INFO|Setting lport 444d01ec-ea25-476c-a3da-ecffec3d3863 ovn-installed in OVS
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.497 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 NetworkManager[49013]: <info>  [1769849122.4995] device (tap444d01ec-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:45:22 np0005603621 NetworkManager[49013]: <info>  [1769849122.5000] device (tap444d01ec-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:45:22 np0005603621 systemd[1]: Started Virtual Machine qemu-77-instance-0000009a.
Jan 31 03:45:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:22Z|00642|binding|INFO|Setting lport 444d01ec-ea25-476c-a3da-ecffec3d3863 up in Southbound
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.515 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6e:3f 10.100.0.13'], port_security=['fa:16:3e:44:6e:3f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f3c33ab5-ba7c-4982-b706-525d0bbead40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-baa8ab21-9039-47e6-9a0b-098146c212d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '39c1ce9d-d4af-47b0-a485-5cc56970d2fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94c4012-4a05-4e0d-a178-28b0d2f8086e, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=444d01ec-ea25-476c-a3da-ecffec3d3863) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.516 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 444d01ec-ea25-476c-a3da-ecffec3d3863 in datapath baa8ab21-9039-47e6-9a0b-098146c212d0 bound to our chassis#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.517 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network baa8ab21-9039-47e6-9a0b-098146c212d0#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.526 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[597d49df-ceb2-445d-8336-c00432f60227]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.527 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbaa8ab21-91 in ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.529 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbaa8ab21-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.529 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a006a6a3-a814-41c0-b8dd-3d863bb6ccb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.529 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea3750e-ecb7-4723-a140-d0c22b78f83c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.536 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[48879666-eeb4-43cd-9cb6-54207a8daf41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.556 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd10542-fe82-425a-8a14-4e44ab91e2a3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.574 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[46bcaebb-6e6f-45cd-ab83-7fa58cb0d6ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 NetworkManager[49013]: <info>  [1769849122.5812] manager: (tapbaa8ab21-90): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Jan 31 03:45:22 np0005603621 systemd-udevd[359708]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.580 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[92efea7d-6ca7-4de3-9bcf-5fff1d17fff0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 MiB/s wr, 27 op/s
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.602 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d1373a09-e8b0-4cd4-b355-eb76795ebe8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.605 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1074c194-b149-4a34-8576-70c8900382fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 NetworkManager[49013]: <info>  [1769849122.6209] device (tapbaa8ab21-90): carrier: link connected
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.625 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1ecbcd-6b63-4b2f-a7c9-62382a31c694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.638 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0f48cca3-8108-4a31-862a-1e2ae3da988f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbaa8ab21-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:41:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842813, 'reachable_time': 35177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359739, 'error': None, 'target': 'ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.651 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[079b1d24-47ff-4ee9-a630-ecd995e111ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:410f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 842813, 'tstamp': 842813}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359740, 'error': None, 'target': 'ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.661 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a8dcfd-25c3-4e7b-90c5-3fa59967630d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbaa8ab21-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:41:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842813, 'reachable_time': 35177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359741, 'error': None, 'target': 'ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.685 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5c451d-9e05-45d3-b6da-bddb5360cd38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.724 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f97ab6-e217-4ff9-9eb5-2a13e553ac15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.725 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbaa8ab21-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.726 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.726 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbaa8ab21-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:22 np0005603621 kernel: tapbaa8ab21-90: entered promiscuous mode
Jan 31 03:45:22 np0005603621 NetworkManager[49013]: <info>  [1769849122.7283] manager: (tapbaa8ab21-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.728 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.732 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbaa8ab21-90, col_values=(('external_ids', {'iface-id': 'b6f2546a-7bba-4b7c-8be1-b3e30c9d42bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:22 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:22Z|00643|binding|INFO|Releasing lport b6f2546a-7bba-4b7c-8be1-b3e30c9d42bd from this chassis (sb_readonly=0)
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.733 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.734 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/baa8ab21-9039-47e6-9a0b-098146c212d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/baa8ab21-9039-47e6-9a0b-098146c212d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:45:22 np0005603621 nova_compute[247399]: 2026-01-31 08:45:22.737 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.737 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7e3ef5-4dd1-4a6e-8f66-b307ee1930e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.738 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-baa8ab21-9039-47e6-9a0b-098146c212d0
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/baa8ab21-9039-47e6-9a0b-098146c212d0.pid.haproxy
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID baa8ab21-9039-47e6-9a0b-098146c212d0
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:45:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:22.739 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0', 'env', 'PROCESS_TAG=haproxy-baa8ab21-9039-47e6-9a0b-098146c212d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/baa8ab21-9039-47e6-9a0b-098146c212d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.016 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849123.0162501, f3c33ab5-ba7c-4982-b706-525d0bbead40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.017 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] VM Started (Lifecycle Event)#033[00m
Jan 31 03:45:23 np0005603621 podman[359814]: 2026-01-31 08:45:23.052364016 +0000 UTC m=+0.045807043 container create 41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:45:23 np0005603621 systemd[1]: Started libpod-conmon-41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8.scope.
Jan 31 03:45:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df0dab9966838eea37def0bc8b9513581544ee94f1d80b54826dcfbca115d21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:23 np0005603621 podman[359814]: 2026-01-31 08:45:23.108712033 +0000 UTC m=+0.102154950 container init 41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:45:23 np0005603621 podman[359814]: 2026-01-31 08:45:23.113298558 +0000 UTC m=+0.106741465 container start 41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:45:23 np0005603621 podman[359814]: 2026-01-31 08:45:23.028588992 +0000 UTC m=+0.022031919 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:45:23 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [NOTICE]   (359834) : New worker (359836) forked
Jan 31 03:45:23 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [NOTICE]   (359834) : Loading success.
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.134 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.138 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849123.0170867, f3c33ab5-ba7c-4982-b706-525d0bbead40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.139 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.352 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.354 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.475 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.499 247403 DEBUG nova.network.neutron [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updated VIF entry in instance network info cache for port 444d01ec-ea25-476c-a3da-ecffec3d3863. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.500 247403 DEBUG nova.network.neutron [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updating instance_info_cache with network_info: [{"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:45:23 np0005603621 nova_compute[247399]: 2026-01-31 08:45:23.666 247403 DEBUG oslo_concurrency.lockutils [req-5889592e-df76-4706-8802-0ea18e968e88 req-057c685e-0a6c-4edd-8e14-10df9da41e04 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:45:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:24.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:24.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.180 247403 DEBUG nova.compute.manager [req-3dd68de3-9259-49d5-9380-141b24c488ac req-22113876-76fa-42a6-a42c-0f08ef4bab85 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.180 247403 DEBUG oslo_concurrency.lockutils [req-3dd68de3-9259-49d5-9380-141b24c488ac req-22113876-76fa-42a6-a42c-0f08ef4bab85 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.180 247403 DEBUG oslo_concurrency.lockutils [req-3dd68de3-9259-49d5-9380-141b24c488ac req-22113876-76fa-42a6-a42c-0f08ef4bab85 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.181 247403 DEBUG oslo_concurrency.lockutils [req-3dd68de3-9259-49d5-9380-141b24c488ac req-22113876-76fa-42a6-a42c-0f08ef4bab85 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.181 247403 DEBUG nova.compute.manager [req-3dd68de3-9259-49d5-9380-141b24c488ac req-22113876-76fa-42a6-a42c-0f08ef4bab85 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Processing event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.181 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.184 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849124.184651, f3c33ab5-ba7c-4982-b706-525d0bbead40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.185 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] VM Resumed (Lifecycle Event)
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.186 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.190 247403 INFO nova.virt.libvirt.driver [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Instance spawned successfully.
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.190 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.448 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.449 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.449 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.450 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.450 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.451 247403 DEBUG nova.virt.libvirt.driver [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.486 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.490 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:45:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 4 op/s
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.787 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.921 247403 INFO nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Took 18.61 seconds to spawn the instance on the hypervisor.
Jan 31 03:45:24 np0005603621 nova_compute[247399]: 2026-01-31 08:45:24.922 247403 DEBUG nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:45:25 np0005603621 nova_compute[247399]: 2026-01-31 08:45:25.009 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:25 np0005603621 nova_compute[247399]: 2026-01-31 08:45:25.235 247403 INFO nova.compute.manager [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Took 25.31 seconds to build instance.
Jan 31 03:45:25 np0005603621 nova_compute[247399]: 2026-01-31 08:45:25.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:25 np0005603621 nova_compute[247399]: 2026-01-31 08:45:25.545 247403 DEBUG oslo_concurrency.lockutils [None req-2989fc0c-345c-4fce-aa3b-a0d9035b7ff0 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:45:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:26.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:26.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:26 np0005603621 nova_compute[247399]: 2026-01-31 08:45:26.367 247403 DEBUG nova.compute.manager [req-2a5ab433-ad8c-46be-9c1a-9353a20a61a7 req-62b0951d-0d01-4f31-a5b1-886eb541fb02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:45:26 np0005603621 nova_compute[247399]: 2026-01-31 08:45:26.368 247403 DEBUG oslo_concurrency.lockutils [req-2a5ab433-ad8c-46be-9c1a-9353a20a61a7 req-62b0951d-0d01-4f31-a5b1-886eb541fb02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:45:26 np0005603621 nova_compute[247399]: 2026-01-31 08:45:26.368 247403 DEBUG oslo_concurrency.lockutils [req-2a5ab433-ad8c-46be-9c1a-9353a20a61a7 req-62b0951d-0d01-4f31-a5b1-886eb541fb02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:45:26 np0005603621 nova_compute[247399]: 2026-01-31 08:45:26.369 247403 DEBUG oslo_concurrency.lockutils [req-2a5ab433-ad8c-46be-9c1a-9353a20a61a7 req-62b0951d-0d01-4f31-a5b1-886eb541fb02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:45:26 np0005603621 nova_compute[247399]: 2026-01-31 08:45:26.369 247403 DEBUG nova.compute.manager [req-2a5ab433-ad8c-46be-9c1a-9353a20a61a7 req-62b0951d-0d01-4f31-a5b1-886eb541fb02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] No waiting events found dispatching network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:45:26 np0005603621 nova_compute[247399]: 2026-01-31 08:45:26.369 247403 WARNING nova.compute.manager [req-2a5ab433-ad8c-46be-9c1a-9353a20a61a7 req-62b0951d-0d01-4f31-a5b1-886eb541fb02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received unexpected event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 for instance with vm_state active and task_state None.
Jan 31 03:45:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 628 KiB/s rd, 12 KiB/s wr, 26 op/s
Jan 31 03:45:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:28.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:28.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:45:30 np0005603621 nova_compute[247399]: 2026-01-31 08:45:30.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:30.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:30 np0005603621 nova_compute[247399]: 2026-01-31 08:45:30.400 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:30.527 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:45:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:30.529 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:45:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:30.529 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:45:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:45:31 np0005603621 podman[359849]: 2026-01-31 08:45:31.491439322 +0000 UTC m=+0.048037833 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:45:31 np0005603621 podman[359850]: 2026-01-31 08:45:31.514796023 +0000 UTC m=+0.071207108 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:45:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:45:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:32.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:45:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:45:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:34.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:34.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 31 03:45:35 np0005603621 nova_compute[247399]: 2026-01-31 08:45:35.017 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:35 np0005603621 NetworkManager[49013]: <info>  [1769849135.2418] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Jan 31 03:45:35 np0005603621 nova_compute[247399]: 2026-01-31 08:45:35.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:35 np0005603621 NetworkManager[49013]: <info>  [1769849135.2428] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Jan 31 03:45:35 np0005603621 nova_compute[247399]: 2026-01-31 08:45:35.273 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:35Z|00644|binding|INFO|Releasing lport b6f2546a-7bba-4b7c-8be1-b3e30c9d42bd from this chassis (sb_readonly=0)
Jan 31 03:45:35 np0005603621 nova_compute[247399]: 2026-01-31 08:45:35.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:35 np0005603621 nova_compute[247399]: 2026-01-31 08:45:35.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:36.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:36.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:36 np0005603621 nova_compute[247399]: 2026-01-31 08:45:36.433 247403 DEBUG nova.compute.manager [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-changed-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:45:36 np0005603621 nova_compute[247399]: 2026-01-31 08:45:36.433 247403 DEBUG nova.compute.manager [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Refreshing instance network info cache due to event network-changed-444d01ec-ea25-476c-a3da-ecffec3d3863. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:45:36 np0005603621 nova_compute[247399]: 2026-01-31 08:45:36.433 247403 DEBUG oslo_concurrency.lockutils [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:45:36 np0005603621 nova_compute[247399]: 2026-01-31 08:45:36.434 247403 DEBUG oslo_concurrency.lockutils [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:45:36 np0005603621 nova_compute[247399]: 2026-01-31 08:45:36.434 247403 DEBUG nova.network.neutron [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Refreshing network info cache for port 444d01ec-ea25-476c-a3da-ecffec3d3863 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:45:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 185 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 85 op/s
Jan 31 03:45:36 np0005603621 nova_compute[247399]: 2026-01-31 08:45:36.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:37Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:6e:3f 10.100.0.13
Jan 31 03:45:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:45:37Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:6e:3f 10.100.0.13
Jan 31 03:45:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:38.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:38.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 238 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 112 op/s
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:45:38
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'images', '.mgr', 'cephfs.cephfs.meta']
Jan 31 03:45:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:45:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:45:39 np0005603621 nova_compute[247399]: 2026-01-31 08:45:39.873 247403 DEBUG nova.network.neutron [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updated VIF entry in instance network info cache for port 444d01ec-ea25-476c-a3da-ecffec3d3863. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:45:39 np0005603621 nova_compute[247399]: 2026-01-31 08:45:39.874 247403 DEBUG nova.network.neutron [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updating instance_info_cache with network_info: [{"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:45:40 np0005603621 nova_compute[247399]: 2026-01-31 08:45:40.021 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:40.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:40.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:40 np0005603621 nova_compute[247399]: 2026-01-31 08:45:40.404 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 238 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 65 op/s
Jan 31 03:45:40 np0005603621 nova_compute[247399]: 2026-01-31 08:45:40.604 247403 DEBUG oslo_concurrency.lockutils [req-0106bb14-0e3a-4f4c-a2fd-6017c0fd0778 req-85039510-6ab4-413b-8352-7b23d0299b22 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:45:40 np0005603621 ceph-osd[84880]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 03:45:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 101 op/s
Jan 31 03:45:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:44.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000065s ======
Jan 31 03:45:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:44.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000065s
Jan 31 03:45:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 101 op/s
Jan 31 03:45:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:45:44 np0005603621 nova_compute[247399]: 2026-01-31 08:45:44.807 247403 INFO nova.compute.manager [None req-f032dd53-a8ad-46d6-9d9a-bf2d6b21cde8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Get console output
Jan 31 03:45:44 np0005603621 nova_compute[247399]: 2026-01-31 08:45:44.813 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 03:45:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:45:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:45:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:45:45 np0005603621 nova_compute[247399]: 2026-01-31 08:45:45.025 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 nova_compute[247399]: 2026-01-31 08:45:45.405 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bbf3bc0a-aff6-4c90-8814-318df926b21c does not exist
Jan 31 03:45:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ad810667-302a-4760-b19a-1fb04423ffc3 does not exist
Jan 31 03:45:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4edb680f-ce3a-4a92-870f-8aa68009822f does not exist
Jan 31 03:45:45 np0005603621 nova_compute[247399]: 2026-01-31 08:45:45.841 247403 INFO nova.compute.manager [None req-0fdd6021-1ef7-4441-9b53-a187fa5effe6 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Pausing
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:45:45 np0005603621 nova_compute[247399]: 2026-01-31 08:45:45.842 247403 DEBUG nova.objects.instance [None req-0fdd6021-1ef7-4441-9b53-a187fa5effe6 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'flavor' on Instance uuid f3c33ab5-ba7c-4982-b706-525d0bbead40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:45:46 np0005603621 nova_compute[247399]: 2026-01-31 08:45:46.020 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:45:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:46.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:46.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:46 np0005603621 nova_compute[247399]: 2026-01-31 08:45:46.144 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849146.1438951, f3c33ab5-ba7c-4982-b706-525d0bbead40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:45:46 np0005603621 nova_compute[247399]: 2026-01-31 08:45:46.144 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] VM Paused (Lifecycle Event)
Jan 31 03:45:46 np0005603621 nova_compute[247399]: 2026-01-31 08:45:46.146 247403 DEBUG nova.compute.manager [None req-0fdd6021-1ef7-4441-9b53-a187fa5effe6 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.313801489 +0000 UTC m=+0.034122913 container create 8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:45:46 np0005603621 systemd[1]: Started libpod-conmon-8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670.scope.
Jan 31 03:45:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.39235537 +0000 UTC m=+0.112676874 container init 8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.298013119 +0000 UTC m=+0.018334563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.400899051 +0000 UTC m=+0.121220475 container start 8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cannon, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.404057061 +0000 UTC m=+0.124378535 container attach 8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:45:46 np0005603621 fervent_cannon[360357]: 167 167
Jan 31 03:45:46 np0005603621 nova_compute[247399]: 2026-01-31 08:45:46.407 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:45:46 np0005603621 systemd[1]: libpod-8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670.scope: Deactivated successfully.
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.409874565 +0000 UTC m=+0.130195989 container died 8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:45:46 np0005603621 nova_compute[247399]: 2026-01-31 08:45:46.414 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:45:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a20b35309f884c96bb3eacd4d2b517075873e09d442ac21e15eacb49b5da7d8f-merged.mount: Deactivated successfully.
Jan 31 03:45:46 np0005603621 podman[360341]: 2026-01-31 08:45:46.445545506 +0000 UTC m=+0.165866930 container remove 8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cannon, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:45:46 np0005603621 systemd[1]: libpod-conmon-8daf9d5f73a250331be819b0eb24f4db23b8e125054a7cb431c1d32806399670.scope: Deactivated successfully.
Jan 31 03:45:46 np0005603621 podman[360380]: 2026-01-31 08:45:46.555864064 +0000 UTC m=+0.034886977 container create a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:45:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:46 np0005603621 systemd[1]: Started libpod-conmon-a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4.scope.
Jan 31 03:45:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 116 op/s
Jan 31 03:45:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be64580a15e81b05c281a1b933eb112928d65227f95797f13e4a740a26126da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be64580a15e81b05c281a1b933eb112928d65227f95797f13e4a740a26126da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be64580a15e81b05c281a1b933eb112928d65227f95797f13e4a740a26126da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be64580a15e81b05c281a1b933eb112928d65227f95797f13e4a740a26126da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be64580a15e81b05c281a1b933eb112928d65227f95797f13e4a740a26126da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:46 np0005603621 podman[360380]: 2026-01-31 08:45:46.63524296 +0000 UTC m=+0.114265883 container init a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:45:46 np0005603621 podman[360380]: 2026-01-31 08:45:46.540846698 +0000 UTC m=+0.019869631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:45:46 np0005603621 podman[360380]: 2026-01-31 08:45:46.640987692 +0000 UTC m=+0.120010605 container start a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:45:46 np0005603621 podman[360380]: 2026-01-31 08:45:46.644516024 +0000 UTC m=+0.123539007 container attach a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:45:47 np0005603621 stupefied_yalow[360396]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:45:47 np0005603621 stupefied_yalow[360396]: --> relative data size: 1.0
Jan 31 03:45:47 np0005603621 stupefied_yalow[360396]: --> All data devices are unavailable
Jan 31 03:45:47 np0005603621 systemd[1]: libpod-a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4.scope: Deactivated successfully.
Jan 31 03:45:47 np0005603621 podman[360380]: 2026-01-31 08:45:47.41238898 +0000 UTC m=+0.891411893 container died a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:45:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1be64580a15e81b05c281a1b933eb112928d65227f95797f13e4a740a26126da-merged.mount: Deactivated successfully.
Jan 31 03:45:47 np0005603621 podman[360380]: 2026-01-31 08:45:47.458421979 +0000 UTC m=+0.937444892 container remove a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:45:47 np0005603621 systemd[1]: libpod-conmon-a1b5c93ce4fba473ce0e2740f3fafb45bf5a0bd8c3848b116465efa5ed3ab2e4.scope: Deactivated successfully.
Jan 31 03:45:47 np0005603621 podman[360562]: 2026-01-31 08:45:47.935106692 +0000 UTC m=+0.033668478 container create dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:45:47 np0005603621 systemd[1]: Started libpod-conmon-dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c.scope.
Jan 31 03:45:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:47 np0005603621 podman[360562]: 2026-01-31 08:45:47.994494775 +0000 UTC m=+0.093056571 container init dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:45:48 np0005603621 podman[360562]: 2026-01-31 08:45:48.000145504 +0000 UTC m=+0.098707310 container start dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:45:48 np0005603621 eloquent_bhabha[360577]: 167 167
Jan 31 03:45:48 np0005603621 systemd[1]: libpod-dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c.scope: Deactivated successfully.
Jan 31 03:45:48 np0005603621 podman[360562]: 2026-01-31 08:45:48.003724517 +0000 UTC m=+0.102286343 container attach dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:45:48 np0005603621 podman[360562]: 2026-01-31 08:45:48.004495142 +0000 UTC m=+0.103056948 container died dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:45:48 np0005603621 podman[360562]: 2026-01-31 08:45:47.920733396 +0000 UTC m=+0.019295212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:45:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-33507edf0532f4fd7e660bbe336cd82b66b8863ce6b08ecd7a3ecd68a7c8e604-merged.mount: Deactivated successfully.
Jan 31 03:45:48 np0005603621 podman[360562]: 2026-01-31 08:45:48.033269864 +0000 UTC m=+0.131831660 container remove dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:45:48 np0005603621 systemd[1]: libpod-conmon-dff367349240affca71351bf6bdc7609c7c2810b8a7d853a8f1d39f04ec87b0c.scope: Deactivated successfully.
Jan 31 03:45:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:48.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:48.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:48 np0005603621 podman[360600]: 2026-01-31 08:45:48.14795852 +0000 UTC m=+0.033415980 container create 2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:45:48 np0005603621 systemd[1]: Started libpod-conmon-2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39.scope.
Jan 31 03:45:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e6c7b222ba8241484ddf8315b040475d511d3800ac4b115d338c2808e98e85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e6c7b222ba8241484ddf8315b040475d511d3800ac4b115d338c2808e98e85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e6c7b222ba8241484ddf8315b040475d511d3800ac4b115d338c2808e98e85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91e6c7b222ba8241484ddf8315b040475d511d3800ac4b115d338c2808e98e85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:48 np0005603621 podman[360600]: 2026-01-31 08:45:48.226901684 +0000 UTC m=+0.112359174 container init 2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:45:48 np0005603621 podman[360600]: 2026-01-31 08:45:48.133717159 +0000 UTC m=+0.019174649 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:45:48 np0005603621 podman[360600]: 2026-01-31 08:45:48.231664834 +0000 UTC m=+0.117122294 container start 2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:45:48 np0005603621 podman[360600]: 2026-01-31 08:45:48.243681706 +0000 UTC m=+0.129139216 container attach 2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:45:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 108 op/s
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]: {
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:    "0": [
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:        {
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "devices": [
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "/dev/loop3"
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            ],
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "lv_name": "ceph_lv0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "lv_size": "7511998464",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "name": "ceph_lv0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "tags": {
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.cluster_name": "ceph",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.crush_device_class": "",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.encrypted": "0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.osd_id": "0",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.type": "block",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:                "ceph.vdo": "0"
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            },
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "type": "block",
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:            "vg_name": "ceph_vg0"
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:        }
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]:    ]
Jan 31 03:45:48 np0005603621 suspicious_satoshi[360617]: }
Jan 31 03:45:48 np0005603621 systemd[1]: libpod-2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39.scope: Deactivated successfully.
Jan 31 03:45:48 np0005603621 podman[360600]: 2026-01-31 08:45:48.983057737 +0000 UTC m=+0.868515197 container died 2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:45:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-91e6c7b222ba8241484ddf8315b040475d511d3800ac4b115d338c2808e98e85-merged.mount: Deactivated successfully.
Jan 31 03:45:49 np0005603621 podman[360600]: 2026-01-31 08:45:49.023919062 +0000 UTC m=+0.909376522 container remove 2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:45:49 np0005603621 systemd[1]: libpod-conmon-2aaa9ca63841d5593f245548b5ef281be1c696023a3b6e5aa327c585fc0c1e39.scope: Deactivated successfully.
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.482005715 +0000 UTC m=+0.037954994 container create ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rhodes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003165058510143309 of space, bias 1.0, pg target 0.9495175530429928 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0022997292597245486 of space, bias 1.0, pg target 0.6899187779173646 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:45:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:45:49 np0005603621 systemd[1]: Started libpod-conmon-ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc.scope.
Jan 31 03:45:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.553615646 +0000 UTC m=+0.109564925 container init ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.558839782 +0000 UTC m=+0.114789061 container start ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rhodes, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.466432552 +0000 UTC m=+0.022381851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.562215889 +0000 UTC m=+0.118165188 container attach ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:45:49 np0005603621 systemd[1]: libpod-ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc.scope: Deactivated successfully.
Jan 31 03:45:49 np0005603621 goofy_rhodes[360797]: 167 167
Jan 31 03:45:49 np0005603621 conmon[360797]: conmon ae8b24a4397f2af932c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc.scope/container/memory.events
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.564118549 +0000 UTC m=+0.120067828 container died ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:45:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8f51632303f5b43b77d6a4421d8e00273e16814c803cb732c64c77d264564c5b-merged.mount: Deactivated successfully.
Jan 31 03:45:49 np0005603621 podman[360781]: 2026-01-31 08:45:49.593177751 +0000 UTC m=+0.149127020 container remove ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_rhodes, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:45:49 np0005603621 systemd[1]: libpod-conmon-ae8b24a4397f2af932c9bc50ecde1bd6b35502b845595b12a7ee8e51f4c9acbc.scope: Deactivated successfully.
Jan 31 03:45:49 np0005603621 podman[360821]: 2026-01-31 08:45:49.718716041 +0000 UTC m=+0.034775844 container create 611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:45:49 np0005603621 systemd[1]: Started libpod-conmon-611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f.scope.
Jan 31 03:45:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:45:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ecb9b7ee8d811823666ecb9b5d068957cb00bd84d5a69f17f6550c8cf8263a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ecb9b7ee8d811823666ecb9b5d068957cb00bd84d5a69f17f6550c8cf8263a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ecb9b7ee8d811823666ecb9b5d068957cb00bd84d5a69f17f6550c8cf8263a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:49 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ecb9b7ee8d811823666ecb9b5d068957cb00bd84d5a69f17f6550c8cf8263a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:45:49 np0005603621 podman[360821]: 2026-01-31 08:45:49.783869656 +0000 UTC m=+0.099929459 container init 611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:45:49 np0005603621 podman[360821]: 2026-01-31 08:45:49.788314357 +0000 UTC m=+0.104374160 container start 611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:45:49 np0005603621 podman[360821]: 2026-01-31 08:45:49.791313602 +0000 UTC m=+0.107373395 container attach 611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 03:45:49 np0005603621 podman[360821]: 2026-01-31 08:45:49.704431648 +0000 UTC m=+0.020491481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:45:50 np0005603621 nova_compute[247399]: 2026-01-31 08:45:50.051 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:50.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:50.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:50 np0005603621 nova_compute[247399]: 2026-01-31 08:45:50.408 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:50 np0005603621 nova_compute[247399]: 2026-01-31 08:45:50.435 247403 INFO nova.compute.manager [None req-4db02323-e47d-4980-b587-2cbf973fde41 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Get console output#033[00m
Jan 31 03:45:50 np0005603621 nova_compute[247399]: 2026-01-31 08:45:50.441 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]: {
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:        "osd_id": 0,
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:        "type": "bluestore"
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]:    }
Jan 31 03:45:50 np0005603621 trusting_mclaren[360837]: }
Jan 31 03:45:50 np0005603621 systemd[1]: libpod-611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f.scope: Deactivated successfully.
Jan 31 03:45:50 np0005603621 podman[360821]: 2026-01-31 08:45:50.54948048 +0000 UTC m=+0.865540283 container died 611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:45:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-58ecb9b7ee8d811823666ecb9b5d068957cb00bd84d5a69f17f6550c8cf8263a-merged.mount: Deactivated successfully.
Jan 31 03:45:50 np0005603621 podman[360821]: 2026-01-31 08:45:50.590318544 +0000 UTC m=+0.906378357 container remove 611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mclaren, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:45:50 np0005603621 systemd[1]: libpod-conmon-611795e94b19146290c1c5930f0672dc92f619440dd02e75680d60f6f1de4c7f.scope: Deactivated successfully.
Jan 31 03:45:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 183 KiB/s rd, 388 KiB/s wr, 58 op/s
Jan 31 03:45:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:45:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:45:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bdd80f2a-38d6-4845-8996-ebc112d529b5 does not exist
Jan 31 03:45:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c16efadb-c61f-415b-9c86-aebf1c97786d does not exist
Jan 31 03:45:50 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d73268ea-dee4-4e70-84f6-aa3af135e376 does not exist
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.447 247403 INFO nova.compute.manager [None req-87360838-8715-4d67-bb8a-a1006874f37a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Unpausing#033[00m
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.448 247403 DEBUG nova.objects.instance [None req-87360838-8715-4d67-bb8a-a1006874f37a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'flavor' on Instance uuid f3c33ab5-ba7c-4982-b706-525d0bbead40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.563 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849151.562804, f3c33ab5-ba7c-4982-b706-525d0bbead40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.563 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:45:51 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.566 247403 DEBUG nova.virt.libvirt.guest [None req-87360838-8715-4d67-bb8a-a1006874f37a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.567 247403 DEBUG nova.compute.manager [None req-87360838-8715-4d67-bb8a-a1006874f37a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.699 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:45:51 np0005603621 nova_compute[247399]: 2026-01-31 08:45:51.701 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:45:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:45:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:52.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:52.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Jan 31 03:45:54 np0005603621 nova_compute[247399]: 2026-01-31 08:45:54.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:54.056 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:45:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:54.058 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:45:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:54.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:45:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:54.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:45:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 31 03:45:55 np0005603621 nova_compute[247399]: 2026-01-31 08:45:55.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:45:55.060 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:45:55 np0005603621 nova_compute[247399]: 2026-01-31 08:45:55.409 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:45:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:45:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:56.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:45:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:56.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:45:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:45:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 31 03:45:57 np0005603621 nova_compute[247399]: 2026-01-31 08:45:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:45:57 np0005603621 nova_compute[247399]: 2026-01-31 08:45:57.594 247403 INFO nova.compute.manager [None req-bd6d5044-b5c8-4eeb-b3d7-741b7b7e99c6 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Get console output#033[00m
Jan 31 03:45:57 np0005603621 nova_compute[247399]: 2026-01-31 08:45:57.599 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:45:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:45:58.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:45:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:45:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:45:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:45:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:00.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:00.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.411 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 85 op/s
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.849 247403 DEBUG nova.compute.manager [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-changed-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.849 247403 DEBUG nova.compute.manager [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Refreshing instance network info cache due to event network-changed-444d01ec-ea25-476c-a3da-ecffec3d3863. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.850 247403 DEBUG oslo_concurrency.lockutils [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.850 247403 DEBUG oslo_concurrency.lockutils [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:46:00 np0005603621 nova_compute[247399]: 2026-01-31 08:46:00.850 247403 DEBUG nova.network.neutron [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Refreshing network info cache for port 444d01ec-ea25-476c-a3da-ecffec3d3863 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.563 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.564 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.564 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.565 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.565 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.567 247403 INFO nova.compute.manager [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Terminating instance#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.569 247403 DEBUG nova.compute.manager [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:46:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:01 np0005603621 kernel: tap444d01ec-ea (unregistering): left promiscuous mode
Jan 31 03:46:01 np0005603621 NetworkManager[49013]: <info>  [1769849161.6747] device (tap444d01ec-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:46:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:01Z|00645|binding|INFO|Releasing lport 444d01ec-ea25-476c-a3da-ecffec3d3863 from this chassis (sb_readonly=0)
Jan 31 03:46:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:01Z|00646|binding|INFO|Setting lport 444d01ec-ea25-476c-a3da-ecffec3d3863 down in Southbound
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.681 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:01Z|00647|binding|INFO|Removing iface tap444d01ec-ea ovn-installed in OVS
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.696 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009a.scope: Deactivated successfully.
Jan 31 03:46:01 np0005603621 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000009a.scope: Consumed 13.309s CPU time.
Jan 31 03:46:01 np0005603621 systemd-machined[212769]: Machine qemu-77-instance-0000009a terminated.
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.759 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:6e:3f 10.100.0.13'], port_security=['fa:16:3e:44:6e:3f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f3c33ab5-ba7c-4982-b706-525d0bbead40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-baa8ab21-9039-47e6-9a0b-098146c212d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '39c1ce9d-d4af-47b0-a485-5cc56970d2fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e94c4012-4a05-4e0d-a178-28b0d2f8086e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=444d01ec-ea25-476c-a3da-ecffec3d3863) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.760 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 444d01ec-ea25-476c-a3da-ecffec3d3863 in datapath baa8ab21-9039-47e6-9a0b-098146c212d0 unbound from our chassis#033[00m
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.762 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network baa8ab21-9039-47e6-9a0b-098146c212d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.763 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e567f7f7-f390-4075-a5a7-e09bf441bbcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.764 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0 namespace which is not needed anymore#033[00m
Jan 31 03:46:01 np0005603621 podman[360980]: 2026-01-31 08:46:01.773947316 +0000 UTC m=+0.071445336 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:46:01 np0005603621 podman[360976]: 2026-01-31 08:46:01.775806385 +0000 UTC m=+0.073434809 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.803 247403 INFO nova.virt.libvirt.driver [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Instance destroyed successfully.#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.804 247403 DEBUG nova.objects.instance [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid f3c33ab5-ba7c-4982-b706-525d0bbead40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:46:01 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [NOTICE]   (359834) : haproxy version is 2.8.14-c23fe91
Jan 31 03:46:01 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [NOTICE]   (359834) : path to executable is /usr/sbin/haproxy
Jan 31 03:46:01 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [WARNING]  (359834) : Exiting Master process...
Jan 31 03:46:01 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [ALERT]    (359834) : Current worker (359836) exited with code 143 (Terminated)
Jan 31 03:46:01 np0005603621 neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0[359830]: [WARNING]  (359834) : All workers exited. Exiting... (0)
Jan 31 03:46:01 np0005603621 systemd[1]: libpod-41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8.scope: Deactivated successfully.
Jan 31 03:46:01 np0005603621 podman[361057]: 2026-01-31 08:46:01.882308822 +0000 UTC m=+0.038838973 container died 41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:46:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0df0dab9966838eea37def0bc8b9513581544ee94f1d80b54826dcfbca115d21-merged.mount: Deactivated successfully.
Jan 31 03:46:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8-userdata-shm.mount: Deactivated successfully.
Jan 31 03:46:01 np0005603621 podman[361057]: 2026-01-31 08:46:01.92484309 +0000 UTC m=+0.081373241 container cleanup 41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:46:01 np0005603621 systemd[1]: libpod-conmon-41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8.scope: Deactivated successfully.
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.963 247403 DEBUG nova.virt.libvirt.vif [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:44:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1347972004',display_name='tempest-TestNetworkAdvancedServerOps-server-1347972004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1347972004',id=154,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+kOg3MmPVJwauNjDmypRbLLandlUlpzTQii+iWKYcs5Wr2mmgLhyx+j8qhUMVQhENZlTPlciAr00wc/NRRCC3Zn3OrZptTwDzkDZ8wBXbvhwqaJDIvKwOglYmsS/IEXA==',key_name='tempest-TestNetworkAdvancedServerOps-130362912',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:45:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-3fgj2v9p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:45:51Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=f3c33ab5-ba7c-4982-b706-525d0bbead40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.964 247403 DEBUG nova.network.os_vif_util [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.964 247403 DEBUG nova.network.os_vif_util [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.965 247403 DEBUG os_vif [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.967 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap444d01ec-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.969 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.971 247403 INFO os_vif [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:6e:3f,bridge_name='br-int',has_traffic_filtering=True,id=444d01ec-ea25-476c-a3da-ecffec3d3863,network=Network(baa8ab21-9039-47e6-9a0b-098146c212d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444d01ec-ea')#033[00m
Jan 31 03:46:01 np0005603621 podman[361088]: 2026-01-31 08:46:01.984547293 +0000 UTC m=+0.043632504 container remove 41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.989 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6d099dde-1b15-435a-8704-fa396940e6ff]: (4, ('Sat Jan 31 08:46:01 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0 (41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8)\n41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8\nSat Jan 31 08:46:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0 (41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8)\n41f0cca4e4fc67c136767a6a45c4c186e6441f883cce7a866b472d056c19c9c8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.991 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[16deeacc-03f3-4571-ba2b-ca7d5553705b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:01.992 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbaa8ab21-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.993 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:01 np0005603621 kernel: tapbaa8ab21-90: left promiscuous mode
Jan 31 03:46:01 np0005603621 nova_compute[247399]: 2026-01-31 08:46:01.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:02.002 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[774da259-382b-4b1f-8e4f-f2f622f38d17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:02.016 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[02a5f622-721c-4349-a289-d63fa3330c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:02.017 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[321ca943-4405-4e90-b05c-ea5238805629]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:02.029 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d674577-e7be-4596-823a-f89359bbea04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842808, 'reachable_time': 22122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361122, 'error': None, 'target': 'ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:02.032 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-baa8ab21-9039-47e6-9a0b-098146c212d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:46:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:02.032 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[09801fa3-4461-48f0-94f0-d6ba7a8dab9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:02 np0005603621 systemd[1]: run-netns-ovnmeta\x2dbaa8ab21\x2d9039\x2d47e6\x2d9a0b\x2d098146c212d0.mount: Deactivated successfully.
Jan 31 03:46:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:02.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 273 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 125 op/s
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.674 247403 INFO nova.virt.libvirt.driver [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Deleting instance files /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40_del#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.675 247403 INFO nova.virt.libvirt.driver [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Deletion of /var/lib/nova/instances/f3c33ab5-ba7c-4982-b706-525d0bbead40_del complete#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.778 247403 DEBUG nova.compute.manager [req-bdc33c0c-7e7a-4ec0-936a-75322b4a4728 req-d080e6d0-a643-448f-a4a7-badf15369049 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-vif-unplugged-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.780 247403 DEBUG oslo_concurrency.lockutils [req-bdc33c0c-7e7a-4ec0-936a-75322b4a4728 req-d080e6d0-a643-448f-a4a7-badf15369049 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.780 247403 DEBUG oslo_concurrency.lockutils [req-bdc33c0c-7e7a-4ec0-936a-75322b4a4728 req-d080e6d0-a643-448f-a4a7-badf15369049 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.781 247403 DEBUG oslo_concurrency.lockutils [req-bdc33c0c-7e7a-4ec0-936a-75322b4a4728 req-d080e6d0-a643-448f-a4a7-badf15369049 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.781 247403 DEBUG nova.compute.manager [req-bdc33c0c-7e7a-4ec0-936a-75322b4a4728 req-d080e6d0-a643-448f-a4a7-badf15369049 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] No waiting events found dispatching network-vif-unplugged-444d01ec-ea25-476c-a3da-ecffec3d3863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.781 247403 DEBUG nova.compute.manager [req-bdc33c0c-7e7a-4ec0-936a-75322b4a4728 req-d080e6d0-a643-448f-a4a7-badf15369049 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-vif-unplugged-444d01ec-ea25-476c-a3da-ecffec3d3863 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.949 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.974 247403 INFO nova.compute.manager [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Took 1.40 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.975 247403 DEBUG oslo.service.loopingcall [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.975 247403 DEBUG nova.compute.manager [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:46:02 np0005603621 nova_compute[247399]: 2026-01-31 08:46:02.976 247403 DEBUG nova.network.neutron [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:46:03 np0005603621 nova_compute[247399]: 2026-01-31 08:46:03.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:03 np0005603621 nova_compute[247399]: 2026-01-31 08:46:03.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:46:03 np0005603621 nova_compute[247399]: 2026-01-31 08:46:03.465 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:46:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:04.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:04.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:04 np0005603621 nova_compute[247399]: 2026-01-31 08:46:04.465 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:04 np0005603621 nova_compute[247399]: 2026-01-31 08:46:04.465 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:46:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 273 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 637 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.113 247403 DEBUG nova.compute.manager [req-cb26e170-a09c-4f14-bb39-9fadb739bfec req-7b2bf33f-898f-4e91-98fa-3d953a8fa9b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.114 247403 DEBUG oslo_concurrency.lockutils [req-cb26e170-a09c-4f14-bb39-9fadb739bfec req-7b2bf33f-898f-4e91-98fa-3d953a8fa9b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.114 247403 DEBUG oslo_concurrency.lockutils [req-cb26e170-a09c-4f14-bb39-9fadb739bfec req-7b2bf33f-898f-4e91-98fa-3d953a8fa9b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.114 247403 DEBUG oslo_concurrency.lockutils [req-cb26e170-a09c-4f14-bb39-9fadb739bfec req-7b2bf33f-898f-4e91-98fa-3d953a8fa9b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.115 247403 DEBUG nova.compute.manager [req-cb26e170-a09c-4f14-bb39-9fadb739bfec req-7b2bf33f-898f-4e91-98fa-3d953a8fa9b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] No waiting events found dispatching network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.115 247403 WARNING nova.compute.manager [req-cb26e170-a09c-4f14-bb39-9fadb739bfec req-7b2bf33f-898f-4e91-98fa-3d953a8fa9b5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received unexpected event network-vif-plugged-444d01ec-ea25-476c-a3da-ecffec3d3863 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.237 247403 DEBUG nova.network.neutron [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updated VIF entry in instance network info cache for port 444d01ec-ea25-476c-a3da-ecffec3d3863. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.237 247403 DEBUG nova.network.neutron [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updating instance_info_cache with network_info: [{"id": "444d01ec-ea25-476c-a3da-ecffec3d3863", "address": "fa:16:3e:44:6e:3f", "network": {"id": "baa8ab21-9039-47e6-9a0b-098146c212d0", "bridge": "br-int", "label": "tempest-network-smoke--1303319364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444d01ec-ea", "ovs_interfaceid": "444d01ec-ea25-476c-a3da-ecffec3d3863", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.410 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.411 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:05 np0005603621 nova_compute[247399]: 2026-01-31 08:46:05.787 247403 DEBUG oslo_concurrency.lockutils [req-0db42bd5-0c93-42d1-9f93-a05d2b0de380 req-65a80346-d535-4644-95cb-12f3ad6a5a71 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3c33ab5-ba7c-4982-b706-525d0bbead40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:46:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:06.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:06 np0005603621 nova_compute[247399]: 2026-01-31 08:46:06.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 235 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 717 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.685469) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849166685505, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2120, "num_deletes": 252, "total_data_size": 3802136, "memory_usage": 3853584, "flush_reason": "Manual Compaction"}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849166740793, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3726365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61449, "largest_seqno": 63568, "table_properties": {"data_size": 3716782, "index_size": 6012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19999, "raw_average_key_size": 20, "raw_value_size": 3697597, "raw_average_value_size": 3792, "num_data_blocks": 262, "num_entries": 975, "num_filter_entries": 975, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769848956, "oldest_key_time": 1769848956, "file_creation_time": 1769849166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 55372 microseconds, and 5541 cpu microseconds.
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.740837) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3726365 bytes OK
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.740856) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.743884) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.743905) EVENT_LOG_v1 {"time_micros": 1769849166743899, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.743928) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3793476, prev total WAL file size 3793476, number of live WAL files 2.
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.744646) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3639KB)], [140(11MB)]
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849166744699, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 15343134, "oldest_snapshot_seqno": -1}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9107 keys, 13361002 bytes, temperature: kUnknown
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849166902846, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 13361002, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13299968, "index_size": 37177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 238783, "raw_average_key_size": 26, "raw_value_size": 13138133, "raw_average_value_size": 1442, "num_data_blocks": 1432, "num_entries": 9107, "num_filter_entries": 9107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849166, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.903069) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 13361002 bytes
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.905908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 97.0 rd, 84.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 11.1 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 9632, records dropped: 525 output_compression: NoCompression
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.905949) EVENT_LOG_v1 {"time_micros": 1769849166905931, "job": 86, "event": "compaction_finished", "compaction_time_micros": 158212, "compaction_time_cpu_micros": 21776, "output_level": 6, "num_output_files": 1, "total_output_size": 13361002, "num_input_records": 9632, "num_output_records": 9107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849166907147, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849166909669, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.744534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.909756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.909761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.909762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.909764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:06 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:06.909765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:06 np0005603621 nova_compute[247399]: 2026-01-31 08:46:06.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:07 np0005603621 nova_compute[247399]: 2026-01-31 08:46:07.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 31 03:46:08 np0005603621 nova_compute[247399]: 2026-01-31 08:46:08.735 247403 DEBUG nova.network.neutron [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:08 np0005603621 nova_compute[247399]: 2026-01-31 08:46:08.980 247403 INFO nova.compute.manager [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Took 6.00 seconds to deallocate network for instance.#033[00m
Jan 31 03:46:09 np0005603621 nova_compute[247399]: 2026-01-31 08:46:09.117 247403 DEBUG nova.compute.manager [req-3c3e404d-f26b-474d-8b81-4d06ba098258 req-3582ada7-e7b9-4594-98b1-083e48b3fa2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Received event network-vif-deleted-444d01ec-ea25-476c-a3da-ecffec3d3863 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:09 np0005603621 nova_compute[247399]: 2026-01-31 08:46:09.161 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:09 np0005603621 nova_compute[247399]: 2026-01-31 08:46:09.161 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:09 np0005603621 nova_compute[247399]: 2026-01-31 08:46:09.249 247403 DEBUG oslo_concurrency.processutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74189844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:09 np0005603621 nova_compute[247399]: 2026-01-31 08:46:09.669 247403 DEBUG oslo_concurrency.processutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:09 np0005603621 nova_compute[247399]: 2026-01-31 08:46:09.674 247403 DEBUG nova.compute.provider_tree [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:46:10 np0005603621 nova_compute[247399]: 2026-01-31 08:46:10.029 247403 DEBUG nova.scheduler.client.report [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:46:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:10.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:10.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:10 np0005603621 nova_compute[247399]: 2026-01-31 08:46:10.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:10 np0005603621 nova_compute[247399]: 2026-01-31 08:46:10.415 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:10 np0005603621 nova_compute[247399]: 2026-01-31 08:46:10.436 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 31 03:46:10 np0005603621 nova_compute[247399]: 2026-01-31 08:46:10.675 247403 INFO nova.scheduler.client.report [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Deleted allocations for instance f3c33ab5-ba7c-4982-b706-525d0bbead40#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.145 247403 DEBUG oslo_concurrency.lockutils [None req-1bac0580-71b6-4cfe-95f3-3680a24b05d9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "f3c33ab5-ba7c-4982-b706-525d0bbead40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.272 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.272 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.273 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.273 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.274 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852975782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.733 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.867 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.868 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4150MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.869 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.869 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:11 np0005603621 nova_compute[247399]: 2026-01-31 08:46:11.973 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.037 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.037 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.073 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:12.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:12.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/909818379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.500 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.506 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.559 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:46:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.695 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:46:12 np0005603621 nova_compute[247399]: 2026-01-31 08:46:12.695 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:14.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:14.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 266 KiB/s rd, 364 KiB/s wr, 80 op/s
Jan 31 03:46:14 np0005603621 nova_compute[247399]: 2026-01-31 08:46:14.697 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:14 np0005603621 nova_compute[247399]: 2026-01-31 08:46:14.697 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:15 np0005603621 nova_compute[247399]: 2026-01-31 08:46:15.417 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:16.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:16.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 267 KiB/s rd, 364 KiB/s wr, 80 op/s
Jan 31 03:46:16 np0005603621 nova_compute[247399]: 2026-01-31 08:46:16.801 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849161.8006234, f3c33ab5-ba7c-4982-b706-525d0bbead40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:16 np0005603621 nova_compute[247399]: 2026-01-31 08:46:16.802 247403 INFO nova.compute.manager [-] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:46:16 np0005603621 nova_compute[247399]: 2026-01-31 08:46:16.976 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:17 np0005603621 nova_compute[247399]: 2026-01-31 08:46:17.684 247403 DEBUG nova.compute.manager [None req-4515718b-0399-4884-9a08-53c39e04bdf0 - - - - - -] [instance: f3c33ab5-ba7c-4982-b706-525d0bbead40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:18.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:18.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:18 np0005603621 nova_compute[247399]: 2026-01-31 08:46:18.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:18 np0005603621 nova_compute[247399]: 2026-01-31 08:46:18.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:46:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 62 KiB/s wr, 61 op/s
Jan 31 03:46:18 np0005603621 nova_compute[247399]: 2026-01-31 08:46:18.660 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:20.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:20.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:20 np0005603621 nova_compute[247399]: 2026-01-31 08:46:20.419 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.865460) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849181865497, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 382, "num_deletes": 251, "total_data_size": 249654, "memory_usage": 256896, "flush_reason": "Manual Compaction"}
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849181874060, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 231490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63569, "largest_seqno": 63950, "table_properties": {"data_size": 229197, "index_size": 392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6275, "raw_average_key_size": 20, "raw_value_size": 224646, "raw_average_value_size": 729, "num_data_blocks": 18, "num_entries": 308, "num_filter_entries": 308, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849167, "oldest_key_time": 1769849167, "file_creation_time": 1769849181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 8670 microseconds, and 1183 cpu microseconds.
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.874124) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 231490 bytes OK
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.874148) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.882136) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.882177) EVENT_LOG_v1 {"time_micros": 1769849181882168, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.882204) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 247185, prev total WAL file size 247185, number of live WAL files 2.
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.882670) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353035' seq:0, type:0; will stop at (end)
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(226KB)], [143(12MB)]
Jan 31 03:46:21 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849181882766, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13592492, "oldest_snapshot_seqno": -1}
Jan 31 03:46:21 np0005603621 nova_compute[247399]: 2026-01-31 08:46:21.979 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:22.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:22.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8906 keys, 9755994 bytes, temperature: kUnknown
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849182158020, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9755994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9701075, "index_size": 31574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22277, "raw_key_size": 234818, "raw_average_key_size": 26, "raw_value_size": 9547483, "raw_average_value_size": 1072, "num_data_blocks": 1198, "num_entries": 8906, "num_filter_entries": 8906, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.158275) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9755994 bytes
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.162912) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.4 rd, 35.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(100.9) write-amplify(42.1) OK, records in: 9415, records dropped: 509 output_compression: NoCompression
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.162988) EVENT_LOG_v1 {"time_micros": 1769849182162962, "job": 88, "event": "compaction_finished", "compaction_time_micros": 275327, "compaction_time_cpu_micros": 20254, "output_level": 6, "num_output_files": 1, "total_output_size": 9755994, "num_input_records": 9415, "num_output_records": 8906, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849182163457, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849182166400, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:21.882542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.166511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.166518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.166521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.166524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:46:22.166527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:46:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:46:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:24.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:24.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:46:25 np0005603621 nova_compute[247399]: 2026-01-31 08:46:25.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:25 np0005603621 nova_compute[247399]: 2026-01-31 08:46:25.420 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:25 np0005603621 nova_compute[247399]: 2026-01-31 08:46:25.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:26.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:46:26 np0005603621 nova_compute[247399]: 2026-01-31 08:46:26.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:28.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:46:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:30.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:30.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:30 np0005603621 nova_compute[247399]: 2026-01-31 08:46:30.461 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:30.528 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:30.528 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:30.528 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 31 03:46:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:31 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 31 03:46:31 np0005603621 nova_compute[247399]: 2026-01-31 08:46:31.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:32.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:32 np0005603621 podman[361258]: 2026-01-31 08:46:32.505530038 +0000 UTC m=+0.054161238 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:46:32 np0005603621 podman[361259]: 2026-01-31 08:46:32.525830342 +0000 UTC m=+0.071626682 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Jan 31 03:46:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 221 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 135 op/s
Jan 31 03:46:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:34.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:34.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 221 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Jan 31 03:46:35 np0005603621 nova_compute[247399]: 2026-01-31 08:46:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:35 np0005603621 nova_compute[247399]: 2026-01-31 08:46:35.462 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:36.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:36.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 235 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 3.5 MiB/s wr, 75 op/s
Jan 31 03:46:36 np0005603621 nova_compute[247399]: 2026-01-31 08:46:36.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:38.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.288 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.288 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.423 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 3.9 MiB/s wr, 77 op/s
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:46:38
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control']
Jan 31 03:46:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.637 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.637 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.650 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:46:38 np0005603621 nova_compute[247399]: 2026-01-31 08:46:38.650 247403 INFO nova.compute.claims [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:46:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:46:39 np0005603621 nova_compute[247399]: 2026-01-31 08:46:39.443 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:46:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051705099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:46:39 np0005603621 nova_compute[247399]: 2026-01-31 08:46:39.850 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:39 np0005603621 nova_compute[247399]: 2026-01-31 08:46:39.855 247403 DEBUG nova.compute.provider_tree [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:46:39 np0005603621 nova_compute[247399]: 2026-01-31 08:46:39.947 247403 DEBUG nova.scheduler.client.report [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:46:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:46:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:46:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:40.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.163 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.164 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.464 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.606 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.606 247403 DEBUG nova.network.neutron [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:46:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 284 KiB/s rd, 3.9 MiB/s wr, 77 op/s
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.882 247403 INFO nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:46:40 np0005603621 nova_compute[247399]: 2026-01-31 08:46:40.956 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.316 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.318 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.319 247403 INFO nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Creating image(s)#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.344 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.370 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.395 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.399 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.424 247403 DEBUG nova.policy [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a498364761ef428b99cac3f92e603385', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8397e0fed04b4dabb57148d0924de2dc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.459 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.461 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.462 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.462 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.492 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.496 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 3308d345-19b7-4fbb-bd81-631135649e7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.930 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 3308d345-19b7-4fbb-bd81-631135649e7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:41 np0005603621 nova_compute[247399]: 2026-01-31 08:46:41.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.002 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] resizing rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.108 247403 DEBUG nova.objects.instance [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'migration_context' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:46:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.165 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.165 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Ensure instance console log exists: /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.166 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.166 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:42 np0005603621 nova_compute[247399]: 2026-01-31 08:46:42.166 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 285 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 294 KiB/s rd, 5.6 MiB/s wr, 92 op/s
Jan 31 03:46:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:46:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:46:44 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:46:44 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:46:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 285 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 2.7 MiB/s wr, 20 op/s
Jan 31 03:46:45 np0005603621 nova_compute[247399]: 2026-01-31 08:46:45.424 247403 DEBUG nova.network.neutron [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Successfully created port: df3fc295-9afc-49a0-87f8-9dda757af02a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:46:45 np0005603621 nova_compute[247399]: 2026-01-31 08:46:45.465 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:46.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.8 MiB/s wr, 32 op/s
Jan 31 03:46:47 np0005603621 nova_compute[247399]: 2026-01-31 08:46:46.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:48.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.283 247403 DEBUG nova.network.neutron [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Successfully updated port: df3fc295-9afc-49a0-87f8-9dda757af02a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.350 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.350 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.351 247403 DEBUG nova.network.neutron [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.536 247403 DEBUG nova.compute.manager [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-changed-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.537 247403 DEBUG nova.compute.manager [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Refreshing instance network info cache due to event network-changed-df3fc295-9afc-49a0-87f8-9dda757af02a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.537 247403 DEBUG oslo_concurrency.lockutils [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:46:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 2.2 MiB/s wr, 29 op/s
Jan 31 03:46:48 np0005603621 nova_compute[247399]: 2026-01-31 08:46:48.878 247403 DEBUG nova.network.neutron [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019845683859110366 of space, bias 1.0, pg target 0.595370515773311 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004323556439605435 of space, bias 1.0, pg target 1.2970669318816304 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:46:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:46:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:46:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:50 np0005603621 nova_compute[247399]: 2026-01-31 08:46:50.467 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:46:50 np0005603621 nova_compute[247399]: 2026-01-31 08:46:50.911 247403 DEBUG nova.network.neutron [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.291 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.292 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Instance network_info: |[{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.293 247403 DEBUG oslo_concurrency.lockutils [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.293 247403 DEBUG nova.network.neutron [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Refreshing network info cache for port df3fc295-9afc-49a0-87f8-9dda757af02a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.296 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Start _get_guest_xml network_info=[{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.301 247403 WARNING nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.309 247403 DEBUG nova.virt.libvirt.host [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.310 247403 DEBUG nova.virt.libvirt.host [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.317 247403 DEBUG nova.virt.libvirt.host [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.317 247403 DEBUG nova.virt.libvirt.host [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.319 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.319 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.320 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.320 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.320 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.321 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.321 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.321 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.321 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.322 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.322 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.322 247403 DEBUG nova.virt.hardware [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.326 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:46:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e08fca35-b53c-4410-add3-3941b3001aeb does not exist
Jan 31 03:46:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fcf3437d-06f7-4ece-b735-149d7f8a203d does not exist
Jan 31 03:46:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fbaa0e16-3bec-4f59-bf0e-d9600ee3b067 does not exist
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032088692' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.737 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.763 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:51 np0005603621 nova_compute[247399]: 2026-01-31 08:46:51.768 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:46:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.156213227 +0000 UTC m=+0.046640130 container create 849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meitner, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:46:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082882730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:46:52 np0005603621 systemd[1]: Started libpod-conmon-849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a.scope.
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.205 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.207 247403 DEBUG nova.virt.libvirt.vif [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:46:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=158,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGvV4tGHwFrQ7+1WPmMS3fGcrpcMKpLQBFiD2ZG0NedKq4jaCN6oHf8RWlX+X72Ff/PSGJSQ5nqRPZm+CDMr01vn3vAMra9m4dZ/R1d2vwh+NDFwu298PivPHJQkyuCpg==',key_name='tempest-keypair-600650673',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-qz7ydqvm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',netw
ork_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:46:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a498364761ef428b99cac3f92e603385',uuid=3308d345-19b7-4fbb-bd81-631135649e7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.208 247403 DEBUG nova.network.os_vif_util [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.209 247403 DEBUG nova.network.os_vif_util [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.211 247403 DEBUG nova.objects.instance [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:46:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.133497342 +0000 UTC m=+0.023924265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.230344944 +0000 UTC m=+0.120771867 container init 849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meitner, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.23723605 +0000 UTC m=+0.127662963 container start 849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.241 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:46:52 np0005603621 relaxed_meitner[361900]: 167 167
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <uuid>3308d345-19b7-4fbb-bd81-631135649e7d</uuid>
Jan 31 03:46:52 np0005603621 conmon[361900]: conmon 849ecccdcd08d838e852 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a.scope/container/memory.events
Jan 31 03:46:52 np0005603621 systemd[1]: libpod-849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a.scope: Deactivated successfully.
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <name>instance-0000009e</name>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:name>multiattach-server-0</nova:name>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:46:51</nova:creationTime>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:user uuid="a498364761ef428b99cac3f92e603385">tempest-AttachVolumeMultiAttachTest-1931311941-project-member</nova:user>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:project uuid="8397e0fed04b4dabb57148d0924de2dc">tempest-AttachVolumeMultiAttachTest-1931311941</nova:project>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <nova:port uuid="df3fc295-9afc-49a0-87f8-9dda757af02a">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <entry name="serial">3308d345-19b7-4fbb-bd81-631135649e7d</entry>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <entry name="uuid">3308d345-19b7-4fbb-bd81-631135649e7d</entry>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/3308d345-19b7-4fbb-bd81-631135649e7d_disk">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.24230689 +0000 UTC m=+0.132733793 container attach 849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/3308d345-19b7-4fbb-bd81-631135649e7d_disk.config">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:00:6b:61"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <target dev="tapdf3fc295-9a"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/console.log" append="off"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:46:52 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:46:52 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:46:52 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:46:52 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.242 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Preparing to wait for external event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.243 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.243441426 +0000 UTC m=+0.133868329 container died 849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.243 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.243 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.244 247403 DEBUG nova.virt.libvirt.vif [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:46:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=158,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGvV4tGHwFrQ7+1WPmMS3fGcrpcMKpLQBFiD2ZG0NedKq4jaCN6oHf8RWlX+X72Ff/PSGJSQ5nqRPZm+CDMr01vn3vAMra9m4dZ/R1d2vwh+NDFwu298PivPHJQkyuCpg==',key_name='tempest-keypair-600650673',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-qz7ydqvm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ra
m='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:46:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a498364761ef428b99cac3f92e603385',uuid=3308d345-19b7-4fbb-bd81-631135649e7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.245 247403 DEBUG nova.network.os_vif_util [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.245 247403 DEBUG nova.network.os_vif_util [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.246 247403 DEBUG os_vif [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.247 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.247 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.251 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf3fc295-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.251 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf3fc295-9a, col_values=(('external_ids', {'iface-id': 'df3fc295-9afc-49a0-87f8-9dda757af02a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:6b:61', 'vm-uuid': '3308d345-19b7-4fbb-bd81-631135649e7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.253 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:52 np0005603621 NetworkManager[49013]: <info>  [1769849212.2542] manager: (tapdf3fc295-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.256 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.258 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.259 247403 INFO os_vif [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a')#033[00m
Jan 31 03:46:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d2029f95b4b3a78e49120708e45df6d7592743f21f5160e44a63d9222118723f-merged.mount: Deactivated successfully.
Jan 31 03:46:52 np0005603621 podman[361881]: 2026-01-31 08:46:52.291532441 +0000 UTC m=+0.181959344 container remove 849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:46:52 np0005603621 systemd[1]: libpod-conmon-849ecccdcd08d838e852bbe51d5a669a5c97d6978ee3a62f96d4bbef1aa05e8a.scope: Deactivated successfully.
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.350 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.351 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.351 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No VIF found with MAC fa:16:3e:00:6b:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.352 247403 INFO nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Using config drive#033[00m
Jan 31 03:46:52 np0005603621 nova_compute[247399]: 2026-01-31 08:46:52.376 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:52 np0005603621 podman[361933]: 2026-01-31 08:46:52.408497396 +0000 UTC m=+0.040722213 container create 43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:46:52 np0005603621 systemd[1]: Started libpod-conmon-43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3.scope.
Jan 31 03:46:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3db729d7da75c87ec032ad6f9241207186904c8427306397783c3e5dd3009ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3db729d7da75c87ec032ad6f9241207186904c8427306397783c3e5dd3009ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3db729d7da75c87ec032ad6f9241207186904c8427306397783c3e5dd3009ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3db729d7da75c87ec032ad6f9241207186904c8427306397783c3e5dd3009ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3db729d7da75c87ec032ad6f9241207186904c8427306397783c3e5dd3009ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:52 np0005603621 podman[361933]: 2026-01-31 08:46:52.474321621 +0000 UTC m=+0.106546448 container init 43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:52 np0005603621 podman[361933]: 2026-01-31 08:46:52.482897861 +0000 UTC m=+0.115122678 container start 43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:52 np0005603621 podman[361933]: 2026-01-31 08:46:52.390591553 +0000 UTC m=+0.022816390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:46:52 np0005603621 podman[361933]: 2026-01-31 08:46:52.487602449 +0000 UTC m=+0.119827266 container attach 43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:46:53 np0005603621 naughty_panini[361960]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:46:53 np0005603621 naughty_panini[361960]: --> relative data size: 1.0
Jan 31 03:46:53 np0005603621 naughty_panini[361960]: --> All data devices are unavailable
Jan 31 03:46:53 np0005603621 systemd[1]: libpod-43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3.scope: Deactivated successfully.
Jan 31 03:46:53 np0005603621 podman[361933]: 2026-01-31 08:46:53.218647652 +0000 UTC m=+0.850872469 container died 43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:46:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c3db729d7da75c87ec032ad6f9241207186904c8427306397783c3e5dd3009ad-merged.mount: Deactivated successfully.
Jan 31 03:46:53 np0005603621 podman[361933]: 2026-01-31 08:46:53.27477879 +0000 UTC m=+0.907003607 container remove 43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_panini, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:46:53 np0005603621 systemd[1]: libpod-conmon-43c0a7f361f6aaf44c75b4c19ed355efd2a05db539a9b8238e88bfd3da718fd3.scope: Deactivated successfully.
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.443 247403 INFO nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Creating config drive at /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/disk.config#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.448 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpybxd9wz1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.578 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpybxd9wz1" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.617 247403 DEBUG nova.storage.rbd_utils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image 3308d345-19b7-4fbb-bd81-631135649e7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.622 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/disk.config 3308d345-19b7-4fbb-bd81-631135649e7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:46:53 np0005603621 podman[362163]: 2026-01-31 08:46:53.753820363 +0000 UTC m=+0.038345989 container create fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elbakyan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:53 np0005603621 systemd[1]: Started libpod-conmon-fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5.scope.
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.787 247403 DEBUG oslo_concurrency.processutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/disk.config 3308d345-19b7-4fbb-bd81-631135649e7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.788 247403 INFO nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Deleting local config drive /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d/disk.config because it was imported into RBD.#033[00m
Jan 31 03:46:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:53 np0005603621 kernel: tapdf3fc295-9a: entered promiscuous mode
Jan 31 03:46:53 np0005603621 NetworkManager[49013]: <info>  [1769849213.8292] manager: (tapdf3fc295-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Jan 31 03:46:53 np0005603621 podman[362163]: 2026-01-31 08:46:53.740769392 +0000 UTC m=+0.025295038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:46:53 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:53Z|00648|binding|INFO|Claiming lport df3fc295-9afc-49a0-87f8-9dda757af02a for this chassis.
Jan 31 03:46:53 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:53Z|00649|binding|INFO|df3fc295-9afc-49a0-87f8-9dda757af02a: Claiming fa:16:3e:00:6b:61 10.100.0.5
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:53 np0005603621 podman[362163]: 2026-01-31 08:46:53.876400965 +0000 UTC m=+0.160926611 container init fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elbakyan, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:53 np0005603621 podman[362163]: 2026-01-31 08:46:53.882874639 +0000 UTC m=+0.167400265 container start fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elbakyan, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:46:53 np0005603621 strange_elbakyan[362182]: 167 167
Jan 31 03:46:53 np0005603621 systemd[1]: libpod-fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5.scope: Deactivated successfully.
Jan 31 03:46:53 np0005603621 systemd-udevd[362199]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.897 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:6b:61 10.100.0.5'], port_security=['fa:16:3e:00:6b:61 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3308d345-19b7-4fbb-bd81-631135649e7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8397e0fed04b4dabb57148d0924de2dc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd636f3a4-efef-465a-ac59-8182d61336f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbd2578f-ff6e-4dc3-bc49-93cbf023edc5, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=df3fc295-9afc-49a0-87f8-9dda757af02a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:46:53 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:53Z|00650|binding|INFO|Setting lport df3fc295-9afc-49a0-87f8-9dda757af02a ovn-installed in OVS
Jan 31 03:46:53 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:53Z|00651|binding|INFO|Setting lport df3fc295-9afc-49a0-87f8-9dda757af02a up in Southbound
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.898 159734 INFO neutron.agent.ovn.metadata.agent [-] Port df3fc295-9afc-49a0-87f8-9dda757af02a in datapath 3afaf607-43a1-4d65-95fc-0a22b5c901d0 bound to our chassis#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:53 np0005603621 nova_compute[247399]: 2026-01-31 08:46:53.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:53 np0005603621 NetworkManager[49013]: <info>  [1769849213.9015] device (tapdf3fc295-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:46:53 np0005603621 NetworkManager[49013]: <info>  [1769849213.9021] device (tapdf3fc295-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.900 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3afaf607-43a1-4d65-95fc-0a22b5c901d0#033[00m
Jan 31 03:46:53 np0005603621 systemd-machined[212769]: New machine qemu-78-instance-0000009e.
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.908 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd986b0-52fb-4185-9a90-90d8d70cbe49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.909 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3afaf607-41 in ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.910 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3afaf607-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.911 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[799c2820-d579-4e06-942b-fcd732c668e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.911 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[64970743-2b15-46e2-afff-a6ec6866e9f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 systemd[1]: Started Virtual Machine qemu-78-instance-0000009e.
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.918 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[8cec39db-3856-4680-a70c-cb0d6cc6e129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.926 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8a501371-1c41-41af-bf7a-91bfa36eb624]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 podman[362163]: 2026-01-31 08:46:53.934055841 +0000 UTC m=+0.218581487 container attach fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:46:53 np0005603621 podman[362163]: 2026-01-31 08:46:53.935634202 +0000 UTC m=+0.220159828 container died fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elbakyan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.948 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b742525c-9363-48f1-97c9-8516998a7382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 systemd-udevd[362208]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:46:53 np0005603621 NetworkManager[49013]: <info>  [1769849213.9567] manager: (tap3afaf607-40): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.957 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d763b5c3-eaeb-443c-b68d-2c39e5853f9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.984 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4709c923-780a-4172-9805-2338e734aea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:53.987 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cf80b7-b9a2-43f6-ab05-5b1565020bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 NetworkManager[49013]: <info>  [1769849214.0040] device (tap3afaf607-40): carrier: link connected
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.006 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7c149227-1b8e-4181-ae11-5ab869b421d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.020 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce9ea81-cc09-4d02-ae6f-3dce441faefa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3afaf607-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851951, 'reachable_time': 34729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362243, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.037 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a4420a9a-efe3-4304-b025-790f7a9a0d70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:8444'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851951, 'tstamp': 851951}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362244, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-32e703e68ae12d6e0eecf408bd9186be16a751ea43bbbce3896fee2dfde014e1-merged.mount: Deactivated successfully.
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.054 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f4b01ce1-49d5-4451-a635-6b91eba44f59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3afaf607-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851951, 'reachable_time': 34729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362246, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 podman[362163]: 2026-01-31 08:46:54.058891325 +0000 UTC m=+0.343416951 container remove fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:54 np0005603621 systemd[1]: libpod-conmon-fa79279f43e936ad2a223911dcbe8818dada7d6915670e6b7801602598cb7ff5.scope: Deactivated successfully.
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.074 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74b5ecf9-5cc0-4c2d-b724-0c6ea4c3643c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.108 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e4bf11-3139-4abf-b5db-777c0f5c998f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3afaf607-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.110 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3afaf607-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.112 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:54 np0005603621 NetworkManager[49013]: <info>  [1769849214.1127] manager: (tap3afaf607-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 31 03:46:54 np0005603621 kernel: tap3afaf607-40: entered promiscuous mode
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.114 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.115 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3afaf607-40, col_values=(('external_ids', {'iface-id': '0ed76a0a-650c-4ec7-a4d4-0e745236b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:54Z|00652|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.119 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3afaf607-43a1-4d65-95fc-0a22b5c901d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3afaf607-43a1-4d65-95fc-0a22b5c901d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.120 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[474513a0-263f-4c6f-9736-c5e80e865323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.122 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-3afaf607-43a1-4d65-95fc-0a22b5c901d0
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/3afaf607-43a1-4d65-95fc-0a22b5c901d0.pid.haproxy
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 3afaf607-43a1-4d65-95fc-0a22b5c901d0
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:46:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:54.122 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'env', 'PROCESS_TAG=haproxy-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3afaf607-43a1-4d65-95fc-0a22b5c901d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.122 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:54.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:54 np0005603621 podman[362263]: 2026-01-31 08:46:54.189865182 +0000 UTC m=+0.037901856 container create bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:46:54 np0005603621 systemd[1]: Started libpod-conmon-bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e.scope.
Jan 31 03:46:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc40bc953db4d5defd2472243332f5c2c3fc8bb6b10a1a5f727aab076182a37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc40bc953db4d5defd2472243332f5c2c3fc8bb6b10a1a5f727aab076182a37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc40bc953db4d5defd2472243332f5c2c3fc8bb6b10a1a5f727aab076182a37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dc40bc953db4d5defd2472243332f5c2c3fc8bb6b10a1a5f727aab076182a37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:54 np0005603621 podman[362263]: 2026-01-31 08:46:54.17392941 +0000 UTC m=+0.021966104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:46:54 np0005603621 podman[362263]: 2026-01-31 08:46:54.277374859 +0000 UTC m=+0.125411563 container init bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:46:54 np0005603621 podman[362263]: 2026-01-31 08:46:54.283572414 +0000 UTC m=+0.131609088 container start bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:46:54 np0005603621 podman[362263]: 2026-01-31 08:46:54.288615143 +0000 UTC m=+0.136651847 container attach bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.326 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849214.3255007, 3308d345-19b7-4fbb-bd81-631135649e7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.326 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] VM Started (Lifecycle Event)#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.365 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.369 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849214.32572, 3308d345-19b7-4fbb-bd81-631135649e7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.369 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.420 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.423 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:46:54 np0005603621 podman[362347]: 2026-01-31 08:46:54.457535875 +0000 UTC m=+0.055194790 container create 946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.467 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:46:54 np0005603621 systemd[1]: Started libpod-conmon-946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f.scope.
Jan 31 03:46:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:54 np0005603621 podman[362347]: 2026-01-31 08:46:54.42883109 +0000 UTC m=+0.026490025 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:46:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aec0079c84a7539f4a6b717c1c6c2ced5ce5ba40165ce0587c6f288b921f805f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:54 np0005603621 podman[362347]: 2026-01-31 08:46:54.538620749 +0000 UTC m=+0.136279694 container init 946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:46:54 np0005603621 podman[362347]: 2026-01-31 08:46:54.542672207 +0000 UTC m=+0.140331122 container start 946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:46:54 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [NOTICE]   (362367) : New worker (362369) forked
Jan 31 03:46:54 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [NOTICE]   (362367) : Loading success.
Jan 31 03:46:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 131 KiB/s wr, 87 op/s
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.901 247403 DEBUG nova.compute.manager [req-9444b8ab-09b2-4d34-9c0a-4a4c00eb46dd req-c34ca5ce-105d-4fc5-91d0-1e735bf9940d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.901 247403 DEBUG oslo_concurrency.lockutils [req-9444b8ab-09b2-4d34-9c0a-4a4c00eb46dd req-c34ca5ce-105d-4fc5-91d0-1e735bf9940d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.902 247403 DEBUG oslo_concurrency.lockutils [req-9444b8ab-09b2-4d34-9c0a-4a4c00eb46dd req-c34ca5ce-105d-4fc5-91d0-1e735bf9940d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.902 247403 DEBUG oslo_concurrency.lockutils [req-9444b8ab-09b2-4d34-9c0a-4a4c00eb46dd req-c34ca5ce-105d-4fc5-91d0-1e735bf9940d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.902 247403 DEBUG nova.compute.manager [req-9444b8ab-09b2-4d34-9c0a-4a4c00eb46dd req-c34ca5ce-105d-4fc5-91d0-1e735bf9940d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Processing event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.903 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.909 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.912 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849214.9108076, 3308d345-19b7-4fbb-bd81-631135649e7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.912 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.916 247403 INFO nova.virt.libvirt.driver [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Instance spawned successfully.#033[00m
Jan 31 03:46:54 np0005603621 nova_compute[247399]: 2026-01-31 08:46:54.917 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.003 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.007 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:46:55 np0005603621 epic_bose[362317]: {
Jan 31 03:46:55 np0005603621 epic_bose[362317]:    "0": [
Jan 31 03:46:55 np0005603621 epic_bose[362317]:        {
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "devices": [
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "/dev/loop3"
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            ],
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "lv_name": "ceph_lv0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "lv_size": "7511998464",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "name": "ceph_lv0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "tags": {
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.cluster_name": "ceph",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.crush_device_class": "",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.encrypted": "0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.osd_id": "0",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.type": "block",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:                "ceph.vdo": "0"
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            },
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "type": "block",
Jan 31 03:46:55 np0005603621 epic_bose[362317]:            "vg_name": "ceph_vg0"
Jan 31 03:46:55 np0005603621 epic_bose[362317]:        }
Jan 31 03:46:55 np0005603621 epic_bose[362317]:    ]
Jan 31 03:46:55 np0005603621 epic_bose[362317]: }
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.040 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.043 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.043 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.044 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.044 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.044 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.045 247403 DEBUG nova.virt.libvirt.driver [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:46:55 np0005603621 systemd[1]: libpod-bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e.scope: Deactivated successfully.
Jan 31 03:46:55 np0005603621 podman[362263]: 2026-01-31 08:46:55.049264058 +0000 UTC m=+0.897300742 container died bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:46:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9dc40bc953db4d5defd2472243332f5c2c3fc8bb6b10a1a5f727aab076182a37-merged.mount: Deactivated successfully.
Jan 31 03:46:55 np0005603621 podman[362263]: 2026-01-31 08:46:55.0892895 +0000 UTC m=+0.937326174 container remove bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:46:55 np0005603621 systemd[1]: libpod-conmon-bf9754e76be4d0a98830069f70e26bbd230c3db0a09c131aa01def6c7c52c56e.scope: Deactivated successfully.
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.180 247403 INFO nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Took 13.86 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.181 247403 DEBUG nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.348 247403 INFO nova.compute.manager [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Took 16.77 seconds to build instance.#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.356 247403 DEBUG nova.network.neutron [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updated VIF entry in instance network info cache for port df3fc295-9afc-49a0-87f8-9dda757af02a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.356 247403 DEBUG nova.network.neutron [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.438 247403 DEBUG oslo_concurrency.lockutils [None req-40398325-6034-404a-b9f2-c9dbf4607886 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.440 247403 DEBUG oslo_concurrency.lockutils [req-82159a40-0e14-4c70-8238-922e940c3008 req-77ba65ef-a410-4e01-867e-a3cc3ef89e8e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.470 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.590959905 +0000 UTC m=+0.045902037 container create 1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:46:55 np0005603621 systemd[1]: Started libpod-conmon-1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50.scope.
Jan 31 03:46:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.565634137 +0000 UTC m=+0.020576279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.663025746 +0000 UTC m=+0.117967898 container init 1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.669095787 +0000 UTC m=+0.124037939 container start 1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.672949658 +0000 UTC m=+0.127891810 container attach 1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:46:55 np0005603621 inspiring_ptolemy[362604]: 167 167
Jan 31 03:46:55 np0005603621 systemd[1]: libpod-1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50.scope: Deactivated successfully.
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.67393302 +0000 UTC m=+0.128875172 container died 1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:46:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-69304f4ec12848a23de1353902f6a038ce42413e5a18acefcd182c2628b88b6e-merged.mount: Deactivated successfully.
Jan 31 03:46:55 np0005603621 podman[362587]: 2026-01-31 08:46:55.709475159 +0000 UTC m=+0.164417291 container remove 1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:46:55 np0005603621 systemd[1]: libpod-conmon-1e7ea019b7885161fa7ffc9b381a61b25cbb5ab0b7b1a7b0e6374b16df610d50.scope: Deactivated successfully.
Jan 31 03:46:55 np0005603621 nova_compute[247399]: 2026-01-31 08:46:55.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:55.736 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:46:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:46:55.739 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:46:55 np0005603621 podman[362627]: 2026-01-31 08:46:55.848446907 +0000 UTC m=+0.040782055 container create 4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:46:55 np0005603621 podman[362627]: 2026-01-31 08:46:55.828924202 +0000 UTC m=+0.021259380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:46:55 np0005603621 systemd[1]: Started libpod-conmon-4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041.scope.
Jan 31 03:46:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:46:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e42333560b5368b08fae819f619c8acdac6fbd8ffdb105d41030f7b1d304e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e42333560b5368b08fae819f619c8acdac6fbd8ffdb105d41030f7b1d304e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e42333560b5368b08fae819f619c8acdac6fbd8ffdb105d41030f7b1d304e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309e42333560b5368b08fae819f619c8acdac6fbd8ffdb105d41030f7b1d304e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:46:56 np0005603621 podman[362627]: 2026-01-31 08:46:56.02052654 +0000 UTC m=+0.212861768 container init 4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:46:56 np0005603621 podman[362627]: 2026-01-31 08:46:56.029499122 +0000 UTC m=+0.221834270 container start 4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:46:56 np0005603621 podman[362627]: 2026-01-31 08:46:56.081378396 +0000 UTC m=+0.273713564 container attach 4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:46:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:56.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:46:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:56.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:46:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:46:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 292 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 131 KiB/s wr, 91 op/s
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]: {
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:        "osd_id": 0,
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:        "type": "bluestore"
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]:    }
Jan 31 03:46:56 np0005603621 pensive_mahavira[362643]: }
Jan 31 03:46:56 np0005603621 systemd[1]: libpod-4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041.scope: Deactivated successfully.
Jan 31 03:46:56 np0005603621 podman[362627]: 2026-01-31 08:46:56.880822785 +0000 UTC m=+1.073157953 container died 4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:46:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-309e42333560b5368b08fae819f619c8acdac6fbd8ffdb105d41030f7b1d304e-merged.mount: Deactivated successfully.
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.098 247403 DEBUG nova.compute.manager [req-d8d502c9-6f00-4562-bbaa-c441f2cf1483 req-79331b03-541e-4e03-800c-e6477de19159 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.099 247403 DEBUG oslo_concurrency.lockutils [req-d8d502c9-6f00-4562-bbaa-c441f2cf1483 req-79331b03-541e-4e03-800c-e6477de19159 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.100 247403 DEBUG oslo_concurrency.lockutils [req-d8d502c9-6f00-4562-bbaa-c441f2cf1483 req-79331b03-541e-4e03-800c-e6477de19159 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.100 247403 DEBUG oslo_concurrency.lockutils [req-d8d502c9-6f00-4562-bbaa-c441f2cf1483 req-79331b03-541e-4e03-800c-e6477de19159 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.100 247403 DEBUG nova.compute.manager [req-d8d502c9-6f00-4562-bbaa-c441f2cf1483 req-79331b03-541e-4e03-800c-e6477de19159 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] No waiting events found dispatching network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.101 247403 WARNING nova.compute.manager [req-d8d502c9-6f00-4562-bbaa-c441f2cf1483 req-79331b03-541e-4e03-800c-e6477de19159 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received unexpected event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:46:57 np0005603621 podman[362627]: 2026-01-31 08:46:57.114076843 +0000 UTC m=+1.306411991 container remove 4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mahavira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:46:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:46:57 np0005603621 systemd[1]: libpod-conmon-4b152695dddd86959e63e39c63d9dbaca51e92ef2942af8808c4d877f4600041.scope: Deactivated successfully.
Jan 31 03:46:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:46:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:46:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:46:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e9660592-9e1d-47d9-92a4-c6f54c592ae2 does not exist
Jan 31 03:46:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0cb5329c-21fe-48e1-ad6c-abca9a2d9f87 does not exist
Jan 31 03:46:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5445154f-bb3b-4fb7-9f36-37d2ec999307 does not exist
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.254 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:57 np0005603621 nova_compute[247399]: 2026-01-31 08:46:57.255 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:46:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:46:58.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:46:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:46:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:46:58.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:46:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:46:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:46:58 np0005603621 nova_compute[247399]: 2026-01-31 08:46:58.462 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:58 np0005603621 NetworkManager[49013]: <info>  [1769849218.4667] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Jan 31 03:46:58 np0005603621 NetworkManager[49013]: <info>  [1769849218.4678] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Jan 31 03:46:58 np0005603621 nova_compute[247399]: 2026-01-31 08:46:58.512 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:58 np0005603621 ovn_controller[149152]: 2026-01-31T08:46:58Z|00653|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:46:58 np0005603621 nova_compute[247399]: 2026-01-31 08:46:58.531 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:46:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 148 op/s
Jan 31 03:46:59 np0005603621 nova_compute[247399]: 2026-01-31 08:46:59.264 247403 DEBUG nova.compute.manager [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-changed-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:46:59 np0005603621 nova_compute[247399]: 2026-01-31 08:46:59.264 247403 DEBUG nova.compute.manager [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Refreshing instance network info cache due to event network-changed-df3fc295-9afc-49a0-87f8-9dda757af02a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:46:59 np0005603621 nova_compute[247399]: 2026-01-31 08:46:59.265 247403 DEBUG oslo_concurrency.lockutils [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:46:59 np0005603621 nova_compute[247399]: 2026-01-31 08:46:59.265 247403 DEBUG oslo_concurrency.lockutils [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:46:59 np0005603621 nova_compute[247399]: 2026-01-31 08:46:59.265 247403 DEBUG nova.network.neutron [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Refreshing network info cache for port df3fc295-9afc-49a0-87f8-9dda757af02a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:47:00 np0005603621 nova_compute[247399]: 2026-01-31 08:47:00.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:00.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:00.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:00 np0005603621 nova_compute[247399]: 2026-01-31 08:47:00.472 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 147 op/s
Jan 31 03:47:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:02.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:02.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:02 np0005603621 nova_compute[247399]: 2026-01-31 08:47:02.257 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:02 np0005603621 nova_compute[247399]: 2026-01-31 08:47:02.494 247403 DEBUG nova.network.neutron [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updated VIF entry in instance network info cache for port df3fc295-9afc-49a0-87f8-9dda757af02a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:47:02 np0005603621 nova_compute[247399]: 2026-01-31 08:47:02.495 247403 DEBUG nova.network.neutron [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 174 op/s
Jan 31 03:47:03 np0005603621 nova_compute[247399]: 2026-01-31 08:47:03.111 247403 DEBUG oslo_concurrency.lockutils [req-45c49221-942d-4031-bce1-aa9901b75a04 req-986181c5-202d-488d-b850-972c5e627cc9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:03 np0005603621 podman[362732]: 2026-01-31 08:47:03.497713241 +0000 UTC m=+0.049709707 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127)
Jan 31 03:47:03 np0005603621 podman[362733]: 2026-01-31 08:47:03.523583126 +0000 UTC m=+0.075258452 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 31 03:47:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:47:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:04.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:04.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 31 03:47:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:04.742 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:04 np0005603621 nova_compute[247399]: 2026-01-31 08:47:04.984 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:04 np0005603621 nova_compute[247399]: 2026-01-31 08:47:04.984 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.143 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.440 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.441 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.447 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.448 247403 INFO nova.compute.claims [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:05 np0005603621 nova_compute[247399]: 2026-01-31 08:47:05.925 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:06.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:06.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:47:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:47:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052812428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.379 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.384 247403 DEBUG nova.compute.provider_tree [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.433 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.552 247403 DEBUG nova.scheduler.client.report [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:47:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 251 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 464 KiB/s wr, 107 op/s
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.691 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.692 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.692 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.692 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.902 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:06 np0005603621 nova_compute[247399]: 2026-01-31 08:47:06.903 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:47:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:07Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:6b:61 10.100.0.5
Jan 31 03:47:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:07Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:6b:61 10.100.0.5
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.078 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.082 247403 DEBUG nova.network.neutron [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.284 247403 INFO nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.432 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.467 247403 DEBUG nova.policy [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1c6e7eff11b435a81429826a682b32f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bfe11bd9d694684b527666e2c378eed', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.864 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.865 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.866 247403 INFO nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Creating image(s)#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.889 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.913 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.936 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:07 np0005603621 nova_compute[247399]: 2026-01-31 08:47:07.941 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:08 np0005603621 nova_compute[247399]: 2026-01-31 08:47:08.011 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:08 np0005603621 nova_compute[247399]: 2026-01-31 08:47:08.012 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:08 np0005603621 nova_compute[247399]: 2026-01-31 08:47:08.013 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:08 np0005603621 nova_compute[247399]: 2026-01-31 08:47:08.013 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:08 np0005603621 nova_compute[247399]: 2026-01-31 08:47:08.042 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:08 np0005603621 nova_compute[247399]: 2026-01-31 08:47:08.047 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:08.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 31 03:47:09 np0005603621 nova_compute[247399]: 2026-01-31 08:47:09.839 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.793s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:09 np0005603621 nova_compute[247399]: 2026-01-31 08:47:09.907 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:47:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:10.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.219 247403 DEBUG nova.network.neutron [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Successfully created port: f8883354-defb-43f3-9474-fbd6f0f701c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.521 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.528 247403 DEBUG nova.objects.instance [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid 1e70d6f8-7f61-4626-b67d-55f18422dbdf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.577 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.577 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Ensure instance console log exists: /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.578 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.578 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.579 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.595 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.632 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.632 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.633 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.633 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:10 np0005603621 nova_compute[247399]: 2026-01-31 08:47:10.634 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:47:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 267 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 212 KiB/s rd, 2.0 MiB/s wr, 71 op/s
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.250 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.251 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.251 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.251 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.252 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:47:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026360447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.667 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.820 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.821 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.976 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.977 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3936MB free_disk=20.94394302368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.977 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:11 np0005603621 nova_compute[247399]: 2026-01-31 08:47:11.978 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.280 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3308d345-19b7-4fbb-bd81-631135649e7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.280 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 1e70d6f8-7f61-4626-b67d-55f18422dbdf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.280 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.280 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.375 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 444 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 8.3 MiB/s wr, 188 op/s
Jan 31 03:47:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:47:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985089455' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.791 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.797 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:47:12 np0005603621 nova_compute[247399]: 2026-01-31 08:47:12.861 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.007 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.008 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.430 247403 DEBUG nova.network.neutron [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Successfully updated port: f8883354-defb-43f3-9474-fbd6f0f701c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.837 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.838 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.838 247403 DEBUG nova.network.neutron [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.856 247403 DEBUG nova.compute.manager [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-changed-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.856 247403 DEBUG nova.compute.manager [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Refreshing instance network info cache due to event network-changed-f8883354-defb-43f3-9474-fbd6f0f701c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:47:13 np0005603621 nova_compute[247399]: 2026-01-31 08:47:13.857 247403 DEBUG oslo_concurrency.lockutils [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:47:14 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:14Z|00654|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:47:14 np0005603621 nova_compute[247399]: 2026-01-31 08:47:14.046 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 03:47:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:14.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 03:47:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:14.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:14 np0005603621 nova_compute[247399]: 2026-01-31 08:47:14.455 247403 DEBUG nova.network.neutron [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:47:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:47:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554105504' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:47:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:47:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3554105504' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:47:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 444 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 8.3 MiB/s wr, 162 op/s
Jan 31 03:47:15 np0005603621 nova_compute[247399]: 2026-01-31 08:47:15.007 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:15 np0005603621 nova_compute[247399]: 2026-01-31 08:47:15.102 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:15 np0005603621 nova_compute[247399]: 2026-01-31 08:47:15.103 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:15 np0005603621 nova_compute[247399]: 2026-01-31 08:47:15.517 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:16.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.537 247403 DEBUG nova.network.neutron [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updating instance_info_cache with network_info: [{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 397 KiB/s rd, 9.2 MiB/s wr, 172 op/s
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.765 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.766 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance network_info: |[{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.766 247403 DEBUG oslo_concurrency.lockutils [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.767 247403 DEBUG nova.network.neutron [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Refreshing network info cache for port f8883354-defb-43f3-9474-fbd6f0f701c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.769 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Start _get_guest_xml network_info=[{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.773 247403 WARNING nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.779 247403 DEBUG nova.virt.libvirt.host [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.779 247403 DEBUG nova.virt.libvirt.host [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.795 247403 DEBUG nova.virt.libvirt.host [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.796 247403 DEBUG nova.virt.libvirt.host [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.797 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.797 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.798 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.798 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.798 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.798 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.798 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.799 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.799 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.799 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.799 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.800 247403 DEBUG nova.virt.hardware [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:47:16 np0005603621 nova_compute[247399]: 2026-01-31 08:47:16.802 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:47:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698480824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.228 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.253 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.257 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.276 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.668 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.670 247403 DEBUG nova.virt.libvirt.vif [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1654865498',display_name='tempest-TestNetworkAdvancedServerOps-server-1654865498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1654865498',id=159,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAGxIXCHGQ6hk+A0fx44fLj3vio2mrPnPx+2cfjZDcxrsWkQIQWCWHLjAHPg/Ubk4hFn8GL58FwqBRal0DdirB8z1k4BmmxTtEag3yt0l5GVuP/rftjagTqAJwOTHRLnTg==',key_name='tempest-TestNetworkAdvancedServerOps-243082686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-k2ceb35y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:47:07Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e70d6f8-7f61-4626-b67d-55f18422dbdf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.670 247403 DEBUG nova.network.os_vif_util [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.672 247403 DEBUG nova.network.os_vif_util [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:47:17 np0005603621 nova_compute[247399]: 2026-01-31 08:47:17.673 247403 DEBUG nova.objects.instance [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e70d6f8-7f61-4626-b67d-55f18422dbdf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.017 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <uuid>1e70d6f8-7f61-4626-b67d-55f18422dbdf</uuid>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <name>instance-0000009f</name>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1654865498</nova:name>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:47:16</nova:creationTime>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <nova:port uuid="f8883354-defb-43f3-9474-fbd6f0f701c7">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <entry name="serial">1e70d6f8-7f61-4626-b67d-55f18422dbdf</entry>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <entry name="uuid">1e70d6f8-7f61-4626-b67d-55f18422dbdf</entry>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk.config">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:7f:1e:4a"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <target dev="tapf8883354-de"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/console.log" append="off"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:47:18 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:47:18 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:47:18 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:47:18 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.019 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Preparing to wait for external event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.020 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.020 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.021 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.021 247403 DEBUG nova.virt.libvirt.vif [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1654865498',display_name='tempest-TestNetworkAdvancedServerOps-server-1654865498',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1654865498',id=159,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAGxIXCHGQ6hk+A0fx44fLj3vio2mrPnPx+2cfjZDcxrsWkQIQWCWHLjAHPg/Ubk4hFn8GL58FwqBRal0DdirB8z1k4BmmxTtEag3yt0l5GVuP/rftjagTqAJwOTHRLnTg==',key_name='tempest-TestNetworkAdvancedServerOps-243082686',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-k2ceb35y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:47:07Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e70d6f8-7f61-4626-b67d-55f18422dbdf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.022 247403 DEBUG nova.network.os_vif_util [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.023 247403 DEBUG nova.network.os_vif_util [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.023 247403 DEBUG os_vif [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.024 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.024 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.024 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.028 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.028 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8883354-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.029 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf8883354-de, col_values=(('external_ids', {'iface-id': 'f8883354-defb-43f3-9474-fbd6f0f701c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:1e:4a', 'vm-uuid': '1e70d6f8-7f61-4626-b67d-55f18422dbdf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.030 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:18 np0005603621 NetworkManager[49013]: <info>  [1769849238.0312] manager: (tapf8883354-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.033 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.035 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.036 247403 INFO os_vif [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de')#033[00m
Jan 31 03:47:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:18.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.373 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.374 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.374 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:7f:1e:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.374 247403 INFO nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Using config drive#033[00m
Jan 31 03:47:18 np0005603621 nova_compute[247399]: 2026-01-31 08:47:18.400 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 392 KiB/s rd, 8.8 MiB/s wr, 167 op/s
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.465 247403 INFO nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Creating config drive at /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/disk.config#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.469 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5a590lmv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.492 247403 DEBUG nova.network.neutron [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updated VIF entry in instance network info cache for port f8883354-defb-43f3-9474-fbd6f0f701c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.493 247403 DEBUG nova.network.neutron [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updating instance_info_cache with network_info: [{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.537 247403 DEBUG oslo_concurrency.lockutils [req-dd8cfc99-36ca-4d20-a18c-34e711d0bc5c req-f60e53a3-4a8a-49c0-8b8f-95107aea356f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.597 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp5a590lmv" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.627 247403 DEBUG nova.storage.rbd_utils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.632 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/disk.config 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.859 247403 DEBUG oslo_concurrency.processutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/disk.config 1e70d6f8-7f61-4626-b67d-55f18422dbdf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.860 247403 INFO nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Deleting local config drive /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf/disk.config because it was imported into RBD.#033[00m
Jan 31 03:47:19 np0005603621 kernel: tapf8883354-de: entered promiscuous mode
Jan 31 03:47:19 np0005603621 NetworkManager[49013]: <info>  [1769849239.8998] manager: (tapf8883354-de): new Tun device (/org/freedesktop/NetworkManager/Devices/296)
Jan 31 03:47:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:19Z|00655|binding|INFO|Claiming lport f8883354-defb-43f3-9474-fbd6f0f701c7 for this chassis.
Jan 31 03:47:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:19Z|00656|binding|INFO|f8883354-defb-43f3-9474-fbd6f0f701c7: Claiming fa:16:3e:7f:1e:4a 10.100.0.5
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.901 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:19Z|00657|binding|INFO|Setting lport f8883354-defb-43f3-9474-fbd6f0f701c7 ovn-installed in OVS
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:19 np0005603621 nova_compute[247399]: 2026-01-31 08:47:19.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:19 np0005603621 systemd-udevd[363202]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:47:19 np0005603621 systemd-machined[212769]: New machine qemu-79-instance-0000009f.
Jan 31 03:47:19 np0005603621 NetworkManager[49013]: <info>  [1769849239.9354] device (tapf8883354-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:47:19 np0005603621 NetworkManager[49013]: <info>  [1769849239.9364] device (tapf8883354-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:47:19 np0005603621 systemd[1]: Started Virtual Machine qemu-79-instance-0000009f.
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.952 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:1e:4a 10.100.0.5'], port_security=['fa:16:3e:7f:1e:4a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1e70d6f8-7f61-4626-b67d-55f18422dbdf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6f3a6488-5dce-49ae-859f-d782e87a2be7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3227746e-bf7a-477a-acc4-5f39020b4728, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f8883354-defb-43f3-9474-fbd6f0f701c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:47:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:19Z|00658|binding|INFO|Setting lport f8883354-defb-43f3-9474-fbd6f0f701c7 up in Southbound
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.953 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f8883354-defb-43f3-9474-fbd6f0f701c7 in datapath 48d2e38d-e6ae-44ba-84b3-12f97789abcf bound to our chassis#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.955 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48d2e38d-e6ae-44ba-84b3-12f97789abcf#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.964 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6489f531-d46e-40db-b0f2-b0f5d7567ced]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.965 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48d2e38d-e1 in ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.967 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48d2e38d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.967 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6432bb90-2ad5-4c43-9ecf-18e68c0406b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.968 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80fce75f-bc08-4c11-afe9-3367d3ef844e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.977 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fecef515-2964-4873-9701-512a91a01bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:19.996 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[060721b9-bc12-4b46-a1ec-cf64b2f98692]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.026 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5509bc90-7be0-4c36-bdb2-f99dae1223b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 NetworkManager[49013]: <info>  [1769849240.0340] manager: (tap48d2e38d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/297)
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.033 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c801aead-0060-4418-8bff-9f3c174d806c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.065 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[75666326-39b1-4cf8-8fe9-f646d4d942ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.068 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a1dda5-7616-467f-b360-583a04e8353f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 NetworkManager[49013]: <info>  [1769849240.0861] device (tap48d2e38d-e0): carrier: link connected
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.092 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2359a8da-ce41-4121-9208-46da3100aa01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.109 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3db488ba-1091-473a-b1d6-8c949b513fde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d2e38d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:f5:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 854560, 'reachable_time': 37100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363235, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.126 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[76c30267-ac5c-4c34-a6d5-57e14c700720]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:f5d8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 854560, 'tstamp': 854560}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363236, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.144 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[de0bb1ec-95c6-4ae1-98a9-9c7e52d43f7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d2e38d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:f5:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 198], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 854560, 'reachable_time': 37100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363238, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:20.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.167 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f6b504-5378-41ef-8912-71bb0cb008a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.205 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8c273e4c-6e32-4624-af06-f36dfc4c3ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.207 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d2e38d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.207 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.207 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48d2e38d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.209 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:20 np0005603621 kernel: tap48d2e38d-e0: entered promiscuous mode
Jan 31 03:47:20 np0005603621 NetworkManager[49013]: <info>  [1769849240.2100] manager: (tap48d2e38d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.212 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48d2e38d-e0, col_values=(('external_ids', {'iface-id': '774935b7-f9f5-459a-b0c1-706b34904ed7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:20 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:20Z|00659|binding|INFO|Releasing lport 774935b7-f9f5-459a-b0c1-706b34904ed7 from this chassis (sb_readonly=0)
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.219 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48d2e38d-e6ae-44ba-84b3-12f97789abcf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48d2e38d-e6ae-44ba-84b3-12f97789abcf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.220 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6610a73e-7ad2-433b-8384-dac08c49d2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.221 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-48d2e38d-e6ae-44ba-84b3-12f97789abcf
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/48d2e38d-e6ae-44ba-84b3-12f97789abcf.pid.haproxy
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 48d2e38d-e6ae-44ba-84b3-12f97789abcf
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:47:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:20.222 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'env', 'PROCESS_TAG=haproxy-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48d2e38d-e6ae-44ba-84b3-12f97789abcf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.310 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849240.309328, 1e70d6f8-7f61-4626-b67d-55f18422dbdf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.310 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] VM Started (Lifecycle Event)#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.519 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.558 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.562 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849240.309698, 1e70d6f8-7f61-4626-b67d-55f18422dbdf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.562 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:47:20 np0005603621 podman[363312]: 2026-01-31 08:47:20.566092131 +0000 UTC m=+0.048695355 container create 7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.603 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:20 np0005603621 systemd[1]: Started libpod-conmon-7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216.scope.
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.611 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:47:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:47:20 np0005603621 podman[363312]: 2026-01-31 08:47:20.537124249 +0000 UTC m=+0.019727493 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:47:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9319dedd0677cedc193498063f72d9f9da6f74880cd73e27f44a75405b6dc66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 464 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 204 KiB/s rd, 7.2 MiB/s wr, 129 op/s
Jan 31 03:47:20 np0005603621 podman[363312]: 2026-01-31 08:47:20.649492318 +0000 UTC m=+0.132095572 container init 7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:47:20 np0005603621 podman[363312]: 2026-01-31 08:47:20.653927819 +0000 UTC m=+0.136531043 container start 7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:47:20 np0005603621 nova_compute[247399]: 2026-01-31 08:47:20.654 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:47:20 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [NOTICE]   (363331) : New worker (363333) forked
Jan 31 03:47:20 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [NOTICE]   (363331) : Loading success.
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.181 247403 DEBUG nova.compute.manager [req-ab4573b5-ec7c-4acc-8ee2-14e59c7fde08 req-60998e16-595c-42c2-ae1f-cb956a937826 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.181 247403 DEBUG oslo_concurrency.lockutils [req-ab4573b5-ec7c-4acc-8ee2-14e59c7fde08 req-60998e16-595c-42c2-ae1f-cb956a937826 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.182 247403 DEBUG oslo_concurrency.lockutils [req-ab4573b5-ec7c-4acc-8ee2-14e59c7fde08 req-60998e16-595c-42c2-ae1f-cb956a937826 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.182 247403 DEBUG oslo_concurrency.lockutils [req-ab4573b5-ec7c-4acc-8ee2-14e59c7fde08 req-60998e16-595c-42c2-ae1f-cb956a937826 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.182 247403 DEBUG nova.compute.manager [req-ab4573b5-ec7c-4acc-8ee2-14e59c7fde08 req-60998e16-595c-42c2-ae1f-cb956a937826 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Processing event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.183 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.186 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849241.1861374, 1e70d6f8-7f61-4626-b67d-55f18422dbdf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.186 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.188 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.191 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance spawned successfully.#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.192 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.217 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.223 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.227 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.227 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.228 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.228 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.229 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.229 247403 DEBUG nova.virt.libvirt.driver [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.291 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.335 247403 INFO nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Took 13.47 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.335 247403 DEBUG nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.479 247403 INFO nova.compute.manager [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Took 16.07 seconds to build instance.#033[00m
Jan 31 03:47:21 np0005603621 nova_compute[247399]: 2026-01-31 08:47:21.535 247403 DEBUG oslo_concurrency.lockutils [None req-6e0f9309-d871-47b6-b6aa-e581dfd9f478 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:22.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:22.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 7.3 MiB/s wr, 155 op/s
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.033 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.576 247403 DEBUG nova.compute.manager [req-3acf6f68-b470-4b55-a6da-635d994a7853 req-0e0a814f-174a-4802-9ff5-4168c8822c6f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.576 247403 DEBUG oslo_concurrency.lockutils [req-3acf6f68-b470-4b55-a6da-635d994a7853 req-0e0a814f-174a-4802-9ff5-4168c8822c6f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.576 247403 DEBUG oslo_concurrency.lockutils [req-3acf6f68-b470-4b55-a6da-635d994a7853 req-0e0a814f-174a-4802-9ff5-4168c8822c6f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.576 247403 DEBUG oslo_concurrency.lockutils [req-3acf6f68-b470-4b55-a6da-635d994a7853 req-0e0a814f-174a-4802-9ff5-4168c8822c6f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.576 247403 DEBUG nova.compute.manager [req-3acf6f68-b470-4b55-a6da-635d994a7853 req-0e0a814f-174a-4802-9ff5-4168c8822c6f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:47:23 np0005603621 nova_compute[247399]: 2026-01-31 08:47:23.577 247403 WARNING nova.compute.manager [req-3acf6f68-b470-4b55-a6da-635d994a7853 req-0e0a814f-174a-4802-9ff5-4168c8822c6f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received unexpected event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:47:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:24.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:24Z|00660|binding|INFO|Releasing lport 774935b7-f9f5-459a-b0c1-706b34904ed7 from this chassis (sb_readonly=0)
Jan 31 03:47:24 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:24Z|00661|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:47:24 np0005603621 nova_compute[247399]: 2026-01-31 08:47:24.555 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 1021 KiB/s wr, 38 op/s
Jan 31 03:47:25 np0005603621 nova_compute[247399]: 2026-01-31 08:47:25.524 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:47:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:47:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:26.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1021 KiB/s wr, 133 op/s
Jan 31 03:47:28 np0005603621 nova_compute[247399]: 2026-01-31 08:47:28.037 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:28 np0005603621 nova_compute[247399]: 2026-01-31 08:47:28.077 247403 DEBUG nova.compute.manager [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-changed-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:28 np0005603621 nova_compute[247399]: 2026-01-31 08:47:28.077 247403 DEBUG nova.compute.manager [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Refreshing instance network info cache due to event network-changed-f8883354-defb-43f3-9474-fbd6f0f701c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:47:28 np0005603621 nova_compute[247399]: 2026-01-31 08:47:28.077 247403 DEBUG oslo_concurrency.lockutils [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:47:28 np0005603621 nova_compute[247399]: 2026-01-31 08:47:28.077 247403 DEBUG oslo_concurrency.lockutils [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:47:28 np0005603621 nova_compute[247399]: 2026-01-31 08:47:28.078 247403 DEBUG nova.network.neutron [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Refreshing network info cache for port f8883354-defb-43f3-9474-fbd6f0f701c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:47:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 71 KiB/s wr, 248 op/s
Jan 31 03:47:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:30.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:30 np0005603621 nova_compute[247399]: 2026-01-31 08:47:30.523 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:30.528 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:30.529 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:30.530 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 53 KiB/s wr, 246 op/s
Jan 31 03:47:30 np0005603621 nova_compute[247399]: 2026-01-31 08:47:30.832 247403 DEBUG nova.compute.manager [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-changed-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:30 np0005603621 nova_compute[247399]: 2026-01-31 08:47:30.832 247403 DEBUG nova.compute.manager [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Refreshing instance network info cache due to event network-changed-df3fc295-9afc-49a0-87f8-9dda757af02a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:47:30 np0005603621 nova_compute[247399]: 2026-01-31 08:47:30.833 247403 DEBUG oslo_concurrency.lockutils [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:47:30 np0005603621 nova_compute[247399]: 2026-01-31 08:47:30.833 247403 DEBUG oslo_concurrency.lockutils [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:47:30 np0005603621 nova_compute[247399]: 2026-01-31 08:47:30.833 247403 DEBUG nova.network.neutron [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Refreshing network info cache for port df3fc295-9afc-49a0-87f8-9dda757af02a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:47:31 np0005603621 nova_compute[247399]: 2026-01-31 08:47:31.275 247403 DEBUG nova.network.neutron [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updated VIF entry in instance network info cache for port f8883354-defb-43f3-9474-fbd6f0f701c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:47:31 np0005603621 nova_compute[247399]: 2026-01-31 08:47:31.276 247403 DEBUG nova.network.neutron [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updating instance_info_cache with network_info: [{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:31 np0005603621 nova_compute[247399]: 2026-01-31 08:47:31.335 247403 DEBUG oslo_concurrency.lockutils [req-31c59d47-6cfb-4683-b199-f56ec6ead603 req-6233c6d0-561f-433c-852a-11ff52a61f35 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:32.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 53 KiB/s wr, 296 op/s
Jan 31 03:47:33 np0005603621 nova_compute[247399]: 2026-01-31 08:47:33.040 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:33 np0005603621 nova_compute[247399]: 2026-01-31 08:47:33.888 247403 DEBUG nova.network.neutron [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updated VIF entry in instance network info cache for port df3fc295-9afc-49a0-87f8-9dda757af02a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:47:33 np0005603621 nova_compute[247399]: 2026-01-31 08:47:33.888 247403 DEBUG nova.network.neutron [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:33 np0005603621 nova_compute[247399]: 2026-01-31 08:47:33.943 247403 DEBUG oslo_concurrency.lockutils [req-86c36fc9-990e-46c7-ade8-394cf89f173a req-ead52010-d818-4cf8-96d2-acd688356c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:34 np0005603621 podman[363349]: 2026-01-31 08:47:34.489096049 +0000 UTC m=+0.046115214 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 03:47:34 np0005603621 podman[363350]: 2026-01-31 08:47:34.509508023 +0000 UTC m=+0.063907595 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:47:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 26 KiB/s wr, 270 op/s
Jan 31 03:47:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:34Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:1e:4a 10.100.0.5
Jan 31 03:47:34 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:34Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:1e:4a 10.100.0.5
Jan 31 03:47:35 np0005603621 nova_compute[247399]: 2026-01-31 08:47:35.526 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:36.147 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:47:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:36.148 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:47:36 np0005603621 nova_compute[247399]: 2026-01-31 08:47:36.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:36.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 488 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.0 MiB/s wr, 310 op/s
Jan 31 03:47:37 np0005603621 nova_compute[247399]: 2026-01-31 08:47:37.406 247403 DEBUG oslo_concurrency.lockutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:37 np0005603621 nova_compute[247399]: 2026-01-31 08:47:37.407 247403 DEBUG oslo_concurrency.lockutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:37 np0005603621 nova_compute[247399]: 2026-01-31 08:47:37.478 247403 DEBUG nova.objects.instance [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'flavor' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:47:37 np0005603621 nova_compute[247399]: 2026-01-31 08:47:37.863 247403 DEBUG oslo_concurrency.lockutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.043 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:38.151 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:38.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:38.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.242 247403 DEBUG oslo_concurrency.lockutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.243 247403 DEBUG oslo_concurrency.lockutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.243 247403 INFO nova.compute.manager [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Attaching volume dad97247-8d79-4c56-b9b0-e61729262e21 to /dev/vdb#033[00m
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.567 247403 DEBUG os_brick.utils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.569 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.577 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.577 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[113e2290-d36d-4425-b228-5f17127a11f0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.578 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.585 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.585 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4122c4-80ed-4ada-96b7-e9fb607e4f67]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.587 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.592 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.592 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[17fc6052-b6c7-4cbd-9e8e-feb9c946e05f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.594 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[593d3f29-de97-4dfe-90c4-e652c00c70a7]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.594 247403 DEBUG oslo_concurrency.processutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.618 247403 DEBUG oslo_concurrency.processutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.620 247403 DEBUG os_brick.initiator.connectors.lightos [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.620 247403 DEBUG os_brick.initiator.connectors.lightos [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.620 247403 DEBUG os_brick.initiator.connectors.lightos [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.621 247403 DEBUG os_brick.utils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] <== get_connector_properties: return (52ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:47:38 np0005603621 nova_compute[247399]: 2026-01-31 08:47:38.621 247403 DEBUG nova.virt.block_device [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating existing volume attachment record: 30edc071-5fcd-4a34-a0e2-82228dd04330 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:47:38
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', '.mgr', 'vms', 'images', 'backups', 'volumes', 'default.rgw.meta']
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:47:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 528 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 283 op/s
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:47:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.098 247403 DEBUG nova.objects.instance [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'flavor' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.124 247403 DEBUG nova.virt.libvirt.driver [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Attempting to attach volume dad97247-8d79-4c56-b9b0-e61729262e21 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.126 247403 DEBUG nova.virt.libvirt.guest [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-dad97247-8d79-4c56-b9b0-e61729262e21">
Jan 31 03:47:40 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:47:40 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  <serial>dad97247-8d79-4c56-b9b0-e61729262e21</serial>
Jan 31 03:47:40 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 03:47:40 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:47:40 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:47:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:40.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:40.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.235 247403 DEBUG nova.virt.libvirt.driver [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.235 247403 DEBUG nova.virt.libvirt.driver [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.236 247403 DEBUG nova.virt.libvirt.driver [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.236 247403 DEBUG nova.virt.libvirt.driver [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No VIF found with MAC fa:16:3e:00:6b:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.532 247403 DEBUG oslo_concurrency.lockutils [None req-41c81bad-ca33-4820-8f47-d169e2da13bd a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 528 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.2 MiB/s wr, 157 op/s
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.790 247403 INFO nova.compute.manager [None req-ddcc145f-17a4-47f7-8c50-a2954a2bf5d6 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Get console output#033[00m
Jan 31 03:47:40 np0005603621 nova_compute[247399]: 2026-01-31 08:47:40.795 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:47:41 np0005603621 nova_compute[247399]: 2026-01-31 08:47:41.472 247403 DEBUG oslo_concurrency.lockutils [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:41 np0005603621 nova_compute[247399]: 2026-01-31 08:47:41.472 247403 DEBUG oslo_concurrency.lockutils [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:41 np0005603621 nova_compute[247399]: 2026-01-31 08:47:41.473 247403 INFO nova.compute.manager [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Rebooting instance#033[00m
Jan 31 03:47:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:41 np0005603621 nova_compute[247399]: 2026-01-31 08:47:41.811 247403 DEBUG oslo_concurrency.lockutils [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:47:41 np0005603621 nova_compute[247399]: 2026-01-31 08:47:41.812 247403 DEBUG oslo_concurrency.lockutils [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:47:41 np0005603621 nova_compute[247399]: 2026-01-31 08:47:41.812 247403 DEBUG nova.network.neutron [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:47:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:42.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:42.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 596 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.5 MiB/s wr, 301 op/s
Jan 31 03:47:43 np0005603621 nova_compute[247399]: 2026-01-31 08:47:43.089 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:47:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:44.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:47:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:44.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 596 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.5 MiB/s wr, 251 op/s
Jan 31 03:47:44 np0005603621 nova_compute[247399]: 2026-01-31 08:47:44.819 247403 DEBUG nova.network.neutron [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updating instance_info_cache with network_info: [{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:47:44 np0005603621 nova_compute[247399]: 2026-01-31 08:47:44.962 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:44 np0005603621 nova_compute[247399]: 2026-01-31 08:47:44.966 247403 DEBUG oslo_concurrency.lockutils [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:47:44 np0005603621 nova_compute[247399]: 2026-01-31 08:47:44.968 247403 DEBUG nova.compute.manager [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:45 np0005603621 nova_compute[247399]: 2026-01-31 08:47:45.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:46.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 596 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.5 MiB/s wr, 253 op/s
Jan 31 03:47:47 np0005603621 nova_compute[247399]: 2026-01-31 08:47:47.763 247403 DEBUG oslo_concurrency.lockutils [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:47 np0005603621 nova_compute[247399]: 2026-01-31 08:47:47.764 247403 DEBUG oslo_concurrency.lockutils [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:47 np0005603621 nova_compute[247399]: 2026-01-31 08:47:47.799 247403 INFO nova.compute.manager [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Detaching volume dad97247-8d79-4c56-b9b0-e61729262e21#033[00m
Jan 31 03:47:47 np0005603621 nova_compute[247399]: 2026-01-31 08:47:47.992 247403 INFO nova.virt.block_device [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Attempting to driver detach volume dad97247-8d79-4c56-b9b0-e61729262e21 from mountpoint /dev/vdb#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.000 247403 DEBUG nova.virt.libvirt.driver [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Attempting to detach device vdb from instance 3308d345-19b7-4fbb-bd81-631135649e7d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.001 247403 DEBUG nova.virt.libvirt.guest [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-dad97247-8d79-4c56-b9b0-e61729262e21">
Jan 31 03:47:48 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <serial>dad97247-8d79-4c56-b9b0-e61729262e21</serial>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:47:48 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.145 247403 INFO nova.virt.libvirt.driver [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully detached device vdb from instance 3308d345-19b7-4fbb-bd81-631135649e7d from the persistent domain config.#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.146 247403 DEBUG nova.virt.libvirt.driver [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3308d345-19b7-4fbb-bd81-631135649e7d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.147 247403 DEBUG nova.virt.libvirt.guest [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-dad97247-8d79-4c56-b9b0-e61729262e21">
Jan 31 03:47:48 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <serial>dad97247-8d79-4c56-b9b0-e61729262e21</serial>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:47:48 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:47:48 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:47:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:48.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:48.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.405 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849268.404888, 3308d345-19b7-4fbb-bd81-631135649e7d => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.407 247403 DEBUG nova.virt.libvirt.driver [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3308d345-19b7-4fbb-bd81-631135649e7d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.409 247403 INFO nova.virt.libvirt.driver [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully detached device vdb from instance 3308d345-19b7-4fbb-bd81-631135649e7d from the live domain config.#033[00m
Jan 31 03:47:48 np0005603621 kernel: tapf8883354-de (unregistering): left promiscuous mode
Jan 31 03:47:48 np0005603621 NetworkManager[49013]: <info>  [1769849268.6225] device (tapf8883354-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:47:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 596 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.6 MiB/s wr, 220 op/s
Jan 31 03:47:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:48Z|00662|binding|INFO|Releasing lport f8883354-defb-43f3-9474-fbd6f0f701c7 from this chassis (sb_readonly=0)
Jan 31 03:47:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:48Z|00663|binding|INFO|Setting lport f8883354-defb-43f3-9474-fbd6f0f701c7 down in Southbound
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.664 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:48 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:48Z|00664|binding|INFO|Removing iface tapf8883354-de ovn-installed in OVS
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.667 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:48 np0005603621 nova_compute[247399]: 2026-01-31 08:47:48.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:48 np0005603621 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 31 03:47:48 np0005603621 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d0000009f.scope: Consumed 14.036s CPU time.
Jan 31 03:47:48 np0005603621 systemd-machined[212769]: Machine qemu-79-instance-0000009f terminated.
Jan 31 03:47:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:48.718 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:1e:4a 10.100.0.5'], port_security=['fa:16:3e:7f:1e:4a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1e70d6f8-7f61-4626-b67d-55f18422dbdf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3a6488-5dce-49ae-859f-d782e87a2be7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3227746e-bf7a-477a-acc4-5f39020b4728, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f8883354-defb-43f3-9474-fbd6f0f701c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:47:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:48.719 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f8883354-defb-43f3-9474-fbd6f0f701c7 in datapath 48d2e38d-e6ae-44ba-84b3-12f97789abcf unbound from our chassis#033[00m
Jan 31 03:47:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:48.720 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48d2e38d-e6ae-44ba-84b3-12f97789abcf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:47:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:48.721 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[700e0d53-794a-4126-9e8e-bd626b1d5a1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:48 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:48.722 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf namespace which is not needed anymore#033[00m
Jan 31 03:47:48 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [NOTICE]   (363331) : haproxy version is 2.8.14-c23fe91
Jan 31 03:47:48 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [NOTICE]   (363331) : path to executable is /usr/sbin/haproxy
Jan 31 03:47:48 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [WARNING]  (363331) : Exiting Master process...
Jan 31 03:47:48 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [WARNING]  (363331) : Exiting Master process...
Jan 31 03:47:48 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [ALERT]    (363331) : Current worker (363333) exited with code 143 (Terminated)
Jan 31 03:47:48 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363327]: [WARNING]  (363331) : All workers exited. Exiting... (0)
Jan 31 03:47:48 np0005603621 systemd[1]: libpod-7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216.scope: Deactivated successfully.
Jan 31 03:47:48 np0005603621 podman[363501]: 2026-01-31 08:47:48.981360963 +0000 UTC m=+0.172579009 container died 7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:47:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216-userdata-shm.mount: Deactivated successfully.
Jan 31 03:47:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d9319dedd0677cedc193498063f72d9f9da6f74880cd73e27f44a75405b6dc66-merged.mount: Deactivated successfully.
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01085687406942118 of space, bias 1.0, pg target 3.257062220826354 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432428345430858 of space, bias 1.0, pg target 1.2843121859296482 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032279452819653827 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:47:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 31 03:47:49 np0005603621 podman[363501]: 2026-01-31 08:47:49.560545101 +0000 UTC m=+0.751763147 container cleanup 7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:47:49 np0005603621 systemd[1]: libpod-conmon-7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216.scope: Deactivated successfully.
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.565 247403 INFO nova.virt.libvirt.driver [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance shutdown successfully.#033[00m
Jan 31 03:47:49 np0005603621 kernel: tapf8883354-de: entered promiscuous mode
Jan 31 03:47:49 np0005603621 systemd-udevd[363480]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:47:49 np0005603621 NetworkManager[49013]: <info>  [1769849269.6101] manager: (tapf8883354-de): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Jan 31 03:47:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:49Z|00665|binding|INFO|Claiming lport f8883354-defb-43f3-9474-fbd6f0f701c7 for this chassis.
Jan 31 03:47:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:49Z|00666|binding|INFO|f8883354-defb-43f3-9474-fbd6f0f701c7: Claiming fa:16:3e:7f:1e:4a 10.100.0.5
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:49 np0005603621 NetworkManager[49013]: <info>  [1769849269.6179] device (tapf8883354-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:47:49 np0005603621 NetworkManager[49013]: <info>  [1769849269.6184] device (tapf8883354-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:47:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:49Z|00667|binding|INFO|Setting lport f8883354-defb-43f3-9474-fbd6f0f701c7 ovn-installed in OVS
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.621 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:49 np0005603621 systemd-machined[212769]: New machine qemu-80-instance-0000009f.
Jan 31 03:47:49 np0005603621 systemd[1]: Started Virtual Machine qemu-80-instance-0000009f.
Jan 31 03:47:49 np0005603621 podman[363544]: 2026-01-31 08:47:49.706428666 +0000 UTC m=+0.127316590 container remove 7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.712 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[82e56f01-b39f-4e3b-98b5-ac50974706f2]: (4, ('Sat Jan 31 08:47:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf (7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216)\n7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216\nSat Jan 31 08:47:49 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf (7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216)\n7bd47095967458c00f19c7c5263e511761b2e59b6381f2d413d0046dce2e7216\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.715 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3fbd6a-ceef-4747-8cf5-eca98ca45c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.715 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d2e38d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.768 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:49 np0005603621 kernel: tap48d2e38d-e0: left promiscuous mode
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.775 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.778 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2730a62d-3741-405f-8e67-6ed65e2e53b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.797 247403 DEBUG nova.compute.manager [req-d2e35023-28d0-4ef3-a942-926d6e706b1e req-49bc9cd6-d6e8-4731-9a34-511a0602b6b7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-unplugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.796 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[226061c9-7474-408c-8dd5-de6f29ffa749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.797 247403 DEBUG oslo_concurrency.lockutils [req-d2e35023-28d0-4ef3-a942-926d6e706b1e req-49bc9cd6-d6e8-4731-9a34-511a0602b6b7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.797 247403 DEBUG oslo_concurrency.lockutils [req-d2e35023-28d0-4ef3-a942-926d6e706b1e req-49bc9cd6-d6e8-4731-9a34-511a0602b6b7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.798 247403 DEBUG oslo_concurrency.lockutils [req-d2e35023-28d0-4ef3-a942-926d6e706b1e req-49bc9cd6-d6e8-4731-9a34-511a0602b6b7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.798 247403 DEBUG nova.compute.manager [req-d2e35023-28d0-4ef3-a942-926d6e706b1e req-49bc9cd6-d6e8-4731-9a34-511a0602b6b7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-unplugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:47:49 np0005603621 nova_compute[247399]: 2026-01-31 08:47:49.798 247403 WARNING nova.compute.manager [req-d2e35023-28d0-4ef3-a942-926d6e706b1e req-49bc9cd6-d6e8-4731-9a34-511a0602b6b7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received unexpected event network-vif-unplugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with vm_state active and task_state reboot_started.#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.798 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cff21c-6e52-49cd-b3dc-5f227bf540cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.812 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b40f63-b37a-4406-b84c-b26e2102fbf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 854553, 'reachable_time': 31313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363580, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.818 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.818 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d990a798-b767-4921-8805-a508403fcab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 systemd[1]: run-netns-ovnmeta\x2d48d2e38d\x2de6ae\x2d44ba\x2d84b3\x2d12f97789abcf.mount: Deactivated successfully.
Jan 31 03:47:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:49Z|00668|binding|INFO|Setting lport f8883354-defb-43f3-9474-fbd6f0f701c7 up in Southbound
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.927 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:1e:4a 10.100.0.5'], port_security=['fa:16:3e:7f:1e:4a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1e70d6f8-7f61-4626-b67d-55f18422dbdf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6f3a6488-5dce-49ae-859f-d782e87a2be7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3227746e-bf7a-477a-acc4-5f39020b4728, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f8883354-defb-43f3-9474-fbd6f0f701c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.928 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f8883354-defb-43f3-9474-fbd6f0f701c7 in datapath 48d2e38d-e6ae-44ba-84b3-12f97789abcf bound to our chassis#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.930 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48d2e38d-e6ae-44ba-84b3-12f97789abcf#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.943 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[84d04a71-c331-4736-8e21-093ef82370b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.945 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48d2e38d-e1 in ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.947 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48d2e38d-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.947 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37a786e3-59e9-459f-9ca3-2efe7fb0af3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.949 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef76d33-475a-4359-ae71-893489b6ca06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.957 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac0c4c0-218c-46e5-ab2d-addf31aa66e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:49.979 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6743ffe4-7079-4d27-b49d-9e18120cc75b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.000 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[41049eb9-74a1-4502-a726-59f1397487ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 NetworkManager[49013]: <info>  [1769849270.0087] manager: (tap48d2e38d-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.007 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[88b6f59c-5429-41ed-a074-9408fa7854b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.035 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4279aa-a779-4025-8105-c96d9ba00471]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.037 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d112356c-d375-4b82-97a9-5bd6c0ffc76f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 NetworkManager[49013]: <info>  [1769849270.0521] device (tap48d2e38d-e0): carrier: link connected
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.055 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4a6433-2b5e-4f76-8021-5301579e6601]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.070 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9a44d74a-174d-444b-9d01-4ae0e20bbcd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d2e38d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:f5:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857556, 'reachable_time': 29336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363645, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.078 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d6f1c5f-718a-4443-8e55-fd1c8789aa13]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:f5d8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 857556, 'tstamp': 857556}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363647, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.087 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d16ac815-5f47-4565-bc65-0a181667d7e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48d2e38d-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:f5:d8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857556, 'reachable_time': 29336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363648, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.107 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0b00ac-d7cb-4305-ba35-23b058af41a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.128 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 1e70d6f8-7f61-4626-b67d-55f18422dbdf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.129 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849270.1281576, 1e70d6f8-7f61-4626-b67d-55f18422dbdf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.129 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.134 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance running successfully.#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.134 247403 INFO nova.virt.libvirt.driver [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance soft rebooted successfully.#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.135 247403 DEBUG nova.compute.manager [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.147 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a426ce5c-38d8-45d7-a29c-7de495f6823e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.148 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d2e38d-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.149 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.149 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48d2e38d-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.151 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:50 np0005603621 NetworkManager[49013]: <info>  [1769849270.1520] manager: (tap48d2e38d-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Jan 31 03:47:50 np0005603621 kernel: tap48d2e38d-e0: entered promiscuous mode
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.154 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.156 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48d2e38d-e0, col_values=(('external_ids', {'iface-id': '774935b7-f9f5-459a-b0c1-706b34904ed7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:47:50Z|00669|binding|INFO|Releasing lport 774935b7-f9f5-459a-b0c1-706b34904ed7 from this chassis (sb_readonly=1)
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.158 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48d2e38d-e6ae-44ba-84b3-12f97789abcf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48d2e38d-e6ae-44ba-84b3-12f97789abcf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.158 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4c31fe-73e6-49a3-9c2b-10f13c785715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.159 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-48d2e38d-e6ae-44ba-84b3-12f97789abcf
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/48d2e38d-e6ae-44ba-84b3-12f97789abcf.pid.haproxy
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 48d2e38d-e6ae-44ba-84b3-12f97789abcf
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:47:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:47:50.160 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'env', 'PROCESS_TAG=haproxy-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48d2e38d-e6ae-44ba-84b3-12f97789abcf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.162 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:50.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:47:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.328 247403 DEBUG nova.objects.instance [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'flavor' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:47:50 np0005603621 podman[363682]: 2026-01-31 08:47:50.508609285 +0000 UTC m=+0.043082702 container create 2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.530 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:50 np0005603621 systemd[1]: Started libpod-conmon-2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9.scope.
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.560 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.563 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:47:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:47:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b353d57e633d36d9abcf839263a5e26b13bbcf7d2f0c7fbb7c37a192b35f8d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:50 np0005603621 podman[363682]: 2026-01-31 08:47:50.486220008 +0000 UTC m=+0.020693445 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:47:50 np0005603621 podman[363682]: 2026-01-31 08:47:50.587218517 +0000 UTC m=+0.121691954 container init 2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 03:47:50 np0005603621 podman[363682]: 2026-01-31 08:47:50.592515164 +0000 UTC m=+0.126988581 container start 2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 03:47:50 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [NOTICE]   (363702) : New worker (363704) forked
Jan 31 03:47:50 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [NOTICE]   (363702) : Loading success.
Jan 31 03:47:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 596 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 766 KiB/s rd, 3.4 MiB/s wr, 152 op/s
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.946 247403 DEBUG oslo_concurrency.lockutils [None req-1256a8c4-a012-4898-a297-d9af09f3beae f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 9.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.964 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849270.1288786, 1e70d6f8-7f61-4626-b67d-55f18422dbdf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:47:50 np0005603621 nova_compute[247399]: 2026-01-31 08:47:50.964 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] VM Started (Lifecycle Event)#033[00m
Jan 31 03:47:51 np0005603621 nova_compute[247399]: 2026-01-31 08:47:51.129 247403 DEBUG oslo_concurrency.lockutils [None req-aae7720a-43a5-4733-9612-30fb77a1164d a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:51 np0005603621 nova_compute[247399]: 2026-01-31 08:47:51.154 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:47:51 np0005603621 nova_compute[247399]: 2026-01-31 08:47:51.159 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:47:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.069 247403 DEBUG nova.compute.manager [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.070 247403 DEBUG oslo_concurrency.lockutils [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.070 247403 DEBUG oslo_concurrency.lockutils [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.071 247403 DEBUG oslo_concurrency.lockutils [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.071 247403 DEBUG nova.compute.manager [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.071 247403 WARNING nova.compute.manager [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received unexpected event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.071 247403 DEBUG nova.compute.manager [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.071 247403 DEBUG oslo_concurrency.lockutils [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.072 247403 DEBUG oslo_concurrency.lockutils [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.072 247403 DEBUG oslo_concurrency.lockutils [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.072 247403 DEBUG nova.compute.manager [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:47:52 np0005603621 nova_compute[247399]: 2026-01-31 08:47:52.072 247403 WARNING nova.compute.manager [req-f9af7ee6-1a27-4d98-8915-85f3a0d7b9b2 req-85da7442-3907-40d0-9d88-0f2eaf7bb09f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received unexpected event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:47:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 635 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.6 MiB/s wr, 238 op/s
Jan 31 03:47:53 np0005603621 nova_compute[247399]: 2026-01-31 08:47:53.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:54.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:54.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.292 247403 DEBUG nova.compute.manager [req-5e3951a7-722c-491a-b3c4-2aa182ac9e59 req-6a89459b-d5c4-4270-8d49-684ef1accee0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.293 247403 DEBUG oslo_concurrency.lockutils [req-5e3951a7-722c-491a-b3c4-2aa182ac9e59 req-6a89459b-d5c4-4270-8d49-684ef1accee0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.293 247403 DEBUG oslo_concurrency.lockutils [req-5e3951a7-722c-491a-b3c4-2aa182ac9e59 req-6a89459b-d5c4-4270-8d49-684ef1accee0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.293 247403 DEBUG oslo_concurrency.lockutils [req-5e3951a7-722c-491a-b3c4-2aa182ac9e59 req-6a89459b-d5c4-4270-8d49-684ef1accee0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.294 247403 DEBUG nova.compute.manager [req-5e3951a7-722c-491a-b3c4-2aa182ac9e59 req-6a89459b-d5c4-4270-8d49-684ef1accee0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.294 247403 WARNING nova.compute.manager [req-5e3951a7-722c-491a-b3c4-2aa182ac9e59 req-6a89459b-d5c4-4270-8d49-684ef1accee0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received unexpected event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:47:54 np0005603621 nova_compute[247399]: 2026-01-31 08:47:54.440 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 635 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 94 op/s
Jan 31 03:47:55 np0005603621 nova_compute[247399]: 2026-01-31 08:47:55.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:56.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:56.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:47:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 104 op/s
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.684835) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849276684867, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1091, "num_deletes": 255, "total_data_size": 1626127, "memory_usage": 1662128, "flush_reason": "Manual Compaction"}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849276700256, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 1607580, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63951, "largest_seqno": 65041, "table_properties": {"data_size": 1602497, "index_size": 2542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11355, "raw_average_key_size": 19, "raw_value_size": 1592067, "raw_average_value_size": 2730, "num_data_blocks": 112, "num_entries": 583, "num_filter_entries": 583, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849182, "oldest_key_time": 1769849182, "file_creation_time": 1769849276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 15471 microseconds, and 3361 cpu microseconds.
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.700302) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 1607580 bytes OK
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.700321) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.704310) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.704326) EVENT_LOG_v1 {"time_micros": 1769849276704321, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.704347) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 1621139, prev total WAL file size 1621139, number of live WAL files 2.
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.704791) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(1569KB)], [146(9527KB)]
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849276704880, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 11363574, "oldest_snapshot_seqno": -1}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8966 keys, 11230612 bytes, temperature: kUnknown
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849276835349, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11230612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11173509, "index_size": 33566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 237028, "raw_average_key_size": 26, "raw_value_size": 11017246, "raw_average_value_size": 1228, "num_data_blocks": 1280, "num_entries": 8966, "num_filter_entries": 8966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.835705) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11230612 bytes
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.838135) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.0 rd, 86.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.3 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(14.1) write-amplify(7.0) OK, records in: 9489, records dropped: 523 output_compression: NoCompression
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.838173) EVENT_LOG_v1 {"time_micros": 1769849276838158, "job": 90, "event": "compaction_finished", "compaction_time_micros": 130581, "compaction_time_cpu_micros": 23771, "output_level": 6, "num_output_files": 1, "total_output_size": 11230612, "num_input_records": 9489, "num_output_records": 8966, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849276838442, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849276839276, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.704665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.839376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.839381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.839383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.839384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:56 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:47:56.839386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:47:58 np0005603621 nova_compute[247399]: 2026-01-31 08:47:58.137 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:47:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:47:58.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:47:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:47:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:47:58.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:47:58 np0005603621 nova_compute[247399]: 2026-01-31 08:47:58.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:47:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev eed5aa97-f4fa-49cb-9853-c1de30c55612 does not exist
Jan 31 03:47:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ef81d5da-0343-4c45-baca-d6b6d1d637dc does not exist
Jan 31 03:47:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d32f9801-1b62-427f-94ff-dfa5bba5bc75 does not exist
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:47:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:47:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.754065655 +0000 UTC m=+0.036628147 container create 1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_johnson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:47:58 np0005603621 systemd[1]: Started libpod-conmon-1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde.scope.
Jan 31 03:47:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.736073597 +0000 UTC m=+0.018636109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.843148158 +0000 UTC m=+0.125710670 container init 1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_johnson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.85145522 +0000 UTC m=+0.134017722 container start 1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:47:58 np0005603621 blissful_johnson[364054]: 167 167
Jan 31 03:47:58 np0005603621 conmon[364054]: conmon 1e07660e17f47b61eaf2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde.scope/container/memory.events
Jan 31 03:47:58 np0005603621 systemd[1]: libpod-1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde.scope: Deactivated successfully.
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.876368727 +0000 UTC m=+0.158931249 container attach 1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.876931315 +0000 UTC m=+0.159493807 container died 1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:47:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ac38ba4b6082f169ee322a05a45fd95ca7db219c812aad305f715175b3979fd6-merged.mount: Deactivated successfully.
Jan 31 03:47:58 np0005603621 podman[364038]: 2026-01-31 08:47:58.936782754 +0000 UTC m=+0.219345246 container remove 1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_johnson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:47:58 np0005603621 systemd[1]: libpod-conmon-1e07660e17f47b61eaf23c3555e9ce8dfe6f34de3db35db6362668a1d414ecde.scope: Deactivated successfully.
Jan 31 03:47:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:47:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:47:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:47:59 np0005603621 podman[364080]: 2026-01-31 08:47:59.077322601 +0000 UTC m=+0.033066525 container create ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_archimedes, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:47:59 np0005603621 systemd[1]: Started libpod-conmon-ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd.scope.
Jan 31 03:47:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:47:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77bd6ed863a525b248c2a637de3f897038f59ab2e60882cb5e64716256590b87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77bd6ed863a525b248c2a637de3f897038f59ab2e60882cb5e64716256590b87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:59 np0005603621 podman[364080]: 2026-01-31 08:47:59.062588046 +0000 UTC m=+0.018331990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:47:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77bd6ed863a525b248c2a637de3f897038f59ab2e60882cb5e64716256590b87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77bd6ed863a525b248c2a637de3f897038f59ab2e60882cb5e64716256590b87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77bd6ed863a525b248c2a637de3f897038f59ab2e60882cb5e64716256590b87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:47:59 np0005603621 podman[364080]: 2026-01-31 08:47:59.235773635 +0000 UTC m=+0.191517579 container init ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_archimedes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:47:59 np0005603621 podman[364080]: 2026-01-31 08:47:59.242057452 +0000 UTC m=+0.197801376 container start ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:47:59 np0005603621 podman[364080]: 2026-01-31 08:47:59.244880641 +0000 UTC m=+0.200624565 container attach ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:48:00 np0005603621 infallible_archimedes[364097]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:48:00 np0005603621 infallible_archimedes[364097]: --> relative data size: 1.0
Jan 31 03:48:00 np0005603621 infallible_archimedes[364097]: --> All data devices are unavailable
Jan 31 03:48:00 np0005603621 systemd[1]: libpod-ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd.scope: Deactivated successfully.
Jan 31 03:48:00 np0005603621 podman[364112]: 2026-01-31 08:48:00.086925869 +0000 UTC m=+0.024829486 container died ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:48:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-77bd6ed863a525b248c2a637de3f897038f59ab2e60882cb5e64716256590b87-merged.mount: Deactivated successfully.
Jan 31 03:48:00 np0005603621 podman[364112]: 2026-01-31 08:48:00.145189418 +0000 UTC m=+0.083093025 container remove ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:48:00 np0005603621 systemd[1]: libpod-conmon-ebf65914f0f288cf0b662f904410b780a538d9519084ddde335488979bd4f2fd.scope: Deactivated successfully.
Jan 31 03:48:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:00.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:00.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:00 np0005603621 nova_compute[247399]: 2026-01-31 08:48:00.536 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.709790944 +0000 UTC m=+0.049130561 container create c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:48:00 np0005603621 systemd[1]: Started libpod-conmon-c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271.scope.
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.688982537 +0000 UTC m=+0.028322174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:48:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.813569071 +0000 UTC m=+0.152908688 container init c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.819092386 +0000 UTC m=+0.158432003 container start c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.82271149 +0000 UTC m=+0.162051107 container attach c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elion, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:48:00 np0005603621 systemd[1]: libpod-c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271.scope: Deactivated successfully.
Jan 31 03:48:00 np0005603621 goofy_elion[364284]: 167 167
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.82556131 +0000 UTC m=+0.164900947 container died c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:48:00 np0005603621 conmon[364284]: conmon c9ef2a0d45bba6c007dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271.scope/container/memory.events
Jan 31 03:48:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8ec062c622a9ec06910bd7d95a01c999ee4320fae05fb0f3eb056c6ce6af33e2-merged.mount: Deactivated successfully.
Jan 31 03:48:00 np0005603621 podman[364268]: 2026-01-31 08:48:00.864473059 +0000 UTC m=+0.203812676 container remove c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_elion, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:48:00 np0005603621 systemd[1]: libpod-conmon-c9ef2a0d45bba6c007ddc5b6412e0c0c49e9b910b3ae2a402c6e9d5e867c0271.scope: Deactivated successfully.
Jan 31 03:48:01 np0005603621 podman[364307]: 2026-01-31 08:48:00.999208293 +0000 UTC m=+0.042787542 container create 27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_margulis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:48:01 np0005603621 systemd[1]: Started libpod-conmon-27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6.scope.
Jan 31 03:48:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:48:01 np0005603621 podman[364307]: 2026-01-31 08:48:00.979094068 +0000 UTC m=+0.022673347 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:48:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d367721db2ae79bbc75ba737dc8caeefdc324a6d7db5850462e5161a3cb2b68f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d367721db2ae79bbc75ba737dc8caeefdc324a6d7db5850462e5161a3cb2b68f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d367721db2ae79bbc75ba737dc8caeefdc324a6d7db5850462e5161a3cb2b68f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:01 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d367721db2ae79bbc75ba737dc8caeefdc324a6d7db5850462e5161a3cb2b68f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:01 np0005603621 podman[364307]: 2026-01-31 08:48:01.801118802 +0000 UTC m=+0.844698061 container init 27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 31 03:48:01 np0005603621 podman[364307]: 2026-01-31 08:48:01.806686707 +0000 UTC m=+0.850265956 container start 27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:48:01 np0005603621 podman[364307]: 2026-01-31 08:48:01.809651841 +0000 UTC m=+0.853231090 container attach 27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:48:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:02.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:02.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]: {
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:    "0": [
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:        {
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "devices": [
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "/dev/loop3"
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            ],
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "lv_name": "ceph_lv0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "lv_size": "7511998464",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "name": "ceph_lv0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "tags": {
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.cluster_name": "ceph",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.crush_device_class": "",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.encrypted": "0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.osd_id": "0",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.type": "block",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:                "ceph.vdo": "0"
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            },
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "type": "block",
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:            "vg_name": "ceph_vg0"
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:        }
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]:    ]
Jan 31 03:48:02 np0005603621 naughty_margulis[364323]: }
Jan 31 03:48:02 np0005603621 systemd[1]: libpod-27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6.scope: Deactivated successfully.
Jan 31 03:48:02 np0005603621 podman[364333]: 2026-01-31 08:48:02.573419716 +0000 UTC m=+0.026059963 container died 27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:48:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d367721db2ae79bbc75ba737dc8caeefdc324a6d7db5850462e5161a3cb2b68f-merged.mount: Deactivated successfully.
Jan 31 03:48:02 np0005603621 podman[364333]: 2026-01-31 08:48:02.637637523 +0000 UTC m=+0.090277750 container remove 27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:48:02 np0005603621 systemd[1]: libpod-conmon-27ddbc5ecc5d67ea1141af64af26b4716f9263e8c15a1fe0b67e427d44a5f1a6.scope: Deactivated successfully.
Jan 31 03:48:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Jan 31 03:48:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:03Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:1e:4a 10.100.0.5
Jan 31 03:48:03 np0005603621 nova_compute[247399]: 2026-01-31 08:48:03.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.159060557 +0000 UTC m=+0.041228723 container create 9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banach, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:48:03 np0005603621 systemd[1]: Started libpod-conmon-9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1.scope.
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.137653271 +0000 UTC m=+0.019821487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:48:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.245388913 +0000 UTC m=+0.127557099 container init 9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banach, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.251120874 +0000 UTC m=+0.133289040 container start 9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banach, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:48:03 np0005603621 agitated_banach[364504]: 167 167
Jan 31 03:48:03 np0005603621 systemd[1]: libpod-9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1.scope: Deactivated successfully.
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.254318165 +0000 UTC m=+0.136486331 container attach 9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banach, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.254821331 +0000 UTC m=+0.136989497 container died 9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:48:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3ac875bdf701b874b3acda1b2d03d2ea1d69c692e9f471671e8942b0495a921d-merged.mount: Deactivated successfully.
Jan 31 03:48:03 np0005603621 podman[364488]: 2026-01-31 08:48:03.288803944 +0000 UTC m=+0.170972110 container remove 9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:48:03 np0005603621 systemd[1]: libpod-conmon-9d33d7c8fe009b2685b7b45b2aabd12f6ebecb28bdf54225299039fc70cacea1.scope: Deactivated successfully.
Jan 31 03:48:03 np0005603621 podman[364527]: 2026-01-31 08:48:03.473371001 +0000 UTC m=+0.058976013 container create 2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:03 np0005603621 systemd[1]: Started libpod-conmon-2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c.scope.
Jan 31 03:48:03 np0005603621 podman[364527]: 2026-01-31 08:48:03.446060829 +0000 UTC m=+0.031665951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:48:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:48:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e43473627cf11a659aff765bfece6e429a1648b5e7b771ef46a9ab58ab9b7a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e43473627cf11a659aff765bfece6e429a1648b5e7b771ef46a9ab58ab9b7a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e43473627cf11a659aff765bfece6e429a1648b5e7b771ef46a9ab58ab9b7a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e43473627cf11a659aff765bfece6e429a1648b5e7b771ef46a9ab58ab9b7a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:03 np0005603621 podman[364527]: 2026-01-31 08:48:03.553998737 +0000 UTC m=+0.139603789 container init 2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:48:03 np0005603621 podman[364527]: 2026-01-31 08:48:03.562411212 +0000 UTC m=+0.148016244 container start 2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:48:03 np0005603621 podman[364527]: 2026-01-31 08:48:03.565793229 +0000 UTC m=+0.151398261 container attach 2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:48:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:04.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:04.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]: {
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:        "osd_id": 0,
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:        "type": "bluestore"
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]:    }
Jan 31 03:48:04 np0005603621 interesting_wescoff[364544]: }
Jan 31 03:48:04 np0005603621 systemd[1]: libpod-2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c.scope: Deactivated successfully.
Jan 31 03:48:04 np0005603621 podman[364566]: 2026-01-31 08:48:04.379242173 +0000 UTC m=+0.022576894 container died 2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:48:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8e43473627cf11a659aff765bfece6e429a1648b5e7b771ef46a9ab58ab9b7a4-merged.mount: Deactivated successfully.
Jan 31 03:48:04 np0005603621 podman[364566]: 2026-01-31 08:48:04.541230547 +0000 UTC m=+0.184565268 container remove 2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:48:04 np0005603621 systemd[1]: libpod-conmon-2df843133918a053eb395afe26501d08dd3667ed3106f9bc3e045bc683366f9c.scope: Deactivated successfully.
Jan 31 03:48:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:48:04 np0005603621 podman[364583]: 2026-01-31 08:48:04.64390914 +0000 UTC m=+0.069780184 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:48:04 np0005603621 podman[364581]: 2026-01-31 08:48:04.649722813 +0000 UTC m=+0.076313770 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:48:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 643 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 595 KiB/s wr, 90 op/s
Jan 31 03:48:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:48:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:48:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:48:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9de9154e-d249-49b3-89a7-7ef04be65b60 does not exist
Jan 31 03:48:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 65814ace-1b77-4dbc-aaea-f07e8597c025 does not exist
Jan 31 03:48:04 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fc3c129c-5fb1-4216-9f47-4a0821f560f9 does not exist
Jan 31 03:48:05 np0005603621 nova_compute[247399]: 2026-01-31 08:48:05.538 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:48:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:48:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:06.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:06.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.648 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.648 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.648 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:48:06 np0005603621 nova_compute[247399]: 2026-01-31 08:48:06.648 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 654 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 109 op/s
Jan 31 03:48:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:08 np0005603621 nova_compute[247399]: 2026-01-31 08:48:08.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:08.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:08.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 31 03:48:08 np0005603621 nova_compute[247399]: 2026-01-31 08:48:08.885 247403 INFO nova.compute.manager [None req-283abccc-a293-4d61-925f-e1e52cd345a7 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Get console output#033[00m
Jan 31 03:48:08 np0005603621 nova_compute[247399]: 2026-01-31 08:48:08.890 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:48:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:10.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:10.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.410 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.410 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.464 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.540 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.598 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.599 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.605 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.605 247403 INFO nova.compute.claims [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:48:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 689 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Jan 31 03:48:10 np0005603621 nova_compute[247399]: 2026-01-31 08:48:10.993 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.048 247403 DEBUG nova.compute.manager [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-changed-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.048 247403 DEBUG nova.compute.manager [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Refreshing instance network info cache due to event network-changed-f8883354-defb-43f3-9474-fbd6f0f701c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.049 247403 DEBUG oslo_concurrency.lockutils [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.049 247403 DEBUG oslo_concurrency.lockutils [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.049 247403 DEBUG nova.network.neutron [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Refreshing network info cache for port f8883354-defb-43f3-9474-fbd6f0f701c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:48:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036773715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.414 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.420 247403 DEBUG nova.compute.provider_tree [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.466 247403 DEBUG nova.scheduler.client.report [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.972 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:11 np0005603621 nova_compute[247399]: 2026-01-31 08:48:11.973 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.093 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.094 247403 DEBUG nova.network.neutron [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.165 247403 INFO nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:48:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:12.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:12.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.297 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.635 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.636 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.636 247403 INFO nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Creating image(s)#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.663 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.690 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.719 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.722 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.746 247403 DEBUG nova.policy [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3bd4ce8a916a4bdbbc988eb4fe32991e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76fb5cb7abcd4d74abfc471a96bbd12c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.782 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.782 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.783 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.783 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.809 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.812 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 74f09648-834b-4da1-89a4-bcdcca255908_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.833 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.935 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.935 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.936 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.936 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.936 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:48:12 np0005603621 nova_compute[247399]: 2026-01-31 08:48:12.936 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.071 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.071 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.072 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.072 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.072 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.199 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.200 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.201 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.201 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.201 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.204 247403 INFO nova.compute.manager [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Terminating instance#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.205 247403 DEBUG nova.compute.manager [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.230 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 74f09648-834b-4da1-89a4-bcdcca255908_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:13 np0005603621 kernel: tapf8883354-de (unregistering): left promiscuous mode
Jan 31 03:48:13 np0005603621 NetworkManager[49013]: <info>  [1769849293.2994] device (tapf8883354-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:48:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:13Z|00670|binding|INFO|Releasing lport f8883354-defb-43f3-9474-fbd6f0f701c7 from this chassis (sb_readonly=0)
Jan 31 03:48:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:13Z|00671|binding|INFO|Setting lport f8883354-defb-43f3-9474-fbd6f0f701c7 down in Southbound
Jan 31 03:48:13 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:13Z|00672|binding|INFO|Removing iface tapf8883354-de ovn-installed in OVS
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.332 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] resizing rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.347 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:1e:4a 10.100.0.5'], port_security=['fa:16:3e:7f:1e:4a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1e70d6f8-7f61-4626-b67d-55f18422dbdf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6f3a6488-5dce-49ae-859f-d782e87a2be7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3227746e-bf7a-477a-acc4-5f39020b4728, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=f8883354-defb-43f3-9474-fbd6f0f701c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.348 159734 INFO neutron.agent.ovn.metadata.agent [-] Port f8883354-defb-43f3-9474-fbd6f0f701c7 in datapath 48d2e38d-e6ae-44ba-84b3-12f97789abcf unbound from our chassis#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.350 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48d2e38d-e6ae-44ba-84b3-12f97789abcf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.351 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11f76e5b-09b7-47bc-b04c-5c3d522308a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.352 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf namespace which is not needed anymore#033[00m
Jan 31 03:48:13 np0005603621 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 31 03:48:13 np0005603621 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d0000009f.scope: Consumed 13.233s CPU time.
Jan 31 03:48:13 np0005603621 systemd-machined[212769]: Machine qemu-80-instance-0000009f terminated.
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.449 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.458 247403 DEBUG nova.objects.instance [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'migration_context' on Instance uuid 74f09648-834b-4da1-89a4-bcdcca255908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.462 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Instance destroyed successfully.#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.463 247403 DEBUG nova.objects.instance [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid 1e70d6f8-7f61-4626-b67d-55f18422dbdf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:13 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [NOTICE]   (363702) : haproxy version is 2.8.14-c23fe91
Jan 31 03:48:13 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [NOTICE]   (363702) : path to executable is /usr/sbin/haproxy
Jan 31 03:48:13 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [WARNING]  (363702) : Exiting Master process...
Jan 31 03:48:13 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [ALERT]    (363702) : Current worker (363704) exited with code 143 (Terminated)
Jan 31 03:48:13 np0005603621 neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf[363698]: [WARNING]  (363702) : All workers exited. Exiting... (0)
Jan 31 03:48:13 np0005603621 systemd[1]: libpod-2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9.scope: Deactivated successfully.
Jan 31 03:48:13 np0005603621 podman[364912]: 2026-01-31 08:48:13.49491961 +0000 UTC m=+0.051967922 container died 2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Jan 31 03:48:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9-userdata-shm.mount: Deactivated successfully.
Jan 31 03:48:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-37b353d57e633d36d9abcf839263a5e26b13bbcf7d2f0c7fbb7c37a192b35f8d-merged.mount: Deactivated successfully.
Jan 31 03:48:13 np0005603621 podman[364912]: 2026-01-31 08:48:13.540579232 +0000 UTC m=+0.097627544 container cleanup 2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:48:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3869424641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:13 np0005603621 systemd[1]: libpod-conmon-2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9.scope: Deactivated successfully.
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.561 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:13 np0005603621 podman[364950]: 2026-01-31 08:48:13.597768818 +0000 UTC m=+0.043171135 container remove 2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.601 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39bdac84-f78b-4ff9-8d47-b00018461e57]: (4, ('Sat Jan 31 08:48:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf (2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9)\n2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9\nSat Jan 31 08:48:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf (2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9)\n2fd6cee43981349119feaeec84acd69aef8e74a89eb9a4183ecc902a464d08b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.603 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5def68-1d73-4cea-9b14-b752fa0967cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.604 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48d2e38d-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.605 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 kernel: tap48d2e38d-e0: left promiscuous mode
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.609 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.609 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Ensure instance console log exists: /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.610 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.610 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.610 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.613 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.616 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3e6581-64fa-44d5-a24e-b45c1f35147f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.635 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[36427671-3255-4f2d-958e-19948b1e89be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.636 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3d1529-066c-4d5c-b16d-179cca728b35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.641 247403 DEBUG nova.virt.libvirt.vif [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1654865498',display_name='tempest-TestNetworkAdvancedServerOps-server-1654865498',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1654865498',id=159,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAGxIXCHGQ6hk+A0fx44fLj3vio2mrPnPx+2cfjZDcxrsWkQIQWCWHLjAHPg/Ubk4hFn8GL58FwqBRal0DdirB8z1k4BmmxTtEag3yt0l5GVuP/rftjagTqAJwOTHRLnTg==',key_name='tempest-TestNetworkAdvancedServerOps-243082686',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:47:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-k2ceb35y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:47:50Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e70d6f8-7f61-4626-b67d-55f18422dbdf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.641 247403 DEBUG nova.network.os_vif_util [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.642 247403 DEBUG nova.network.os_vif_util [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.643 247403 DEBUG os_vif [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.644 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.645 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8883354-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.646 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.648 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.647 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f38d033c-d562-46d7-8126-0ff44b244eca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 857551, 'reachable_time': 27332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364972, 'error': None, 'target': 'ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 systemd[1]: run-netns-ovnmeta\x2d48d2e38d\x2de6ae\x2d44ba\x2d84b3\x2d12f97789abcf.mount: Deactivated successfully.
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.650 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48d2e38d-e6ae-44ba-84b3-12f97789abcf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:48:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:13.651 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6161af3d-772e-4c64-a832-e11c9c1f4e89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.651 247403 INFO os_vif [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:1e:4a,bridge_name='br-int',has_traffic_filtering=True,id=f8883354-defb-43f3-9474-fbd6f0f701c7,network=Network(48d2e38d-e6ae-44ba-84b3-12f97789abcf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf8883354-de')#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.710 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.711 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.714 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.714 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.843 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.844 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3906MB free_disk=20.718692779541016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.844 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:13 np0005603621 nova_compute[247399]: 2026-01-31 08:48:13.844 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.109 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3308d345-19b7-4fbb-bd81-631135649e7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.110 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 1e70d6f8-7f61-4626-b67d-55f18422dbdf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.110 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 74f09648-834b-4da1-89a4-bcdcca255908 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.110 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.110 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:48:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:14.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.254 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.515 247403 INFO nova.virt.libvirt.driver [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Deleting instance files /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf_del#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.517 247403 INFO nova.virt.libvirt.driver [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Deletion of /var/lib/nova/instances/1e70d6f8-7f61-4626-b67d-55f18422dbdf_del complete#033[00m
Jan 31 03:48:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/388102054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 691 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1002 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.678 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.681 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.725 247403 INFO nova.compute.manager [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.725 247403 DEBUG oslo.service.loopingcall [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.726 247403 DEBUG nova.compute.manager [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.726 247403 DEBUG nova.network.neutron [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.735 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:48:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653080690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:48:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:48:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653080690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.847 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:48:14 np0005603621 nova_compute[247399]: 2026-01-31 08:48:14.848 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.110 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.110 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.149 247403 DEBUG nova.compute.manager [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-unplugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.150 247403 DEBUG oslo_concurrency.lockutils [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.150 247403 DEBUG oslo_concurrency.lockutils [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.150 247403 DEBUG oslo_concurrency.lockutils [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.150 247403 DEBUG nova.compute.manager [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-unplugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.151 247403 DEBUG nova.compute.manager [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-unplugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.151 247403 DEBUG nova.compute.manager [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.151 247403 DEBUG oslo_concurrency.lockutils [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.151 247403 DEBUG oslo_concurrency.lockutils [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.152 247403 DEBUG oslo_concurrency.lockutils [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.152 247403 DEBUG nova.compute.manager [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] No waiting events found dispatching network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.152 247403 WARNING nova.compute.manager [req-7f819c4a-7526-4bf1-bc77-909a0b694b4a req-bbc79cd8-051e-4eb5-976d-260764b69a25 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received unexpected event network-vif-plugged-f8883354-defb-43f3-9474-fbd6f0f701c7 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.542 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.585 247403 DEBUG nova.network.neutron [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updated VIF entry in instance network info cache for port f8883354-defb-43f3-9474-fbd6f0f701c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.586 247403 DEBUG nova.network.neutron [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updating instance_info_cache with network_info: [{"id": "f8883354-defb-43f3-9474-fbd6f0f701c7", "address": "fa:16:3e:7f:1e:4a", "network": {"id": "48d2e38d-e6ae-44ba-84b3-12f97789abcf", "bridge": "br-int", "label": "tempest-network-smoke--782663815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf8883354-de", "ovs_interfaceid": "f8883354-defb-43f3-9474-fbd6f0f701c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:15 np0005603621 nova_compute[247399]: 2026-01-31 08:48:15.657 247403 DEBUG oslo_concurrency.lockutils [req-d7e5571c-f135-42e4-8fa3-cbfec57d90c6 req-3c3d1a16-5ac2-4f73-9074-43f4a6a7c10d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-1e70d6f8-7f61-4626-b67d-55f18422dbdf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:16.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 670 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1006 KiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 31 03:48:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:16.987 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:16 np0005603621 nova_compute[247399]: 2026-01-31 08:48:16.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:16.988 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:48:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:16.989 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:17 np0005603621 nova_compute[247399]: 2026-01-31 08:48:17.606 247403 DEBUG nova.network.neutron [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:17 np0005603621 nova_compute[247399]: 2026-01-31 08:48:17.721 247403 INFO nova.compute.manager [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Took 3.00 seconds to deallocate network for instance.#033[00m
Jan 31 03:48:17 np0005603621 nova_compute[247399]: 2026-01-31 08:48:17.815 247403 DEBUG nova.compute.manager [req-9d12d4ee-df0f-450b-b949-7dd9fa106a52 req-19aeef0f-fe59-4793-b740-bb84020f917c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Received event network-vif-deleted-f8883354-defb-43f3-9474-fbd6f0f701c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:17 np0005603621 nova_compute[247399]: 2026-01-31 08:48:17.913 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:17 np0005603621 nova_compute[247399]: 2026-01-31 08:48:17.913 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.011 247403 DEBUG nova.network.neutron [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Successfully created port: b07e666e-f751-41ba-b006-3496f51d6eaa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.169 247403 DEBUG oslo_concurrency.processutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:48:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2269040139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.635 247403 DEBUG oslo_concurrency.processutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.640 247403 DEBUG nova.compute.provider_tree [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.647 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 658 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 853 KiB/s rd, 3.1 MiB/s wr, 161 op/s
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.674 247403 DEBUG nova.scheduler.client.report [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.719 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:18 np0005603621 nova_compute[247399]: 2026-01-31 08:48:18.780 247403 INFO nova.scheduler.client.report [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Deleted allocations for instance 1e70d6f8-7f61-4626-b67d-55f18422dbdf#033[00m
Jan 31 03:48:19 np0005603621 nova_compute[247399]: 2026-01-31 08:48:19.353 247403 DEBUG oslo_concurrency.lockutils [None req-99aa8151-60e7-49ac-a1c5-ff24bbb09971 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e70d6f8-7f61-4626-b67d-55f18422dbdf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:20.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:20.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:20 np0005603621 nova_compute[247399]: 2026-01-31 08:48:20.544 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 658 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 494 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 31 03:48:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:22.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 658 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 31 03:48:23 np0005603621 nova_compute[247399]: 2026-01-31 08:48:23.648 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:24.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.279 247403 DEBUG nova.network.neutron [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Successfully updated port: b07e666e-f751-41ba-b006-3496f51d6eaa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.344 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.344 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquired lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.345 247403 DEBUG nova.network.neutron [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.646 247403 DEBUG nova.compute.manager [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-changed-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.646 247403 DEBUG nova.compute.manager [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Refreshing instance network info cache due to event network-changed-b07e666e-f751-41ba-b006-3496f51d6eaa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:48:24 np0005603621 nova_compute[247399]: 2026-01-31 08:48:24.646 247403 DEBUG oslo_concurrency.lockutils [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:48:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 658 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 31 03:48:25 np0005603621 nova_compute[247399]: 2026-01-31 08:48:25.545 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:25 np0005603621 nova_compute[247399]: 2026-01-31 08:48:25.691 247403 DEBUG nova.network.neutron [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:48:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:26.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 668 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 128 op/s
Jan 31 03:48:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:28.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:28 np0005603621 nova_compute[247399]: 2026-01-31 08:48:28.452 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849293.4346526, 1e70d6f8-7f61-4626-b67d-55f18422dbdf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:28 np0005603621 nova_compute[247399]: 2026-01-31 08:48:28.453 247403 INFO nova.compute.manager [-] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:48:28 np0005603621 nova_compute[247399]: 2026-01-31 08:48:28.539 247403 DEBUG nova.compute.manager [None req-8db4ba33-40f9-4718-b732-9b88553f7b7f - - - - - -] [instance: 1e70d6f8-7f61-4626-b67d-55f18422dbdf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:28 np0005603621 nova_compute[247399]: 2026-01-31 08:48:28.650 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 704 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 142 op/s
Jan 31 03:48:29 np0005603621 nova_compute[247399]: 2026-01-31 08:48:29.861 247403 DEBUG nova.network.neutron [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updating instance_info_cache with network_info: [{"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.378 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Releasing lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.378 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Instance network_info: |[{"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.378 247403 DEBUG oslo_concurrency.lockutils [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.378 247403 DEBUG nova.network.neutron [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Refreshing network info cache for port b07e666e-f751-41ba-b006-3496f51d6eaa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.381 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Start _get_guest_xml network_info=[{"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.385 247403 WARNING nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.391 247403 DEBUG nova.virt.libvirt.host [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.391 247403 DEBUG nova.virt.libvirt.host [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.396 247403 DEBUG nova.virt.libvirt.host [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.396 247403 DEBUG nova.virt.libvirt.host [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.397 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.398 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.398 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.398 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.399 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.399 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.399 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.399 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.399 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.400 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.400 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.400 247403 DEBUG nova.virt.hardware [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.403 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:30.528 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:30.529 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:30.530 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.547 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 704 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 31 03:48:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:48:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403778558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.806 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.838 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:30 np0005603621 nova_compute[247399]: 2026-01-31 08:48:30.843 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:48:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684317207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.301 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.303 247403 DEBUG nova.virt.libvirt.vif [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:48:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1818783830',display_name='tempest-AttachVolumeNegativeTest-server-1818783830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1818783830',id=164,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbi4ee/uHKZmZ3YiUCQp4yI4jD1xvBXHz9BPflKR0a7UgKXiUXfNBhcRX7TqA1zE6SNa9Di7QCiwXBvdnITfa2LB4R28IrOx0I6vDefRfO6dTgkvP1J7g1i1OmBo6fVwA==',key_name='tempest-keypair-1160191621',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76fb5cb7abcd4d74abfc471a96bbd12c',ramdisk_id='',reservation_id='r-g01x1810',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-457307401',owner_user_name='tempest-AttachVolumeNegativeTest-457307401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3bd4ce8a916a4bdbbc988eb4fe32991e',uuid=74f09648-834b-4da1-89a4-bcdcca255908,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.303 247403 DEBUG nova.network.os_vif_util [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converting VIF {"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.304 247403 DEBUG nova.network.os_vif_util [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.306 247403 DEBUG nova.objects.instance [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'pci_devices' on Instance uuid 74f09648-834b-4da1-89a4-bcdcca255908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.372 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <uuid>74f09648-834b-4da1-89a4-bcdcca255908</uuid>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <name>instance-000000a4</name>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1818783830</nova:name>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:48:30</nova:creationTime>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:user uuid="3bd4ce8a916a4bdbbc988eb4fe32991e">tempest-AttachVolumeNegativeTest-457307401-project-member</nova:user>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:project uuid="76fb5cb7abcd4d74abfc471a96bbd12c">tempest-AttachVolumeNegativeTest-457307401</nova:project>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <nova:port uuid="b07e666e-f751-41ba-b006-3496f51d6eaa">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <entry name="serial">74f09648-834b-4da1-89a4-bcdcca255908</entry>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <entry name="uuid">74f09648-834b-4da1-89a4-bcdcca255908</entry>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/74f09648-834b-4da1-89a4-bcdcca255908_disk">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/74f09648-834b-4da1-89a4-bcdcca255908_disk.config">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:67:22:c1"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <target dev="tapb07e666e-f7"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/console.log" append="off"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:48:31 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:48:31 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:48:31 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:48:31 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.373 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Preparing to wait for external event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.373 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.374 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.374 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.375 247403 DEBUG nova.virt.libvirt.vif [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:48:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1818783830',display_name='tempest-AttachVolumeNegativeTest-server-1818783830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1818783830',id=164,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbi4ee/uHKZmZ3YiUCQp4yI4jD1xvBXHz9BPflKR0a7UgKXiUXfNBhcRX7TqA1zE6SNa9Di7QCiwXBvdnITfa2LB4R28IrOx0I6vDefRfO6dTgkvP1J7g1i1OmBo6fVwA==',key_name='tempest-keypair-1160191621',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76fb5cb7abcd4d74abfc471a96bbd12c',ramdisk_id='',reservation_id='r-g01x1810',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-457307401',owner_user_name='tempest-AttachVolumeNegativeTest-457307401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:48:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3bd4ce8a916a4bdbbc988eb4fe32991e',uuid=74f09648-834b-4da1-89a4-bcdcca255908,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.375 247403 DEBUG nova.network.os_vif_util [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converting VIF {"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.375 247403 DEBUG nova.network.os_vif_util [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.376 247403 DEBUG os_vif [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.376 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.377 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.377 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.379 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.379 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb07e666e-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.380 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb07e666e-f7, col_values=(('external_ids', {'iface-id': 'b07e666e-f751-41ba-b006-3496f51d6eaa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:22:c1', 'vm-uuid': '74f09648-834b-4da1-89a4-bcdcca255908'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.381 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:31 np0005603621 NetworkManager[49013]: <info>  [1769849311.3821] manager: (tapb07e666e-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.384 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.385 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:31 np0005603621 nova_compute[247399]: 2026-01-31 08:48:31.386 247403 INFO os_vif [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7')#033[00m
Jan 31 03:48:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:32.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:32.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 705 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 31 03:48:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:34.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 705 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 728 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 31 03:48:35 np0005603621 nova_compute[247399]: 2026-01-31 08:48:35.109 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:48:35 np0005603621 nova_compute[247399]: 2026-01-31 08:48:35.110 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:48:35 np0005603621 nova_compute[247399]: 2026-01-31 08:48:35.110 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No VIF found with MAC fa:16:3e:67:22:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:48:35 np0005603621 nova_compute[247399]: 2026-01-31 08:48:35.110 247403 INFO nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Using config drive#033[00m
Jan 31 03:48:35 np0005603621 nova_compute[247399]: 2026-01-31 08:48:35.136 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:35 np0005603621 podman[365180]: 2026-01-31 08:48:35.509710029 +0000 UTC m=+0.058331804 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:48:35 np0005603621 nova_compute[247399]: 2026-01-31 08:48:35.568 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:35 np0005603621 podman[365181]: 2026-01-31 08:48:35.611603226 +0000 UTC m=+0.160755076 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 31 03:48:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:36.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:36 np0005603621 nova_compute[247399]: 2026-01-31 08:48:36.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 989 KiB/s rd, 2.0 MiB/s wr, 80 op/s
Jan 31 03:48:36 np0005603621 nova_compute[247399]: 2026-01-31 08:48:36.745 247403 INFO nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Creating config drive at /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/disk.config#033[00m
Jan 31 03:48:36 np0005603621 nova_compute[247399]: 2026-01-31 08:48:36.750 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjobl3s_c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:36 np0005603621 nova_compute[247399]: 2026-01-31 08:48:36.876 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpjobl3s_c" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:36 np0005603621 nova_compute[247399]: 2026-01-31 08:48:36.910 247403 DEBUG nova.storage.rbd_utils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image 74f09648-834b-4da1-89a4-bcdcca255908_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:48:36 np0005603621 nova_compute[247399]: 2026-01-31 08:48:36.915 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/disk.config 74f09648-834b-4da1-89a4-bcdcca255908_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.429 247403 DEBUG oslo_concurrency.processutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/disk.config 74f09648-834b-4da1-89a4-bcdcca255908_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.432 247403 INFO nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Deleting local config drive /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908/disk.config because it was imported into RBD.#033[00m
Jan 31 03:48:37 np0005603621 kernel: tapb07e666e-f7: entered promiscuous mode
Jan 31 03:48:37 np0005603621 NetworkManager[49013]: <info>  [1769849317.4856] manager: (tapb07e666e-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/303)
Jan 31 03:48:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:37Z|00673|binding|INFO|Claiming lport b07e666e-f751-41ba-b006-3496f51d6eaa for this chassis.
Jan 31 03:48:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:37Z|00674|binding|INFO|b07e666e-f751-41ba-b006-3496f51d6eaa: Claiming fa:16:3e:67:22:c1 10.100.0.13
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.486 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.494 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:37Z|00675|binding|INFO|Setting lport b07e666e-f751-41ba-b006-3496f51d6eaa ovn-installed in OVS
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.497 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603621 systemd-machined[212769]: New machine qemu-81-instance-000000a4.
Jan 31 03:48:37 np0005603621 systemd-udevd[365329]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:48:37 np0005603621 NetworkManager[49013]: <info>  [1769849317.5244] device (tapb07e666e-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:48:37 np0005603621 NetworkManager[49013]: <info>  [1769849317.5256] device (tapb07e666e-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:48:37 np0005603621 systemd[1]: Started Virtual Machine qemu-81-instance-000000a4.
Jan 31 03:48:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:37Z|00676|binding|INFO|Setting lport b07e666e-f751-41ba-b006-3496f51d6eaa up in Southbound
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.532 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:22:c1 10.100.0.13'], port_security=['fa:16:3e:67:22:c1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '74f09648-834b-4da1-89a4-bcdcca255908', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f269a-650e-4227-8352-05abf2566c17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76fb5cb7abcd4d74abfc471a96bbd12c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7f545401-a445-41e0-97ac-2f2cec520248', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97876448-21a4-4b64-9452-bd401dfcc8ac, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b07e666e-f751-41ba-b006-3496f51d6eaa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.533 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b07e666e-f751-41ba-b006-3496f51d6eaa in datapath a02f269a-650e-4227-8352-05abf2566c17 bound to our chassis#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.535 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a02f269a-650e-4227-8352-05abf2566c17#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.544 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf6e9a9-9157-479b-a0d7-bca281751e56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.544 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa02f269a-61 in ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.547 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa02f269a-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.547 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39106446-8104-42e1-9626-cab2ace92ed4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.547 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5bb632-b590-4e1c-b579-36cdc91543d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.555 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d3551a32-a314-4340-8583-075d290218ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.565 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ec65fa67-e1ee-4d40-9065-029a82cbee8a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.588 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[94ad8357-df45-47c2-bb88-766bc6a7f464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.593 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fb68e170-eb8e-41d6-adc4-aa88fb891f26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 NetworkManager[49013]: <info>  [1769849317.5947] manager: (tapa02f269a-60): new Veth device (/org/freedesktop/NetworkManager/Devices/304)
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.618 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8639d0d4-7842-4763-a46c-4145522253de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.620 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4dce6233-88c2-4570-acb9-d4170581bbfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 NetworkManager[49013]: <info>  [1769849317.6344] device (tapa02f269a-60): carrier: link connected
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.638 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf7bc7c-06a0-4410-ac98-d70219e12dc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.648 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8a31c46d-51a8-40f4-9cbd-152ef9ce720b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f269a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862314, 'reachable_time': 25454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365362, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.658 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[89d12d97-292e-4493-a665-8aef16db61c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:e973'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 862314, 'tstamp': 862314}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365363, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.668 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29f4a7bb-472b-418f-9554-827cb34a018e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f269a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 204], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862314, 'reachable_time': 25454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365364, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.691 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[84d7bef0-c029-4dba-bbbc-a0885a23b57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.733 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[340c173a-afab-477c-8b9b-1649048dfc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.734 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f269a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.735 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.735 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02f269a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.777 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603621 NetworkManager[49013]: <info>  [1769849317.7779] manager: (tapa02f269a-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/305)
Jan 31 03:48:37 np0005603621 kernel: tapa02f269a-60: entered promiscuous mode
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.780 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa02f269a-60, col_values=(('external_ids', {'iface-id': '2c775482-0f82-4695-be62-4a95328fbf79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:37Z|00677|binding|INFO|Releasing lport 2c775482-0f82-4695-be62-4a95328fbf79 from this chassis (sb_readonly=0)
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.781 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.782 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a02f269a-650e-4227-8352-05abf2566c17.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a02f269a-650e-4227-8352-05abf2566c17.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.782 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f54f3771-7c22-4b1a-8bd6-4a2908c2b8a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.784 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-a02f269a-650e-4227-8352-05abf2566c17
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/a02f269a-650e-4227-8352-05abf2566c17.pid.haproxy
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID a02f269a-650e-4227-8352-05abf2566c17
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:48:37 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:37.784 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'env', 'PROCESS_TAG=haproxy-a02f269a-650e-4227-8352-05abf2566c17', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a02f269a-650e-4227-8352-05abf2566c17.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:48:37 np0005603621 nova_compute[247399]: 2026-01-31 08:48:37.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.043 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849318.0425344, 74f09648-834b-4da1-89a4-bcdcca255908 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.045 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] VM Started (Lifecycle Event)#033[00m
Jan 31 03:48:38 np0005603621 podman[365438]: 2026-01-31 08:48:38.092700773 +0000 UTC m=+0.043451033 container create 6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:48:38 np0005603621 systemd[1]: Started libpod-conmon-6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85.scope.
Jan 31 03:48:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:48:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9af882f7876b0e341b1d261fd1c60ca9d3d83a915356272ee768b1472995e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:48:38 np0005603621 podman[365438]: 2026-01-31 08:48:38.163614822 +0000 UTC m=+0.114365112 container init 6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:48:38 np0005603621 podman[365438]: 2026-01-31 08:48:38.069467779 +0000 UTC m=+0.020218059 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:48:38 np0005603621 podman[365438]: 2026-01-31 08:48:38.168344081 +0000 UTC m=+0.119094341 container start 6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 31 03:48:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [NOTICE]   (365459) : New worker (365461) forked
Jan 31 03:48:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [NOTICE]   (365459) : Loading success.
Jan 31 03:48:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:38.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.326 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.331 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849318.0428255, 74f09648-834b-4da1-89a4-bcdcca255908 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.331 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.453 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.456 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:48:38 np0005603621 nova_compute[247399]: 2026-01-31 08:48:38.631 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:48:38
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:48:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 162 op/s
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:48:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:48:39 np0005603621 nova_compute[247399]: 2026-01-31 08:48:39.266 247403 DEBUG nova.network.neutron [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updated VIF entry in instance network info cache for port b07e666e-f751-41ba-b006-3496f51d6eaa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:48:39 np0005603621 nova_compute[247399]: 2026-01-31 08:48:39.267 247403 DEBUG nova.network.neutron [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updating instance_info_cache with network_info: [{"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:48:39 np0005603621 nova_compute[247399]: 2026-01-31 08:48:39.344 247403 DEBUG oslo_concurrency.lockutils [req-7ebedc44-0ce1-4de6-ba8d-80aa94df5183 req-ae1341fa-b336-48bf-8980-22829cf7ce40 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:48:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:40.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:40.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:40 np0005603621 nova_compute[247399]: 2026-01-31 08:48:40.572 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 31 03:48:41 np0005603621 nova_compute[247399]: 2026-01-31 08:48:41.384 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:48:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.000 247403 DEBUG nova.compute.manager [req-518b903a-7c82-4f0b-beb1-b28b37f44822 req-0fb29ed5-98bc-47cb-a746-36df03471228 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.001 247403 DEBUG oslo_concurrency.lockutils [req-518b903a-7c82-4f0b-beb1-b28b37f44822 req-0fb29ed5-98bc-47cb-a746-36df03471228 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.001 247403 DEBUG oslo_concurrency.lockutils [req-518b903a-7c82-4f0b-beb1-b28b37f44822 req-0fb29ed5-98bc-47cb-a746-36df03471228 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.002 247403 DEBUG oslo_concurrency.lockutils [req-518b903a-7c82-4f0b-beb1-b28b37f44822 req-0fb29ed5-98bc-47cb-a746-36df03471228 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.002 247403 DEBUG nova.compute.manager [req-518b903a-7c82-4f0b-beb1-b28b37f44822 req-0fb29ed5-98bc-47cb-a746-36df03471228 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Processing event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.002 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.006 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849322.0055401, 74f09648-834b-4da1-89a4-bcdcca255908 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.006 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.009 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.012 247403 INFO nova.virt.libvirt.driver [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Instance spawned successfully.
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.012 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.065 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.070 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.073 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.073 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.074 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.074 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.074 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.075 247403 DEBUG nova.virt.libvirt.driver [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.118 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 31 03:48:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:42.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.233 247403 INFO nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Took 29.60 seconds to spawn the instance on the hypervisor.
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.233 247403 DEBUG nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:48:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:42.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.385 247403 INFO nova.compute.manager [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Took 31.82 seconds to build instance.
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.482 247403 DEBUG oslo_concurrency.lockutils [None req-7a4a1aef-b77a-45f5-a170-c8e48effe97a 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:48:42 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:42Z|00678|binding|INFO|Releasing lport 2c775482-0f82-4695-be62-4a95328fbf79 from this chassis (sb_readonly=0)
Jan 31 03:48:42 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:42Z|00679|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:48:42 np0005603621 nova_compute[247399]: 2026-01-31 08:48:42.669 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 153 op/s
Jan 31 03:48:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:44.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 31 03:48:44 np0005603621 nova_compute[247399]: 2026-01-31 08:48:44.919 247403 DEBUG nova.compute.manager [req-44f206eb-a558-4a74-aadf-f68e4073df06 req-d06635da-3a90-4293-b8d7-a5016df0cf1d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:48:44 np0005603621 nova_compute[247399]: 2026-01-31 08:48:44.920 247403 DEBUG oslo_concurrency.lockutils [req-44f206eb-a558-4a74-aadf-f68e4073df06 req-d06635da-3a90-4293-b8d7-a5016df0cf1d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:48:44 np0005603621 nova_compute[247399]: 2026-01-31 08:48:44.920 247403 DEBUG oslo_concurrency.lockutils [req-44f206eb-a558-4a74-aadf-f68e4073df06 req-d06635da-3a90-4293-b8d7-a5016df0cf1d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:48:44 np0005603621 nova_compute[247399]: 2026-01-31 08:48:44.920 247403 DEBUG oslo_concurrency.lockutils [req-44f206eb-a558-4a74-aadf-f68e4073df06 req-d06635da-3a90-4293-b8d7-a5016df0cf1d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:48:44 np0005603621 nova_compute[247399]: 2026-01-31 08:48:44.920 247403 DEBUG nova.compute.manager [req-44f206eb-a558-4a74-aadf-f68e4073df06 req-d06635da-3a90-4293-b8d7-a5016df0cf1d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] No waiting events found dispatching network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:48:44 np0005603621 nova_compute[247399]: 2026-01-31 08:48:44.920 247403 WARNING nova.compute.manager [req-44f206eb-a558-4a74-aadf-f68e4073df06 req-d06635da-3a90-4293-b8d7-a5016df0cf1d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received unexpected event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa for instance with vm_state active and task_state None.
Jan 31 03:48:45 np0005603621 nova_compute[247399]: 2026-01-31 08:48:45.573 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:46.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:46.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:46 np0005603621 nova_compute[247399]: 2026-01-31 08:48:46.427 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 182 op/s
Jan 31 03:48:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:48:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:48.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:48:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:48.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.9 MiB/s wr, 238 op/s
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.013850538863297972 of space, bias 1.0, pg target 4.155161658989392 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004324646961660152 of space, bias 1.0, pg target 1.280095500651405 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003217040061418202 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:48:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 31 03:48:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:50.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:50.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:50 np0005603621 nova_compute[247399]: 2026-01-31 08:48:50.576 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 738 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 52 KiB/s wr, 135 op/s
Jan 31 03:48:51 np0005603621 nova_compute[247399]: 2026-01-31 08:48:51.430 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:52.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 755 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Jan 31 03:48:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:48:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:48:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:54.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:54.260 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:48:54 np0005603621 nova_compute[247399]: 2026-01-31 08:48:54.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:54.262 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:48:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 755 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 235 op/s
Jan 31 03:48:55 np0005603621 nova_compute[247399]: 2026-01-31 08:48:55.578 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:56.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:56 np0005603621 nova_compute[247399]: 2026-01-31 08:48:56.433 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:48:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:56Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:22:c1 10.100.0.13
Jan 31 03:48:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:48:56Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:22:c1 10.100.0.13
Jan 31 03:48:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 757 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.3 MiB/s wr, 259 op/s
Jan 31 03:48:56 np0005603621 nova_compute[247399]: 2026-01-31 08:48:56.726 247403 DEBUG nova.compute.manager [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-changed-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:48:56 np0005603621 nova_compute[247399]: 2026-01-31 08:48:56.727 247403 DEBUG nova.compute.manager [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Refreshing instance network info cache due to event network-changed-b07e666e-f751-41ba-b006-3496f51d6eaa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:48:56 np0005603621 nova_compute[247399]: 2026-01-31 08:48:56.728 247403 DEBUG oslo_concurrency.lockutils [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:48:56 np0005603621 nova_compute[247399]: 2026-01-31 08:48:56.728 247403 DEBUG oslo_concurrency.lockutils [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:48:56 np0005603621 nova_compute[247399]: 2026-01-31 08:48:56.729 247403 DEBUG nova.network.neutron [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Refreshing network info cache for port b07e666e-f751-41ba-b006-3496f51d6eaa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:48:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:48:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:48:58.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:48:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:48:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:48:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:48:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 770 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Jan 31 03:48:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:48:59.264 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:49:00 np0005603621 nova_compute[247399]: 2026-01-31 08:49:00.040 247403 DEBUG nova.network.neutron [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updated VIF entry in instance network info cache for port b07e666e-f751-41ba-b006-3496f51d6eaa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:49:00 np0005603621 nova_compute[247399]: 2026-01-31 08:49:00.041 247403 DEBUG nova.network.neutron [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updating instance_info_cache with network_info: [{"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:49:00 np0005603621 nova_compute[247399]: 2026-01-31 08:49:00.145 247403 DEBUG oslo_concurrency.lockutils [req-7ff36810-5c17-4d6d-bd73-9bd85049c16a req-faec4e5e-6913-4a4e-aec8-7e9da93e4b1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-74f09648-834b-4da1-89a4-bcdcca255908" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:49:00 np0005603621 nova_compute[247399]: 2026-01-31 08:49:00.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:00.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:00 np0005603621 nova_compute[247399]: 2026-01-31 08:49:00.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:49:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 770 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Jan 31 03:49:01 np0005603621 nova_compute[247399]: 2026-01-31 08:49:01.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:49:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:02.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 770 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Jan 31 03:49:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:04.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:04.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 770 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 600 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 31 03:49:05 np0005603621 nova_compute[247399]: 2026-01-31 08:49:05.583 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:06 np0005603621 nova_compute[247399]: 2026-01-31 08:49:06.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:06 np0005603621 nova_compute[247399]: 2026-01-31 08:49:06.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:49:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:49:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:06.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:49:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:06.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:06 np0005603621 nova_compute[247399]: 2026-01-31 08:49:06.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:49:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:49:06 np0005603621 nova_compute[247399]: 2026-01-31 08:49:06.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:06 np0005603621 podman[365668]: 2026-01-31 08:49:06.539105082 +0000 UTC m=+0.043857555 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:49:06 np0005603621 podman[365669]: 2026-01-31 08:49:06.554800658 +0000 UTC m=+0.059168249 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 03:49:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 770 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 813 KiB/s rd, 2.1 MiB/s wr, 116 op/s
Jan 31 03:49:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:07 np0005603621 nova_compute[247399]: 2026-01-31 08:49:07.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e95f6965-d1f6-445f-b124-fd116adbaef8 does not exist
Jan 31 03:49:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 34616c32-01f6-4770-b62b-27a6d37afa90 does not exist
Jan 31 03:49:07 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7e49ea00-d164-481e-9f3d-78dc860c2a95 does not exist
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:07 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.737891623 +0000 UTC m=+0.036042109 container create b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:49:07 np0005603621 systemd[1]: Started libpod-conmon-b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed.scope.
Jan 31 03:49:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.809181043 +0000 UTC m=+0.107331549 container init b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.814531872 +0000 UTC m=+0.112682358 container start b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.720779582 +0000 UTC m=+0.018930088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:49:07 np0005603621 hopeful_cohen[365865]: 167 167
Jan 31 03:49:07 np0005603621 systemd[1]: libpod-b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed.scope: Deactivated successfully.
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.817799475 +0000 UTC m=+0.115949981 container attach b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.820246252 +0000 UTC m=+0.118396738 container died b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 03:49:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-13a1b5af81f1568a59c52e6bbc6d17f47aa7cc22b329967b35fdc3554c5b6e1f-merged.mount: Deactivated successfully.
Jan 31 03:49:07 np0005603621 podman[365849]: 2026-01-31 08:49:07.855905009 +0000 UTC m=+0.154055485 container remove b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:49:07 np0005603621 systemd[1]: libpod-conmon-b43cdbf7760bb6a67c708ca5727fcc539649ade680c37a3ee94f04b0896d30ed.scope: Deactivated successfully.
Jan 31 03:49:07 np0005603621 podman[365890]: 2026-01-31 08:49:07.973035107 +0000 UTC m=+0.035399779 container create d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wiles, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 03:49:08 np0005603621 systemd[1]: Started libpod-conmon-d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398.scope.
Jan 31 03:49:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1c501f4cc31fa3696d667a6cccac72c77433c9dc9d4241128377be2fdce352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1c501f4cc31fa3696d667a6cccac72c77433c9dc9d4241128377be2fdce352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1c501f4cc31fa3696d667a6cccac72c77433c9dc9d4241128377be2fdce352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1c501f4cc31fa3696d667a6cccac72c77433c9dc9d4241128377be2fdce352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1c501f4cc31fa3696d667a6cccac72c77433c9dc9d4241128377be2fdce352/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:08 np0005603621 podman[365890]: 2026-01-31 08:49:08.04597675 +0000 UTC m=+0.108341442 container init d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wiles, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:49:08 np0005603621 podman[365890]: 2026-01-31 08:49:07.955182523 +0000 UTC m=+0.017547215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:49:08 np0005603621 podman[365890]: 2026-01-31 08:49:08.05233843 +0000 UTC m=+0.114703102 container start d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:49:08 np0005603621 podman[365890]: 2026-01-31 08:49:08.05516853 +0000 UTC m=+0.117533222 container attach d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:49:08 np0005603621 nova_compute[247399]: 2026-01-31 08:49:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:08 np0005603621 nova_compute[247399]: 2026-01-31 08:49:08.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:49:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:08.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:08.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:08 np0005603621 nova_compute[247399]: 2026-01-31 08:49:08.370 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.6 MiB/s wr, 140 op/s
Jan 31 03:49:08 np0005603621 busy_wiles[365907]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:49:08 np0005603621 busy_wiles[365907]: --> relative data size: 1.0
Jan 31 03:49:08 np0005603621 busy_wiles[365907]: --> All data devices are unavailable
Jan 31 03:49:08 np0005603621 systemd[1]: libpod-d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398.scope: Deactivated successfully.
Jan 31 03:49:08 np0005603621 conmon[365907]: conmon d0e365f75ed92a8a7340 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398.scope/container/memory.events
Jan 31 03:49:08 np0005603621 podman[365890]: 2026-01-31 08:49:08.823372255 +0000 UTC m=+0.885736927 container died d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wiles, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:49:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fb1c501f4cc31fa3696d667a6cccac72c77433c9dc9d4241128377be2fdce352-merged.mount: Deactivated successfully.
Jan 31 03:49:08 np0005603621 podman[365890]: 2026-01-31 08:49:08.868651285 +0000 UTC m=+0.931015947 container remove d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:49:08 np0005603621 systemd[1]: libpod-conmon-d0e365f75ed92a8a73406ce292b8c43c1076c4ebb38ef0187112ce0ef4a91398.scope: Deactivated successfully.
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.338627024 +0000 UTC m=+0.034293664 container create 8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 03:49:09 np0005603621 systemd[1]: Started libpod-conmon-8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048.scope.
Jan 31 03:49:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.395067596 +0000 UTC m=+0.090734266 container init 8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.39961628 +0000 UTC m=+0.095282910 container start 8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.40247338 +0000 UTC m=+0.098140020 container attach 8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:49:09 np0005603621 ecstatic_easley[366090]: 167 167
Jan 31 03:49:09 np0005603621 systemd[1]: libpod-8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048.scope: Deactivated successfully.
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.403321217 +0000 UTC m=+0.098987857 container died 8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:49:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ba28d72d77fd27952f6f2c43b3b89a2bd3e6de99526e3207e7aed80457e2f04a-merged.mount: Deactivated successfully.
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.324090915 +0000 UTC m=+0.019757585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:49:09 np0005603621 podman[366074]: 2026-01-31 08:49:09.430746522 +0000 UTC m=+0.126413162 container remove 8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 03:49:09 np0005603621 systemd[1]: libpod-conmon-8a727ccd91c0706c91c78cc6058b87545d62313247fdd473682f3689b14a3048.scope: Deactivated successfully.
Jan 31 03:49:09 np0005603621 podman[366114]: 2026-01-31 08:49:09.54911733 +0000 UTC m=+0.032674603 container create d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:49:09 np0005603621 systemd[1]: Started libpod-conmon-d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2.scope.
Jan 31 03:49:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7739f1c556064e108be406082f9ef379589678e0d459a16bdc5b4d0f7fb4050b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7739f1c556064e108be406082f9ef379589678e0d459a16bdc5b4d0f7fb4050b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7739f1c556064e108be406082f9ef379589678e0d459a16bdc5b4d0f7fb4050b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7739f1c556064e108be406082f9ef379589678e0d459a16bdc5b4d0f7fb4050b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:09 np0005603621 podman[366114]: 2026-01-31 08:49:09.617489008 +0000 UTC m=+0.101046281 container init d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:49:09 np0005603621 podman[366114]: 2026-01-31 08:49:09.622538459 +0000 UTC m=+0.106095732 container start d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:49:09 np0005603621 podman[366114]: 2026-01-31 08:49:09.626531784 +0000 UTC m=+0.110089057 container attach d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_antonelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:49:09 np0005603621 podman[366114]: 2026-01-31 08:49:09.535268523 +0000 UTC m=+0.018825826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:49:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:10.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:10.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]: {
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:    "0": [
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:        {
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "devices": [
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "/dev/loop3"
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            ],
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "lv_name": "ceph_lv0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "lv_size": "7511998464",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "name": "ceph_lv0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "tags": {
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.cluster_name": "ceph",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.crush_device_class": "",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.encrypted": "0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.osd_id": "0",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.type": "block",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:                "ceph.vdo": "0"
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            },
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "type": "block",
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:            "vg_name": "ceph_vg0"
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:        }
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]:    ]
Jan 31 03:49:10 np0005603621 epic_antonelli[366130]: }
Jan 31 03:49:10 np0005603621 systemd[1]: libpod-d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2.scope: Deactivated successfully.
Jan 31 03:49:10 np0005603621 podman[366114]: 2026-01-31 08:49:10.342336724 +0000 UTC m=+0.825893997 container died d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 03:49:10 np0005603621 nova_compute[247399]: 2026-01-31 08:49:10.366 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:10 np0005603621 nova_compute[247399]: 2026-01-31 08:49:10.585 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7739f1c556064e108be406082f9ef379589678e0d459a16bdc5b4d0f7fb4050b-merged.mount: Deactivated successfully.
Jan 31 03:49:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 41 KiB/s wr, 90 op/s
Jan 31 03:49:10 np0005603621 podman[366114]: 2026-01-31 08:49:10.992637727 +0000 UTC m=+1.476195000 container remove d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:49:11 np0005603621 systemd[1]: libpod-conmon-d132136a8d4f0f4ad2524bb38bae23dfad9e1f93702fafe7837e4ca2d7edd0f2.scope: Deactivated successfully.
Jan 31 03:49:11 np0005603621 nova_compute[247399]: 2026-01-31 08:49:11.487 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:11 np0005603621 podman[366289]: 2026-01-31 08:49:11.429937124 +0000 UTC m=+0.021052955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:49:11 np0005603621 podman[366289]: 2026-01-31 08:49:11.632198211 +0000 UTC m=+0.223314022 container create a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kepler, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:49:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:12 np0005603621 systemd[1]: Started libpod-conmon-a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea.scope.
Jan 31 03:49:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:12 np0005603621 nova_compute[247399]: 2026-01-31 08:49:12.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:49:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:12.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:12 np0005603621 nova_compute[247399]: 2026-01-31 08:49:12.317 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:12 np0005603621 nova_compute[247399]: 2026-01-31 08:49:12.317 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:12 np0005603621 nova_compute[247399]: 2026-01-31 08:49:12.318 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:12 np0005603621 nova_compute[247399]: 2026-01-31 08:49:12.318 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:49:12 np0005603621 nova_compute[247399]: 2026-01-31 08:49:12.318 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 46 KiB/s wr, 118 op/s
Jan 31 03:49:12 np0005603621 podman[366289]: 2026-01-31 08:49:12.699705146 +0000 UTC m=+1.290820987 container init a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 03:49:12 np0005603621 podman[366289]: 2026-01-31 08:49:12.707301976 +0000 UTC m=+1.298417797 container start a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kepler, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:49:12 np0005603621 wonderful_kepler[366306]: 167 167
Jan 31 03:49:12 np0005603621 systemd[1]: libpod-a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea.scope: Deactivated successfully.
Jan 31 03:49:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104633721' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.174 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.856s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.544 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.544 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.547 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.547 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.698 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.699 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3707MB free_disk=20.67287826538086GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.700 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:13 np0005603621 nova_compute[247399]: 2026-01-31 08:49:13.700 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:13 np0005603621 podman[366289]: 2026-01-31 08:49:13.941703611 +0000 UTC m=+2.532819442 container attach a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:49:13 np0005603621 podman[366289]: 2026-01-31 08:49:13.943077084 +0000 UTC m=+2.534192895 container died a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:49:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:14.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.391 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3308d345-19b7-4fbb-bd81-631135649e7d actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.391 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 74f09648-834b-4da1-89a4-bcdcca255908 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.392 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.392 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.518 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 03:49:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 34 KiB/s wr, 95 op/s
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.701 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.701 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.729 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 03:49:14 np0005603621 nova_compute[247399]: 2026-01-31 08:49:14.768 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 03:49:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:49:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298708248' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:49:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:49:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298708248' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:49:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3153ccf0ec4400bbc6d83f4df0e0f71b137ecaa74ce11bf5064566a885505a61-merged.mount: Deactivated successfully.
Jan 31 03:49:15 np0005603621 nova_compute[247399]: 2026-01-31 08:49:15.587 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.124 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:49:16 np0005603621 podman[366289]: 2026-01-31 08:49:16.192121905 +0000 UTC m=+4.783237716 container remove a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:49:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:16.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:16 np0005603621 systemd[1]: libpod-conmon-a259afe8127dde2ff3d63bf99ffd96705a9a88198777eb61371338ca7b800eea.scope: Deactivated successfully.
Jan 31 03:49:16 np0005603621 podman[366426]: 2026-01-31 08:49:16.309094888 +0000 UTC m=+0.021046165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:49:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 35 KiB/s wr, 96 op/s
Jan 31 03:49:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236188609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.765 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.771 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.778 247403 DEBUG oslo_concurrency.lockutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.778 247403 DEBUG oslo_concurrency.lockutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.799 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:49:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.903 247403 DEBUG nova.objects.instance [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'flavor' on Instance uuid 74f09648-834b-4da1-89a4-bcdcca255908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.917 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:49:16 np0005603621 nova_compute[247399]: 2026-01-31 08:49:16.917 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:49:17 np0005603621 podman[366426]: 2026-01-31 08:49:17.011390262 +0000 UTC m=+0.723341509 container create d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:49:17 np0005603621 nova_compute[247399]: 2026-01-31 08:49:17.048 247403 DEBUG oslo_concurrency.lockutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:49:17 np0005603621 systemd[1]: Started libpod-conmon-d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db.scope.
Jan 31 03:49:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca0e9069a3d4b1d2675f8b5b9d58940e2c530a9fdd9943f2a655e78f5161d3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca0e9069a3d4b1d2675f8b5b9d58940e2c530a9fdd9943f2a655e78f5161d3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca0e9069a3d4b1d2675f8b5b9d58940e2c530a9fdd9943f2a655e78f5161d3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca0e9069a3d4b1d2675f8b5b9d58940e2c530a9fdd9943f2a655e78f5161d3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:17 np0005603621 podman[366426]: 2026-01-31 08:49:17.752292886 +0000 UTC m=+1.464244163 container init d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_robinson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:49:17 np0005603621 podman[366426]: 2026-01-31 08:49:17.759972939 +0000 UTC m=+1.471924186 container start d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_robinson, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:49:17 np0005603621 nova_compute[247399]: 2026-01-31 08:49:17.795 247403 DEBUG oslo_concurrency.lockutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:49:17 np0005603621 nova_compute[247399]: 2026-01-31 08:49:17.796 247403 DEBUG oslo_concurrency.lockutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:49:17 np0005603621 nova_compute[247399]: 2026-01-31 08:49:17.796 247403 INFO nova.compute.manager [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Attaching volume 528b148a-172a-43e7-8be2-21819c2d44e5 to /dev/vdb
Jan 31 03:49:17 np0005603621 nova_compute[247399]: 2026-01-31 08:49:17.994 247403 DEBUG os_brick.utils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 03:49:17 np0005603621 nova_compute[247399]: 2026-01-31 08:49:17.995 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.004 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.004 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[69a69181-39e9-41a6-967a-31dd710540d7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.005 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.013 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.014 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[b52c7b15-59a5-4085-8a26-e27b99df1705]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.015 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:49:18 np0005603621 podman[366426]: 2026-01-31 08:49:18.018481821 +0000 UTC m=+1.730433148 container attach d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_robinson, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.024 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.024 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[ca757a02-f866-4c22-b029-fe187c67781c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.025 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[371cfb4b-fec2-47d4-b0cf-1ffc5f2ccfe1]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.026 247403 DEBUG oslo_concurrency.processutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.046 247403 DEBUG oslo_concurrency.processutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.049 247403 DEBUG os_brick.initiator.connectors.lightos [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.050 247403 DEBUG os_brick.initiator.connectors.lightos [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.050 247403 DEBUG os_brick.initiator.connectors.lightos [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.050 247403 DEBUG os_brick.utils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.051 247403 DEBUG nova.virt.block_device [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updating existing volume attachment record: b098b19f-6dce-4976-a847-3c46824f709e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 03:49:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:18.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:18.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]: {
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:        "osd_id": 0,
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:        "type": "bluestore"
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]:    }
Jan 31 03:49:18 np0005603621 crazy_robinson[366444]: }
Jan 31 03:49:18 np0005603621 systemd[1]: libpod-d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db.scope: Deactivated successfully.
Jan 31 03:49:18 np0005603621 podman[366426]: 2026-01-31 08:49:18.605892798 +0000 UTC m=+2.317844055 container died d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:49:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 43 KiB/s wr, 83 op/s
Jan 31 03:49:18 np0005603621 nova_compute[247399]: 2026-01-31 08:49:18.918 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:19 np0005603621 nova_compute[247399]: 2026-01-31 08:49:19.071 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:19 np0005603621 nova_compute[247399]: 2026-01-31 08:49:19.071 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:19 np0005603621 nova_compute[247399]: 2026-01-31 08:49:19.073 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:49:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:20.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:20.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cca0e9069a3d4b1d2675f8b5b9d58940e2c530a9fdd9943f2a655e78f5161d3b-merged.mount: Deactivated successfully.
Jan 31 03:49:20 np0005603621 nova_compute[247399]: 2026-01-31 08:49:20.588 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 772 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 847 KiB/s rd, 16 KiB/s wr, 35 op/s
Jan 31 03:49:20 np0005603621 nova_compute[247399]: 2026-01-31 08:49:20.709 247403 DEBUG nova.objects.instance [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'flavor' on Instance uuid 74f09648-834b-4da1-89a4-bcdcca255908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:49:20 np0005603621 nova_compute[247399]: 2026-01-31 08:49:20.769 247403 DEBUG nova.virt.libvirt.driver [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Attempting to attach volume 528b148a-172a-43e7-8be2-21819c2d44e5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:49:20 np0005603621 nova_compute[247399]: 2026-01-31 08:49:20.773 247403 DEBUG nova.virt.libvirt.guest [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-528b148a-172a-43e7-8be2-21819c2d44e5">
Jan 31 03:49:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:49:20 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:49:20 np0005603621 nova_compute[247399]:  <serial>528b148a-172a-43e7-8be2-21819c2d44e5</serial>
Jan 31 03:49:20 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:49:20 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:49:20 np0005603621 nova_compute[247399]: 2026-01-31 08:49:20.846 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:21 np0005603621 podman[366426]: 2026-01-31 08:49:21.072935941 +0000 UTC m=+4.784887188 container remove d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_robinson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:49:21 np0005603621 systemd[1]: libpod-conmon-d70b9c1f5e7688fd4129ddef9c1c675b6ef138d09c397cfefe660f5a1cdf74db.scope: Deactivated successfully.
Jan 31 03:49:21 np0005603621 nova_compute[247399]: 2026-01-31 08:49:21.126 247403 DEBUG nova.virt.libvirt.driver [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:49:21 np0005603621 nova_compute[247399]: 2026-01-31 08:49:21.127 247403 DEBUG nova.virt.libvirt.driver [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:49:21 np0005603621 nova_compute[247399]: 2026-01-31 08:49:21.127 247403 DEBUG nova.virt.libvirt.driver [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:49:21 np0005603621 nova_compute[247399]: 2026-01-31 08:49:21.127 247403 DEBUG nova.virt.libvirt.driver [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No VIF found with MAC fa:16:3e:67:22:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:49:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:49:21 np0005603621 nova_compute[247399]: 2026-01-31 08:49:21.532 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:21 np0005603621 nova_compute[247399]: 2026-01-31 08:49:21.890 247403 DEBUG oslo_concurrency.lockutils [None req-d9b12320-17b5-43ff-a196-219a2e72f786 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:49:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:22.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:22.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 785 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 863 KiB/s rd, 1.4 MiB/s wr, 60 op/s
Jan 31 03:49:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 884fd582-0aff-4262-a4f4-551194b0c4f9 does not exist
Jan 31 03:49:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 49e6e82b-b84a-462e-b8ab-4244bf53e9d0 does not exist
Jan 31 03:49:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6e39fe6f-e9c3-4260-86c0-8a9330bb0789 does not exist
Jan 31 03:49:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:24.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 785 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Jan 31 03:49:25 np0005603621 nova_compute[247399]: 2026-01-31 08:49:25.590 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:25 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.259 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.259 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:26.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.295 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.452 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.453 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.460 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.461 247403 INFO nova.compute.claims [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.574 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 801 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.2 MiB/s wr, 48 op/s
Jan 31 03:49:26 np0005603621 nova_compute[247399]: 2026-01-31 08:49:26.850 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261643337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:27 np0005603621 nova_compute[247399]: 2026-01-31 08:49:27.972 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:27 np0005603621 nova_compute[247399]: 2026-01-31 08:49:27.977 247403 DEBUG nova.compute.provider_tree [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:49:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:28 np0005603621 nova_compute[247399]: 2026-01-31 08:49:28.335 247403 DEBUG nova.scheduler.client.report [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:49:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 840 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 3.8 MiB/s wr, 61 op/s
Jan 31 03:49:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:30.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.348 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.349 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:49:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:30.530 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:30.531 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:30.531 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.538 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.539 247403 DEBUG nova.network.neutron [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.565 247403 INFO nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.600 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:49:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 840 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 3.8 MiB/s wr, 54 op/s
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.800 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.801 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.802 247403 INFO nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Creating image(s)#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.835 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.861 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.887 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.891 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.936 247403 DEBUG nova.policy [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1c6e7eff11b435a81429826a682b32f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bfe11bd9d694684b527666e2c378eed', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.953 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.954 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.955 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.955 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.976 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:30 np0005603621 nova_compute[247399]: 2026-01-31 08:49:30.980 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 1e96eda0-223d-45b7-b1a7-573b51604c50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.576 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.628 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 1e96eda0-223d-45b7-b1a7-573b51604c50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.700 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.933 247403 DEBUG nova.objects.instance [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.963 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.963 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Ensure instance console log exists: /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.964 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.964 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:31 np0005603621 nova_compute[247399]: 2026-01-31 08:49:31.964 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:32.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 875 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 5.2 MiB/s wr, 125 op/s
Jan 31 03:49:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:33.399 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:49:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:33.400 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:49:33 np0005603621 nova_compute[247399]: 2026-01-31 08:49:33.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:34.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 875 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 3.8 MiB/s wr, 100 op/s
Jan 31 03:49:35 np0005603621 nova_compute[247399]: 2026-01-31 08:49:35.346 247403 DEBUG nova.network.neutron [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Successfully created port: 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:49:35 np0005603621 nova_compute[247399]: 2026-01-31 08:49:35.634 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:36.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:36 np0005603621 nova_compute[247399]: 2026-01-31 08:49:36.579 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 898 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 433 KiB/s rd, 4.3 MiB/s wr, 107 op/s
Jan 31 03:49:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:37 np0005603621 podman[366803]: 2026-01-31 08:49:37.498502007 +0000 UTC m=+0.050300049 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:49:37 np0005603621 podman[366804]: 2026-01-31 08:49:37.551478469 +0000 UTC m=+0.103489698 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 03:49:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:38.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:38.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.348413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849378348443, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1151, "num_deletes": 251, "total_data_size": 1825294, "memory_usage": 1856208, "flush_reason": "Manual Compaction"}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849378383221, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1794342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65042, "largest_seqno": 66192, "table_properties": {"data_size": 1788844, "index_size": 2893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12368, "raw_average_key_size": 20, "raw_value_size": 1777646, "raw_average_value_size": 2899, "num_data_blocks": 127, "num_entries": 613, "num_filter_entries": 613, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849276, "oldest_key_time": 1769849276, "file_creation_time": 1769849378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 34848 microseconds, and 4382 cpu microseconds.
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.383258) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1794342 bytes OK
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.383277) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.386851) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.386896) EVENT_LOG_v1 {"time_micros": 1769849378386885, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.386926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1820010, prev total WAL file size 1820010, number of live WAL files 2.
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.387440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1752KB)], [149(10MB)]
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849378387475, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 13024954, "oldest_snapshot_seqno": -1}
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:49:38
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups']
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 9062 keys, 11040638 bytes, temperature: kUnknown
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849378674438, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 11040638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10983303, "index_size": 33606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22661, "raw_key_size": 239826, "raw_average_key_size": 26, "raw_value_size": 10825638, "raw_average_value_size": 1194, "num_data_blocks": 1276, "num_entries": 9062, "num_filter_entries": 9062, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.674720) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 11040638 bytes
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.681340) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.4 rd, 38.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.7 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(13.4) write-amplify(6.2) OK, records in: 9579, records dropped: 517 output_compression: NoCompression
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.681410) EVENT_LOG_v1 {"time_micros": 1769849378681385, "job": 92, "event": "compaction_finished", "compaction_time_micros": 287050, "compaction_time_cpu_micros": 21080, "output_level": 6, "num_output_files": 1, "total_output_size": 11040638, "num_input_records": 9579, "num_output_records": 9062, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849378682344, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849378683617, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.387359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.683712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.683719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.683721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.683723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:49:38.683725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:49:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 898 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 141 op/s
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:49:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:49:39 np0005603621 nova_compute[247399]: 2026-01-31 08:49:39.747 247403 DEBUG nova.network.neutron [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Successfully updated port: 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:49:39 np0005603621 nova_compute[247399]: 2026-01-31 08:49:39.800 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:49:39 np0005603621 nova_compute[247399]: 2026-01-31 08:49:39.801 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:49:39 np0005603621 nova_compute[247399]: 2026-01-31 08:49:39.801 247403 DEBUG nova.network.neutron [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:49:40 np0005603621 nova_compute[247399]: 2026-01-31 08:49:40.192 247403 DEBUG nova.compute.manager [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-changed-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:40 np0005603621 nova_compute[247399]: 2026-01-31 08:49:40.193 247403 DEBUG nova.compute.manager [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Refreshing instance network info cache due to event network-changed-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:49:40 np0005603621 nova_compute[247399]: 2026-01-31 08:49:40.193 247403 DEBUG oslo_concurrency.lockutils [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:49:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:40.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:40 np0005603621 nova_compute[247399]: 2026-01-31 08:49:40.637 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 898 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Jan 31 03:49:40 np0005603621 nova_compute[247399]: 2026-01-31 08:49:40.979 247403 DEBUG nova.network.neutron [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:49:41 np0005603621 nova_compute[247399]: 2026-01-31 08:49:41.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.189 247403 DEBUG nova.compute.manager [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:49:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:42.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.336 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.337 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.367 247403 DEBUG nova.objects.instance [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'pci_requests' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.395 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.396 247403 INFO nova.compute.claims [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.396 247403 DEBUG nova.objects.instance [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'resources' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.431 247403 DEBUG nova.objects.instance [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'pci_devices' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.574 247403 INFO nova.compute.resource_tracker [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating resource usage from migration 9ab432c0-3f96-470c-ac27-8c3e3291f927#033[00m
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.575 247403 DEBUG nova.compute.resource_tracker [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Starting to track incoming migration 9ab432c0-3f96-470c-ac27-8c3e3291f927 with flavor f75c4aee-d826-4343-a7e3-f06a4b21de52 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:49:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 944 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 179 op/s
Jan 31 03:49:42 np0005603621 nova_compute[247399]: 2026-01-31 08:49:42.816 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:49:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838563821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.233 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.238 247403 DEBUG nova.compute.provider_tree [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.265 247403 DEBUG nova.scheduler.client.report [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.319 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.319 247403 INFO nova.compute.manager [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Migrating#033[00m
Jan 31 03:49:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:43.403 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.885 247403 DEBUG nova.network.neutron [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updating instance_info_cache with network_info: [{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.933 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.933 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance network_info: |[{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.934 247403 DEBUG oslo_concurrency.lockutils [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.934 247403 DEBUG nova.network.neutron [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Refreshing network info cache for port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.937 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Start _get_guest_xml network_info=[{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.942 247403 WARNING nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.948 247403 DEBUG nova.virt.libvirt.host [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.949 247403 DEBUG nova.virt.libvirt.host [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.953 247403 DEBUG nova.virt.libvirt.host [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.953 247403 DEBUG nova.virt.libvirt.host [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.954 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.955 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.955 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.955 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.956 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.956 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.956 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.956 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.957 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.957 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.957 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.957 247403 DEBUG nova.virt.hardware [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:49:43 np0005603621 nova_compute[247399]: 2026-01-31 08:49:43.960 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:49:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:44.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:44.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:49:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362059368' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.403 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.427 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.431 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 944 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 108 op/s
Jan 31 03:49:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:49:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549378499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.849 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.851 247403 DEBUG nova.virt.libvirt.vif [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:49:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1394409259',display_name='tempest-TestNetworkAdvancedServerOps-server-1394409259',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1394409259',id=167,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/lAD4OCru9xRbM5RwUuWIkZogXrbc5YYzvQOqv8vzq3yHSuGXSkcCz+Uq294UgXDvDOWlIVlc+KtPz2i57cLrRAb2n00QBhxyN/0ozf8lbd5nBtA8rBOi6LAdh2ntUJw==',key_name='tempest-TestNetworkAdvancedServerOps-1864686962',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-46zn1h0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:49:30Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e96eda0-223d-45b7-b1a7-573b51604c50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.851 247403 DEBUG nova.network.os_vif_util [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.852 247403 DEBUG nova.network.os_vif_util [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.853 247403 DEBUG nova.objects.instance [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.894 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <uuid>1e96eda0-223d-45b7-b1a7-573b51604c50</uuid>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <name>instance-000000a7</name>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1394409259</nova:name>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:49:43</nova:creationTime>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <nova:port uuid="412e7f4c-bd7f-493a-b50e-cf3e36e6dea8">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <entry name="serial">1e96eda0-223d-45b7-b1a7-573b51604c50</entry>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <entry name="uuid">1e96eda0-223d-45b7-b1a7-573b51604c50</entry>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/1e96eda0-223d-45b7-b1a7-573b51604c50_disk">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:fe:e3:f6"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <target dev="tap412e7f4c-bd"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/console.log" append="off"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:49:44 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:49:44 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:49:44 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:49:44 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.895 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Preparing to wait for external event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.896 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.896 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.896 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.897 247403 DEBUG nova.virt.libvirt.vif [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:49:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1394409259',display_name='tempest-TestNetworkAdvancedServerOps-server-1394409259',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1394409259',id=167,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/lAD4OCru9xRbM5RwUuWIkZogXrbc5YYzvQOqv8vzq3yHSuGXSkcCz+Uq294UgXDvDOWlIVlc+KtPz2i57cLrRAb2n00QBhxyN/0ozf8lbd5nBtA8rBOi6LAdh2ntUJw==',key_name='tempest-TestNetworkAdvancedServerOps-1864686962',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-46zn1h0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:49:30Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e96eda0-223d-45b7-b1a7-573b51604c50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.897 247403 DEBUG nova.network.os_vif_util [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.898 247403 DEBUG nova.network.os_vif_util [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.898 247403 DEBUG os_vif [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.899 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.899 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.902 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap412e7f4c-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.903 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap412e7f4c-bd, col_values=(('external_ids', {'iface-id': '412e7f4c-bd7f-493a-b50e-cf3e36e6dea8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:e3:f6', 'vm-uuid': '1e96eda0-223d-45b7-b1a7-573b51604c50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:44 np0005603621 NetworkManager[49013]: <info>  [1769849384.9057] manager: (tap412e7f4c-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:44 np0005603621 nova_compute[247399]: 2026-01-31 08:49:44.912 247403 INFO os_vif [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd')#033[00m
Jan 31 03:49:45 np0005603621 nova_compute[247399]: 2026-01-31 08:49:45.041 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:49:45 np0005603621 nova_compute[247399]: 2026-01-31 08:49:45.041 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:49:45 np0005603621 nova_compute[247399]: 2026-01-31 08:49:45.041 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:fe:e3:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:49:45 np0005603621 nova_compute[247399]: 2026-01-31 08:49:45.042 247403 INFO nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Using config drive#033[00m
Jan 31 03:49:45 np0005603621 nova_compute[247399]: 2026-01-31 08:49:45.064 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:45 np0005603621 nova_compute[247399]: 2026-01-31 08:49:45.639 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:46 np0005603621 nova_compute[247399]: 2026-01-31 08:49:46.167 247403 INFO nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Creating config drive at /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config#033[00m
Jan 31 03:49:46 np0005603621 nova_compute[247399]: 2026-01-31 08:49:46.170 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8sqkvtoe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:46 np0005603621 nova_compute[247399]: 2026-01-31 08:49:46.295 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8sqkvtoe" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:46.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:46.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:46 np0005603621 nova_compute[247399]: 2026-01-31 08:49:46.319 247403 DEBUG nova.storage.rbd_utils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:49:46 np0005603621 nova_compute[247399]: 2026-01-31 08:49:46.322 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 944 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 109 op/s
Jan 31 03:49:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.268 247403 DEBUG nova.network.neutron [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updated VIF entry in instance network info cache for port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.269 247403 DEBUG nova.network.neutron [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updating instance_info_cache with network_info: [{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.483 247403 DEBUG oslo_concurrency.processutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.483 247403 INFO nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deleting local config drive /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config because it was imported into RBD.#033[00m
Jan 31 03:49:47 np0005603621 kernel: tap412e7f4c-bd: entered promiscuous mode
Jan 31 03:49:47 np0005603621 NetworkManager[49013]: <info>  [1769849387.5217] manager: (tap412e7f4c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/307)
Jan 31 03:49:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:49:47Z|00680|binding|INFO|Claiming lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for this chassis.
Jan 31 03:49:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:49:47Z|00681|binding|INFO|412e7f4c-bd7f-493a-b50e-cf3e36e6dea8: Claiming fa:16:3e:fe:e3:f6 10.100.0.14
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.522 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:49:47Z|00682|binding|INFO|Setting lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 ovn-installed in OVS
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.531 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603621 systemd-machined[212769]: New machine qemu-82-instance-000000a7.
Jan 31 03:49:47 np0005603621 systemd-udevd[367008]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:49:47 np0005603621 NetworkManager[49013]: <info>  [1769849387.5593] device (tap412e7f4c-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:49:47 np0005603621 NetworkManager[49013]: <info>  [1769849387.5602] device (tap412e7f4c-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:49:47 np0005603621 systemd[1]: Started Virtual Machine qemu-82-instance-000000a7.
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.566 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e3:f6 10.100.0.14'], port_security=['fa:16:3e:fe:e3:f6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e96eda0-223d-45b7-b1a7-573b51604c50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c64fb20-295a-4907-91a3-2d6622028082', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13de6dfa-d3f8-48e3-9375-41e3868371bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c13c6f4-6161-4053-9f6c-b8f9a12a63dc, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:49:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:49:47Z|00683|binding|INFO|Setting lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 up in Southbound
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.567 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 in datapath 5c64fb20-295a-4907-91a3-2d6622028082 bound to our chassis#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.569 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c64fb20-295a-4907-91a3-2d6622028082#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.578 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e80b68ba-54ce-419c-979e-2d8d68532c12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.578 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c64fb20-21 in ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.580 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c64fb20-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.581 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[627079c9-8d60-4455-8f11-0334b0578aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.582 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcfd1f9-9282-4458-ae2d-b3d0421463be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.590 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[010597d1-6cc6-4d11-8d45-c13bac3a0a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.600 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c15cb5fd-133b-4dfb-a466-b850293ca7fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.618 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[66fee313-fd45-4633-9e64-8651fa63ea68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 NetworkManager[49013]: <info>  [1769849387.6243] manager: (tap5c64fb20-20): new Veth device (/org/freedesktop/NetworkManager/Devices/308)
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.623 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fe233118-c1dc-4b62-b62d-f9db3f392d1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.644 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[70c1ee86-d1b5-4283-9360-2f4544afccb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.646 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[64b7a061-1b9a-44ca-8cf7-7392fefb35f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 NetworkManager[49013]: <info>  [1769849387.6618] device (tap5c64fb20-20): carrier: link connected
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.666 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[534d4775-21cb-45b5-bd0e-118bfb3dce6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.669 247403 DEBUG oslo_concurrency.lockutils [req-6f92db35-8f3c-456a-9060-8202e0325534 req-39aec078-e58a-4630-b064-bc5bc1ec1ef4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.679 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[82b83e09-8777-4391-a059-6569734aa5f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c64fb20-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:94:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869317, 'reachable_time': 42764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367041, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.690 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[242ef99a-b89a-4387-94ce-43b98f0781e2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:94a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869317, 'tstamp': 869317}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367042, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.701 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3f883026-05dc-4329-8d75-2e3f1102fb0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c64fb20-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:94:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869317, 'reachable_time': 42764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367043, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.725 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[08d27ee4-7243-493b-bea2-a012d0744658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.768 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[abd52487-b98d-4b6f-a7a0-c57a1eb879cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.769 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c64fb20-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.770 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.770 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c64fb20-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:47 np0005603621 NetworkManager[49013]: <info>  [1769849387.7738] manager: (tap5c64fb20-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/309)
Jan 31 03:49:47 np0005603621 kernel: tap5c64fb20-20: entered promiscuous mode
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.775 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.776 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c64fb20-20, col_values=(('external_ids', {'iface-id': '36d69125-9445-4eb0-8c79-64082311a234'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:49:47 np0005603621 ovn_controller[149152]: 2026-01-31T08:49:47Z|00684|binding|INFO|Releasing lport 36d69125-9445-4eb0-8c79-64082311a234 from this chassis (sb_readonly=0)
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.777 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.778 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.779 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c64fb20-295a-4907-91a3-2d6622028082.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c64fb20-295a-4907-91a3-2d6622028082.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.780 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80bcf603-5795-4eee-ae4e-047d7a20b891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.781 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-5c64fb20-295a-4907-91a3-2d6622028082
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/5c64fb20-295a-4907-91a3-2d6622028082.pid.haproxy
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 5c64fb20-295a-4907-91a3-2d6622028082
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:49:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:49:47.783 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'env', 'PROCESS_TAG=haproxy-5c64fb20-295a-4907-91a3-2d6622028082', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c64fb20-295a-4907-91a3-2d6622028082.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:49:47 np0005603621 nova_compute[247399]: 2026-01-31 08:49:47.783 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:48 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:49:48 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:49:48 np0005603621 systemd-logind[818]: New session 69 of user nova.
Jan 31 03:49:48 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:49:48 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:49:48 np0005603621 systemd[367082]: Queued start job for default target Main User Target.
Jan 31 03:49:48 np0005603621 podman[367075]: 2026-01-31 08:49:48.112146029 +0000 UTC m=+0.019788766 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:49:48 np0005603621 systemd[367082]: Created slice User Application Slice.
Jan 31 03:49:48 np0005603621 systemd[367082]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:49:48 np0005603621 systemd[367082]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:49:48 np0005603621 systemd[367082]: Reached target Paths.
Jan 31 03:49:48 np0005603621 systemd[367082]: Reached target Timers.
Jan 31 03:49:48 np0005603621 systemd[367082]: Starting D-Bus User Message Bus Socket...
Jan 31 03:49:48 np0005603621 systemd[367082]: Starting Create User's Volatile Files and Directories...
Jan 31 03:49:48 np0005603621 systemd[367082]: Finished Create User's Volatile Files and Directories.
Jan 31 03:49:48 np0005603621 systemd[367082]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:49:48 np0005603621 systemd[367082]: Reached target Sockets.
Jan 31 03:49:48 np0005603621 systemd[367082]: Reached target Basic System.
Jan 31 03:49:48 np0005603621 systemd[367082]: Reached target Main User Target.
Jan 31 03:49:48 np0005603621 systemd[367082]: Startup finished in 106ms.
Jan 31 03:49:48 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:49:48 np0005603621 systemd[1]: Started Session 69 of User nova.
Jan 31 03:49:48 np0005603621 nova_compute[247399]: 2026-01-31 08:49:48.294 247403 DEBUG nova.compute.manager [req-baae7378-cc45-463f-88d2-cf528fa7b516 req-1bfd2dfe-0e2c-489e-a805-1b56f60520a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:48 np0005603621 nova_compute[247399]: 2026-01-31 08:49:48.295 247403 DEBUG oslo_concurrency.lockutils [req-baae7378-cc45-463f-88d2-cf528fa7b516 req-1bfd2dfe-0e2c-489e-a805-1b56f60520a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:48 np0005603621 nova_compute[247399]: 2026-01-31 08:49:48.295 247403 DEBUG oslo_concurrency.lockutils [req-baae7378-cc45-463f-88d2-cf528fa7b516 req-1bfd2dfe-0e2c-489e-a805-1b56f60520a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:48 np0005603621 nova_compute[247399]: 2026-01-31 08:49:48.295 247403 DEBUG oslo_concurrency.lockutils [req-baae7378-cc45-463f-88d2-cf528fa7b516 req-1bfd2dfe-0e2c-489e-a805-1b56f60520a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:48 np0005603621 nova_compute[247399]: 2026-01-31 08:49:48.295 247403 DEBUG nova.compute.manager [req-baae7378-cc45-463f-88d2-cf528fa7b516 req-1bfd2dfe-0e2c-489e-a805-1b56f60520a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Processing event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:49:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:49:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:49:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:48.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:49:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:48.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:48 np0005603621 systemd[1]: session-69.scope: Deactivated successfully.
Jan 31 03:49:48 np0005603621 systemd-logind[818]: Session 69 logged out. Waiting for processes to exit.
Jan 31 03:49:48 np0005603621 systemd-logind[818]: Removed session 69.
Jan 31 03:49:48 np0005603621 systemd-logind[818]: New session 71 of user nova.
Jan 31 03:49:48 np0005603621 systemd[1]: Started Session 71 of User nova.
Jan 31 03:49:48 np0005603621 podman[367075]: 2026-01-31 08:49:48.457424851 +0000 UTC m=+0.365067578 container create 7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:49:48 np0005603621 systemd[1]: session-71.scope: Deactivated successfully.
Jan 31 03:49:48 np0005603621 systemd-logind[818]: Session 71 logged out. Waiting for processes to exit.
Jan 31 03:49:48 np0005603621 systemd-logind[818]: Removed session 71.
Jan 31 03:49:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:49:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 14K writes, 66K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1555 writes, 6985 keys, 1554 commit groups, 1.0 writes per commit group, ingest: 10.43 MB, 0.02 MB/s#012Interval WAL: 1555 writes, 1554 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     25.9      3.47              0.21        46    0.075       0      0       0.0       0.0#012  L6      1/0   10.53 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     48.5     41.2     10.98              1.01        45    0.244    317K    24K       0.0       0.0#012 Sum      1/0   10.53 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     36.8     37.5     14.45              1.23        91    0.159    317K    24K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1     55.6     56.5      1.35              0.15        12    0.113     56K   3136       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     48.5     41.2     10.98              1.01        45    0.244    317K    24K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     25.9      3.47              0.21        45    0.077       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.088, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.53 GB write, 0.10 MB/s write, 0.52 GB read, 0.10 MB/s read, 14.4 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 58.85 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000556 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3400,56.51 MB,18.5878%) FilterBlock(92,890.73 KB,0.286137%) IndexBlock(92,1.47 MB,0.48398%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:49:48 np0005603621 systemd[1]: Started libpod-conmon-7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6.scope.
Jan 31 03:49:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:49:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f386fb80c20658dd3f89ea0a764c1a7fd87296ef617a680fcd7b4704dcdd3f2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:49:48 np0005603621 podman[367075]: 2026-01-31 08:49:48.699162794 +0000 UTC m=+0.606805541 container init 7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:49:48 np0005603621 podman[367075]: 2026-01-31 08:49:48.704683268 +0000 UTC m=+0.612325985 container start 7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 03:49:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 944 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 31 03:49:48 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [NOTICE]   (367125) : New worker (367136) forked
Jan 31 03:49:48 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [NOTICE]   (367125) : Loading success.
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.409 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849389.40936, 1e96eda0-223d-45b7-b1a7-573b51604c50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.410 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] VM Started (Lifecycle Event)#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.412 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.414 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.417 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance spawned successfully.#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.417 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01917828436162293 of space, bias 1.0, pg target 5.753485308486879 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004325555730039084 of space, bias 1.0, pg target 1.2760389403615298 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5595239653126745 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003206134840871022 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:49:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.796 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.799 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:49:49 np0005603621 nova_compute[247399]: 2026-01-31 08:49:49.906 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.194 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.194 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.195 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.195 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.195 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.196 247403 DEBUG nova.virt.libvirt.driver [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:49:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:50.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:49:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:50.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.392 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.393 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849389.409664, 1e96eda0-223d-45b7-b1a7-573b51604c50 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.393 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.445 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.448 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849389.4139483, 1e96eda0-223d-45b7-b1a7-573b51604c50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.449 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.452 247403 DEBUG nova.compute.manager [req-a3189a1d-9f33-454b-b29c-7d8f2955b51e req-5852ba01-a66c-4773-8548-2249361492d8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.452 247403 DEBUG oslo_concurrency.lockutils [req-a3189a1d-9f33-454b-b29c-7d8f2955b51e req-5852ba01-a66c-4773-8548-2249361492d8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.453 247403 DEBUG oslo_concurrency.lockutils [req-a3189a1d-9f33-454b-b29c-7d8f2955b51e req-5852ba01-a66c-4773-8548-2249361492d8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.453 247403 DEBUG oslo_concurrency.lockutils [req-a3189a1d-9f33-454b-b29c-7d8f2955b51e req-5852ba01-a66c-4773-8548-2249361492d8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.453 247403 DEBUG nova.compute.manager [req-a3189a1d-9f33-454b-b29c-7d8f2955b51e req-5852ba01-a66c-4773-8548-2249361492d8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.454 247403 WARNING nova.compute.manager [req-a3189a1d-9f33-454b-b29c-7d8f2955b51e req-5852ba01-a66c-4773-8548-2249361492d8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received unexpected event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.488 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.492 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.511 247403 INFO nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Took 19.71 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.512 247403 DEBUG nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.572 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 944 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 734 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.718 247403 INFO nova.compute.manager [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Took 24.30 seconds to build instance.#033[00m
Jan 31 03:49:50 np0005603621 nova_compute[247399]: 2026-01-31 08:49:50.821 247403 DEBUG oslo_concurrency.lockutils [None req-834be1a4-4118-4f9d-8e70-2fde5c70a5c5 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 973 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.008 247403 DEBUG nova.compute.manager [req-4b08318c-227b-4f2a-a29e-6cc24b521bcc req-865d8949-76cd-4dbe-82cf-cff4566550ac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-unplugged-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.008 247403 DEBUG oslo_concurrency.lockutils [req-4b08318c-227b-4f2a-a29e-6cc24b521bcc req-865d8949-76cd-4dbe-82cf-cff4566550ac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.009 247403 DEBUG oslo_concurrency.lockutils [req-4b08318c-227b-4f2a-a29e-6cc24b521bcc req-865d8949-76cd-4dbe-82cf-cff4566550ac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.009 247403 DEBUG oslo_concurrency.lockutils [req-4b08318c-227b-4f2a-a29e-6cc24b521bcc req-865d8949-76cd-4dbe-82cf-cff4566550ac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.009 247403 DEBUG nova.compute.manager [req-4b08318c-227b-4f2a-a29e-6cc24b521bcc req-865d8949-76cd-4dbe-82cf-cff4566550ac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] No waiting events found dispatching network-vif-unplugged-fbe66833-82a6-4f72-9b11-a4732140845a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.009 247403 WARNING nova.compute.manager [req-4b08318c-227b-4f2a-a29e-6cc24b521bcc req-865d8949-76cd-4dbe-82cf-cff4566550ac fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received unexpected event network-vif-unplugged-fbe66833-82a6-4f72-9b11-a4732140845a for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 31 03:49:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:49:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:54.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:54.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 973 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.903 247403 INFO nova.network.neutron [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating port fbe66833-82a6-4f72-9b11-a4732140845a with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:49:54 np0005603621 nova_compute[247399]: 2026-01-31 08:49:54.909 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:55 np0005603621 nova_compute[247399]: 2026-01-31 08:49:55.675 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:49:56 np0005603621 nova_compute[247399]: 2026-01-31 08:49:56.273 247403 DEBUG nova.compute.manager [req-4c3a4484-8482-402d-b13e-9c89ddab0e5d req-3e5e711b-4676-46b9-88b1-48f6da27aa3b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:56 np0005603621 nova_compute[247399]: 2026-01-31 08:49:56.273 247403 DEBUG oslo_concurrency.lockutils [req-4c3a4484-8482-402d-b13e-9c89ddab0e5d req-3e5e711b-4676-46b9-88b1-48f6da27aa3b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:49:56 np0005603621 nova_compute[247399]: 2026-01-31 08:49:56.273 247403 DEBUG oslo_concurrency.lockutils [req-4c3a4484-8482-402d-b13e-9c89ddab0e5d req-3e5e711b-4676-46b9-88b1-48f6da27aa3b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:49:56 np0005603621 nova_compute[247399]: 2026-01-31 08:49:56.274 247403 DEBUG oslo_concurrency.lockutils [req-4c3a4484-8482-402d-b13e-9c89ddab0e5d req-3e5e711b-4676-46b9-88b1-48f6da27aa3b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:49:56 np0005603621 nova_compute[247399]: 2026-01-31 08:49:56.274 247403 DEBUG nova.compute.manager [req-4c3a4484-8482-402d-b13e-9c89ddab0e5d req-3e5e711b-4676-46b9-88b1-48f6da27aa3b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] No waiting events found dispatching network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:49:56 np0005603621 nova_compute[247399]: 2026-01-31 08:49:56.274 247403 WARNING nova.compute.manager [req-4c3a4484-8482-402d-b13e-9c89ddab0e5d req-3e5e711b-4676-46b9-88b1-48f6da27aa3b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received unexpected event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:49:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:49:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:56.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:56.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 976 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Jan 31 03:49:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:49:57 np0005603621 nova_compute[247399]: 2026-01-31 08:49:57.254 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:49:57 np0005603621 nova_compute[247399]: 2026-01-31 08:49:57.255 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquired lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:49:57 np0005603621 nova_compute[247399]: 2026-01-31 08:49:57.255 247403 DEBUG nova.network.neutron [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:49:57 np0005603621 nova_compute[247399]: 2026-01-31 08:49:57.495 247403 DEBUG nova.compute.manager [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-changed-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:57 np0005603621 nova_compute[247399]: 2026-01-31 08:49:57.496 247403 DEBUG nova.compute.manager [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Refreshing instance network info cache due to event network-changed-fbe66833-82a6-4f72-9b11-a4732140845a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:49:57 np0005603621 nova_compute[247399]: 2026-01-31 08:49:57.496 247403 DEBUG oslo_concurrency.lockutils [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:49:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:49:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:49:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:49:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:49:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:49:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:49:58 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:49:58 np0005603621 systemd[367082]: Activating special unit Exit the Session...
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped target Main User Target.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped target Basic System.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped target Paths.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped target Sockets.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped target Timers.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:49:58 np0005603621 systemd[367082]: Closed D-Bus User Message Bus Socket.
Jan 31 03:49:58 np0005603621 systemd[367082]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:49:58 np0005603621 systemd[367082]: Removed slice User Application Slice.
Jan 31 03:49:58 np0005603621 systemd[367082]: Reached target Shutdown.
Jan 31 03:49:58 np0005603621 systemd[367082]: Finished Exit the Session.
Jan 31 03:49:58 np0005603621 systemd[367082]: Reached target Exit the Session.
Jan 31 03:49:58 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:49:58 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:49:58 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:49:58 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:49:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 978 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Jan 31 03:49:58 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:49:58 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:49:58 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.373 247403 DEBUG nova.network.neutron [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating instance_info_cache with network_info: [{"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.440 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Releasing lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.444 247403 DEBUG oslo_concurrency.lockutils [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.444 247403 DEBUG nova.network.neutron [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Refreshing network info cache for port fbe66833-82a6-4f72-9b11-a4732140845a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.614 247403 DEBUG os_brick.utils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.616 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.624 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.624 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[087309bc-9663-4fdb-9402-1e67018e35b5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.625 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.631 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.631 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[75c975ac-de59-48dd-b1f4-b4d963e8ef19]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.632 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.639 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.639 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed5b398-2202-4a81-b7b3-f777b617394c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.641 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[56bb03bc-94ed-44d5-81e3-59be522a9580]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.641 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.675 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.678 247403 DEBUG os_brick.initiator.connectors.lightos [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.679 247403 DEBUG os_brick.initiator.connectors.lightos [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.680 247403 DEBUG os_brick.initiator.connectors.lightos [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.681 247403 DEBUG os_brick.utils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.853 247403 DEBUG nova.compute.manager [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-changed-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.853 247403 DEBUG nova.compute.manager [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Refreshing instance network info cache due to event network-changed-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.853 247403 DEBUG oslo_concurrency.lockutils [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.854 247403 DEBUG oslo_concurrency.lockutils [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.854 247403 DEBUG nova.network.neutron [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Refreshing network info cache for port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:49:59 np0005603621 nova_compute[247399]: 2026-01-31 08:49:59.912 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 03:50:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 03:50:00 np0005603621 nova_compute[247399]: 2026-01-31 08:50:00.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:00 np0005603621 nova_compute[247399]: 2026-01-31 08:50:00.678 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 978 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.033 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.034 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.035 247403 INFO nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Creating image(s)#033[00m
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.066 247403 DEBUG nova.storage.rbd_utils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] creating snapshot(nova-resize) on rbd image(c215327f-37ad-41a7-a883-3dbb23334df6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:50:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Jan 31 03:50:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Jan 31 03:50:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 995 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 136 op/s
Jan 31 03:50:02 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.786 247403 DEBUG nova.objects.instance [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'trusted_certs' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.964 247403 DEBUG nova.network.neutron [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updated VIF entry in instance network info cache for port fbe66833-82a6-4f72-9b11-a4732140845a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:50:02 np0005603621 nova_compute[247399]: 2026-01-31 08:50:02.965 247403 DEBUG nova.network.neutron [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating instance_info_cache with network_info: [{"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.008 247403 DEBUG oslo_concurrency.lockutils [req-6592911b-e3c1-427f-9c2a-b63b8c21cbcd req-b305f0b1-9a53-49bf-926c-e302f34be861 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.010 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.011 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Ensure instance console log exists: /var/lib/nova/instances/c215327f-37ad-41a7-a883-3dbb23334df6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.011 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.011 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.012 247403 DEBUG oslo_concurrency.lockutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.014 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Start _get_guest_xml network_info=[{"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "vif_mac": "fa:16:3e:d6:4d:37"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-12e9d9b2-8ec9-4b16-b334-60c0f639cb59', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '12e9d9b2-8ec9-4b16-b334-60c0f639cb59', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'c215327f-37ad-41a7-a883-3dbb23334df6', 'attached_at': '2026-01-31T08:50:01.000000', 'detached_at': '', 'volume_id': '12e9d9b2-8ec9-4b16-b334-60c0f639cb59', 'multiattach': True, 'serial': '12e9d9b2-8ec9-4b16-b334-60c0f639cb59'}, 'device_type': 'disk', 'boot_index': None, 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'attachment_id': '7f98d534-1a06-4734-ad8c-53eed8ab4746', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.019 247403 WARNING nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.027 247403 DEBUG nova.virt.libvirt.host [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.027 247403 DEBUG nova.virt.libvirt.host [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.037 247403 DEBUG nova.virt.libvirt.host [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.037 247403 DEBUG nova.virt.libvirt.host [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.039 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.039 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f75c4aee-d826-4343-a7e3-f06a4b21de52',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.039 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.040 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.040 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.040 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.040 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.040 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.041 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.041 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.041 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.041 247403 DEBUG nova.virt.hardware [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.042 247403 DEBUG nova.objects.instance [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'vcpu_model' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.214 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:50:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4151138409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.655 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:03 np0005603621 nova_compute[247399]: 2026-01-31 08:50:03.690 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:50:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1584864664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.164 247403 DEBUG oslo_concurrency.processutils [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:04.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 995 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 136 op/s
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.755 247403 DEBUG nova.virt.libvirt.vif [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=163,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGvV4tGHwFrQ7+1WPmMS3fGcrpcMKpLQBFiD2ZG0NedKq4jaCN6oHf8RWlX+X72Ff/PSGJSQ5nqRPZm+CDMr01vn3vAMra9m4dZ/R1d2vwh+NDFwu298PivPHJQkyuCpg==',key_name='tempest-keypair-600650673',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-libm6dxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virt
io',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:49:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a498364761ef428b99cac3f92e603385',uuid=c215327f-37ad-41a7-a883-3dbb23334df6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "vif_mac": "fa:16:3e:d6:4d:37"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.756 247403 DEBUG nova.network.os_vif_util [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "vif_mac": "fa:16:3e:d6:4d:37"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.756 247403 DEBUG nova.network.os_vif_util [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.758 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <uuid>c215327f-37ad-41a7-a883-3dbb23334df6</uuid>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <name>instance-000000a3</name>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <memory>196608</memory>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:name>multiattach-server-0</nova:name>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:50:03</nova:creationTime>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.micro">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:memory>192</nova:memory>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:user uuid="a498364761ef428b99cac3f92e603385">tempest-AttachVolumeMultiAttachTest-1931311941-project-member</nova:user>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:project uuid="8397e0fed04b4dabb57148d0924de2dc">tempest-AttachVolumeMultiAttachTest-1931311941</nova:project>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <nova:port uuid="fbe66833-82a6-4f72-9b11-a4732140845a">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <entry name="serial">c215327f-37ad-41a7-a883-3dbb23334df6</entry>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <entry name="uuid">c215327f-37ad-41a7-a883-3dbb23334df6</entry>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c215327f-37ad-41a7-a883-3dbb23334df6_disk">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c215327f-37ad-41a7-a883-3dbb23334df6_disk.config">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-12e9d9b2-8ec9-4b16-b334-60c0f639cb59">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <target dev="vdb" bus="virtio"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <serial>12e9d9b2-8ec9-4b16-b334-60c0f639cb59</serial>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <shareable/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:d6:4d:37"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <target dev="tapfbe66833-82"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c215327f-37ad-41a7-a883-3dbb23334df6/console.log" append="off"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:50:04 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:50:04 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:50:04 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:50:04 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.759 247403 DEBUG nova.virt.libvirt.vif [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=163,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGvV4tGHwFrQ7+1WPmMS3fGcrpcMKpLQBFiD2ZG0NedKq4jaCN6oHf8RWlX+X72Ff/PSGJSQ5nqRPZm+CDMr01vn3vAMra9m4dZ/R1d2vwh+NDFwu298PivPHJQkyuCpg==',key_name='tempest-keypair-600650673',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-libm6dxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virt
io',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:49:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a498364761ef428b99cac3f92e603385',uuid=c215327f-37ad-41a7-a883-3dbb23334df6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "vif_mac": "fa:16:3e:d6:4d:37"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.759 247403 DEBUG nova.network.os_vif_util [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "vif_mac": "fa:16:3e:d6:4d:37"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.760 247403 DEBUG nova.network.os_vif_util [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.760 247403 DEBUG os_vif [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.761 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.761 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.764 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbe66833-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.765 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfbe66833-82, col_values=(('external_ids', {'iface-id': 'fbe66833-82a6-4f72-9b11-a4732140845a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:4d:37', 'vm-uuid': 'c215327f-37ad-41a7-a883-3dbb23334df6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.766 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:04 np0005603621 NetworkManager[49013]: <info>  [1769849404.7670] manager: (tapfbe66833-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.768 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.774 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.775 247403 INFO os_vif [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82')#033[00m
Jan 31 03:50:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:04Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:e3:f6 10.100.0.14
Jan 31 03:50:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:04Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:e3:f6 10.100.0.14
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.968 247403 DEBUG nova.network.neutron [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updated VIF entry in instance network info cache for port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.969 247403 DEBUG nova.network.neutron [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updating instance_info_cache with network_info: [{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.989 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.989 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.989 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.990 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No VIF found with MAC fa:16:3e:d6:4d:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:50:04 np0005603621 nova_compute[247399]: 2026-01-31 08:50:04.990 247403 INFO nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Using config drive#033[00m
Jan 31 03:50:05 np0005603621 nova_compute[247399]: 2026-01-31 08:50:05.098 247403 DEBUG oslo_concurrency.lockutils [req-fccf939c-02d1-4002-8e1f-46f5ffd5ee90 req-5e8a938b-4fef-491a-88ec-38c1a98ab8bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:50:05 np0005603621 kernel: tapfbe66833-82: entered promiscuous mode
Jan 31 03:50:05 np0005603621 NetworkManager[49013]: <info>  [1769849405.1315] manager: (tapfbe66833-82): new Tun device (/org/freedesktop/NetworkManager/Devices/311)
Jan 31 03:50:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:05Z|00685|binding|INFO|Claiming lport fbe66833-82a6-4f72-9b11-a4732140845a for this chassis.
Jan 31 03:50:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:05Z|00686|binding|INFO|fbe66833-82a6-4f72-9b11-a4732140845a: Claiming fa:16:3e:d6:4d:37 10.100.0.6
Jan 31 03:50:05 np0005603621 nova_compute[247399]: 2026-01-31 08:50:05.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:05Z|00687|binding|INFO|Setting lport fbe66833-82a6-4f72-9b11-a4732140845a ovn-installed in OVS
Jan 31 03:50:05 np0005603621 nova_compute[247399]: 2026-01-31 08:50:05.151 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:05 np0005603621 nova_compute[247399]: 2026-01-31 08:50:05.153 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:05 np0005603621 systemd-udevd[367405]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:50:05 np0005603621 systemd-machined[212769]: New machine qemu-83-instance-000000a3.
Jan 31 03:50:05 np0005603621 NetworkManager[49013]: <info>  [1769849405.1777] device (tapfbe66833-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:50:05 np0005603621 NetworkManager[49013]: <info>  [1769849405.1783] device (tapfbe66833-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:50:05 np0005603621 systemd[1]: Started Virtual Machine qemu-83-instance-000000a3.
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.232 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:4d:37 10.100.0.6'], port_security=['fa:16:3e:d6:4d:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c215327f-37ad-41a7-a883-3dbb23334df6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8397e0fed04b4dabb57148d0924de2dc', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd636f3a4-efef-465a-ac59-8182d61336f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbd2578f-ff6e-4dc3-bc49-93cbf023edc5, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fbe66833-82a6-4f72-9b11-a4732140845a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:05Z|00688|binding|INFO|Setting lport fbe66833-82a6-4f72-9b11-a4732140845a up in Southbound
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.234 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fbe66833-82a6-4f72-9b11-a4732140845a in datapath 3afaf607-43a1-4d65-95fc-0a22b5c901d0 bound to our chassis#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.237 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3afaf607-43a1-4d65-95fc-0a22b5c901d0#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.248 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d6eb39bf-1d43-446e-8417-eff7edcb0374]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.273 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6910d537-5358-4a38-ad1c-536985257f30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.278 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb08b3f-6591-45c8-916e-5d5a80a7d95c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.298 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c59c8426-a80e-4333-840f-1d993ef13285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.309 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[73410ca4-41a8-4ec6-8dd9-f90b2ac2b6c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3afaf607-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851951, 'reachable_time': 30558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367419, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.318 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b725b724-d62b-4ce9-97b9-4f043764416d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851960, 'tstamp': 851960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367420, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851962, 'tstamp': 851962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367420, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.320 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3afaf607-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:05 np0005603621 nova_compute[247399]: 2026-01-31 08:50:05.321 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.322 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3afaf607-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.322 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.323 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3afaf607-40, col_values=(('external_ids', {'iface-id': '0ed76a0a-650c-4ec7-a4d4-0e745236b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:05.323 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:50:05 np0005603621 nova_compute[247399]: 2026-01-31 08:50:05.679 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.026 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849406.0261137, c215327f-37ad-41a7-a883-3dbb23334df6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.027 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.029 247403 DEBUG nova.compute.manager [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.031 247403 INFO nova.virt.libvirt.driver [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Instance running successfully.#033[00m
Jan 31 03:50:06 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.033 247403 DEBUG nova.virt.libvirt.guest [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.033 247403 DEBUG nova.virt.libvirt.driver [None req-9653eed8-8ffb-4c89-81d0-4cb37883b857 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.073 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.076 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.199 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.199 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849406.027576, c215327f-37ad-41a7-a883-3dbb23334df6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.200 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] VM Started (Lifecycle Event)#033[00m
Jan 31 03:50:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:06.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:06.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.636 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.639 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:50:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 1000 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 165 op/s
Jan 31 03:50:06 np0005603621 nova_compute[247399]: 2026-01-31 08:50:06.859 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:50:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.259 247403 DEBUG nova.compute.manager [req-5d9533f4-d9a1-4ac6-9a9b-f8857db41f94 req-619c54ff-f5c1-46e6-8257-461c913eb4ed fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.259 247403 DEBUG oslo_concurrency.lockutils [req-5d9533f4-d9a1-4ac6-9a9b-f8857db41f94 req-619c54ff-f5c1-46e6-8257-461c913eb4ed fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.260 247403 DEBUG oslo_concurrency.lockutils [req-5d9533f4-d9a1-4ac6-9a9b-f8857db41f94 req-619c54ff-f5c1-46e6-8257-461c913eb4ed fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.260 247403 DEBUG oslo_concurrency.lockutils [req-5d9533f4-d9a1-4ac6-9a9b-f8857db41f94 req-619c54ff-f5c1-46e6-8257-461c913eb4ed fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.260 247403 DEBUG nova.compute.manager [req-5d9533f4-d9a1-4ac6-9a9b-f8857db41f94 req-619c54ff-f5c1-46e6-8257-461c913eb4ed fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] No waiting events found dispatching network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:07 np0005603621 nova_compute[247399]: 2026-01-31 08:50:07.260 247403 WARNING nova.compute.manager [req-5d9533f4-d9a1-4ac6-9a9b-f8857db41f94 req-619c54ff-f5c1-46e6-8257-461c913eb4ed fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received unexpected event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:50:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:08.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:08.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:08 np0005603621 podman[367484]: 2026-01-31 08:50:08.53262867 +0000 UTC m=+0.079539333 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:08 np0005603621 podman[367483]: 2026-01-31 08:50:08.53803119 +0000 UTC m=+0.074994768 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 1011 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Jan 31 03:50:09 np0005603621 nova_compute[247399]: 2026-01-31 08:50:09.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:09 np0005603621 nova_compute[247399]: 2026-01-31 08:50:09.768 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.197 247403 DEBUG nova.compute.manager [req-d84c389a-34de-4b88-81f8-45f9b56474db req-b8c36ce3-c7d3-46fd-92e5-a999da72efe0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.197 247403 DEBUG oslo_concurrency.lockutils [req-d84c389a-34de-4b88-81f8-45f9b56474db req-b8c36ce3-c7d3-46fd-92e5-a999da72efe0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.197 247403 DEBUG oslo_concurrency.lockutils [req-d84c389a-34de-4b88-81f8-45f9b56474db req-b8c36ce3-c7d3-46fd-92e5-a999da72efe0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.197 247403 DEBUG oslo_concurrency.lockutils [req-d84c389a-34de-4b88-81f8-45f9b56474db req-b8c36ce3-c7d3-46fd-92e5-a999da72efe0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.198 247403 DEBUG nova.compute.manager [req-d84c389a-34de-4b88-81f8-45f9b56474db req-b8c36ce3-c7d3-46fd-92e5-a999da72efe0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] No waiting events found dispatching network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.198 247403 WARNING nova.compute.manager [req-d84c389a-34de-4b88-81f8-45f9b56474db req-b8c36ce3-c7d3-46fd-92e5-a999da72efe0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received unexpected event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:50:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:10.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:10.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.683 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 1011 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.971 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.972 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.972 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:50:10 np0005603621 nova_compute[247399]: 2026-01-31 08:50:10.972 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:11 np0005603621 nova_compute[247399]: 2026-01-31 08:50:11.927 247403 INFO nova.compute.manager [None req-b2f364af-2d99-481c-b196-2fa2345e64a9 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Get console output#033[00m
Jan 31 03:50:11 np0005603621 nova_compute[247399]: 2026-01-31 08:50:11.932 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:50:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 1.0 GiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 222 op/s
Jan 31 03:50:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:14.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:14 np0005603621 nova_compute[247399]: 2026-01-31 08:50:14.458 247403 INFO nova.compute.manager [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Rebuilding instance#033[00m
Jan 31 03:50:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 1.0 GiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 185 op/s
Jan 31 03:50:14 np0005603621 nova_compute[247399]: 2026-01-31 08:50:14.772 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:15 np0005603621 nova_compute[247399]: 2026-01-31 08:50:15.359 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:15 np0005603621 nova_compute[247399]: 2026-01-31 08:50:15.686 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.006 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [{"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.199 247403 DEBUG nova.compute.manager [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.206 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-3308d345-19b7-4fbb-bd81-631135649e7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.206 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.206 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.207 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.207 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.207 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:16.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.381 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.381 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.381 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.382 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.382 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.692 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_requests' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 1.0 GiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 190 op/s
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.737 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.770 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476847654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.831 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.868 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Jan 31 03:50:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.917 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.920 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.941 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.941 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.944 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.944 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.944 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.947 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.947 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.950 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.950 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:16 np0005603621 nova_compute[247399]: 2026-01-31 08:50:16.950 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.102 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.103 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3373MB free_disk=20.51224136352539GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.104 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.104 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.253 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3308d345-19b7-4fbb-bd81-631135649e7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.253 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 74f09648-834b-4da1-89a4-bcdcca255908 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.253 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 1e96eda0-223d-45b7-b1a7-573b51604c50 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.253 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c215327f-37ad-41a7-a883-3dbb23334df6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.253 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.254 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.381 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/395250834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.840 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.846 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.896 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.955 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:50:17 np0005603621 nova_compute[247399]: 2026-01-31 08:50:17.956 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:18.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 1.1 GiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 140 op/s
Jan 31 03:50:18 np0005603621 nova_compute[247399]: 2026-01-31 08:50:18.951 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:50:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:19Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:4d:37 10.100.0.6
Jan 31 03:50:19 np0005603621 nova_compute[247399]: 2026-01-31 08:50:19.776 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:20.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:20 np0005603621 nova_compute[247399]: 2026-01-31 08:50:20.689 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 1.1 GiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 140 op/s
Jan 31 03:50:21 np0005603621 kernel: tap412e7f4c-bd (unregistering): left promiscuous mode
Jan 31 03:50:21 np0005603621 NetworkManager[49013]: <info>  [1769849421.1851] device (tap412e7f4c-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:50:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:21Z|00689|binding|INFO|Releasing lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 from this chassis (sb_readonly=0)
Jan 31 03:50:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:21Z|00690|binding|INFO|Setting lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 down in Southbound
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:21Z|00691|binding|INFO|Removing iface tap412e7f4c-bd ovn-installed in OVS
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.203 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Jan 31 03:50:21 np0005603621 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000a7.scope: Consumed 14.001s CPU time.
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.238 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e3:f6 10.100.0.14'], port_security=['fa:16:3e:fe:e3:f6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e96eda0-223d-45b7-b1a7-573b51604c50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c64fb20-295a-4907-91a3-2d6622028082', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13de6dfa-d3f8-48e3-9375-41e3868371bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c13c6f4-6161-4053-9f6c-b8f9a12a63dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.240 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 in datapath 5c64fb20-295a-4907-91a3-2d6622028082 unbound from our chassis#033[00m
Jan 31 03:50:21 np0005603621 systemd-machined[212769]: Machine qemu-82-instance-000000a7 terminated.
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.242 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c64fb20-295a-4907-91a3-2d6622028082, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.244 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bbf75e3a-9612-4bc7-82bf-83bb849ba930]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.244 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 namespace which is not needed anymore#033[00m
Jan 31 03:50:21 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [NOTICE]   (367125) : haproxy version is 2.8.14-c23fe91
Jan 31 03:50:21 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [NOTICE]   (367125) : path to executable is /usr/sbin/haproxy
Jan 31 03:50:21 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [WARNING]  (367125) : Exiting Master process...
Jan 31 03:50:21 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [ALERT]    (367125) : Current worker (367136) exited with code 143 (Terminated)
Jan 31 03:50:21 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[367114]: [WARNING]  (367125) : All workers exited. Exiting... (0)
Jan 31 03:50:21 np0005603621 systemd[1]: libpod-7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6.scope: Deactivated successfully.
Jan 31 03:50:21 np0005603621 podman[367655]: 2026-01-31 08:50:21.37717969 +0000 UTC m=+0.049035178 container died 7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 03:50:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6-userdata-shm.mount: Deactivated successfully.
Jan 31 03:50:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f386fb80c20658dd3f89ea0a764c1a7fd87296ef617a680fcd7b4704dcdd3f2f-merged.mount: Deactivated successfully.
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 podman[367655]: 2026-01-31 08:50:21.415090417 +0000 UTC m=+0.086945905 container cleanup 7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.418 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 systemd[1]: libpod-conmon-7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6.scope: Deactivated successfully.
Jan 31 03:50:21 np0005603621 podman[367690]: 2026-01-31 08:50:21.4905557 +0000 UTC m=+0.061125581 container remove 7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.494 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1f9d99-2865-4400-b100-dd62c2eadf26]: (4, ('Sat Jan 31 08:50:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 (7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6)\n7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6\nSat Jan 31 08:50:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 (7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6)\n7d18f44d75fa387034d0c10ba72f9995243fcc8a35395664ee6d7cf6ab7afca6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.496 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a59bef2e-3355-43c8-bd0d-844924ba0502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.497 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c64fb20-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 kernel: tap5c64fb20-20: left promiscuous mode
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.508 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.511 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd53589-e2b7-4eab-90b0-fba94136a112]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.524 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[83dc7b78-ba2c-4b79-b30e-11b7504ecc01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.525 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7c63e8b5-5ec2-4c45-942b-2e998f3bb5ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.536 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[353e03d6-9981-41f9-b6c8-ea6f9ec2f6db]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869313, 'reachable_time': 42132, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367711, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.539 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:50:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:21.539 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[a4919f94-176a-45f9-ba47-ab5ece8e529a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:21 np0005603621 systemd[1]: run-netns-ovnmeta\x2d5c64fb20\x2d295a\x2d4907\x2d91a3\x2d2d6622028082.mount: Deactivated successfully.
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.944 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance shutdown successfully after 5 seconds.#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.950 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance destroyed successfully.#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.955 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance destroyed successfully.#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.956 247403 DEBUG nova.virt.libvirt.vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:49:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1394409259',display_name='tempest-TestNetworkAdvancedServerOps-server-1394409259',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1394409259',id=167,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/lAD4OCru9xRbM5RwUuWIkZogXrbc5YYzvQOqv8vzq3yHSuGXSkcCz+Uq294UgXDvDOWlIVlc+KtPz2i57cLrRAb2n00QBhxyN/0ozf8lbd5nBtA8rBOi6LAdh2ntUJw==',key_name='tempest-TestNetworkAdvancedServerOps-1864686962',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:49:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-46zn1h0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:50:13Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e96eda0-223d-45b7-b1a7-573b51604c50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.956 247403 DEBUG nova.network.os_vif_util [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.957 247403 DEBUG nova.network.os_vif_util [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.957 247403 DEBUG os_vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.959 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.959 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap412e7f4c-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.962 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:21 np0005603621 nova_compute[247399]: 2026-01-31 08:50:21.963 247403 INFO os_vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd')#033[00m
Jan 31 03:50:22 np0005603621 nova_compute[247399]: 2026-01-31 08:50:22.118 247403 DEBUG nova.compute.manager [req-c8149123-8369-40ef-8274-deac501c043e req-dede85fc-745e-4b61-8568-ad27fe7b621f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-unplugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:22 np0005603621 nova_compute[247399]: 2026-01-31 08:50:22.119 247403 DEBUG oslo_concurrency.lockutils [req-c8149123-8369-40ef-8274-deac501c043e req-dede85fc-745e-4b61-8568-ad27fe7b621f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:22 np0005603621 nova_compute[247399]: 2026-01-31 08:50:22.119 247403 DEBUG oslo_concurrency.lockutils [req-c8149123-8369-40ef-8274-deac501c043e req-dede85fc-745e-4b61-8568-ad27fe7b621f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:22 np0005603621 nova_compute[247399]: 2026-01-31 08:50:22.119 247403 DEBUG oslo_concurrency.lockutils [req-c8149123-8369-40ef-8274-deac501c043e req-dede85fc-745e-4b61-8568-ad27fe7b621f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:22 np0005603621 nova_compute[247399]: 2026-01-31 08:50:22.120 247403 DEBUG nova.compute.manager [req-c8149123-8369-40ef-8274-deac501c043e req-dede85fc-745e-4b61-8568-ad27fe7b621f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-unplugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:22 np0005603621 nova_compute[247399]: 2026-01-31 08:50:22.120 247403 WARNING nova.compute.manager [req-c8149123-8369-40ef-8274-deac501c043e req-dede85fc-745e-4b61-8568-ad27fe7b621f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received unexpected event network-vif-unplugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 03:50:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:22.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:22.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 1011 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 767 KiB/s rd, 2.3 MiB/s wr, 158 op/s
Jan 31 03:50:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:24.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:24.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 1011 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 767 KiB/s rd, 2.3 MiB/s wr, 158 op/s
Jan 31 03:50:25 np0005603621 podman[367907]: 2026-01-31 08:50:25.101866163 +0000 UTC m=+0.074785833 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:50:25 np0005603621 podman[367927]: 2026-01-31 08:50:25.280895896 +0000 UTC m=+0.049340640 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 03:50:25 np0005603621 podman[367907]: 2026-01-31 08:50:25.285805841 +0000 UTC m=+0.258725491 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:50:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:25.392 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:25.394 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.713 247403 DEBUG nova.compute.manager [req-2ac37cc2-b451-49e7-b170-47780605badc req-1fa174eb-029f-4124-b43e-0833455a08fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.714 247403 DEBUG oslo_concurrency.lockutils [req-2ac37cc2-b451-49e7-b170-47780605badc req-1fa174eb-029f-4124-b43e-0833455a08fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.714 247403 DEBUG oslo_concurrency.lockutils [req-2ac37cc2-b451-49e7-b170-47780605badc req-1fa174eb-029f-4124-b43e-0833455a08fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.714 247403 DEBUG oslo_concurrency.lockutils [req-2ac37cc2-b451-49e7-b170-47780605badc req-1fa174eb-029f-4124-b43e-0833455a08fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.714 247403 DEBUG nova.compute.manager [req-2ac37cc2-b451-49e7-b170-47780605badc req-1fa174eb-029f-4124-b43e-0833455a08fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.714 247403 WARNING nova.compute.manager [req-2ac37cc2-b451-49e7-b170-47780605badc req-1fa174eb-029f-4124-b43e-0833455a08fe fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received unexpected event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with vm_state active and task_state rebuilding.#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.721 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deleting instance files /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50_del#033[00m
Jan 31 03:50:25 np0005603621 nova_compute[247399]: 2026-01-31 08:50:25.721 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deletion of /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50_del complete#033[00m
Jan 31 03:50:25 np0005603621 podman[368057]: 2026-01-31 08:50:25.761858691 +0000 UTC m=+0.046951693 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:50:25 np0005603621 podman[368079]: 2026-01-31 08:50:25.823860859 +0000 UTC m=+0.049216685 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:50:25 np0005603621 podman[368057]: 2026-01-31 08:50:25.942089781 +0000 UTC m=+0.227182783 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 03:50:26 np0005603621 podman[368125]: 2026-01-31 08:50:26.310414221 +0000 UTC m=+0.126084282 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Jan 31 03:50:26 np0005603621 podman[368125]: 2026-01-31 08:50:26.323105452 +0000 UTC m=+0.138775493 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, architecture=x86_64, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, io.buildah.version=1.28.2)
Jan 31 03:50:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:26.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:26.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.361 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.362 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Creating image(s)#033[00m
Jan 31 03:50:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.382 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:26.396 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.404 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.434 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.437 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.493 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.494 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "365f9823d2619ef09948bdeed685488da63755b5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.495 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.495 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "365f9823d2619ef09948bdeed685488da63755b5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.519 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.523 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 1e96eda0-223d-45b7-b1a7-573b51604c50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 986 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 188 op/s
Jan 31 03:50:26 np0005603621 nova_compute[247399]: 2026-01-31 08:50:26.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.087 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5 1e96eda0-223d-45b7-b1a7-573b51604c50_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.157 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 17e3be39-b208-4b3e-a9aa-2b497c1ae452 does not exist
Jan 31 03:50:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b7608690-a213-4c96-aeca-5e6688ecf0d3 does not exist
Jan 31 03:50:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c5a3cf03-dc31-450f-b8b2-251109d9e32c does not exist
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:27 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.437 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.438 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Ensure instance console log exists: /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.438 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.438 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.439 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.441 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Start _get_guest_xml network_info=[{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.447 247403 WARNING nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.454 247403 DEBUG nova.virt.libvirt.host [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.454 247403 DEBUG nova.virt.libvirt.host [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.458 247403 DEBUG nova.virt.libvirt.host [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.459 247403 DEBUG nova.virt.libvirt.host [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.460 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.460 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:45Z,direct_url=<?>,disk_format='qcow2',id=0864ca59-9877-4e6d-adfc-f0a3204ed8f8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:47Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.461 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.461 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.462 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.462 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.462 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.462 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.463 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.463 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.463 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.463 247403 DEBUG nova.virt.hardware [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.464 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:27 np0005603621 nova_compute[247399]: 2026-01-31 08:50:27.611 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.738105289 +0000 UTC m=+0.039954393 container create 08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:50:27 np0005603621 systemd[1]: Started libpod-conmon-08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d.scope.
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.72011353 +0000 UTC m=+0.021962664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:50:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.837248239 +0000 UTC m=+0.139097363 container init 08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.844139447 +0000 UTC m=+0.145988541 container start 08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.847874555 +0000 UTC m=+0.149723719 container attach 08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:50:27 np0005603621 jolly_lewin[368628]: 167 167
Jan 31 03:50:27 np0005603621 systemd[1]: libpod-08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d.scope: Deactivated successfully.
Jan 31 03:50:27 np0005603621 conmon[368628]: conmon 08019be106ddbff39d2a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d.scope/container/memory.events
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.850609571 +0000 UTC m=+0.152458675 container died 08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:50:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-154a6fb7f37270708e1e8102d2907ef22b3cc22a1d9f2df815b2f9a0d95da078-merged.mount: Deactivated successfully.
Jan 31 03:50:27 np0005603621 podman[368593]: 2026-01-31 08:50:27.904886264 +0000 UTC m=+0.206735388 container remove 08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:27 np0005603621 systemd[1]: libpod-conmon-08019be106ddbff39d2a5f393806e304fd801ea02a744fe40442a4a43552711d.scope: Deactivated successfully.
Jan 31 03:50:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:50:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723149064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.037 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.040798486 +0000 UTC m=+0.039091705 container create 1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hugle, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.071 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.081 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:28 np0005603621 systemd[1]: Started libpod-conmon-1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c.scope.
Jan 31 03:50:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd75a8a02aea1aab0b3dc507d4cd32f8539c9a080d1fbe930e2dc6222b657d91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd75a8a02aea1aab0b3dc507d4cd32f8539c9a080d1fbe930e2dc6222b657d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd75a8a02aea1aab0b3dc507d4cd32f8539c9a080d1fbe930e2dc6222b657d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd75a8a02aea1aab0b3dc507d4cd32f8539c9a080d1fbe930e2dc6222b657d91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd75a8a02aea1aab0b3dc507d4cd32f8539c9a080d1fbe930e2dc6222b657d91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.019494003 +0000 UTC m=+0.017787242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.119492841 +0000 UTC m=+0.117786080 container init 1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hugle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.128091882 +0000 UTC m=+0.126385111 container start 1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.135116274 +0000 UTC m=+0.133409493 container attach 1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hugle, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:50:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:28.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:28.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:50:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876129162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.530 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
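The two log lines above show nova_compute shelling out to `ceph mon dump --format=json` and getting a zero exit status; the JSON it parses lists the monitor addresses that later appear as `<host>` entries in the domain XML. A minimal sketch of extracting those addresses, using an abbreviated, hypothetical mon-dump document (the real output is much larger; only the three monitor addrs seen in this log are reproduced):

```python
import json

# Hypothetical, trimmed `ceph mon dump --format=json` output; field names
# follow Ceph's mon-dump schema, values are the monitors seen in this log.
MON_DUMP = '''
{
  "epoch": 3,
  "mons": [
    {"name": "a", "public_addr": "192.168.122.100:6789/0"},
    {"name": "b", "public_addr": "192.168.122.102:6789/0"},
    {"name": "c", "public_addr": "192.168.122.101:6789/0"}
  ]
}
'''

def mon_hosts(dump_json: str) -> list:
    """Return host:port strings for every monitor in a mon-dump document."""
    doc = json.loads(dump_json)
    # Strip the trailing "/<nonce>" Ceph appends to legacy v1 addresses.
    return [m["public_addr"].split("/")[0] for m in doc["mons"]]

print(mon_hosts(MON_DUMP))
```

This is only an illustration of the data flow, not nova's actual parsing code.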
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.532 247403 DEBUG nova.virt.libvirt.vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:49:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1394409259',display_name='tempest-TestNetworkAdvancedServerOps-server-1394409259',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1394409259',id=167,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/lAD4OCru9xRbM5RwUuWIkZogXrbc5YYzvQOqv8vzq3yHSuGXSkcCz+Uq294UgXDvDOWlIVlc+KtPz2i57cLrRAb2n00QBhxyN/0ozf8lbd5nBtA8rBOi6LAdh2ntUJw==',key_name='tempest-TestNetworkAdvancedServerOps-1864686962',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:49:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-46zn1h0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:50:25Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e96eda0-223d-45b7-b1a7-573b51604c50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.533 247403 DEBUG nova.network.os_vif_util [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.533 247403 DEBUG nova.network.os_vif_util [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.537 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <uuid>1e96eda0-223d-45b7-b1a7-573b51604c50</uuid>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <name>instance-000000a7</name>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1394409259</nova:name>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:50:27</nova:creationTime>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="0864ca59-9877-4e6d-adfc-f0a3204ed8f8"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <nova:port uuid="412e7f4c-bd7f-493a-b50e-cf3e36e6dea8">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <entry name="serial">1e96eda0-223d-45b7-b1a7-573b51604c50</entry>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <entry name="uuid">1e96eda0-223d-45b7-b1a7-573b51604c50</entry>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/1e96eda0-223d-45b7-b1a7-573b51604c50_disk">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:fe:e3:f6"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <target dev="tap412e7f4c-bd"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/console.log" append="off"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:50:28 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:50:28 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:50:28 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:50:28 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
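The `_get_guest_xml` dump above embeds Nova's instance metadata under the `http://openstack.org/xmlns/libvirt/nova/1.1` namespace. A sketch of pulling a few fields back out of such a document with the standard library, using a trimmed copy of the XML logged above (only the elements queried below are reproduced):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the domain XML nova_compute logged; values as in the log.
DOMAIN_XML = """
<domain type="kvm">
  <uuid>1e96eda0-223d-45b7-b1a7-573b51604c50</uuid>
  <name>instance-000000a7</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano">
        <nova:memory>128</nova:memory>
      </nova:flavor>
    </nova:instance>
  </metadata>
</domain>
"""

NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.fromstring(DOMAIN_XML)
uuid = root.findtext("uuid")
flavor = root.find("metadata/nova:instance/nova:flavor", NOVA_NS)
# libvirt's <memory> is in KiB; the nova:flavor memory is in MiB.
print(uuid, flavor.get("name"), int(root.findtext("memory")) // 1024)
```

The KiB-to-MiB division explains why `<memory>131072</memory>` and `<nova:memory>128</nova:memory>` describe the same 128 MiB flavor.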
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.538 247403 DEBUG nova.virt.libvirt.vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:49:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1394409259',display_name='tempest-TestNetworkAdvancedServerOps-server-1394409259',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1394409259',id=167,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/lAD4OCru9xRbM5RwUuWIkZogXrbc5YYzvQOqv8vzq3yHSuGXSkcCz+Uq294UgXDvDOWlIVlc+KtPz2i57cLrRAb2n00QBhxyN/0ozf8lbd5nBtA8rBOi6LAdh2ntUJw==',key_name='tempest-TestNetworkAdvancedServerOps-1864686962',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:49:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-46zn1h0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:50:25Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e96eda0-223d-45b7-b1a7-573b51604c50,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.538 247403 DEBUG nova.network.os_vif_util [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.539 247403 DEBUG nova.network.os_vif_util [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.539 247403 DEBUG os_vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.540 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.540 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.541 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.544 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.544 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap412e7f4c-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.545 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap412e7f4c-bd, col_values=(('external_ids', {'iface-id': '412e7f4c-bd7f-493a-b50e-cf3e36e6dea8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:e3:f6', 'vm-uuid': '1e96eda0-223d-45b7-b1a7-573b51604c50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.546 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:28 np0005603621 NetworkManager[49013]: <info>  [1769849428.5474] manager: (tap412e7f4c-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.549 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.553 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.554 247403 INFO os_vif [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd')#033[00m
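The plug sequence above attaches Neutron port `412e7f4c-bd7f-493a-b50e-cf3e36e6dea8` to `br-int` as tap device `tap412e7f4c-bd`. The device name is the port UUID prefixed with `tap` and truncated so it fits the kernel's interface-name length limit; a sketch of that naming convention (an illustration inferred from the log, not the nova source):

```python
def tap_name(port_id: str, prefix: str = "tap", max_len: int = 14) -> str:
    """Derive a tap device name from a Neutron port UUID.

    Linux interface names are capped at 15 usable bytes (IFNAMSIZ - 1);
    the 14-char cut here is what reproduces the name seen in this log.
    """
    return (prefix + port_id)[:max_len]

port = "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8"
print(tap_name(port))  # matches the devname in the VIF dumps above
```

The full port UUID survives in the OVS Interface row's `external_ids` (`iface-id`), which is how OVN maps the truncated tap device back to the Neutron port.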
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.711 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.711 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.712 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:fe:e3:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.712 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Using config drive#033[00m
Jan 31 03:50:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 962 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 235 op/s
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.736 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.812 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'ec2_ids' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:28 np0005603621 nova_compute[247399]: 2026-01-31 08:50:28.891 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'keypairs' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:28 np0005603621 blissful_hugle[368691]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:50:28 np0005603621 blissful_hugle[368691]: --> relative data size: 1.0
Jan 31 03:50:28 np0005603621 blissful_hugle[368691]: --> All data devices are unavailable
Jan 31 03:50:28 np0005603621 systemd[1]: libpod-1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c.scope: Deactivated successfully.
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.929870757 +0000 UTC m=+0.928163976 container died 1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:50:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dd75a8a02aea1aab0b3dc507d4cd32f8539c9a080d1fbe930e2dc6222b657d91-merged.mount: Deactivated successfully.
Jan 31 03:50:28 np0005603621 podman[368654]: 2026-01-31 08:50:28.987244969 +0000 UTC m=+0.985538198 container remove 1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 03:50:28 np0005603621 systemd[1]: libpod-conmon-1417e8a93d2a5cd3b36361321f6a75743e7fc7fdaa83ec2b420b67b155ff642c.scope: Deactivated successfully.
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.470036302 +0000 UTC m=+0.032259589 container create 833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mcclintock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 03:50:29 np0005603621 systemd[1]: Started libpod-conmon-833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04.scope.
Jan 31 03:50:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.537278375 +0000 UTC m=+0.099501682 container init 833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.541769378 +0000 UTC m=+0.103992665 container start 833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mcclintock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.544990278 +0000 UTC m=+0.107213595 container attach 833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:50:29 np0005603621 nifty_mcclintock[368920]: 167 167
Jan 31 03:50:29 np0005603621 systemd[1]: libpod-833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04.scope: Deactivated successfully.
Jan 31 03:50:29 np0005603621 conmon[368920]: conmon 833328ef372364d67ac9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04.scope/container/memory.events
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.546945181 +0000 UTC m=+0.109168468 container died 833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.456114893 +0000 UTC m=+0.018338200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:50:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-29c2f828b6191ea866281291e07489103deaf03cc7f604ecdb512fadbf4c8442-merged.mount: Deactivated successfully.
Jan 31 03:50:29 np0005603621 podman[368904]: 2026-01-31 08:50:29.583766154 +0000 UTC m=+0.145989441 container remove 833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:50:29 np0005603621 systemd[1]: libpod-conmon-833328ef372364d67ac959e9d68b58b3f53e8516505879ab0c367cb7c681dc04.scope: Deactivated successfully.
Jan 31 03:50:29 np0005603621 podman[368943]: 2026-01-31 08:50:29.705968842 +0000 UTC m=+0.035716899 container create 6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:50:29 np0005603621 systemd[1]: Started libpod-conmon-6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260.scope.
Jan 31 03:50:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617a5192c6f5c57822fcc565ffb920648c914e67a386da69407581a1d4e0f14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617a5192c6f5c57822fcc565ffb920648c914e67a386da69407581a1d4e0f14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617a5192c6f5c57822fcc565ffb920648c914e67a386da69407581a1d4e0f14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1617a5192c6f5c57822fcc565ffb920648c914e67a386da69407581a1d4e0f14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:29 np0005603621 podman[368943]: 2026-01-31 08:50:29.776106036 +0000 UTC m=+0.105854143 container init 6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:50:29 np0005603621 podman[368943]: 2026-01-31 08:50:29.781383113 +0000 UTC m=+0.111131170 container start 6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:50:29 np0005603621 podman[368943]: 2026-01-31 08:50:29.784808711 +0000 UTC m=+0.114556788 container attach 6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:50:29 np0005603621 podman[368943]: 2026-01-31 08:50:29.689545263 +0000 UTC m=+0.019293340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:50:29 np0005603621 nova_compute[247399]: 2026-01-31 08:50:29.796 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Creating config drive at /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config#033[00m
Jan 31 03:50:29 np0005603621 nova_compute[247399]: 2026-01-31 08:50:29.801 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbkomwpnx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:29 np0005603621 nova_compute[247399]: 2026-01-31 08:50:29.926 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpbkomwpnx" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:29 np0005603621 nova_compute[247399]: 2026-01-31 08:50:29.951 247403 DEBUG nova.storage.rbd_utils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:50:29 np0005603621 nova_compute[247399]: 2026-01-31 08:50:29.956 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.111 247403 DEBUG oslo_concurrency.processutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config 1e96eda0-223d-45b7-b1a7-573b51604c50_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.112 247403 INFO nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deleting local config drive /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50/disk.config because it was imported into RBD.#033[00m
Jan 31 03:50:30 np0005603621 kernel: tap412e7f4c-bd: entered promiscuous mode
Jan 31 03:50:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:30Z|00692|binding|INFO|Claiming lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for this chassis.
Jan 31 03:50:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:30Z|00693|binding|INFO|412e7f4c-bd7f-493a-b50e-cf3e36e6dea8: Claiming fa:16:3e:fe:e3:f6 10.100.0.14
Jan 31 03:50:30 np0005603621 NetworkManager[49013]: <info>  [1769849430.1605] manager: (tap412e7f4c-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.162 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:30Z|00694|binding|INFO|Setting lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 ovn-installed in OVS
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.166 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 systemd-machined[212769]: New machine qemu-84-instance-000000a7.
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.183 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e3:f6 10.100.0.14'], port_security=['fa:16:3e:fe:e3:f6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e96eda0-223d-45b7-b1a7-573b51604c50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c64fb20-295a-4907-91a3-2d6622028082', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '13de6dfa-d3f8-48e3-9375-41e3868371bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c13c6f4-6161-4053-9f6c-b8f9a12a63dc, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.184 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 in datapath 5c64fb20-295a-4907-91a3-2d6622028082 bound to our chassis#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.185 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c64fb20-295a-4907-91a3-2d6622028082#033[00m
Jan 31 03:50:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:30Z|00695|binding|INFO|Setting lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 up in Southbound
Jan 31 03:50:30 np0005603621 systemd-udevd[369018]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.195 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74f8db23-3da1-4cc3-9ae6-62a6f421ebe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.195 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c64fb20-21 in ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:50:30 np0005603621 systemd[1]: Started Virtual Machine qemu-84-instance-000000a7.
Jan 31 03:50:30 np0005603621 NetworkManager[49013]: <info>  [1769849430.1983] device (tap412e7f4c-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:50:30 np0005603621 NetworkManager[49013]: <info>  [1769849430.1991] device (tap412e7f4c-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.197 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c64fb20-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.198 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcfa1fb-df38-4a75-a0c9-154e8fcee7f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.198 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[49e9ac23-4573-4c0e-a5c7-fe1ab1f058ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.208 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[29927ba7-003c-46a8-8503-37593a297415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.218 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5c73dac7-7007-4e25-a096-7796122434c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.249 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6887439d-ea4b-410f-a878-8bea33a80f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.256 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3c84ab-3ff0-4ffd-8d00-f76f86856c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 NetworkManager[49013]: <info>  [1769849430.2568] manager: (tap5c64fb20-20): new Veth device (/org/freedesktop/NetworkManager/Devices/314)
Jan 31 03:50:30 np0005603621 systemd-udevd[369022]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.301 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ae40614a-0e61-4859-977c-ee668905c430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.304 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[988560a8-ac5d-4111-8d9d-de4812f34add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 NetworkManager[49013]: <info>  [1769849430.3218] device (tap5c64fb20-20): carrier: link connected
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.328 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[427c52f4-f642-49a1-9ca2-fe4a8d624951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:30.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:30.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.343 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[360763af-bdc1-4010-b90c-ae6233dbd0d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c64fb20-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:94:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873583, 'reachable_time': 19529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369051, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.357 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3c032a-bb90-47ca-96f0-27529f8a23aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:94a3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 873583, 'tstamp': 873583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369052, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.369 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a5367d71-8e79-4375-ad73-e29ab3a11125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c64fb20-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:94:a3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873583, 'reachable_time': 19529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369053, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.391 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b28bb8-840f-4c4b-a900-c833199ba1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.429 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a86003a6-7839-4ccf-8d5d-b038723e31cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.431 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c64fb20-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.431 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.432 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c64fb20-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:30 np0005603621 NetworkManager[49013]: <info>  [1769849430.4343] manager: (tap5c64fb20-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Jan 31 03:50:30 np0005603621 kernel: tap5c64fb20-20: entered promiscuous mode
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.434 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.437 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c64fb20-20, col_values=(('external_ids', {'iface-id': '36d69125-9445-4eb0-8c79-64082311a234'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:30Z|00696|binding|INFO|Releasing lport 36d69125-9445-4eb0-8c79-64082311a234 from this chassis (sb_readonly=0)
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.443 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.445 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c64fb20-295a-4907-91a3-2d6622028082.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c64fb20-295a-4907-91a3-2d6622028082.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.446 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[759e7c60-59d3-425d-8c4f-ff39ffd35bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.447 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-5c64fb20-295a-4907-91a3-2d6622028082
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/5c64fb20-295a-4907-91a3-2d6622028082.pid.haproxy
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 5c64fb20-295a-4907-91a3-2d6622028082
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.449 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'env', 'PROCESS_TAG=haproxy-5c64fb20-295a-4907-91a3-2d6622028082', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c64fb20-295a-4907-91a3-2d6622028082.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.531 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.532 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:30.533 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:30 np0005603621 agitated_wing[368959]: {
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:    "0": [
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:        {
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "devices": [
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "/dev/loop3"
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            ],
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "lv_name": "ceph_lv0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "lv_size": "7511998464",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "name": "ceph_lv0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "tags": {
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.cluster_name": "ceph",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.crush_device_class": "",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.encrypted": "0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.osd_id": "0",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.type": "block",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:                "ceph.vdo": "0"
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            },
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "type": "block",
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:            "vg_name": "ceph_vg0"
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:        }
Jan 31 03:50:30 np0005603621 agitated_wing[368959]:    ]
Jan 31 03:50:30 np0005603621 agitated_wing[368959]: }
Jan 31 03:50:30 np0005603621 systemd[1]: libpod-6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260.scope: Deactivated successfully.
Jan 31 03:50:30 np0005603621 podman[368943]: 2026-01-31 08:50:30.56864142 +0000 UTC m=+0.898389477 container died 6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:50:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1617a5192c6f5c57822fcc565ffb920648c914e67a386da69407581a1d4e0f14-merged.mount: Deactivated successfully.
Jan 31 03:50:30 np0005603621 podman[368943]: 2026-01-31 08:50:30.669838025 +0000 UTC m=+0.999586082 container remove 6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:50:30 np0005603621 systemd[1]: libpod-conmon-6cd51819d2273fb598a06b50b3bc10007bb445d24f03939035c181d5d2b03260.scope: Deactivated successfully.
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.695 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 962 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 235 op/s
Jan 31 03:50:30 np0005603621 podman[369148]: 2026-01-31 08:50:30.821164163 +0000 UTC m=+0.080740461 container create a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 03:50:30 np0005603621 podman[369148]: 2026-01-31 08:50:30.758655579 +0000 UTC m=+0.018231897 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:50:30 np0005603621 systemd[1]: Started libpod-conmon-a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d.scope.
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.878 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for 1e96eda0-223d-45b7-b1a7-573b51604c50 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.879 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849430.8777578, 1e96eda0-223d-45b7-b1a7-573b51604c50 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.879 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:50:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.882 247403 DEBUG nova.compute.manager [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.882 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.885 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance spawned successfully.#033[00m
Jan 31 03:50:30 np0005603621 nova_compute[247399]: 2026-01-31 08:50:30.886 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:50:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e51e17668c0f5abc04cd1dfcac18d6a5deee7be3000cd29b7c91da8fb84bd5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:30 np0005603621 podman[369148]: 2026-01-31 08:50:30.90691063 +0000 UTC m=+0.166486928 container init a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:50:30 np0005603621 podman[369148]: 2026-01-31 08:50:30.912998222 +0000 UTC m=+0.172574520 container start a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:50:30 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [NOTICE]   (369260) : New worker (369262) forked
Jan 31 03:50:30 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [NOTICE]   (369260) : Loading success.
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.145 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.145 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.146 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.146 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.147 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.147 247403 DEBUG nova.virt.libvirt.driver [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.190878756 +0000 UTC m=+0.038955101 container create e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.226 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:50:31 np0005603621 systemd[1]: Started libpod-conmon-e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8.scope.
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.232 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:50:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.263776138 +0000 UTC m=+0.111852513 container init e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.17265924 +0000 UTC m=+0.020735595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.270156039 +0000 UTC m=+0.118232394 container start e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:50:31 np0005603621 sweet_pare[369326]: 167 167
Jan 31 03:50:31 np0005603621 systemd[1]: libpod-e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8.scope: Deactivated successfully.
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.279919927 +0000 UTC m=+0.127996282 container attach e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.28031198 +0000 UTC m=+0.128388325 container died e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 31 03:50:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-66f5349f7a85d9e84503995b97e51733994ce262ae3e0eb05d534e575cf1db7f-merged.mount: Deactivated successfully.
Jan 31 03:50:31 np0005603621 podman[369309]: 2026-01-31 08:50:31.333312143 +0000 UTC m=+0.181388488 container remove e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_pare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:50:31 np0005603621 systemd[1]: libpod-conmon-e3019a5772bbde82289808e4559f1c69fbed0cd6c4beeaaa1a8db6a62dd9b2a8.scope: Deactivated successfully.
Jan 31 03:50:31 np0005603621 podman[369349]: 2026-01-31 08:50:31.529826978 +0000 UTC m=+0.089751795 container create cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:50:31 np0005603621 podman[369349]: 2026-01-31 08:50:31.461570333 +0000 UTC m=+0.021495170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:50:31 np0005603621 systemd[1]: Started libpod-conmon-cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7.scope.
Jan 31 03:50:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:50:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5c862dfde9dbdfa3d361c34e6e3ef7acae58e46f72c452b15c36c2494467f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5c862dfde9dbdfa3d361c34e6e3ef7acae58e46f72c452b15c36c2494467f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5c862dfde9dbdfa3d361c34e6e3ef7acae58e46f72c452b15c36c2494467f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d5c862dfde9dbdfa3d361c34e6e3ef7acae58e46f72c452b15c36c2494467f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.597 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.600 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849430.8812833, 1e96eda0-223d-45b7-b1a7-573b51604c50 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.600 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] VM Started (Lifecycle Event)
Jan 31 03:50:31 np0005603621 podman[369349]: 2026-01-31 08:50:31.606132957 +0000 UTC m=+0.166057794 container init cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:50:31 np0005603621 podman[369349]: 2026-01-31 08:50:31.614055027 +0000 UTC m=+0.173980064 container start cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Jan 31 03:50:31 np0005603621 podman[369349]: 2026-01-31 08:50:31.617820476 +0000 UTC m=+0.177745323 container attach cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.923 247403 DEBUG nova.compute.manager [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.937 247403 DEBUG nova.compute.manager [req-7d207989-a17c-440c-b68b-f6fe3a479bfa req-f2c52239-197e-4514-a227-5cf65781eb6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.938 247403 DEBUG oslo_concurrency.lockutils [req-7d207989-a17c-440c-b68b-f6fe3a479bfa req-f2c52239-197e-4514-a227-5cf65781eb6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.938 247403 DEBUG oslo_concurrency.lockutils [req-7d207989-a17c-440c-b68b-f6fe3a479bfa req-f2c52239-197e-4514-a227-5cf65781eb6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.938 247403 DEBUG oslo_concurrency.lockutils [req-7d207989-a17c-440c-b68b-f6fe3a479bfa req-f2c52239-197e-4514-a227-5cf65781eb6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.938 247403 DEBUG nova.compute.manager [req-7d207989-a17c-440c-b68b-f6fe3a479bfa req-f2c52239-197e-4514-a227-5cf65781eb6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.939 247403 WARNING nova.compute.manager [req-7d207989-a17c-440c-b68b-f6fe3a479bfa req-f2c52239-197e-4514-a227-5cf65781eb6a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received unexpected event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with vm_state active and task_state rebuild_spawning.
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.972 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:50:31 np0005603621 nova_compute[247399]: 2026-01-31 08:50:31.974 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 03:50:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:32 np0005603621 nova_compute[247399]: 2026-01-31 08:50:32.277 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:50:32 np0005603621 nova_compute[247399]: 2026-01-31 08:50:32.278 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:50:32 np0005603621 nova_compute[247399]: 2026-01-31 08:50:32.278 247403 DEBUG nova.objects.instance [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 31 03:50:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:32.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:32.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:32 np0005603621 hungry_elion[369366]: {
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:        "osd_id": 0,
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:        "type": "bluestore"
Jan 31 03:50:32 np0005603621 hungry_elion[369366]:    }
Jan 31 03:50:32 np0005603621 hungry_elion[369366]: }
Jan 31 03:50:32 np0005603621 systemd[1]: libpod-cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7.scope: Deactivated successfully.
Jan 31 03:50:32 np0005603621 podman[369349]: 2026-01-31 08:50:32.420677455 +0000 UTC m=+0.980602292 container died cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 31 03:50:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7d5c862dfde9dbdfa3d361c34e6e3ef7acae58e46f72c452b15c36c2494467f8-merged.mount: Deactivated successfully.
Jan 31 03:50:32 np0005603621 nova_compute[247399]: 2026-01-31 08:50:32.462 247403 DEBUG oslo_concurrency.lockutils [None req-d1e1040d-815e-4cb1-a9ae-3b67e3a9ecad f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:50:32 np0005603621 podman[369349]: 2026-01-31 08:50:32.470862149 +0000 UTC m=+1.030786966 container remove cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:50:32 np0005603621 systemd[1]: libpod-conmon-cdb969e2cb5100a6b91456f82a31808fae90592f54b75f4febe0be63014d48c7.scope: Deactivated successfully.
Jan 31 03:50:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:50:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:50:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:32 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5f08fb95-59b0-418d-aec4-4761f17831c2 does not exist
Jan 31 03:50:32 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 69e61ea7-a9e0-479d-bcd7-a59343cee092 does not exist
Jan 31 03:50:32 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f4c6e663-2c09-4821-922d-2a44f2deee8f does not exist
Jan 31 03:50:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 980 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Jan 31 03:50:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:50:33 np0005603621 nova_compute[247399]: 2026-01-31 08:50:33.546 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.118 247403 DEBUG nova.compute.manager [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-changed-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.118 247403 DEBUG nova.compute.manager [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Refreshing instance network info cache due to event network-changed-fbe66833-82a6-4f72-9b11-a4732140845a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.118 247403 DEBUG oslo_concurrency.lockutils [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.119 247403 DEBUG oslo_concurrency.lockutils [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.119 247403 DEBUG nova.network.neutron [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Refreshing network info cache for port fbe66833-82a6-4f72-9b11-a4732140845a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:50:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:34.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:34.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 980 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 215 op/s
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.963 247403 DEBUG nova.compute.manager [req-e2ba821b-1813-4ed0-98d9-cd0c5770f312 req-0ee786b7-d1c2-4c9d-bd37-42013fd0f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.963 247403 DEBUG oslo_concurrency.lockutils [req-e2ba821b-1813-4ed0-98d9-cd0c5770f312 req-0ee786b7-d1c2-4c9d-bd37-42013fd0f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.963 247403 DEBUG oslo_concurrency.lockutils [req-e2ba821b-1813-4ed0-98d9-cd0c5770f312 req-0ee786b7-d1c2-4c9d-bd37-42013fd0f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.964 247403 DEBUG oslo_concurrency.lockutils [req-e2ba821b-1813-4ed0-98d9-cd0c5770f312 req-0ee786b7-d1c2-4c9d-bd37-42013fd0f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.964 247403 DEBUG nova.compute.manager [req-e2ba821b-1813-4ed0-98d9-cd0c5770f312 req-0ee786b7-d1c2-4c9d-bd37-42013fd0f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:50:34 np0005603621 nova_compute[247399]: 2026-01-31 08:50:34.964 247403 WARNING nova.compute.manager [req-e2ba821b-1813-4ed0-98d9-cd0c5770f312 req-0ee786b7-d1c2-4c9d-bd37-42013fd0f5d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received unexpected event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with vm_state active and task_state None.
Jan 31 03:50:35 np0005603621 nova_compute[247399]: 2026-01-31 08:50:35.141 247403 DEBUG oslo_concurrency.lockutils [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:50:35 np0005603621 nova_compute[247399]: 2026-01-31 08:50:35.142 247403 DEBUG oslo_concurrency.lockutils [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:50:35 np0005603621 nova_compute[247399]: 2026-01-31 08:50:35.545 247403 INFO nova.compute.manager [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Detaching volume 528b148a-172a-43e7-8be2-21819c2d44e5
Jan 31 03:50:35 np0005603621 nova_compute[247399]: 2026-01-31 08:50:35.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.182 247403 INFO nova.virt.block_device [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Attempting to driver detach volume 528b148a-172a-43e7-8be2-21819c2d44e5 from mountpoint /dev/vdb
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.192 247403 DEBUG nova.virt.libvirt.driver [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Attempting to detach device vdb from instance 74f09648-834b-4da1-89a4-bcdcca255908 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.192 247403 DEBUG nova.virt.libvirt.guest [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-528b148a-172a-43e7-8be2-21819c2d44e5">
Jan 31 03:50:36 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <serial>528b148a-172a-43e7-8be2-21819c2d44e5</serial>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:50:36 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.201 247403 INFO nova.virt.libvirt.driver [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully detached device vdb from instance 74f09648-834b-4da1-89a4-bcdcca255908 from the persistent domain config.#033[00m
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.201 247403 DEBUG nova.virt.libvirt.driver [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 74f09648-834b-4da1-89a4-bcdcca255908 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.202 247403 DEBUG nova.virt.libvirt.guest [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-528b148a-172a-43e7-8be2-21819c2d44e5">
Jan 31 03:50:36 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <serial>528b148a-172a-43e7-8be2-21819c2d44e5</serial>
Jan 31 03:50:36 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:50:36 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:50:36 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:50:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:36.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:36.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.430 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849436.429987, 74f09648-834b-4da1-89a4-bcdcca255908 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.432 247403 DEBUG nova.virt.libvirt.driver [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 74f09648-834b-4da1-89a4-bcdcca255908 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.434 247403 INFO nova.virt.libvirt.driver [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully detached device vdb from instance 74f09648-834b-4da1-89a4-bcdcca255908 from the live domain config.#033[00m
Jan 31 03:50:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 980 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 189 op/s
Jan 31 03:50:36 np0005603621 nova_compute[247399]: 2026-01-31 08:50:36.872 247403 DEBUG nova.objects.instance [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'flavor' on Instance uuid 74f09648-834b-4da1-89a4-bcdcca255908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:37 np0005603621 nova_compute[247399]: 2026-01-31 08:50:37.104 247403 DEBUG oslo_concurrency.lockutils [None req-89182277-b908-4e7c-af21-8f574c5704d7 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:37 np0005603621 nova_compute[247399]: 2026-01-31 08:50:37.918 247403 DEBUG nova.network.neutron [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updated VIF entry in instance network info cache for port fbe66833-82a6-4f72-9b11-a4732140845a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:50:37 np0005603621 nova_compute[247399]: 2026-01-31 08:50:37.918 247403 DEBUG nova.network.neutron [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating instance_info_cache with network_info: [{"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.094 247403 DEBUG oslo_concurrency.lockutils [req-00f8e0ed-9d80-44d1-8a39-9a2a14cfce69 req-9927651a-92bf-4deb-96f0-f37bf3aa018b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:50:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:38.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:38.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.443 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.444 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.444 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.444 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.444 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.445 247403 INFO nova.compute.manager [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Terminating instance#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.446 247403 DEBUG nova.compute.manager [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:50:38 np0005603621 kernel: tapb07e666e-f7 (unregistering): left promiscuous mode
Jan 31 03:50:38 np0005603621 NetworkManager[49013]: <info>  [1769849438.5074] device (tapb07e666e-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:50:38 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:38Z|00697|binding|INFO|Releasing lport b07e666e-f751-41ba-b006-3496f51d6eaa from this chassis (sb_readonly=0)
Jan 31 03:50:38 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:38Z|00698|binding|INFO|Setting lport b07e666e-f751-41ba-b006-3496f51d6eaa down in Southbound
Jan 31 03:50:38 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:38Z|00699|binding|INFO|Removing iface tapb07e666e-f7 ovn-installed in OVS
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.520 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.547 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 31 03:50:38 np0005603621 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000a4.scope: Consumed 16.443s CPU time.
Jan 31 03:50:38 np0005603621 systemd-machined[212769]: Machine qemu-81-instance-000000a4 terminated.
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:50:38
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'images']
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:50:38 np0005603621 podman[369515]: 2026-01-31 08:50:38.650175744 +0000 UTC m=+0.054525842 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.677 247403 INFO nova.virt.libvirt.driver [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Instance destroyed successfully.#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.677 247403 DEBUG nova.objects.instance [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'resources' on Instance uuid 74f09648-834b-4da1-89a4-bcdcca255908 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.681 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:22:c1 10.100.0.13'], port_security=['fa:16:3e:67:22:c1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '74f09648-834b-4da1-89a4-bcdcca255908', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f269a-650e-4227-8352-05abf2566c17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76fb5cb7abcd4d74abfc471a96bbd12c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7f545401-a445-41e0-97ac-2f2cec520248', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97876448-21a4-4b64-9452-bd401dfcc8ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b07e666e-f751-41ba-b006-3496f51d6eaa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.683 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b07e666e-f751-41ba-b006-3496f51d6eaa in datapath a02f269a-650e-4227-8352-05abf2566c17 unbound from our chassis#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.685 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a02f269a-650e-4227-8352-05abf2566c17, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.687 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2b21e7f3-b198-4505-b415-04cd152d0a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.687 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 namespace which is not needed anymore#033[00m
Jan 31 03:50:38 np0005603621 podman[369517]: 2026-01-31 08:50:38.710945662 +0000 UTC m=+0.115593160 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 03:50:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 979 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 1.9 MiB/s wr, 216 op/s
Jan 31 03:50:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [NOTICE]   (365459) : haproxy version is 2.8.14-c23fe91
Jan 31 03:50:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [NOTICE]   (365459) : path to executable is /usr/sbin/haproxy
Jan 31 03:50:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [WARNING]  (365459) : Exiting Master process...
Jan 31 03:50:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [ALERT]    (365459) : Current worker (365461) exited with code 143 (Terminated)
Jan 31 03:50:38 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[365454]: [WARNING]  (365459) : All workers exited. Exiting... (0)
Jan 31 03:50:38 np0005603621 systemd[1]: libpod-6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85.scope: Deactivated successfully.
Jan 31 03:50:38 np0005603621 podman[369589]: 2026-01-31 08:50:38.796157943 +0000 UTC m=+0.039298992 container died 6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:50:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85-userdata-shm.mount: Deactivated successfully.
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.836 247403 DEBUG nova.virt.libvirt.vif [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:48:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1818783830',display_name='tempest-AttachVolumeNegativeTest-server-1818783830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1818783830',id=164,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCbi4ee/uHKZmZ3YiUCQp4yI4jD1xvBXHz9BPflKR0a7UgKXiUXfNBhcRX7TqA1zE6SNa9Di7QCiwXBvdnITfa2LB4R28IrOx0I6vDefRfO6dTgkvP1J7g1i1OmBo6fVwA==',key_name='tempest-keypair-1160191621',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:48:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76fb5cb7abcd4d74abfc471a96bbd12c',ramdisk_id='',reservation_id='r-g01x1810',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-457307401',owner_user_name='tempest-AttachVolumeNegativeTest-457307401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:48:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3bd4ce8a916a4bdbbc988eb4fe32991e',uuid=74f09648-834b-4da1-89a4-bcdcca255908,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:50:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cd9af882f7876b0e341b1d261fd1c60ca9d3d83a915356272ee768b1472995e1-merged.mount: Deactivated successfully.
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.838 247403 DEBUG nova.network.os_vif_util [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converting VIF {"id": "b07e666e-f751-41ba-b006-3496f51d6eaa", "address": "fa:16:3e:67:22:c1", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb07e666e-f7", "ovs_interfaceid": "b07e666e-f751-41ba-b006-3496f51d6eaa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.839 247403 DEBUG nova.network.os_vif_util [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.839 247403 DEBUG os_vif [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.841 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb07e666e-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.842 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.843 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 podman[369589]: 2026-01-31 08:50:38.847985739 +0000 UTC m=+0.091126778 container cleanup 6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.847 247403 INFO os_vif [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:22:c1,bridge_name='br-int',has_traffic_filtering=True,id=b07e666e-f751-41ba-b006-3496f51d6eaa,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb07e666e-f7')#033[00m
Jan 31 03:50:38 np0005603621 systemd[1]: libpod-conmon-6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85.scope: Deactivated successfully.
Jan 31 03:50:38 np0005603621 podman[369619]: 2026-01-31 08:50:38.900120756 +0000 UTC m=+0.036864275 container remove 6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.906 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[69b834bc-c7aa-4c1d-bf39-63daa218c934]: (4, ('Sat Jan 31 08:50:38 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 (6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85)\n6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85\nSat Jan 31 08:50:38 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 (6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85)\n6e2105ecc2d409036b796d92bff8c4a412876943d94f4af18abac6fb084fcf85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.907 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[36645021-e83e-4bca-9b58-54e95fe81da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.909 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f269a-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:38 np0005603621 kernel: tapa02f269a-60: left promiscuous mode
Jan 31 03:50:38 np0005603621 nova_compute[247399]: 2026-01-31 08:50:38.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.920 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c480286e-be6b-49ce-be01-153445b6aebb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.936 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[03c3c2ae-47cd-4c09-92e8-605b9d66a184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.938 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4a604cd6-f1f3-4119-902a-5468123edb3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.951 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2d7d05-64a5-4931-9a2a-53d95e86f5e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862309, 'reachable_time': 38638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369652, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:38 np0005603621 systemd[1]: run-netns-ovnmeta\x2da02f269a\x2d650e\x2d4227\x2d8352\x2d05abf2566c17.mount: Deactivated successfully.
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.956 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:50:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:38.956 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[dba86c77-d036-436a-a3b6-ac0f3c888ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:50:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.033 247403 DEBUG nova.compute.manager [req-efb019f3-89b7-4ee8-93fc-5730287bafbc req-4345a76e-9271-488a-b8f2-4be14e0228e2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-vif-unplugged-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.033 247403 DEBUG oslo_concurrency.lockutils [req-efb019f3-89b7-4ee8-93fc-5730287bafbc req-4345a76e-9271-488a-b8f2-4be14e0228e2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.033 247403 DEBUG oslo_concurrency.lockutils [req-efb019f3-89b7-4ee8-93fc-5730287bafbc req-4345a76e-9271-488a-b8f2-4be14e0228e2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.034 247403 DEBUG oslo_concurrency.lockutils [req-efb019f3-89b7-4ee8-93fc-5730287bafbc req-4345a76e-9271-488a-b8f2-4be14e0228e2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.034 247403 DEBUG nova.compute.manager [req-efb019f3-89b7-4ee8-93fc-5730287bafbc req-4345a76e-9271-488a-b8f2-4be14e0228e2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] No waiting events found dispatching network-vif-unplugged-b07e666e-f751-41ba-b006-3496f51d6eaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.034 247403 DEBUG nova.compute.manager [req-efb019f3-89b7-4ee8-93fc-5730287bafbc req-4345a76e-9271-488a-b8f2-4be14e0228e2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-vif-unplugged-b07e666e-f751-41ba-b006-3496f51d6eaa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:50:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:40.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:40.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:40 np0005603621 nova_compute[247399]: 2026-01-31 08:50:40.699 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 979 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 859 KiB/s wr, 170 op/s
Jan 31 03:50:41 np0005603621 nova_compute[247399]: 2026-01-31 08:50:41.624 247403 INFO nova.virt.libvirt.driver [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Deleting instance files /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908_del#033[00m
Jan 31 03:50:41 np0005603621 nova_compute[247399]: 2026-01-31 08:50:41.624 247403 INFO nova.virt.libvirt.driver [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Deletion of /var/lib/nova/instances/74f09648-834b-4da1-89a4-bcdcca255908_del complete#033[00m
Jan 31 03:50:41 np0005603621 nova_compute[247399]: 2026-01-31 08:50:41.906 247403 INFO nova.compute.manager [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Took 3.46 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:50:41 np0005603621 nova_compute[247399]: 2026-01-31 08:50:41.906 247403 DEBUG oslo.service.loopingcall [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:50:41 np0005603621 nova_compute[247399]: 2026-01-31 08:50:41.907 247403 DEBUG nova.compute.manager [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:50:41 np0005603621 nova_compute[247399]: 2026-01-31 08:50:41.907 247403 DEBUG nova.network.neutron [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:50:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:42.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:42.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:42 np0005603621 nova_compute[247399]: 2026-01-31 08:50:42.379 247403 DEBUG nova.compute.manager [req-c3407507-68f8-4a69-b9d6-f8a0da3985a6 req-c3649330-c02c-4944-9df7-965bf1806296 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:42 np0005603621 nova_compute[247399]: 2026-01-31 08:50:42.380 247403 DEBUG oslo_concurrency.lockutils [req-c3407507-68f8-4a69-b9d6-f8a0da3985a6 req-c3649330-c02c-4944-9df7-965bf1806296 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "74f09648-834b-4da1-89a4-bcdcca255908-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:42 np0005603621 nova_compute[247399]: 2026-01-31 08:50:42.380 247403 DEBUG oslo_concurrency.lockutils [req-c3407507-68f8-4a69-b9d6-f8a0da3985a6 req-c3649330-c02c-4944-9df7-965bf1806296 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:42 np0005603621 nova_compute[247399]: 2026-01-31 08:50:42.380 247403 DEBUG oslo_concurrency.lockutils [req-c3407507-68f8-4a69-b9d6-f8a0da3985a6 req-c3649330-c02c-4944-9df7-965bf1806296 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:42 np0005603621 nova_compute[247399]: 2026-01-31 08:50:42.380 247403 DEBUG nova.compute.manager [req-c3407507-68f8-4a69-b9d6-f8a0da3985a6 req-c3649330-c02c-4944-9df7-965bf1806296 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] No waiting events found dispatching network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:42 np0005603621 nova_compute[247399]: 2026-01-31 08:50:42.381 247403 WARNING nova.compute.manager [req-c3407507-68f8-4a69-b9d6-f8a0da3985a6 req-c3649330-c02c-4944-9df7-965bf1806296 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received unexpected event network-vif-plugged-b07e666e-f751-41ba-b006-3496f51d6eaa for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:50:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 867 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.3 MiB/s wr, 245 op/s
Jan 31 03:50:43 np0005603621 nova_compute[247399]: 2026-01-31 08:50:43.869 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:44.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:44.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.371 247403 DEBUG nova.network.neutron [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.532 247403 DEBUG nova.compute.manager [req-b116c0ab-7567-46f6-9fce-9ba2f2689ea0 req-7edfd981-0139-4c37-8e18-b86aed293ea9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Received event network-vif-deleted-b07e666e-f751-41ba-b006-3496f51d6eaa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.532 247403 INFO nova.compute.manager [req-b116c0ab-7567-46f6-9fce-9ba2f2689ea0 req-7edfd981-0139-4c37-8e18-b86aed293ea9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Neutron deleted interface b07e666e-f751-41ba-b006-3496f51d6eaa; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.533 247403 DEBUG nova.network.neutron [req-b116c0ab-7567-46f6-9fce-9ba2f2689ea0 req-7edfd981-0139-4c37-8e18-b86aed293ea9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.580 247403 INFO nova.compute.manager [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Took 2.67 seconds to deallocate network for instance.#033[00m
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.632 247403 DEBUG nova.compute.manager [req-b116c0ab-7567-46f6-9fce-9ba2f2689ea0 req-7edfd981-0139-4c37-8e18-b86aed293ea9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Detach interface failed, port_id=b07e666e-f751-41ba-b006-3496f51d6eaa, reason: Instance 74f09648-834b-4da1-89a4-bcdcca255908 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:50:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:44Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:e3:f6 10.100.0.14
Jan 31 03:50:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:44Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:e3:f6 10.100.0.14
Jan 31 03:50:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 867 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 170 op/s
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.830 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:44 np0005603621 nova_compute[247399]: 2026-01-31 08:50:44.831 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.022 247403 DEBUG oslo_concurrency.processutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3786148693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.466 247403 DEBUG oslo_concurrency.processutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.472 247403 DEBUG nova.compute.provider_tree [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.612 247403 DEBUG nova.scheduler.client.report [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.701 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.704 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:45 np0005603621 nova_compute[247399]: 2026-01-31 08:50:45.826 247403 INFO nova.scheduler.client.report [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Deleted allocations for instance 74f09648-834b-4da1-89a4-bcdcca255908#033[00m
Jan 31 03:50:46 np0005603621 nova_compute[247399]: 2026-01-31 08:50:46.258 247403 DEBUG oslo_concurrency.lockutils [None req-5e09be4a-3be6-4822-be86-70fc21774264 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "74f09648-834b-4da1-89a4-bcdcca255908" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:46.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 872 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.6 MiB/s wr, 186 op/s
Jan 31 03:50:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:48.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:48.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 879 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 204 op/s
Jan 31 03:50:48 np0005603621 nova_compute[247399]: 2026-01-31 08:50:48.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01835966580588126 of space, bias 1.0, pg target 5.507899741764377 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004325555730039084 of space, bias 1.0, pg target 1.2760389403615298 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5595239653126745 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 16)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003206134840871022 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:50:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Jan 31 03:50:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:50.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:50 np0005603621 nova_compute[247399]: 2026-01-31 08:50:50.592 247403 INFO nova.compute.manager [None req-e146d1dc-433c-44dc-8a9a-c5b617785c9a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Get console output#033[00m
Jan 31 03:50:50 np0005603621 nova_compute[247399]: 2026-01-31 08:50:50.596 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:50:50 np0005603621 nova_compute[247399]: 2026-01-31 08:50:50.703 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 879 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1006 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.019 247403 DEBUG nova.compute.manager [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-changed-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.020 247403 DEBUG nova.compute.manager [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Refreshing instance network info cache due to event network-changed-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.020 247403 DEBUG oslo_concurrency.lockutils [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.021 247403 DEBUG oslo_concurrency.lockutils [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.021 247403 DEBUG nova.network.neutron [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Refreshing network info cache for port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:50:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.262 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.263 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.263 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.263 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.264 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.265 247403 INFO nova.compute.manager [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Terminating instance#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.266 247403 DEBUG nova.compute.manager [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:50:52 np0005603621 kernel: tap412e7f4c-bd (unregistering): left promiscuous mode
Jan 31 03:50:52 np0005603621 NetworkManager[49013]: <info>  [1769849452.3128] device (tap412e7f4c-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.319 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:52Z|00700|binding|INFO|Releasing lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 from this chassis (sb_readonly=0)
Jan 31 03:50:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:52Z|00701|binding|INFO|Setting lport 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 down in Southbound
Jan 31 03:50:52 np0005603621 ovn_controller[149152]: 2026-01-31T08:50:52Z|00702|binding|INFO|Removing iface tap412e7f4c-bd ovn-installed in OVS
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.331 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:e3:f6 10.100.0.14'], port_security=['fa:16:3e:fe:e3:f6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '1e96eda0-223d-45b7-b1a7-573b51604c50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c64fb20-295a-4907-91a3-2d6622028082', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '13de6dfa-d3f8-48e3-9375-41e3868371bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1c13c6f4-6161-4053-9f6c-b8f9a12a63dc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.333 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 in datapath 5c64fb20-295a-4907-91a3-2d6622028082 unbound from our chassis#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.347 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c64fb20-295a-4907-91a3-2d6622028082, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.349 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[34978b1c-aac5-4d03-9698-d1608c2e3a52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.349 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 namespace which is not needed anymore#033[00m
Jan 31 03:50:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:52.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:52 np0005603621 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000a7.scope: Deactivated successfully.
Jan 31 03:50:52 np0005603621 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000a7.scope: Consumed 12.377s CPU time.
Jan 31 03:50:52 np0005603621 systemd-machined[212769]: Machine qemu-84-instance-000000a7 terminated.
Jan 31 03:50:52 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [NOTICE]   (369260) : haproxy version is 2.8.14-c23fe91
Jan 31 03:50:52 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [NOTICE]   (369260) : path to executable is /usr/sbin/haproxy
Jan 31 03:50:52 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [WARNING]  (369260) : Exiting Master process...
Jan 31 03:50:52 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [ALERT]    (369260) : Current worker (369262) exited with code 143 (Terminated)
Jan 31 03:50:52 np0005603621 neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082[369243]: [WARNING]  (369260) : All workers exited. Exiting... (0)
Jan 31 03:50:52 np0005603621 systemd[1]: libpod-a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d.scope: Deactivated successfully.
Jan 31 03:50:52 np0005603621 podman[369707]: 2026-01-31 08:50:52.445451882 +0000 UTC m=+0.038033513 container died a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:50:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d-userdata-shm.mount: Deactivated successfully.
Jan 31 03:50:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-86e51e17668c0f5abc04cd1dfcac18d6a5deee7be3000cd29b7c91da8fb84bd5-merged.mount: Deactivated successfully.
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.479 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.482 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 podman[369707]: 2026-01-31 08:50:52.482828781 +0000 UTC m=+0.075410412 container cleanup a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Jan 31 03:50:52 np0005603621 systemd[1]: libpod-conmon-a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d.scope: Deactivated successfully.
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.494 247403 INFO nova.virt.libvirt.driver [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Instance destroyed successfully.#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.494 247403 DEBUG nova.objects.instance [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid 1e96eda0-223d-45b7-b1a7-573b51604c50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:50:52 np0005603621 podman[369745]: 2026-01-31 08:50:52.536907069 +0000 UTC m=+0.039912711 container remove a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.540 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f6772b7d-ebb0-4060-bca8-bee7ab02ec68]: (4, ('Sat Jan 31 08:50:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 (a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d)\na951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d\nSat Jan 31 08:50:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 (a951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d)\na951f74eb2d0a425804a04b0faeb4462a1ea4e26737923b7bbcd6030c9fd618d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.542 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef6b99a-720a-440d-a5ab-b12316451659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.543 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c64fb20-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:52 np0005603621 kernel: tap5c64fb20-20: left promiscuous mode
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.546 247403 DEBUG nova.virt.libvirt.vif [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:49:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1394409259',display_name='tempest-TestNetworkAdvancedServerOps-server-1394409259',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1394409259',id=167,image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/lAD4OCru9xRbM5RwUuWIkZogXrbc5YYzvQOqv8vzq3yHSuGXSkcCz+Uq294UgXDvDOWlIVlc+KtPz2i57cLrRAb2n00QBhxyN/0ozf8lbd5nBtA8rBOi6LAdh2ntUJw==',key_name='tempest-TestNetworkAdvancedServerOps-1864686962',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:50:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-46zn1h0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='0864ca59-9877-4e6d-adfc-f0a3204ed8f8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:50:32Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=1e96eda0-223d-45b7-b1a7-573b51604c50,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.547 247403 DEBUG nova.network.os_vif_util [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.548 247403 DEBUG nova.network.os_vif_util [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.549 247403 DEBUG os_vif [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.551 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.552 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap412e7f4c-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.553 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.555 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.556 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d2306809-929d-47ee-966f-13cca15e128e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.558 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:52 np0005603621 nova_compute[247399]: 2026-01-31 08:50:52.561 247403 INFO os_vif [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:e3:f6,bridge_name='br-int',has_traffic_filtering=True,id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8,network=Network(5c64fb20-295a-4907-91a3-2d6622028082),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap412e7f4c-bd')#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.575 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5929b166-bb55-4035-90d0-7f09e745fd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.576 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec76c64-b102-4533-b041-42d065f7277b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.589 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e8479756-a594-4eb6-8223-775944cbbf03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 873575, 'reachable_time': 27354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369778, 'error': None, 'target': 'ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 systemd[1]: run-netns-ovnmeta\x2d5c64fb20\x2d295a\x2d4907\x2d91a3\x2d2d6622028082.mount: Deactivated successfully.
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.593 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c64fb20-295a-4907-91a3-2d6622028082 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:50:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:50:52.593 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c35e39-5ac4-4076-be96-2f4ebc43b03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:50:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 887 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 167 op/s
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.164 247403 DEBUG nova.compute.manager [req-04bb32a8-eeda-4a84-bef0-8b91895d83e0 req-d0c42595-7517-4062-a1f3-1a9412a5a0ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-unplugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.164 247403 DEBUG oslo_concurrency.lockutils [req-04bb32a8-eeda-4a84-bef0-8b91895d83e0 req-d0c42595-7517-4062-a1f3-1a9412a5a0ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.165 247403 DEBUG oslo_concurrency.lockutils [req-04bb32a8-eeda-4a84-bef0-8b91895d83e0 req-d0c42595-7517-4062-a1f3-1a9412a5a0ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.165 247403 DEBUG oslo_concurrency.lockutils [req-04bb32a8-eeda-4a84-bef0-8b91895d83e0 req-d0c42595-7517-4062-a1f3-1a9412a5a0ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.165 247403 DEBUG nova.compute.manager [req-04bb32a8-eeda-4a84-bef0-8b91895d83e0 req-d0c42595-7517-4062-a1f3-1a9412a5a0ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-unplugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.165 247403 DEBUG nova.compute.manager [req-04bb32a8-eeda-4a84-bef0-8b91895d83e0 req-d0c42595-7517-4062-a1f3-1a9412a5a0ae fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-unplugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.624 247403 INFO nova.virt.libvirt.driver [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deleting instance files /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50_del#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.624 247403 INFO nova.virt.libvirt.driver [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deletion of /var/lib/nova/instances/1e96eda0-223d-45b7-b1a7-573b51604c50_del complete#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.675 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849438.674652, 74f09648-834b-4da1-89a4-bcdcca255908 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.676 247403 INFO nova.compute.manager [-] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.703 247403 DEBUG nova.compute.manager [None req-93ce352e-fb97-4f87-9a46-1aa064d2579d - - - - - -] [instance: 74f09648-834b-4da1-89a4-bcdcca255908] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.834 247403 INFO nova.compute.manager [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.834 247403 DEBUG oslo.service.loopingcall [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.835 247403 DEBUG nova.compute.manager [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:50:53 np0005603621 nova_compute[247399]: 2026-01-31 08:50:53.835 247403 DEBUG nova.network.neutron [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:50:54 np0005603621 nova_compute[247399]: 2026-01-31 08:50:54.221 247403 DEBUG nova.network.neutron [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updated VIF entry in instance network info cache for port 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:50:54 np0005603621 nova_compute[247399]: 2026-01-31 08:50:54.222 247403 DEBUG nova.network.neutron [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updating instance_info_cache with network_info: [{"id": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "address": "fa:16:3e:fe:e3:f6", "network": {"id": "5c64fb20-295a-4907-91a3-2d6622028082", "bridge": "br-int", "label": "tempest-network-smoke--825295636", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap412e7f4c-bd", "ovs_interfaceid": "412e7f4c-bd7f-493a-b50e-cf3e36e6dea8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:54.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 887 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 832 KiB/s rd, 729 KiB/s wr, 92 op/s
Jan 31 03:50:54 np0005603621 nova_compute[247399]: 2026-01-31 08:50:54.885 247403 DEBUG oslo_concurrency.lockutils [req-a6703d91-9cce-4682-a25e-1dc0fce3e516 req-bbd2593d-5acd-49e5-b188-2d692f96f3cf fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-1e96eda0-223d-45b7-b1a7-573b51604c50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.492 247403 DEBUG nova.compute.manager [req-5e690d14-1992-4ba6-af78-6bda8e3db732 req-e0b1915e-df04-415d-9d82-64a39f19e2a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.493 247403 DEBUG oslo_concurrency.lockutils [req-5e690d14-1992-4ba6-af78-6bda8e3db732 req-e0b1915e-df04-415d-9d82-64a39f19e2a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.493 247403 DEBUG oslo_concurrency.lockutils [req-5e690d14-1992-4ba6-af78-6bda8e3db732 req-e0b1915e-df04-415d-9d82-64a39f19e2a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.493 247403 DEBUG oslo_concurrency.lockutils [req-5e690d14-1992-4ba6-af78-6bda8e3db732 req-e0b1915e-df04-415d-9d82-64a39f19e2a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.493 247403 DEBUG nova.compute.manager [req-5e690d14-1992-4ba6-af78-6bda8e3db732 req-e0b1915e-df04-415d-9d82-64a39f19e2a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] No waiting events found dispatching network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.494 247403 WARNING nova.compute.manager [req-5e690d14-1992-4ba6-af78-6bda8e3db732 req-e0b1915e-df04-415d-9d82-64a39f19e2a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received unexpected event network-vif-plugged-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:50:55 np0005603621 nova_compute[247399]: 2026-01-31 08:50:55.705 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.037 247403 DEBUG nova.network.neutron [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.043 247403 DEBUG nova.compute.manager [req-a21306db-3102-4dc3-aea7-2cae9df9934f req-cb22721d-d038-4c2f-9aea-43918faf6a73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Received event network-vif-deleted-412e7f4c-bd7f-493a-b50e-cf3e36e6dea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.044 247403 INFO nova.compute.manager [req-a21306db-3102-4dc3-aea7-2cae9df9934f req-cb22721d-d038-4c2f-9aea-43918faf6a73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Neutron deleted interface 412e7f4c-bd7f-493a-b50e-cf3e36e6dea8; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.044 247403 DEBUG nova.network.neutron [req-a21306db-3102-4dc3-aea7-2cae9df9934f req-cb22721d-d038-4c2f-9aea-43918faf6a73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.361 247403 INFO nova.compute.manager [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Took 2.53 seconds to deallocate network for instance.#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.366 247403 DEBUG nova.compute.manager [req-a21306db-3102-4dc3-aea7-2cae9df9934f req-cb22721d-d038-4c2f-9aea-43918faf6a73 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Detach interface failed, port_id=412e7f4c-bd7f-493a-b50e-cf3e36e6dea8, reason: Instance 1e96eda0-223d-45b7-b1a7-573b51604c50 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:50:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:56.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.559 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.559 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:50:56 np0005603621 nova_compute[247399]: 2026-01-31 08:50:56.674 247403 DEBUG oslo_concurrency.processutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:50:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 874 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 834 KiB/s rd, 729 KiB/s wr, 95 op/s
Jan 31 03:50:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:50:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496585932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.096 247403 DEBUG oslo_concurrency.processutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.101 247403 DEBUG nova.compute.provider_tree [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:50:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.220 247403 DEBUG nova.scheduler.client.report [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.383 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.454 247403 INFO nova.scheduler.client.report [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Deleted allocations for instance 1e96eda0-223d-45b7-b1a7-573b51604c50#033[00m
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.554 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:50:57 np0005603621 nova_compute[247399]: 2026-01-31 08:50:57.655 247403 DEBUG oslo_concurrency.lockutils [None req-933df063-4a6d-4fbb-8b88-40dfdeae45c8 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "1e96eda0-223d-45b7-b1a7-573b51604c50" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:50:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:50:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:50:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:50:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:50:58.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:50:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:50:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:50:58.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:50:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 809 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 801 KiB/s rd, 643 KiB/s wr, 105 op/s
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.243 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.243 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.276 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.379 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.380 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.393 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.393 247403 INFO nova.compute.claims [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:50:59 np0005603621 nova_compute[247399]: 2026-01-31 08:50:59.651 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:51:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:51:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:00.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:51:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1225589024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.707 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.720 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.725 247403 DEBUG nova.compute.provider_tree [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:51:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 809 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 554 KiB/s rd, 57 KiB/s wr, 79 op/s
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.741 247403 DEBUG nova.scheduler.client.report [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.815 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.816 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.887 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.887 247403 DEBUG nova.network.neutron [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:51:00 np0005603621 nova_compute[247399]: 2026-01-31 08:51:00.913 247403 INFO nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.188 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.243 247403 DEBUG nova.policy [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3bd4ce8a916a4bdbbc988eb4fe32991e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76fb5cb7abcd4d74abfc471a96bbd12c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.367 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.370 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.370 247403 INFO nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Creating image(s)
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.396 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.422 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.448 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.453 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.512 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.513 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.514 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.514 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.540 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 03:51:01 np0005603621 nova_compute[247399]: 2026-01-31 08:51:01.543 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 dd9d93b2-c532-41d7-afab-4944d84afd07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:51:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Jan 31 03:51:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Jan 31 03:51:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Jan 31 03:51:02 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:02Z|00703|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.063 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 dd9d93b2-c532-41d7-afab-4944d84afd07_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.124 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] resizing rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 03:51:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.349 247403 DEBUG nova.objects.instance [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'migration_context' on Instance uuid dd9d93b2-c532-41d7-afab-4944d84afd07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:51:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.376 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.377 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Ensure instance console log exists: /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.377 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.377 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.378 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:51:02 np0005603621 nova_compute[247399]: 2026-01-31 08:51:02.556 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 831 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 915 KiB/s wr, 48 op/s
Jan 31 03:51:03 np0005603621 nova_compute[247399]: 2026-01-31 08:51:03.276 247403 DEBUG nova.network.neutron [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Successfully created port: 578814ca-1fc5-472e-8ea8-0016e6b9feae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:51:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 831 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 915 KiB/s wr, 48 op/s
Jan 31 03:51:05 np0005603621 nova_compute[247399]: 2026-01-31 08:51:05.625 247403 DEBUG nova.network.neutron [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Successfully updated port: 578814ca-1fc5-472e-8ea8-0016e6b9feae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:51:05 np0005603621 nova_compute[247399]: 2026-01-31 08:51:05.652 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:51:05 np0005603621 nova_compute[247399]: 2026-01-31 08:51:05.653 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquired lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:51:05 np0005603621 nova_compute[247399]: 2026-01-31 08:51:05.653 247403 DEBUG nova.network.neutron [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:51:05 np0005603621 nova_compute[247399]: 2026-01-31 08:51:05.709 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:06 np0005603621 nova_compute[247399]: 2026-01-31 08:51:06.000 247403 DEBUG nova.network.neutron [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 03:51:06 np0005603621 nova_compute[247399]: 2026-01-31 08:51:06.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:51:06 np0005603621 nova_compute[247399]: 2026-01-31 08:51:06.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 31 03:51:06 np0005603621 nova_compute[247399]: 2026-01-31 08:51:06.352 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 31 03:51:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:06.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:51:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 832 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 165 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Jan 31 03:51:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:07.254 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.254 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:07.255 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.492 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849452.4915626, 1e96eda0-223d-45b7-b1a7-573b51604c50 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.493 247403 INFO nova.compute.manager [-] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] VM Stopped (Lifecycle Event)
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.557 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.578 247403 DEBUG nova.compute.manager [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-changed-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.579 247403 DEBUG nova.compute.manager [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Refreshing instance network info cache due to event network-changed-578814ca-1fc5-472e-8ea8-0016e6b9feae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.579 247403 DEBUG oslo_concurrency.lockutils [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:51:07 np0005603621 nova_compute[247399]: 2026-01-31 08:51:07.696 247403 DEBUG nova.compute.manager [None req-df5bab07-2d75-44dd-b4b2-d5407d5aaaa1 - - - - - -] [instance: 1e96eda0-223d-45b7-b1a7-573b51604c50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.219 247403 DEBUG nova.network.neutron [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updating instance_info_cache with network_info: [{"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:51:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:08.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.499 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Releasing lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.499 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Instance network_info: |[{"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.500 247403 DEBUG oslo_concurrency.lockutils [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.500 247403 DEBUG nova.network.neutron [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Refreshing network info cache for port 578814ca-1fc5-472e-8ea8-0016e6b9feae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.503 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Start _get_guest_xml network_info=[{"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.508 247403 WARNING nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.521 247403 DEBUG nova.virt.libvirt.host [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.521 247403 DEBUG nova.virt.libvirt.host [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.525 247403 DEBUG nova.virt.libvirt.host [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.525 247403 DEBUG nova.virt.libvirt.host [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.526 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.526 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.527 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.527 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.527 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.528 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.528 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.528 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.528 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.528 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.529 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.529 247403 DEBUG nova.virt.hardware [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:51:08 np0005603621 nova_compute[247399]: 2026-01-31 08:51:08.532 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 174 op/s
Jan 31 03:51:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1822691510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.007 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.030 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.034 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.353 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.353 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.354 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:51:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2359084509' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.483 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.484 247403 DEBUG nova.virt.libvirt.vif [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1706190253',display_name='tempest-AttachVolumeNegativeTest-server-1706190253',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1706190253',id=169,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOYHn72MVBJvVIkf6CCtub5I9AN8T/c+d3nqseBgIOA+1OPO3o1342ayUtIoAO3uHP2CHz5NO5w7EajFrY0E4gP9JIDJwOLq9CPUGSJrv+3aLGm/7XH9mYapBtK4y7xpig==',key_name='tempest-keypair-1718262871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76fb5cb7abcd4d74abfc471a96bbd12c',ramdisk_id='',reservation_id='r-etvl0z0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-457307401',owner_user_name='tempest-AttachVolumeNegativeTest-457307401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3bd4ce8a916a4bdbbc988eb4fe32991e',uuid=dd9d93b2-c532-41d7-afab-4944d84afd07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.485 247403 DEBUG nova.network.os_vif_util [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converting VIF {"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.485 247403 DEBUG nova.network.os_vif_util [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.486 247403 DEBUG nova.objects.instance [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'pci_devices' on Instance uuid dd9d93b2-c532-41d7-afab-4944d84afd07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:09 np0005603621 podman[370112]: 2026-01-31 08:51:09.504321854 +0000 UTC m=+0.058752106 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 03:51:09 np0005603621 podman[370113]: 2026-01-31 08:51:09.525498993 +0000 UTC m=+0.079928515 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.544 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <uuid>dd9d93b2-c532-41d7-afab-4944d84afd07</uuid>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <name>instance-000000a9</name>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1706190253</nova:name>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:51:08</nova:creationTime>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:user uuid="3bd4ce8a916a4bdbbc988eb4fe32991e">tempest-AttachVolumeNegativeTest-457307401-project-member</nova:user>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:project uuid="76fb5cb7abcd4d74abfc471a96bbd12c">tempest-AttachVolumeNegativeTest-457307401</nova:project>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <nova:port uuid="578814ca-1fc5-472e-8ea8-0016e6b9feae">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <entry name="serial">dd9d93b2-c532-41d7-afab-4944d84afd07</entry>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <entry name="uuid">dd9d93b2-c532-41d7-afab-4944d84afd07</entry>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/dd9d93b2-c532-41d7-afab-4944d84afd07_disk">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/dd9d93b2-c532-41d7-afab-4944d84afd07_disk.config">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:59:b2:2b"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <target dev="tap578814ca-1f"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/console.log" append="off"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:51:09 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:51:09 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:51:09 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:51:09 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.545 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Preparing to wait for external event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.546 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.546 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.546 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.547 247403 DEBUG nova.virt.libvirt.vif [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1706190253',display_name='tempest-AttachVolumeNegativeTest-server-1706190253',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1706190253',id=169,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOYHn72MVBJvVIkf6CCtub5I9AN8T/c+d3nqseBgIOA+1OPO3o1342ayUtIoAO3uHP2CHz5NO5w7EajFrY0E4gP9JIDJwOLq9CPUGSJrv+3aLGm/7XH9mYapBtK4y7xpig==',key_name='tempest-keypair-1718262871',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76fb5cb7abcd4d74abfc471a96bbd12c',ramdisk_id='',reservation_id='r-etvl0z0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-457307401',owner_user_name='tempest-AttachVolumeNegativeTest-457307401-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3bd4ce8a916a4bdbbc988eb4fe32991e',uuid=dd9d93b2-c532-41d7-afab-4944d84afd07,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.547 247403 DEBUG nova.network.os_vif_util [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converting VIF {"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.548 247403 DEBUG nova.network.os_vif_util [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.548 247403 DEBUG os_vif [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.549 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.549 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.550 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.553 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.553 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap578814ca-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.554 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap578814ca-1f, col_values=(('external_ids', {'iface-id': '578814ca-1fc5-472e-8ea8-0016e6b9feae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:b2:2b', 'vm-uuid': 'dd9d93b2-c532-41d7-afab-4944d84afd07'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.555 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:09 np0005603621 NetworkManager[49013]: <info>  [1769849469.5566] manager: (tap578814ca-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.558 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.560 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.561 247403 INFO os_vif [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f')#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.680 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.680 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.681 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No VIF found with MAC fa:16:3e:59:b2:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.681 247403 INFO nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Using config drive#033[00m
Jan 31 03:51:09 np0005603621 nova_compute[247399]: 2026-01-31 08:51:09.703 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:51:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:10.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:10.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:10 np0005603621 nova_compute[247399]: 2026-01-31 08:51:10.700 247403 INFO nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Creating config drive at /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/disk.config#033[00m
Jan 31 03:51:10 np0005603621 nova_compute[247399]: 2026-01-31 08:51:10.705 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgoz4zfbd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:10 np0005603621 nova_compute[247399]: 2026-01-31 08:51:10.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 174 op/s
Jan 31 03:51:10 np0005603621 nova_compute[247399]: 2026-01-31 08:51:10.832 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpgoz4zfbd" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:10 np0005603621 nova_compute[247399]: 2026-01-31 08:51:10.858 247403 DEBUG nova.storage.rbd_utils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] rbd image dd9d93b2-c532-41d7-afab-4944d84afd07_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:10 np0005603621 nova_compute[247399]: 2026-01-31 08:51:10.862 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/disk.config dd9d93b2-c532-41d7-afab-4944d84afd07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.257 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.386 247403 DEBUG oslo_concurrency.processutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/disk.config dd9d93b2-c532-41d7-afab-4944d84afd07_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.387 247403 INFO nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Deleting local config drive /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07/disk.config because it was imported into RBD.#033[00m
Jan 31 03:51:11 np0005603621 kernel: tap578814ca-1f: entered promiscuous mode
Jan 31 03:51:11 np0005603621 NetworkManager[49013]: <info>  [1769849471.4304] manager: (tap578814ca-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.430 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:11Z|00704|binding|INFO|Claiming lport 578814ca-1fc5-472e-8ea8-0016e6b9feae for this chassis.
Jan 31 03:51:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:11Z|00705|binding|INFO|578814ca-1fc5-472e-8ea8-0016e6b9feae: Claiming fa:16:3e:59:b2:2b 10.100.0.5
Jan 31 03:51:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:11Z|00706|binding|INFO|Setting lport 578814ca-1fc5-472e-8ea8-0016e6b9feae ovn-installed in OVS
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.439 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.441 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 systemd-udevd[370231]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:51:11 np0005603621 systemd-machined[212769]: New machine qemu-85-instance-000000a9.
Jan 31 03:51:11 np0005603621 NetworkManager[49013]: <info>  [1769849471.4609] device (tap578814ca-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:51:11 np0005603621 NetworkManager[49013]: <info>  [1769849471.4617] device (tap578814ca-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:51:11 np0005603621 systemd[1]: Started Virtual Machine qemu-85-instance-000000a9.
Jan 31 03:51:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:11Z|00707|binding|INFO|Setting lport 578814ca-1fc5-472e-8ea8-0016e6b9feae up in Southbound
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.485 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:b2:2b 10.100.0.5'], port_security=['fa:16:3e:59:b2:2b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dd9d93b2-c532-41d7-afab-4944d84afd07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f269a-650e-4227-8352-05abf2566c17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76fb5cb7abcd4d74abfc471a96bbd12c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c23fc33f-9ce8-4558-a197-95640fb18679', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97876448-21a4-4b64-9452-bd401dfcc8ac, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=578814ca-1fc5-472e-8ea8-0016e6b9feae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.488 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 578814ca-1fc5-472e-8ea8-0016e6b9feae in datapath a02f269a-650e-4227-8352-05abf2566c17 bound to our chassis#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.490 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a02f269a-650e-4227-8352-05abf2566c17#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.497 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fce29b06-732f-4fcd-8980-ae810f144823]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.498 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa02f269a-61 in ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.500 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa02f269a-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.500 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e510b47f-15b0-484e-80f9-6585a4daf43b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.501 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f554ec-c39f-47ce-865b-202f9d8a22db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.511 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc6135b-09bf-4ffe-943c-a4027d221fa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.522 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[90edccd1-23ba-42bc-b113-c75c44101179]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.541 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[278a1923-9e06-41ca-8653-03e3ecc92900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 NetworkManager[49013]: <info>  [1769849471.5476] manager: (tapa02f269a-60): new Veth device (/org/freedesktop/NetworkManager/Devices/318)
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.547 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1a7c93-3bc5-4487-ad58-d2474682227a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.573 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[559fc697-af11-4702-817a-9ec0dc2db4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.576 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7c51a0a3-d3e4-4837-b902-de49bc8379db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 NetworkManager[49013]: <info>  [1769849471.5918] device (tapa02f269a-60): carrier: link connected
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.595 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9a211d-35fa-4a56-90dc-e3c80ccd7a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.607 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bb16108d-108a-42ae-abbf-560e3dce978c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f269a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877710, 'reachable_time': 21662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370265, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.617 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[114c51e7-e34b-47ac-9c28-3483281aa980]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:e973'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877710, 'tstamp': 877710}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370266, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.629 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3456a7-e55b-4955-95e2-877640fac7d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa02f269a-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:e9:73'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877710, 'reachable_time': 21662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370267, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.649 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[264f218c-4ac8-4648-8725-599d1721141f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.691 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[15148e4c-49de-417b-aca6-a0ace0f98c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.692 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f269a-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.693 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.693 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa02f269a-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.695 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 NetworkManager[49013]: <info>  [1769849471.6963] manager: (tapa02f269a-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Jan 31 03:51:11 np0005603621 kernel: tapa02f269a-60: entered promiscuous mode
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.698 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.699 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa02f269a-60, col_values=(('external_ids', {'iface-id': '2c775482-0f82-4695-be62-4a95328fbf79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.702 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:11Z|00708|binding|INFO|Releasing lport 2c775482-0f82-4695-be62-4a95328fbf79 from this chassis (sb_readonly=1)
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.703 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a02f269a-650e-4227-8352-05abf2566c17.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a02f269a-650e-4227-8352-05abf2566c17.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.704 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4bfaac32-8f62-4dca-93b9-25342f6d073a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.705 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-a02f269a-650e-4227-8352-05abf2566c17
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/a02f269a-650e-4227-8352-05abf2566c17.pid.haproxy
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID a02f269a-650e-4227-8352-05abf2566c17
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:51:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:11.706 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'env', 'PROCESS_TAG=haproxy-a02f269a-650e-4227-8352-05abf2566c17', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a02f269a-650e-4227-8352-05abf2566c17.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.708 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.892 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849471.891953, dd9d93b2-c532-41d7-afab-4944d84afd07 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:51:11 np0005603621 nova_compute[247399]: 2026-01-31 08:51:11.893 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] VM Started (Lifecycle Event)#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.034 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.038 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849471.8920472, dd9d93b2-c532-41d7-afab-4944d84afd07 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.039 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:51:12 np0005603621 podman[370341]: 2026-01-31 08:51:12.055825705 +0000 UTC m=+0.051128425 container create 2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:51:12 np0005603621 systemd[1]: Started libpod-conmon-2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13.scope.
Jan 31 03:51:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4344f12d8104eccdee6158eee14156b8236a090913b0f412f166cb04adb4ce60/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:12 np0005603621 podman[370341]: 2026-01-31 08:51:12.111983188 +0000 UTC m=+0.107285938 container init 2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:51:12 np0005603621 podman[370341]: 2026-01-31 08:51:12.117318287 +0000 UTC m=+0.112621007 container start 2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 03:51:12 np0005603621 podman[370341]: 2026-01-31 08:51:12.029034189 +0000 UTC m=+0.024336929 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.133 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.136 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:51:12 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [NOTICE]   (370360) : New worker (370362) forked
Jan 31 03:51:12 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [NOTICE]   (370360) : Loading success.
Jan 31 03:51:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.174 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:51:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:51:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:12.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:12.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.419 247403 DEBUG nova.network.neutron [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updated VIF entry in instance network info cache for port 578814ca-1fc5-472e-8ea8-0016e6b9feae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.420 247403 DEBUG nova.network.neutron [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updating instance_info_cache with network_info: [{"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.486 247403 DEBUG oslo_concurrency.lockutils [req-32cd674b-a893-4d86-9fb1-16de2cdeedae req-a7e9dd9e-4a59-4a93-8d1f-4b8815424367 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.656 247403 DEBUG nova.compute.manager [req-a31bfa2f-e287-4ddf-80d8-1a0dbcb11a9f req-e811c23b-5f3e-4ad3-ae16-5ef38302db63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.657 247403 DEBUG oslo_concurrency.lockutils [req-a31bfa2f-e287-4ddf-80d8-1a0dbcb11a9f req-e811c23b-5f3e-4ad3-ae16-5ef38302db63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.657 247403 DEBUG oslo_concurrency.lockutils [req-a31bfa2f-e287-4ddf-80d8-1a0dbcb11a9f req-e811c23b-5f3e-4ad3-ae16-5ef38302db63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.658 247403 DEBUG oslo_concurrency.lockutils [req-a31bfa2f-e287-4ddf-80d8-1a0dbcb11a9f req-e811c23b-5f3e-4ad3-ae16-5ef38302db63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.658 247403 DEBUG nova.compute.manager [req-a31bfa2f-e287-4ddf-80d8-1a0dbcb11a9f req-e811c23b-5f3e-4ad3-ae16-5ef38302db63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Processing event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.659 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.662 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849472.6624496, dd9d93b2-c532-41d7-afab-4944d84afd07 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.662 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.664 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.667 247403 INFO nova.virt.libvirt.driver [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Instance spawned successfully.#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.668 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.715 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.721 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.725 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.725 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.726 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.726 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.726 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.727 247403 DEBUG nova.virt.libvirt.driver [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:51:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 165 op/s
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.783 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.908 247403 INFO nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Took 11.54 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:51:12 np0005603621 nova_compute[247399]: 2026-01-31 08:51:12.908 247403 DEBUG nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:51:13 np0005603621 nova_compute[247399]: 2026-01-31 08:51:13.037 247403 INFO nova.compute.manager [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Took 13.68 seconds to build instance.#033[00m
Jan 31 03:51:13 np0005603621 nova_compute[247399]: 2026-01-31 08:51:13.064 247403 DEBUG oslo_concurrency.lockutils [None req-a8dde00a-9fb9-46f0-b85c-ac0367c30840 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:13 np0005603621 nova_compute[247399]: 2026-01-31 08:51:13.144 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:13 np0005603621 nova_compute[247399]: 2026-01-31 08:51:13.144 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:13 np0005603621 nova_compute[247399]: 2026-01-31 08:51:13.145 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:51:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:51:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:14.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:14.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.557 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Jan 31 03:51:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Jan 31 03:51:14 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Jan 31 03:51:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 171 op/s
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.826 247403 DEBUG nova.compute.manager [req-37807294-afbd-4b4f-8b0f-21e9a5730c93 req-a9ebaf74-3a08-4932-8ce2-2cd20586e86f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.827 247403 DEBUG oslo_concurrency.lockutils [req-37807294-afbd-4b4f-8b0f-21e9a5730c93 req-a9ebaf74-3a08-4932-8ce2-2cd20586e86f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.827 247403 DEBUG oslo_concurrency.lockutils [req-37807294-afbd-4b4f-8b0f-21e9a5730c93 req-a9ebaf74-3a08-4932-8ce2-2cd20586e86f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.827 247403 DEBUG oslo_concurrency.lockutils [req-37807294-afbd-4b4f-8b0f-21e9a5730c93 req-a9ebaf74-3a08-4932-8ce2-2cd20586e86f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.827 247403 DEBUG nova.compute.manager [req-37807294-afbd-4b4f-8b0f-21e9a5730c93 req-a9ebaf74-3a08-4932-8ce2-2cd20586e86f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] No waiting events found dispatching network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:51:14 np0005603621 nova_compute[247399]: 2026-01-31 08:51:14.827 247403 WARNING nova.compute.manager [req-37807294-afbd-4b4f-8b0f-21e9a5730c93 req-a9ebaf74-3a08-4932-8ce2-2cd20586e86f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received unexpected event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae for instance with vm_state active and task_state None.#033[00m
Jan 31 03:51:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 03:51:15 np0005603621 nova_compute[247399]: 2026-01-31 08:51:15.712 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.276 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating instance_info_cache with network_info: [{"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.295 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.295 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.295 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.295 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.296 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.330 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.330 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.331 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.331 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.331 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 03:51:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 191 op/s
Jan 31 03:51:16 np0005603621 nova_compute[247399]: 2026-01-31 08:51:16.746 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:51:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/285505905' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:51:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:51:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/285505905' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:51:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.164 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.164 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.168 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.168 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.168 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.171 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.171 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.360 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.361 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3516MB free_disk=20.67269515991211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.361 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.362 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.731 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.826 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3308d345-19b7-4fbb-bd81-631135649e7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.827 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c215327f-37ad-41a7-a883-3dbb23334df6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.827 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance dd9d93b2-c532-41d7-afab-4944d84afd07 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.827 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.827 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:51:17 np0005603621 nova_compute[247399]: 2026-01-31 08:51:17.969 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.047 247403 DEBUG nova.compute.manager [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-changed-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.047 247403 DEBUG nova.compute.manager [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Refreshing instance network info cache due to event network-changed-578814ca-1fc5-472e-8ea8-0016e6b9feae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.048 247403 DEBUG oslo_concurrency.lockutils [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.048 247403 DEBUG oslo_concurrency.lockutils [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.048 247403 DEBUG nova.network.neutron [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Refreshing network info cache for port 578814ca-1fc5-472e-8ea8-0016e6b9feae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:51:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2319408797' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:51:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.421 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.426 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.488 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.546 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:51:18 np0005603621 nova_compute[247399]: 2026-01-31 08:51:18.547 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 32 KiB/s wr, 170 op/s
Jan 31 03:51:19 np0005603621 nova_compute[247399]: 2026-01-31 08:51:19.560 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:20 np0005603621 nova_compute[247399]: 2026-01-31 08:51:20.302 247403 DEBUG nova.network.neutron [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updated VIF entry in instance network info cache for port 578814ca-1fc5-472e-8ea8-0016e6b9feae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:51:20 np0005603621 nova_compute[247399]: 2026-01-31 08:51:20.303 247403 DEBUG nova.network.neutron [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updating instance_info_cache with network_info: [{"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:20 np0005603621 nova_compute[247399]: 2026-01-31 08:51:20.342 247403 DEBUG oslo_concurrency.lockutils [req-beda5ae9-e7c4-4db4-a5fc-cedde60e7c37 req-58cd84a7-ab87-48ee-8d1a-4301a49680af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-dd9d93b2-c532-41d7-afab-4944d84afd07" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:20 np0005603621 nova_compute[247399]: 2026-01-31 08:51:20.449 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:20 np0005603621 nova_compute[247399]: 2026-01-31 08:51:20.490 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:20 np0005603621 nova_compute[247399]: 2026-01-31 08:51:20.714 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 32 KiB/s wr, 170 op/s
Jan 31 03:51:21 np0005603621 nova_compute[247399]: 2026-01-31 08:51:21.447 247403 DEBUG nova.compute.manager [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-changed-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:21 np0005603621 nova_compute[247399]: 2026-01-31 08:51:21.447 247403 DEBUG nova.compute.manager [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Refreshing instance network info cache due to event network-changed-fbe66833-82a6-4f72-9b11-a4732140845a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:51:21 np0005603621 nova_compute[247399]: 2026-01-31 08:51:21.448 247403 DEBUG oslo_concurrency.lockutils [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:21 np0005603621 nova_compute[247399]: 2026-01-31 08:51:21.448 247403 DEBUG oslo_concurrency.lockutils [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:21 np0005603621 nova_compute[247399]: 2026-01-31 08:51:21.448 247403 DEBUG nova.network.neutron [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Refreshing network info cache for port fbe66833-82a6-4f72-9b11-a4732140845a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Jan 31 03:51:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Jan 31 03:51:22 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Jan 31 03:51:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:22.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:22.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 668 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 35 KiB/s wr, 267 op/s
Jan 31 03:51:23 np0005603621 nova_compute[247399]: 2026-01-31 08:51:23.181 247403 DEBUG nova.network.neutron [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updated VIF entry in instance network info cache for port fbe66833-82a6-4f72-9b11-a4732140845a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:51:23 np0005603621 nova_compute[247399]: 2026-01-31 08:51:23.182 247403 DEBUG nova.network.neutron [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating instance_info_cache with network_info: [{"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:23 np0005603621 nova_compute[247399]: 2026-01-31 08:51:23.425 247403 DEBUG oslo_concurrency.lockutils [req-1f1178c3-ae57-43c9-ada3-23c618a3d393 req-0177051c-0e75-4eca-ba21-ea1dd325c4d3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c215327f-37ad-41a7-a883-3dbb23334df6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.133 247403 DEBUG oslo_concurrency.lockutils [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.134 247403 DEBUG oslo_concurrency.lockutils [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.198 247403 INFO nova.compute.manager [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Detaching volume 12e9d9b2-8ec9-4b16-b334-60c0f639cb59#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:24.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.400 247403 INFO nova.virt.block_device [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Attempting to driver detach volume 12e9d9b2-8ec9-4b16-b334-60c0f639cb59 from mountpoint /dev/vdb#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.406 247403 DEBUG nova.virt.libvirt.driver [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Attempting to detach device vdb from instance c215327f-37ad-41a7-a883-3dbb23334df6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.407 247403 DEBUG nova.virt.libvirt.guest [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-12e9d9b2-8ec9-4b16-b334-60c0f639cb59">
Jan 31 03:51:24 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <serial>12e9d9b2-8ec9-4b16-b334-60c0f639cb59</serial>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:51:24 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:51:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:24.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.498 247403 INFO nova.virt.libvirt.driver [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully detached device vdb from instance c215327f-37ad-41a7-a883-3dbb23334df6 from the persistent domain config.#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.499 247403 DEBUG nova.virt.libvirt.driver [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c215327f-37ad-41a7-a883-3dbb23334df6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.499 247403 DEBUG nova.virt.libvirt.guest [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-12e9d9b2-8ec9-4b16-b334-60c0f639cb59">
Jan 31 03:51:24 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <serial>12e9d9b2-8ec9-4b16-b334-60c0f639cb59</serial>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <shareable/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 31 03:51:24 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:51:24 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.562 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 668 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 29 KiB/s wr, 216 op/s
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.779 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849484.7789648, c215327f-37ad-41a7-a883-3dbb23334df6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.780 247403 DEBUG nova.virt.libvirt.driver [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c215327f-37ad-41a7-a883-3dbb23334df6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:51:24 np0005603621 nova_compute[247399]: 2026-01-31 08:51:24.784 247403 INFO nova.virt.libvirt.driver [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully detached device vdb from instance c215327f-37ad-41a7-a883-3dbb23334df6 from the live domain config.#033[00m
Jan 31 03:51:25 np0005603621 nova_compute[247399]: 2026-01-31 08:51:25.717 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:26 np0005603621 nova_compute[247399]: 2026-01-31 08:51:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:26 np0005603621 nova_compute[247399]: 2026-01-31 08:51:26.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:51:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:26.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:26Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:59:b2:2b 10.100.0.5
Jan 31 03:51:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:26Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:b2:2b 10.100.0.5
Jan 31 03:51:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:26 np0005603621 nova_compute[247399]: 2026-01-31 08:51:26.730 247403 DEBUG nova.objects.instance [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'flavor' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 675 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.9 MiB/s wr, 215 op/s
Jan 31 03:51:26 np0005603621 nova_compute[247399]: 2026-01-31 08:51:26.795 247403 DEBUG oslo_concurrency.lockutils [None req-716bfeb3-fc1c-4964-881d-1e628617aa54 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:28.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 454 KiB/s rd, 2.6 MiB/s wr, 160 op/s
Jan 31 03:51:29 np0005603621 nova_compute[247399]: 2026-01-31 08:51:29.598 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:30.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:30.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:30.532 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:30.533 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:30.534 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:30 np0005603621 nova_compute[247399]: 2026-01-31 08:51:30.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 454 KiB/s rd, 2.6 MiB/s wr, 160 op/s
Jan 31 03:51:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:32.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:51:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:32.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:51:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 643 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 437 KiB/s rd, 4.2 MiB/s wr, 169 op/s
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:51:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d5b30bdd-d39e-41be-9095-efe86d9eba2d does not exist
Jan 31 03:51:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 13eed53b-5836-4b05-a072-72aff4e66460 does not exist
Jan 31 03:51:33 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5022e280-3d99-460b-8fce-65d0bc4a81cc does not exist
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:51:33 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.05468892 +0000 UTC m=+0.044051141 container create 3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:51:34 np0005603621 systemd[1]: Started libpod-conmon-3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01.scope.
Jan 31 03:51:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.034496743 +0000 UTC m=+0.023858984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.135721399 +0000 UTC m=+0.125083640 container init 3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.143114402 +0000 UTC m=+0.132476623 container start 3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.146455508 +0000 UTC m=+0.135817749 container attach 3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 03:51:34 np0005603621 agitated_hugle[370766]: 167 167
Jan 31 03:51:34 np0005603621 systemd[1]: libpod-3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01.scope: Deactivated successfully.
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.150683662 +0000 UTC m=+0.140045923 container died 3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:51:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2044bbff0accb420f59df9845f77b0dadd8a9636ba2e1ca840e65eedd0688a98-merged.mount: Deactivated successfully.
Jan 31 03:51:34 np0005603621 podman[370750]: 2026-01-31 08:51:34.185032855 +0000 UTC m=+0.174395076 container remove 3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hugle, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 03:51:34 np0005603621 systemd[1]: libpod-conmon-3bc9d28c29ddfd8451de54d57a3391476e81d7266748e4f88b8e9c685cd32c01.scope: Deactivated successfully.
Jan 31 03:51:34 np0005603621 podman[370791]: 2026-01-31 08:51:34.325462199 +0000 UTC m=+0.041363506 container create 92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 03:51:34 np0005603621 systemd[1]: Started libpod-conmon-92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906.scope.
Jan 31 03:51:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1579c7f9c60beec8be088b3658bb757fd6b747fb998f183dc56215c5e2b25b4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1579c7f9c60beec8be088b3658bb757fd6b747fb998f183dc56215c5e2b25b4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1579c7f9c60beec8be088b3658bb757fd6b747fb998f183dc56215c5e2b25b4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1579c7f9c60beec8be088b3658bb757fd6b747fb998f183dc56215c5e2b25b4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1579c7f9c60beec8be088b3658bb757fd6b747fb998f183dc56215c5e2b25b4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:34 np0005603621 podman[370791]: 2026-01-31 08:51:34.392181086 +0000 UTC m=+0.108082393 container init 92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:51:34 np0005603621 podman[370791]: 2026-01-31 08:51:34.398544487 +0000 UTC m=+0.114445784 container start 92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:51:34 np0005603621 podman[370791]: 2026-01-31 08:51:34.401384997 +0000 UTC m=+0.117286294 container attach 92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:51:34 np0005603621 podman[370791]: 2026-01-31 08:51:34.310194068 +0000 UTC m=+0.026095385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:51:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:34.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:34.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:34 np0005603621 nova_compute[247399]: 2026-01-31 08:51:34.599 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 643 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 379 KiB/s rd, 3.7 MiB/s wr, 140 op/s
Jan 31 03:51:35 np0005603621 vibrant_booth[370807]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:51:35 np0005603621 vibrant_booth[370807]: --> relative data size: 1.0
Jan 31 03:51:35 np0005603621 vibrant_booth[370807]: --> All data devices are unavailable
Jan 31 03:51:35 np0005603621 systemd[1]: libpod-92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906.scope: Deactivated successfully.
Jan 31 03:51:35 np0005603621 conmon[370807]: conmon 92fa54f0995d6679a5c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906.scope/container/memory.events
Jan 31 03:51:35 np0005603621 podman[370822]: 2026-01-31 08:51:35.257974713 +0000 UTC m=+0.024723232 container died 92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 03:51:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1579c7f9c60beec8be088b3658bb757fd6b747fb998f183dc56215c5e2b25b4b-merged.mount: Deactivated successfully.
Jan 31 03:51:35 np0005603621 podman[370822]: 2026-01-31 08:51:35.301872278 +0000 UTC m=+0.068620777 container remove 92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:51:35 np0005603621 systemd[1]: libpod-conmon-92fa54f0995d6679a5c921d420c8b2b9e0b60badde1ecc4447a2b7fa6b6b5906.scope: Deactivated successfully.
Jan 31 03:51:35 np0005603621 nova_compute[247399]: 2026-01-31 08:51:35.595 247403 DEBUG oslo_concurrency.lockutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:35 np0005603621 nova_compute[247399]: 2026-01-31 08:51:35.597 247403 DEBUG oslo_concurrency.lockutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:35 np0005603621 nova_compute[247399]: 2026-01-31 08:51:35.631 247403 DEBUG nova.objects.instance [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'flavor' on Instance uuid dd9d93b2-c532-41d7-afab-4944d84afd07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:35 np0005603621 nova_compute[247399]: 2026-01-31 08:51:35.704 247403 DEBUG oslo_concurrency.lockutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:35 np0005603621 nova_compute[247399]: 2026-01-31 08:51:35.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.775777961 +0000 UTC m=+0.036109211 container create 1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:51:35 np0005603621 systemd[1]: Started libpod-conmon-1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6.scope.
Jan 31 03:51:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.837565922 +0000 UTC m=+0.097897202 container init 1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.843365946 +0000 UTC m=+0.103697196 container start 1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 03:51:35 np0005603621 infallible_thompson[370994]: 167 167
Jan 31 03:51:35 np0005603621 systemd[1]: libpod-1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6.scope: Deactivated successfully.
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.847283339 +0000 UTC m=+0.107614619 container attach 1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.848543899 +0000 UTC m=+0.108875149 container died 1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.757893656 +0000 UTC m=+0.018224956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:51:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-007e5b0a47003fe0fc5a78de69e28212e82deed7be7cf12f5e2dbb1f40579580-merged.mount: Deactivated successfully.
Jan 31 03:51:35 np0005603621 podman[370978]: 2026-01-31 08:51:35.894346695 +0000 UTC m=+0.154677945 container remove 1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:51:35 np0005603621 systemd[1]: libpod-conmon-1b5848be25280c0ef03249b83c9a589f50cb0b930eeea155cd327417c002d9a6.scope: Deactivated successfully.
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.017413121 +0000 UTC m=+0.032200978 container create 9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_spence, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:51:36 np0005603621 systemd[1]: Started libpod-conmon-9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5.scope.
Jan 31 03:51:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43fd2679641eb6bc5f6b0a1762b8ef7f302feed50ad4a2b9f25a8037534f131/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43fd2679641eb6bc5f6b0a1762b8ef7f302feed50ad4a2b9f25a8037534f131/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43fd2679641eb6bc5f6b0a1762b8ef7f302feed50ad4a2b9f25a8037534f131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a43fd2679641eb6bc5f6b0a1762b8ef7f302feed50ad4a2b9f25a8037534f131/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.088 247403 DEBUG oslo_concurrency.lockutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.089 247403 DEBUG oslo_concurrency.lockutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.089 247403 INFO nova.compute.manager [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Attaching volume 0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28 to /dev/vdb#033[00m
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.091622994 +0000 UTC m=+0.106410851 container init 9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_spence, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.096996523 +0000 UTC m=+0.111784370 container start 9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_spence, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.003282415 +0000 UTC m=+0.018070292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.099851413 +0000 UTC m=+0.114639300 container attach 9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.343 247403 DEBUG os_brick.utils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.345 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.354 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.355 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[20a8a2d5-c777-453e-801f-19c6bd3f6b17]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.356 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.362 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.362 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[3b3ee5bf-551c-4499-8aaf-5d91f865b15a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.363 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.369 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.369 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[af304a9d-a630-43f0-9222-81e9b85433e2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.371 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[5dec42ac-bacd-4b89-8ef3-72ec43cdf161]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.371 247403 DEBUG oslo_concurrency.processutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.391 247403 DEBUG oslo_concurrency.processutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.393 247403 DEBUG os_brick.initiator.connectors.lightos [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.393 247403 DEBUG os_brick.initiator.connectors.lightos [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.394 247403 DEBUG os_brick.initiator.connectors.lightos [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.394 247403 DEBUG os_brick.utils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.394 247403 DEBUG nova.virt.block_device [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updating existing volume attachment record: 873450d2-bb75-4778-8a37-fec32bf0cd39 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:51:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:36.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:36.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Jan 31 03:51:36 np0005603621 charming_spence[371034]: {
Jan 31 03:51:36 np0005603621 charming_spence[371034]:    "0": [
Jan 31 03:51:36 np0005603621 charming_spence[371034]:        {
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "devices": [
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "/dev/loop3"
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            ],
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "lv_name": "ceph_lv0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "lv_size": "7511998464",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "name": "ceph_lv0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "tags": {
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.cluster_name": "ceph",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.crush_device_class": "",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.encrypted": "0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.osd_id": "0",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.type": "block",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:                "ceph.vdo": "0"
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            },
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "type": "block",
Jan 31 03:51:36 np0005603621 charming_spence[371034]:            "vg_name": "ceph_vg0"
Jan 31 03:51:36 np0005603621 charming_spence[371034]:        }
Jan 31 03:51:36 np0005603621 charming_spence[371034]:    ]
Jan 31 03:51:36 np0005603621 charming_spence[371034]: }
Jan 31 03:51:36 np0005603621 systemd[1]: libpod-9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5.scope: Deactivated successfully.
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.840805139 +0000 UTC m=+0.855593006 container died 9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 03:51:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a43fd2679641eb6bc5f6b0a1762b8ef7f302feed50ad4a2b9f25a8037534f131-merged.mount: Deactivated successfully.
Jan 31 03:51:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:36Z|00709|binding|INFO|Releasing lport 2c775482-0f82-4695-be62-4a95328fbf79 from this chassis (sb_readonly=0)
Jan 31 03:51:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:36Z|00710|binding|INFO|Releasing lport 0ed76a0a-650c-4ec7-a4d4-0e745236b047 from this chassis (sb_readonly=0)
Jan 31 03:51:36 np0005603621 podman[371018]: 2026-01-31 08:51:36.902026232 +0000 UTC m=+0.916814089 container remove 9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:51:36 np0005603621 systemd[1]: libpod-conmon-9a768af7fdc73e65c13914f67cf61383e029d8e9889200b9f2cf65bc3eef7fa5.scope: Deactivated successfully.
Jan 31 03:51:36 np0005603621 nova_compute[247399]: 2026-01-31 08:51:36.909 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178924345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.406206871 +0000 UTC m=+0.040406028 container create 5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:51:37 np0005603621 systemd[1]: Started libpod-conmon-5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee.scope.
Jan 31 03:51:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.46477957 +0000 UTC m=+0.098978727 container init 5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.471544103 +0000 UTC m=+0.105743250 container start 5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.474669292 +0000 UTC m=+0.108868459 container attach 5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:51:37 np0005603621 amazing_stonebraker[371270]: 167 167
Jan 31 03:51:37 np0005603621 systemd[1]: libpod-5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee.scope: Deactivated successfully.
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.476230572 +0000 UTC m=+0.110429709 container died 5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.388248983 +0000 UTC m=+0.022448170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:51:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-47671d0cbf62480b6ea922b9d0f0790e750f1bc222a712979f9257fe92a347bd-merged.mount: Deactivated successfully.
Jan 31 03:51:37 np0005603621 podman[371253]: 2026-01-31 08:51:37.507975143 +0000 UTC m=+0.142174280 container remove 5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:51:37 np0005603621 systemd[1]: libpod-conmon-5bf5f648ecb06ac348e1c50e3cf04d18f2de1bd8f1485dffd402b2a4f8fd34ee.scope: Deactivated successfully.
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.619 247403 DEBUG nova.objects.instance [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'flavor' on Instance uuid dd9d93b2-c532-41d7-afab-4944d84afd07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:37 np0005603621 podman[371294]: 2026-01-31 08:51:37.631287167 +0000 UTC m=+0.034796579 container create b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:51:37 np0005603621 systemd[1]: Started libpod-conmon-b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5.scope.
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.668 247403 DEBUG nova.virt.libvirt.driver [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Attempting to attach volume 0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.673 247403 DEBUG nova.virt.libvirt.guest [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28">
Jan 31 03:51:37 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:51:37 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:51:37 np0005603621 nova_compute[247399]:  <serial>0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28</serial>
Jan 31 03:51:37 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:51:37 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:51:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:51:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf305e41fefbf4c96922d66088858b557b28fc9d6d626a9da30f47188845f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf305e41fefbf4c96922d66088858b557b28fc9d6d626a9da30f47188845f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf305e41fefbf4c96922d66088858b557b28fc9d6d626a9da30f47188845f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:37 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf305e41fefbf4c96922d66088858b557b28fc9d6d626a9da30f47188845f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:51:37 np0005603621 podman[371294]: 2026-01-31 08:51:37.616934333 +0000 UTC m=+0.020443765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:51:37 np0005603621 ceph-osd[84880]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 31 03:51:37 np0005603621 podman[371294]: 2026-01-31 08:51:37.717695515 +0000 UTC m=+0.121204927 container init b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:51:37 np0005603621 podman[371294]: 2026-01-31 08:51:37.724268563 +0000 UTC m=+0.127777975 container start b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_buck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:51:37 np0005603621 podman[371294]: 2026-01-31 08:51:37.727713302 +0000 UTC m=+0.131222734 container attach b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_buck, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.899 247403 DEBUG nova.virt.libvirt.driver [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.900 247403 DEBUG nova.virt.libvirt.driver [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.900 247403 DEBUG nova.virt.libvirt.driver [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:37 np0005603621 nova_compute[247399]: 2026-01-31 08:51:37.900 247403 DEBUG nova.virt.libvirt.driver [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] No VIF found with MAC fa:16:3e:59:b2:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:51:38 np0005603621 nova_compute[247399]: 2026-01-31 08:51:38.188 247403 DEBUG oslo_concurrency.lockutils [None req-0cce24c1-432b-4fd7-82b6-960cf96da4b5 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:38.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:38.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:38 np0005603621 friendly_buck[371311]: {
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:        "osd_id": 0,
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:        "type": "bluestore"
Jan 31 03:51:38 np0005603621 friendly_buck[371311]:    }
Jan 31 03:51:38 np0005603621 friendly_buck[371311]: }
Jan 31 03:51:38 np0005603621 systemd[1]: libpod-b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5.scope: Deactivated successfully.
Jan 31 03:51:38 np0005603621 podman[371353]: 2026-01-31 08:51:38.534690801 +0000 UTC m=+0.020625092 container died b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_buck, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:51:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aaf305e41fefbf4c96922d66088858b557b28fc9d6d626a9da30f47188845f03-merged.mount: Deactivated successfully.
Jan 31 03:51:38 np0005603621 podman[371353]: 2026-01-31 08:51:38.582635954 +0000 UTC m=+0.068570225 container remove b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:51:38 np0005603621 systemd[1]: libpod-conmon-b281a409ae343f0ed9c19f90eff35a57ffe1daccb29d7a5d05859d10b5dbf6a5.scope: Deactivated successfully.
Jan 31 03:51:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:51:38
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 100 op/s
Jan 31 03:51:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:51:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:51:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2e9b7912-faa1-4dfd-a95d-8f8e24558465 does not exist
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ddceb598-b896-453b-a661-3375675fe06e does not exist
Jan 31 03:51:38 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev aaa382a9-44a6-43ac-bcca-d0efc876432b does not exist
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:51:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:51:39 np0005603621 nova_compute[247399]: 2026-01-31 08:51:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:51:39 np0005603621 nova_compute[247399]: 2026-01-31 08:51:39.603 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:51:39 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:51:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:40.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:40.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:40 np0005603621 podman[371419]: 2026-01-31 08:51:40.48976559 +0000 UTC m=+0.045195978 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:51:40 np0005603621 podman[371420]: 2026-01-31 08:51:40.535794734 +0000 UTC m=+0.090615483 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 03:51:40 np0005603621 nova_compute[247399]: 2026-01-31 08:51:40.723 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 31 03:51:40 np0005603621 nova_compute[247399]: 2026-01-31 08:51:40.797 247403 DEBUG oslo_concurrency.lockutils [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:40 np0005603621 nova_compute[247399]: 2026-01-31 08:51:40.797 247403 DEBUG oslo_concurrency.lockutils [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:40 np0005603621 nova_compute[247399]: 2026-01-31 08:51:40.821 247403 INFO nova.compute.manager [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Detaching volume 0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.171 247403 INFO nova.virt.block_device [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Attempting to driver detach volume 0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28 from mountpoint /dev/vdb#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.183 247403 DEBUG nova.virt.libvirt.driver [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Attempting to detach device vdb from instance dd9d93b2-c532-41d7-afab-4944d84afd07 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.184 247403 DEBUG nova.virt.libvirt.guest [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28">
Jan 31 03:51:41 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <serial>0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28</serial>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:51:41 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.196 247403 INFO nova.virt.libvirt.driver [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully detached device vdb from instance dd9d93b2-c532-41d7-afab-4944d84afd07 from the persistent domain config.#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.196 247403 DEBUG nova.virt.libvirt.driver [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance dd9d93b2-c532-41d7-afab-4944d84afd07 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.197 247403 DEBUG nova.virt.libvirt.guest [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28">
Jan 31 03:51:41 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <serial>0de0b2ab-99ec-43cf-a8f3-b54f9bb71e28</serial>
Jan 31 03:51:41 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:51:41 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:51:41 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.424 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849501.4242713, dd9d93b2-c532-41d7-afab-4944d84afd07 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.426 247403 DEBUG nova.virt.libvirt.driver [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance dd9d93b2-c532-41d7-afab-4944d84afd07 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:51:41 np0005603621 nova_compute[247399]: 2026-01-31 08:51:41.428 247403 INFO nova.virt.libvirt.driver [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully detached device vdb from instance dd9d93b2-c532-41d7-afab-4944d84afd07 from the live domain config.#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.092 247403 DEBUG nova.objects.instance [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'flavor' on Instance uuid dd9d93b2-c532-41d7-afab-4944d84afd07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.145 247403 DEBUG oslo_concurrency.lockutils [None req-c424e431-0650-4a47-80f2-093db7ed46e1 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.347s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:42.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:42.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.919 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.919 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.919 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.920 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.920 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.921 247403 INFO nova.compute.manager [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Terminating instance#033[00m
Jan 31 03:51:42 np0005603621 nova_compute[247399]: 2026-01-31 08:51:42.922 247403 DEBUG nova.compute.manager [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:51:43 np0005603621 kernel: tap578814ca-1f (unregistering): left promiscuous mode
Jan 31 03:51:43 np0005603621 NetworkManager[49013]: <info>  [1769849503.1111] device (tap578814ca-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.165 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:43Z|00711|binding|INFO|Releasing lport 578814ca-1fc5-472e-8ea8-0016e6b9feae from this chassis (sb_readonly=0)
Jan 31 03:51:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:43Z|00712|binding|INFO|Setting lport 578814ca-1fc5-472e-8ea8-0016e6b9feae down in Southbound
Jan 31 03:51:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:51:43Z|00713|binding|INFO|Removing iface tap578814ca-1f ovn-installed in OVS
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.169 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.173 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.195 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:b2:2b 10.100.0.5'], port_security=['fa:16:3e:59:b2:2b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'dd9d93b2-c532-41d7-afab-4944d84afd07', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a02f269a-650e-4227-8352-05abf2566c17', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76fb5cb7abcd4d74abfc471a96bbd12c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c23fc33f-9ce8-4558-a197-95640fb18679', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97876448-21a4-4b64-9452-bd401dfcc8ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=578814ca-1fc5-472e-8ea8-0016e6b9feae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.197 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 578814ca-1fc5-472e-8ea8-0016e6b9feae in datapath a02f269a-650e-4227-8352-05abf2566c17 unbound from our chassis#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.198 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a02f269a-650e-4227-8352-05abf2566c17, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.200 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d4aabab6-1b57-4806-8944-4afbb78a3aff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.201 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 namespace which is not needed anymore#033[00m
Jan 31 03:51:43 np0005603621 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000a9.scope: Deactivated successfully.
Jan 31 03:51:43 np0005603621 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000a9.scope: Consumed 13.214s CPU time.
Jan 31 03:51:43 np0005603621 systemd-machined[212769]: Machine qemu-85-instance-000000a9 terminated.
Jan 31 03:51:43 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [NOTICE]   (370360) : haproxy version is 2.8.14-c23fe91
Jan 31 03:51:43 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [NOTICE]   (370360) : path to executable is /usr/sbin/haproxy
Jan 31 03:51:43 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [WARNING]  (370360) : Exiting Master process...
Jan 31 03:51:43 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [ALERT]    (370360) : Current worker (370362) exited with code 143 (Terminated)
Jan 31 03:51:43 np0005603621 neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17[370356]: [WARNING]  (370360) : All workers exited. Exiting... (0)
Jan 31 03:51:43 np0005603621 systemd[1]: libpod-2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13.scope: Deactivated successfully.
Jan 31 03:51:43 np0005603621 podman[371489]: 2026-01-31 08:51:43.317576855 +0000 UTC m=+0.045574070 container died 2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:51:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13-userdata-shm.mount: Deactivated successfully.
Jan 31 03:51:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4344f12d8104eccdee6158eee14156b8236a090913b0f412f166cb04adb4ce60-merged.mount: Deactivated successfully.
Jan 31 03:51:43 np0005603621 podman[371489]: 2026-01-31 08:51:43.372576132 +0000 UTC m=+0.100573347 container cleanup 2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:51:43 np0005603621 systemd[1]: libpod-conmon-2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13.scope: Deactivated successfully.
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.379 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.384 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.388 247403 INFO nova.virt.libvirt.driver [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Instance destroyed successfully.#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.389 247403 DEBUG nova.objects.instance [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lazy-loading 'resources' on Instance uuid dd9d93b2-c532-41d7-afab-4944d84afd07 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:43 np0005603621 podman[371521]: 2026-01-31 08:51:43.42920735 +0000 UTC m=+0.037968439 container remove 2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.432 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[310ddd67-4ed7-467f-97e1-78276e1ac5cb]: (4, ('Sat Jan 31 08:51:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 (2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13)\n2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13\nSat Jan 31 08:51:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 (2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13)\n2fcbca1dabee9b7c7b53fa3f66d15235aac9425414bea570c227c6c2b55e4e13\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.434 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b59de2ae-5867-483e-9127-6fb230e88a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.435 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa02f269a-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 kernel: tapa02f269a-60: left promiscuous mode
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.445 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.449 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[66c6a338-5117-4fee-8d27-030382e96b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.463 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cca6782f-ff37-4816-a286-ce4b0a7e9df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.464 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b9407501-2142-4ce7-9a25-b216bb7ff781]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.475 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[53eac753-dd5a-4366-9986-db4e6f262764]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877705, 'reachable_time': 30099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371547, 'error': None, 'target': 'ovnmeta-a02f269a-650e-4227-8352-05abf2566c17', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 systemd[1]: run-netns-ovnmeta\x2da02f269a\x2d650e\x2d4227\x2d8352\x2d05abf2566c17.mount: Deactivated successfully.
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.478 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a02f269a-650e-4227-8352-05abf2566c17 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:51:43 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:51:43.479 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b002b4b5-d356-4b1b-8e67-54dcf439c023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.534 247403 DEBUG nova.virt.libvirt.vif [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:50:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1706190253',display_name='tempest-AttachVolumeNegativeTest-server-1706190253',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1706190253',id=169,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOYHn72MVBJvVIkf6CCtub5I9AN8T/c+d3nqseBgIOA+1OPO3o1342ayUtIoAO3uHP2CHz5NO5w7EajFrY0E4gP9JIDJwOLq9CPUGSJrv+3aLGm/7XH9mYapBtK4y7xpig==',key_name='tempest-keypair-1718262871',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76fb5cb7abcd4d74abfc471a96bbd12c',ramdisk_id='',reservation_id='r-etvl0z0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-457307401',owner_user_name='tempest-AttachVolumeNegativeTest-457307401-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:51:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3bd4ce8a916a4bdbbc988eb4fe32991e',uuid=dd9d93b2-c532-41d7-afab-4944d84afd07,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.535 247403 DEBUG nova.network.os_vif_util [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converting VIF {"id": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "address": "fa:16:3e:59:b2:2b", "network": {"id": "a02f269a-650e-4227-8352-05abf2566c17", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-245078866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76fb5cb7abcd4d74abfc471a96bbd12c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap578814ca-1f", "ovs_interfaceid": "578814ca-1fc5-472e-8ea8-0016e6b9feae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.536 247403 DEBUG nova.network.os_vif_util [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.536 247403 DEBUG os_vif [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.538 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.539 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap578814ca-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.540 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.542 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.545 247403 INFO os_vif [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:b2:2b,bridge_name='br-int',has_traffic_filtering=True,id=578814ca-1fc5-472e-8ea8-0016e6b9feae,network=Network(a02f269a-650e-4227-8352-05abf2566c17),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap578814ca-1f')#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.906 247403 DEBUG nova.compute.manager [req-c082c1a1-e859-4631-8075-0ae95c019106 req-8ce4007c-9d0d-46e9-afde-39c5051df515 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-vif-unplugged-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.906 247403 DEBUG oslo_concurrency.lockutils [req-c082c1a1-e859-4631-8075-0ae95c019106 req-8ce4007c-9d0d-46e9-afde-39c5051df515 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.907 247403 DEBUG oslo_concurrency.lockutils [req-c082c1a1-e859-4631-8075-0ae95c019106 req-8ce4007c-9d0d-46e9-afde-39c5051df515 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.907 247403 DEBUG oslo_concurrency.lockutils [req-c082c1a1-e859-4631-8075-0ae95c019106 req-8ce4007c-9d0d-46e9-afde-39c5051df515 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.907 247403 DEBUG nova.compute.manager [req-c082c1a1-e859-4631-8075-0ae95c019106 req-8ce4007c-9d0d-46e9-afde-39c5051df515 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] No waiting events found dispatching network-vif-unplugged-578814ca-1fc5-472e-8ea8-0016e6b9feae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:51:43 np0005603621 nova_compute[247399]: 2026-01-31 08:51:43.907 247403 DEBUG nova.compute.manager [req-c082c1a1-e859-4631-8075-0ae95c019106 req-8ce4007c-9d0d-46e9-afde-39c5051df515 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-vif-unplugged-578814ca-1fc5-472e-8ea8-0016e6b9feae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:51:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:44.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:44.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 647 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 275 KiB/s wr, 100 op/s
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.305 247403 INFO nova.virt.libvirt.driver [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Deleting instance files /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07_del#033[00m
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.306 247403 INFO nova.virt.libvirt.driver [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Deletion of /var/lib/nova/instances/dd9d93b2-c532-41d7-afab-4944d84afd07_del complete#033[00m
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.397 247403 INFO nova.compute.manager [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Took 2.48 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.398 247403 DEBUG oslo.service.loopingcall [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.398 247403 DEBUG nova.compute.manager [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.398 247403 DEBUG nova.network.neutron [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:51:45 np0005603621 nova_compute[247399]: 2026-01-31 08:51:45.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.054 247403 DEBUG nova.compute.manager [req-4edc8ada-8196-4c7d-935c-36e8e03c1ac1 req-4d1f6453-b00d-4618-9f4c-7f045f011894 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.054 247403 DEBUG oslo_concurrency.lockutils [req-4edc8ada-8196-4c7d-935c-36e8e03c1ac1 req-4d1f6453-b00d-4618-9f4c-7f045f011894 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.054 247403 DEBUG oslo_concurrency.lockutils [req-4edc8ada-8196-4c7d-935c-36e8e03c1ac1 req-4d1f6453-b00d-4618-9f4c-7f045f011894 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.054 247403 DEBUG oslo_concurrency.lockutils [req-4edc8ada-8196-4c7d-935c-36e8e03c1ac1 req-4d1f6453-b00d-4618-9f4c-7f045f011894 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.054 247403 DEBUG nova.compute.manager [req-4edc8ada-8196-4c7d-935c-36e8e03c1ac1 req-4d1f6453-b00d-4618-9f4c-7f045f011894 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] No waiting events found dispatching network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.055 247403 WARNING nova.compute.manager [req-4edc8ada-8196-4c7d-935c-36e8e03c1ac1 req-4d1f6453-b00d-4618-9f4c-7f045f011894 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received unexpected event network-vif-plugged-578814ca-1fc5-472e-8ea8-0016e6b9feae for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:51:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:46.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:46.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.619 247403 DEBUG nova.network.neutron [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.677 247403 INFO nova.compute.manager [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Took 1.28 seconds to deallocate network for instance.#033[00m
Jan 31 03:51:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 538 KiB/s wr, 111 op/s
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.771 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.772 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.804 247403 DEBUG nova.compute.manager [req-d66be1bf-caa6-4f0d-a849-5a598b87c93c req-fb7dcfd3-54cc-4b66-b314-84eee8fb1ed2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Received event network-vif-deleted-578814ca-1fc5-472e-8ea8-0016e6b9feae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:46 np0005603621 nova_compute[247399]: 2026-01-31 08:51:46.922 247403 DEBUG oslo_concurrency.processutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:51:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320498605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:51:47 np0005603621 nova_compute[247399]: 2026-01-31 08:51:47.338 247403 DEBUG oslo_concurrency.processutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:47 np0005603621 nova_compute[247399]: 2026-01-31 08:51:47.343 247403 DEBUG nova.compute.provider_tree [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:51:47 np0005603621 nova_compute[247399]: 2026-01-31 08:51:47.374 247403 DEBUG nova.scheduler.client.report [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:51:47 np0005603621 nova_compute[247399]: 2026-01-31 08:51:47.407 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:47 np0005603621 nova_compute[247399]: 2026-01-31 08:51:47.469 247403 INFO nova.scheduler.client.report [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Deleted allocations for instance dd9d93b2-c532-41d7-afab-4944d84afd07#033[00m
Jan 31 03:51:47 np0005603621 nova_compute[247399]: 2026-01-31 08:51:47.663 247403 DEBUG oslo_concurrency.lockutils [None req-54bf7435-748f-420d-b577-eaf6746ce2c2 3bd4ce8a916a4bdbbc988eb4fe32991e 76fb5cb7abcd4d74abfc471a96bbd12c - - default default] Lock "dd9d93b2-c532-41d7-afab-4944d84afd07" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.222 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.223 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.281 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.373 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.373 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.383 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.383 247403 INFO nova.compute.claims [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:51:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:48.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:48.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.541 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:48 np0005603621 nova_compute[247399]: 2026-01-31 08:51:48.604 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 31 03:51:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:51:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3191722501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.060 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.066 247403 DEBUG nova.compute.provider_tree [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.227 247403 DEBUG nova.scheduler.client.report [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.455 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.455 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009697830878931696 of space, bias 1.0, pg target 2.9093492636795086 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00531375046528941 of space, bias 1.0, pg target 1.5834976386562443 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003238850502512563 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:51:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.625 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.625 247403 DEBUG nova.network.neutron [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.739 247403 INFO nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.870 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:51:49 np0005603621 nova_compute[247399]: 2026-01-31 08:51:49.999 247403 DEBUG nova.policy [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a498364761ef428b99cac3f92e603385', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8397e0fed04b4dabb57148d0924de2dc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.061 247403 INFO nova.virt.block_device [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Booting with volume f7e6afdb-ed5e-4762-ad08-3a1fceda6276 at /dev/vda#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.241 247403 DEBUG os_brick.utils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.242 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.251 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.251 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[df2ae1cb-a562-4f30-8807-23a73b622dc6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.252 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.257 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.257 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a24431-679b-4115-b890-ae5430c4130d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.258 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.263 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.263 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[dc19c07a-fb01-4ca2-b149-60e55d5df861]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.264 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2a042d-6f67-4300-b6e8-61bbc832d71d]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.264 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.286 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.288 247403 DEBUG os_brick.initiator.connectors.lightos [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.288 247403 DEBUG os_brick.initiator.connectors.lightos [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.289 247403 DEBUG os_brick.initiator.connectors.lightos [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.289 247403 DEBUG os_brick.utils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.289 247403 DEBUG nova.virt.block_device [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Updating existing volume attachment record: 45735b27-dd0f-4d41-be36-8e24eedc76fa _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:51:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:50.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:50.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:50 np0005603621 nova_compute[247399]: 2026-01-31 08:51:50.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 31 03:51:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867114131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.824 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.827 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.828 247403 INFO nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Creating image(s)#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.829 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.829 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Ensure instance console log exists: /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.830 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.830 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.831 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:51 np0005603621 nova_compute[247399]: 2026-01-31 08:51:51.988 247403 DEBUG nova.network.neutron [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Successfully created port: 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:51:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:52.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:52.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 271 op/s
Jan 31 03:51:53 np0005603621 nova_compute[247399]: 2026-01-31 08:51:53.542 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:54.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.4 MiB/s wr, 180 op/s
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.570 247403 DEBUG nova.network.neutron [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Successfully updated port: 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.599 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "refresh_cache-cabe126e-f3af-4113-9463-6dde2833448e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.599 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquired lock "refresh_cache-cabe126e-f3af-4113-9463-6dde2833448e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.599 247403 DEBUG nova.network.neutron [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.715 247403 DEBUG nova.compute.manager [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-changed-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.716 247403 DEBUG nova.compute.manager [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Refreshing instance network info cache due to event network-changed-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.716 247403 DEBUG oslo_concurrency.lockutils [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-cabe126e-f3af-4113-9463-6dde2833448e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:51:55 np0005603621 nova_compute[247399]: 2026-01-31 08:51:55.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:56 np0005603621 nova_compute[247399]: 2026-01-31 08:51:56.196 247403 DEBUG nova.network.neutron [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:51:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:56.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:56.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 688 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.6 MiB/s wr, 202 op/s
Jan 31 03:51:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.679 247403 DEBUG nova.network.neutron [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Updating instance_info_cache with network_info: [{"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.792 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Releasing lock "refresh_cache-cabe126e-f3af-4113-9463-6dde2833448e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.793 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Instance network_info: |[{"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.794 247403 DEBUG oslo_concurrency.lockutils [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-cabe126e-f3af-4113-9463-6dde2833448e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.794 247403 DEBUG nova.network.neutron [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Refreshing network info cache for port 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.798 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Start _get_guest_xml network_info=[{"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-f7e6afdb-ed5e-4762-ad08-3a1fceda6276', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'f7e6afdb-ed5e-4762-ad08-3a1fceda6276', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'cabe126e-f3af-4113-9463-6dde2833448e', 'attached_at': '', 'detached_at': '', 'volume_id': 'f7e6afdb-ed5e-4762-ad08-3a1fceda6276', 'serial': 'f7e6afdb-ed5e-4762-ad08-3a1fceda6276', 'multiattach': True}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '45735b27-dd0f-4d41-be36-8e24eedc76fa', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.805 247403 WARNING nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.811 247403 DEBUG nova.virt.libvirt.host [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.813 247403 DEBUG nova.virt.libvirt.host [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.818 247403 DEBUG nova.virt.libvirt.host [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.818 247403 DEBUG nova.virt.libvirt.host [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.820 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.821 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.821 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.822 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.823 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.823 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.823 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.824 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.824 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.824 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.824 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.824 247403 DEBUG nova.virt.hardware [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.853 247403 DEBUG nova.storage.rbd_utils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image cabe126e-f3af-4113-9463-6dde2833448e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:57 np0005603621 nova_compute[247399]: 2026-01-31 08:51:57.856 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:51:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266480485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.269 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.387 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849503.3861227, dd9d93b2-c532-41d7-afab-4944d84afd07 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.388 247403 INFO nova.compute.manager [-] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:51:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:51:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:51:58.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:51:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:51:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:51:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:51:58.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.475 247403 DEBUG nova.virt.libvirt.vif [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1852430891',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1852430891',id=172,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-82ccmalq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:50Z,user_data=None,user_id='a498364761ef428b99cac3f92e603385',uuid=cabe126e-f3af-4113-9463-6dde2833448e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.476 247403 DEBUG nova.network.os_vif_util [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.476 247403 DEBUG nova.network.os_vif_util [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.477 247403 DEBUG nova.objects.instance [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'pci_devices' on Instance uuid cabe126e-f3af-4113-9463-6dde2833448e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.479 247403 DEBUG nova.compute.manager [None req-750ab7cd-94d2-455d-8c23-7e794f74b15b - - - - - -] [instance: dd9d93b2-c532-41d7-afab-4944d84afd07] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.501 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <uuid>cabe126e-f3af-4113-9463-6dde2833448e</uuid>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <name>instance-000000ac</name>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-1852430891</nova:name>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:51:57</nova:creationTime>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:user uuid="a498364761ef428b99cac3f92e603385">tempest-AttachVolumeMultiAttachTest-1931311941-project-member</nova:user>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:project uuid="8397e0fed04b4dabb57148d0924de2dc">tempest-AttachVolumeMultiAttachTest-1931311941</nova:project>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <nova:port uuid="228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <entry name="serial">cabe126e-f3af-4113-9463-6dde2833448e</entry>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <entry name="uuid">cabe126e-f3af-4113-9463-6dde2833448e</entry>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/cabe126e-f3af-4113-9463-6dde2833448e_disk.config">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-f7e6afdb-ed5e-4762-ad08-3a1fceda6276">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <serial>f7e6afdb-ed5e-4762-ad08-3a1fceda6276</serial>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <shareable/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:7f:9c:44"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <target dev="tap228a6ab0-a7"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/console.log" append="off"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:51:58 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:51:58 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:51:58 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:51:58 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.501 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Preparing to wait for external event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.501 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.502 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.502 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.502 247403 DEBUG nova.virt.libvirt.vif [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1852430891',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1852430891',id=172,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-82ccmalq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:51:50Z,user_data=None,user_id='a498364761ef428b99cac3f92e603385',uuid=cabe126e-f3af-4113-9463-6dde2833448e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.503 247403 DEBUG nova.network.os_vif_util [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.503 247403 DEBUG nova.network.os_vif_util [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.504 247403 DEBUG os_vif [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.504 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.505 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.505 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.508 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.508 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap228a6ab0-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.509 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap228a6ab0-a7, col_values=(('external_ids', {'iface-id': '228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:9c:44', 'vm-uuid': 'cabe126e-f3af-4113-9463-6dde2833448e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.510 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:58 np0005603621 NetworkManager[49013]: <info>  [1769849518.5114] manager: (tap228a6ab0-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/320)
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.515 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.516 247403 INFO os_vif [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7')#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.731 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.732 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.732 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] No VIF found with MAC fa:16:3e:7f:9c:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.732 247403 INFO nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Using config drive#033[00m
Jan 31 03:51:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 693 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 206 op/s
Jan 31 03:51:58 np0005603621 nova_compute[247399]: 2026-01-31 08:51:58.757 247403 DEBUG nova.storage.rbd_utils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image cabe126e-f3af-4113-9463-6dde2833448e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.391 247403 INFO nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Creating config drive at /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/disk.config#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.395 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdu7cccb8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.518 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdu7cccb8" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.545 247403 DEBUG nova.storage.rbd_utils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] rbd image cabe126e-f3af-4113-9463-6dde2833448e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.549 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/disk.config cabe126e-f3af-4113-9463-6dde2833448e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.605 247403 DEBUG nova.network.neutron [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Updated VIF entry in instance network info cache for port 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.607 247403 DEBUG nova.network.neutron [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Updating instance_info_cache with network_info: [{"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:51:59 np0005603621 nova_compute[247399]: 2026-01-31 08:51:59.633 247403 DEBUG oslo_concurrency.lockutils [req-aff8791c-346d-40fd-9ad1-26be0dcab164 req-4a1ff966-bfdc-4a46-87e8-6dd566a46cfc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-cabe126e-f3af-4113-9463-6dde2833448e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.089 247403 DEBUG oslo_concurrency.processutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/disk.config cabe126e-f3af-4113-9463-6dde2833448e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.090 247403 INFO nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Deleting local config drive /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e/disk.config because it was imported into RBD.#033[00m
Jan 31 03:52:00 np0005603621 kernel: tap228a6ab0-a7: entered promiscuous mode
Jan 31 03:52:00 np0005603621 NetworkManager[49013]: <info>  [1769849520.1335] manager: (tap228a6ab0-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/321)
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.166 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:00 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:00Z|00714|binding|INFO|Claiming lport 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe for this chassis.
Jan 31 03:52:00 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:00Z|00715|binding|INFO|228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe: Claiming fa:16:3e:7f:9c:44 10.100.0.13
Jan 31 03:52:00 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:00Z|00716|binding|INFO|Setting lport 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe ovn-installed in OVS
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.175 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:00 np0005603621 systemd-machined[212769]: New machine qemu-86-instance-000000ac.
Jan 31 03:52:00 np0005603621 systemd-udevd[371791]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:52:00 np0005603621 NetworkManager[49013]: <info>  [1769849520.1997] device (tap228a6ab0-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:52:00 np0005603621 systemd[1]: Started Virtual Machine qemu-86-instance-000000ac.
Jan 31 03:52:00 np0005603621 NetworkManager[49013]: <info>  [1769849520.2007] device (tap228a6ab0-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.201 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:44 10.100.0.13'], port_security=['fa:16:3e:7f:9c:44 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cabe126e-f3af-4113-9463-6dde2833448e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8397e0fed04b4dabb57148d0924de2dc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a5f5fc8-9ea2-499a-9817-9f89f2dea440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbd2578f-ff6e-4dc3-bc49-93cbf023edc5, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:00 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:00Z|00717|binding|INFO|Setting lport 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe up in Southbound
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.202 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe in datapath 3afaf607-43a1-4d65-95fc-0a22b5c901d0 bound to our chassis#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.204 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3afaf607-43a1-4d65-95fc-0a22b5c901d0#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.215 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d947fe09-a29a-4df9-9b58-adbb5beb60a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.242 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9f97d3c4-db75-447b-b441-548ff3b325c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.245 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[023cc51d-1ac4-4e24-86ee-7531f978eeb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.268 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[811f692d-838e-4824-babe-9522b30e3a87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.279 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[52f5ea0f-e9ff-4607-90dc-d645d1d2b649]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3afaf607-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851951, 'reachable_time': 30558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371805, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.291 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1fe4a4-515e-4904-878e-6f58a96b9e30]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851960, 'tstamp': 851960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371806, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851962, 'tstamp': 851962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371806, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.293 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3afaf607-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.296 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3afaf607-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.296 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.296 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.297 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3afaf607-40, col_values=(('external_ids', {'iface-id': '0ed76a0a-650c-4ec7-a4d4-0e745236b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:00.297 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:00.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:00.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.735 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849520.7343707, cabe126e-f3af-4113-9463-6dde2833448e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.735 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] VM Started (Lifecycle Event)#033[00m
Jan 31 03:52:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 693 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.776 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.779 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849520.7346165, cabe126e-f3af-4113-9463-6dde2833448e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.779 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.795 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:00 np0005603621 nova_compute[247399]: 2026-01-31 08:52:00.798 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:52:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:52:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.3 total, 600.2 interval#012Cumulative writes: 47K writes, 183K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 47K writes, 16K syncs, 2.85 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5794 writes, 19K keys, 5794 commit groups, 1.0 writes per commit group, ingest: 22.79 MB, 0.04 MB/s#012Interval WAL: 5795 writes, 2316 syncs, 2.50 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.410 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.410 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.587 247403 DEBUG nova.compute.manager [req-2f825d91-dc23-49dc-86c5-f7e86390579b req-096caccd-9ac9-4b5e-b8c6-abcb4dc4ab9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.588 247403 DEBUG oslo_concurrency.lockutils [req-2f825d91-dc23-49dc-86c5-f7e86390579b req-096caccd-9ac9-4b5e-b8c6-abcb4dc4ab9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.588 247403 DEBUG oslo_concurrency.lockutils [req-2f825d91-dc23-49dc-86c5-f7e86390579b req-096caccd-9ac9-4b5e-b8c6-abcb4dc4ab9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.588 247403 DEBUG oslo_concurrency.lockutils [req-2f825d91-dc23-49dc-86c5-f7e86390579b req-096caccd-9ac9-4b5e-b8c6-abcb4dc4ab9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.588 247403 DEBUG nova.compute.manager [req-2f825d91-dc23-49dc-86c5-f7e86390579b req-096caccd-9ac9-4b5e-b8c6-abcb4dc4ab9e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Processing event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.589 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.593 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849521.5922642, cabe126e-f3af-4113-9463-6dde2833448e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.593 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.595 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.598 247403 INFO nova.virt.libvirt.driver [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Instance spawned successfully.#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.598 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.640 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.646 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.650 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.651 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.651 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.651 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.652 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.652 247403 DEBUG nova.virt.libvirt.driver [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.685 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.730 247403 INFO nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Took 9.91 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.731 247403 DEBUG nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.956 247403 INFO nova.compute.manager [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Took 13.61 seconds to build instance.#033[00m
Jan 31 03:52:01 np0005603621 nova_compute[247399]: 2026-01-31 08:52:01.991 247403 DEBUG oslo_concurrency.lockutils [None req-bd01aa19-9fd2-4bed-8623-96c1c4efa087 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:52:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:02.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:52:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 728 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.4 MiB/s wr, 206 op/s
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.953 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.979 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 3308d345-19b7-4fbb-bd81-631135649e7d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.979 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid c215327f-37ad-41a7-a883-3dbb23334df6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.979 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid cabe126e-f3af-4113-9463-6dde2833448e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.979 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "c215327f-37ad-41a7-a883-3dbb23334df6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:02 np0005603621 nova_compute[247399]: 2026-01-31 08:52:02.980 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "cabe126e-f3af-4113-9463-6dde2833448e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.050 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "cabe126e-f3af-4113-9463-6dde2833448e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.051 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.051 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "c215327f-37ad-41a7-a883-3dbb23334df6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.624 247403 DEBUG nova.compute.manager [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.734 247403 DEBUG nova.compute.manager [req-6ecf94c0-291a-4368-81fb-27b938c741d3 req-7bc0c9c3-d925-4e11-94d7-d00bc7c24ee1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.734 247403 DEBUG oslo_concurrency.lockutils [req-6ecf94c0-291a-4368-81fb-27b938c741d3 req-7bc0c9c3-d925-4e11-94d7-d00bc7c24ee1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.735 247403 DEBUG oslo_concurrency.lockutils [req-6ecf94c0-291a-4368-81fb-27b938c741d3 req-7bc0c9c3-d925-4e11-94d7-d00bc7c24ee1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.735 247403 DEBUG oslo_concurrency.lockutils [req-6ecf94c0-291a-4368-81fb-27b938c741d3 req-7bc0c9c3-d925-4e11-94d7-d00bc7c24ee1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.735 247403 DEBUG nova.compute.manager [req-6ecf94c0-291a-4368-81fb-27b938c741d3 req-7bc0c9c3-d925-4e11-94d7-d00bc7c24ee1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] No waiting events found dispatching network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.735 247403 WARNING nova.compute.manager [req-6ecf94c0-291a-4368-81fb-27b938c741d3 req-7bc0c9c3-d925-4e11-94d7-d00bc7c24ee1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received unexpected event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe for instance with vm_state active and task_state None.#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.770 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.774 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.890 247403 DEBUG nova.objects.instance [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_requests' on Instance uuid e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.940 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.941 247403 INFO nova.compute.claims [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.941 247403 DEBUG nova.objects.instance [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:03 np0005603621 nova_compute[247399]: 2026-01-31 08:52:03.967 247403 DEBUG nova.objects.instance [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.084 247403 INFO nova.compute.resource_tracker [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating resource usage from migration dfacf1ec-776d-4ad2-a40b-20488a596d16#033[00m
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.084 247403 DEBUG nova.compute.resource_tracker [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Starting to track incoming migration dfacf1ec-776d-4ad2-a40b-20488a596d16 with flavor f75c4aee-d826-4343-a7e3-f06a4b21de52 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.248 247403 DEBUG oslo_concurrency.processutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:04.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:04.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345281330' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.708 247403 DEBUG oslo_concurrency.processutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.713 247403 DEBUG nova.compute.provider_tree [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.739 247403 DEBUG nova.scheduler.client.report [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 728 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1010 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.771 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:04 np0005603621 nova_compute[247399]: 2026-01-31 08:52:04.771 247403 INFO nova.compute.manager [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Migrating#033[00m
Jan 31 03:52:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Jan 31 03:52:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Jan 31 03:52:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Jan 31 03:52:05 np0005603621 nova_compute[247399]: 2026-01-31 08:52:05.811 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:06.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 750 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 874 KiB/s rd, 3.2 MiB/s wr, 108 op/s
Jan 31 03:52:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Jan 31 03:52:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Jan 31 03:52:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Jan 31 03:52:06 np0005603621 systemd-logind[818]: New session 72 of user nova.
Jan 31 03:52:06 np0005603621 systemd[1]: Created slice User Slice of UID 42436.
Jan 31 03:52:06 np0005603621 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 31 03:52:06 np0005603621 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 31 03:52:06 np0005603621 systemd[1]: Starting User Manager for UID 42436...
Jan 31 03:52:07 np0005603621 systemd[371878]: Queued start job for default target Main User Target.
Jan 31 03:52:07 np0005603621 systemd[371878]: Created slice User Application Slice.
Jan 31 03:52:07 np0005603621 systemd[371878]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:52:07 np0005603621 systemd[371878]: Started Daily Cleanup of User's Temporary Directories.
Jan 31 03:52:07 np0005603621 systemd[371878]: Reached target Paths.
Jan 31 03:52:07 np0005603621 systemd[371878]: Reached target Timers.
Jan 31 03:52:07 np0005603621 systemd[371878]: Starting D-Bus User Message Bus Socket...
Jan 31 03:52:07 np0005603621 systemd[371878]: Starting Create User's Volatile Files and Directories...
Jan 31 03:52:07 np0005603621 systemd[371878]: Finished Create User's Volatile Files and Directories.
Jan 31 03:52:07 np0005603621 systemd[371878]: Listening on D-Bus User Message Bus Socket.
Jan 31 03:52:07 np0005603621 systemd[371878]: Reached target Sockets.
Jan 31 03:52:07 np0005603621 systemd[371878]: Reached target Basic System.
Jan 31 03:52:07 np0005603621 systemd[371878]: Reached target Main User Target.
Jan 31 03:52:07 np0005603621 systemd[371878]: Startup finished in 121ms.
Jan 31 03:52:07 np0005603621 systemd[1]: Started User Manager for UID 42436.
Jan 31 03:52:07 np0005603621 systemd[1]: Started Session 72 of User nova.
Jan 31 03:52:07 np0005603621 systemd[1]: session-72.scope: Deactivated successfully.
Jan 31 03:52:07 np0005603621 systemd-logind[818]: Session 72 logged out. Waiting for processes to exit.
Jan 31 03:52:07 np0005603621 systemd-logind[818]: Removed session 72.
Jan 31 03:52:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:07 np0005603621 systemd-logind[818]: New session 74 of user nova.
Jan 31 03:52:07 np0005603621 systemd[1]: Started Session 74 of User nova.
Jan 31 03:52:07 np0005603621 systemd[1]: session-74.scope: Deactivated successfully.
Jan 31 03:52:07 np0005603621 systemd-logind[818]: Session 74 logged out. Waiting for processes to exit.
Jan 31 03:52:07 np0005603621 systemd-logind[818]: Removed session 74.
Jan 31 03:52:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:08.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:08.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:08 np0005603621 nova_compute[247399]: 2026-01-31 08:52:08.516 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:52:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 772 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 6.0 MiB/s wr, 271 op/s
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.904 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.904 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.905 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.905 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.905 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.906 247403 INFO nova.compute.manager [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Terminating instance#033[00m
Jan 31 03:52:09 np0005603621 nova_compute[247399]: 2026-01-31 08:52:09.907 247403 DEBUG nova.compute.manager [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:52:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:10.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:10.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:10 np0005603621 kernel: tap228a6ab0-a7 (unregistering): left promiscuous mode
Jan 31 03:52:10 np0005603621 NetworkManager[49013]: <info>  [1769849530.5032] device (tap228a6ab0-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.507 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:10Z|00718|binding|INFO|Releasing lport 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe from this chassis (sb_readonly=0)
Jan 31 03:52:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:10Z|00719|binding|INFO|Setting lport 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe down in Southbound
Jan 31 03:52:10 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:10Z|00720|binding|INFO|Removing iface tap228a6ab0-a7 ovn-installed in OVS
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.509 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.515 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:9c:44 10.100.0.13'], port_security=['fa:16:3e:7f:9c:44 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'cabe126e-f3af-4113-9463-6dde2833448e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8397e0fed04b4dabb57148d0924de2dc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a5f5fc8-9ea2-499a-9817-9f89f2dea440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbd2578f-ff6e-4dc3-bc49-93cbf023edc5, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.516 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe in datapath 3afaf607-43a1-4d65-95fc-0a22b5c901d0 unbound from our chassis#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.518 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3afaf607-43a1-4d65-95fc-0a22b5c901d0#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.530 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebf8162-d244-4a13-beb0-ce03deca0ad3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:10 np0005603621 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Jan 31 03:52:10 np0005603621 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000ac.scope: Consumed 8.938s CPU time.
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.555 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce9d622-24e5-4f33-a2b5-d2568e252810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.558 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0342bef8-0a58-4f74-a1e5-8404445fdbca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:10 np0005603621 systemd-machined[212769]: Machine qemu-86-instance-000000ac terminated.
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.580 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7f59a883-0b8a-4f1a-8210-68c193a66718]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:10 np0005603621 podman[371904]: 2026-01-31 08:52:10.584532172 +0000 UTC m=+0.057480055 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.595 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1d3970-acd5-49ea-9912-756fe8c6a4fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3afaf607-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851951, 'reachable_time': 30558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371943, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.607 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e38182-9e32-47fa-8a58-4f6602c39fdc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851960, 'tstamp': 851960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371948, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851962, 'tstamp': 851962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371948, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.609 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3afaf607-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.610 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.614 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.615 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3afaf607-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.615 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.615 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3afaf607-40, col_values=(('external_ids', {'iface-id': '0ed76a0a-650c-4ec7-a4d4-0e745236b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:10 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:10.616 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:10 np0005603621 podman[371923]: 2026-01-31 08:52:10.634369816 +0000 UTC m=+0.068350249 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 03:52:10 np0005603621 kernel: tap228a6ab0-a7: entered promiscuous mode
Jan 31 03:52:10 np0005603621 kernel: tap228a6ab0-a7 (unregistering): left promiscuous mode
Jan 31 03:52:10 np0005603621 NetworkManager[49013]: <info>  [1769849530.7274] manager: (tap228a6ab0-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.731 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.742 247403 INFO nova.virt.libvirt.driver [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Instance destroyed successfully.#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.743 247403 DEBUG nova.objects.instance [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'resources' on Instance uuid cabe126e-f3af-4113-9463-6dde2833448e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 754 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 239 op/s
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.813 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.817 247403 DEBUG nova.virt.libvirt.vif [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1852430891',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1852430891',id=172,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-82ccmalq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-Att
achVolumeMultiAttachTest-1931311941-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:52:01Z,user_data=None,user_id='a498364761ef428b99cac3f92e603385',uuid=cabe126e-f3af-4113-9463-6dde2833448e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.817 247403 DEBUG nova.network.os_vif_util [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "address": "fa:16:3e:7f:9c:44", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap228a6ab0-a7", "ovs_interfaceid": "228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.818 247403 DEBUG nova.network.os_vif_util [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.818 247403 DEBUG os_vif [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.821 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap228a6ab0-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.822 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.825 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:52:10 np0005603621 nova_compute[247399]: 2026-01-31 08:52:10.827 247403 INFO os_vif [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:9c:44,bridge_name='br-int',has_traffic_filtering=True,id=228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap228a6ab0-a7')#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.504 247403 DEBUG nova.compute.manager [req-c8b27aae-22e4-4ac5-95d9-2b862100fb5b req-1fea4f62-53f1-4f16-a906-5eae62515a0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-unplugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.505 247403 DEBUG oslo_concurrency.lockutils [req-c8b27aae-22e4-4ac5-95d9-2b862100fb5b req-1fea4f62-53f1-4f16-a906-5eae62515a0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.505 247403 DEBUG oslo_concurrency.lockutils [req-c8b27aae-22e4-4ac5-95d9-2b862100fb5b req-1fea4f62-53f1-4f16-a906-5eae62515a0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.505 247403 DEBUG oslo_concurrency.lockutils [req-c8b27aae-22e4-4ac5-95d9-2b862100fb5b req-1fea4f62-53f1-4f16-a906-5eae62515a0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.506 247403 DEBUG nova.compute.manager [req-c8b27aae-22e4-4ac5-95d9-2b862100fb5b req-1fea4f62-53f1-4f16-a906-5eae62515a0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] No waiting events found dispatching network-vif-unplugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.506 247403 WARNING nova.compute.manager [req-c8b27aae-22e4-4ac5-95d9-2b862100fb5b req-1fea4f62-53f1-4f16-a906-5eae62515a0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received unexpected event network-vif-unplugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:52:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.813 247403 INFO nova.virt.libvirt.driver [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Deleting instance files /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e_del#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.814 247403 INFO nova.virt.libvirt.driver [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Deletion of /var/lib/nova/instances/cabe126e-f3af-4113-9463-6dde2833448e_del complete#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.869 247403 INFO nova.compute.manager [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Took 1.96 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.869 247403 DEBUG oslo.service.loopingcall [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.870 247403 DEBUG nova.compute.manager [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:52:11 np0005603621 nova_compute[247399]: 2026-01-31 08:52:11.870 247403 DEBUG nova.network.neutron [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:52:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.205 247403 INFO nova.network.neutron [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating port 5d9caf6f-4602-4589-9c08-43b1a20a9c34 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 03:52:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:12.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:12.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 734 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.2 MiB/s wr, 387 op/s
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.860 247403 DEBUG nova.compute.manager [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-vif-unplugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.861 247403 DEBUG oslo_concurrency.lockutils [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.861 247403 DEBUG oslo_concurrency.lockutils [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.861 247403 DEBUG oslo_concurrency.lockutils [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.862 247403 DEBUG nova.compute.manager [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] No waiting events found dispatching network-vif-unplugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.862 247403 DEBUG nova.compute.manager [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-vif-unplugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.862 247403 DEBUG nova.compute.manager [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.862 247403 DEBUG oslo_concurrency.lockutils [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "cabe126e-f3af-4113-9463-6dde2833448e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.862 247403 DEBUG oslo_concurrency.lockutils [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.863 247403 DEBUG oslo_concurrency.lockutils [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.863 247403 DEBUG nova.compute.manager [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] No waiting events found dispatching network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:12 np0005603621 nova_compute[247399]: 2026-01-31 08:52:12.863 247403 WARNING nova.compute.manager [req-94634945-2be2-40fb-bd87-6677e1fb4522 req-7ff80ffa-7651-4dd3-9485-018c7b7072da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received unexpected event network-vif-plugged-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.110 247403 DEBUG nova.network.neutron [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.139 247403 INFO nova.compute.manager [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Took 1.27 seconds to deallocate network for instance.#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.243 247403 DEBUG nova.compute.manager [req-6c1f9454-beee-4615-8bdc-6111e92db603 req-89cfba40-e518-4c4b-824c-5c3317ab585d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Received event network-vif-deleted-228a6ab0-a762-4fa3-bed9-b5f3ecf2d2fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.246 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.273 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.273 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.274 247403 DEBUG nova.network.neutron [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.385 247403 INFO nova.compute.manager [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Took 0.25 seconds to detach 1 volumes for instance.#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.462 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.462 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.759 247403 DEBUG nova.compute.manager [req-c54f7d54-5e45-4f30-aa9b-91f100fd76b4 req-c69215c8-ba74-4e1e-9950-d777a2290afb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.759 247403 DEBUG oslo_concurrency.lockutils [req-c54f7d54-5e45-4f30-aa9b-91f100fd76b4 req-c69215c8-ba74-4e1e-9950-d777a2290afb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.759 247403 DEBUG oslo_concurrency.lockutils [req-c54f7d54-5e45-4f30-aa9b-91f100fd76b4 req-c69215c8-ba74-4e1e-9950-d777a2290afb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.759 247403 DEBUG oslo_concurrency.lockutils [req-c54f7d54-5e45-4f30-aa9b-91f100fd76b4 req-c69215c8-ba74-4e1e-9950-d777a2290afb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.760 247403 DEBUG nova.compute.manager [req-c54f7d54-5e45-4f30-aa9b-91f100fd76b4 req-c69215c8-ba74-4e1e-9950-d777a2290afb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] No waiting events found dispatching network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.760 247403 WARNING nova.compute.manager [req-c54f7d54-5e45-4f30-aa9b-91f100fd76b4 req-c69215c8-ba74-4e1e-9950-d777a2290afb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received unexpected event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 31 03:52:13 np0005603621 nova_compute[247399]: 2026-01-31 08:52:13.801 247403 DEBUG oslo_concurrency.processutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.240 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2326682990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.263 247403 DEBUG oslo_concurrency.processutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.268 247403 DEBUG nova.compute.provider_tree [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:14.291 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:14.293 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.300 247403 DEBUG nova.scheduler.client.report [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.339 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.380 247403 INFO nova.scheduler.client.report [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Deleted allocations for instance cabe126e-f3af-4113-9463-6dde2833448e#033[00m
Jan 31 03:52:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:14 np0005603621 nova_compute[247399]: 2026-01-31 08:52:14.527 247403 DEBUG oslo_concurrency.lockutils [None req-0607a5d4-6da7-41fe-8262-41033d9c47f6 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "cabe126e-f3af-4113-9463-6dde2833448e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:52:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298398213' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:52:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:52:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298398213' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:52:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 740 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 349 op/s
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.006 247403 DEBUG nova.compute.manager [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-changed-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.006 247403 DEBUG nova.compute.manager [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Refreshing instance network info cache due to event network-changed-5d9caf6f-4602-4589-9c08-43b1a20a9c34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.006 247403 DEBUG oslo_concurrency.lockutils [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.220 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.220 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.221 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.221 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.221 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807285927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:15 np0005603621 nova_compute[247399]: 2026-01-31 08:52:15.928 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.707s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.006 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.006 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000a3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.009 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.010 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.018 247403 DEBUG nova.network.neutron [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating instance_info_cache with network_info: [{"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.044 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.049 247403 DEBUG oslo_concurrency.lockutils [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.049 247403 DEBUG nova.network.neutron [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Refreshing network info cache for port 5d9caf6f-4602-4589-9c08-43b1a20a9c34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.155 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.156 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3609MB free_disk=20.718341827392578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.157 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.157 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.219 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.220 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.220 247403 INFO nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Creating image(s)#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.251 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Applying migration context for instance e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 as it has an incoming, in-progress migration dfacf1ec-776d-4ad2-a40b-20488a596d16. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.251 247403 INFO nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating resource usage from migration dfacf1ec-776d-4ad2-a40b-20488a596d16#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.255 247403 DEBUG nova.storage.rbd_utils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] creating snapshot(nova-resize) on rbd image(e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.312 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 3308d345-19b7-4fbb-bd81-631135649e7d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.313 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c215327f-37ad-41a7-a883-3dbb23334df6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.313 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.313 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.313 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.405 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:16.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:16.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Jan 31 03:52:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 740 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.8 MiB/s wr, 303 op/s
Jan 31 03:52:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Jan 31 03:52:16 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.818 247403 DEBUG nova.objects.instance [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'trusted_certs' on Instance uuid e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2323912170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.846 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.850 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.897 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.939 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.939 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.961 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.961 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Ensure instance console log exists: /var/lib/nova/instances/e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.961 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.962 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.962 247403 DEBUG oslo_concurrency.lockutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.964 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Start _get_guest_xml network_info=[{"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--2122556826", "vif_mac": "fa:16:3e:ed:8d:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.967 247403 WARNING nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.973 247403 DEBUG nova.virt.libvirt.host [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.973 247403 DEBUG nova.virt.libvirt.host [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.977 247403 DEBUG nova.virt.libvirt.host [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.978 247403 DEBUG nova.virt.libvirt.host [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.979 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.980 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:37Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f75c4aee-d826-4343-a7e3-f06a4b21de52',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.980 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.980 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.980 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.981 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.981 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.981 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.981 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.982 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.982 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.982 247403 DEBUG nova.virt.hardware [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:52:16 np0005603621 nova_compute[247399]: 2026-01-31 08:52:16.982 247403 DEBUG nova.objects.instance [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'vcpu_model' on Instance uuid e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.035 247403 DEBUG oslo_concurrency.processutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:17 np0005603621 systemd[1]: Stopping User Manager for UID 42436...
Jan 31 03:52:17 np0005603621 systemd[371878]: Activating special unit Exit the Session...
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped target Main User Target.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped target Basic System.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped target Paths.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped target Sockets.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped target Timers.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 31 03:52:17 np0005603621 systemd[371878]: Closed D-Bus User Message Bus Socket.
Jan 31 03:52:17 np0005603621 systemd[371878]: Stopped Create User's Volatile Files and Directories.
Jan 31 03:52:17 np0005603621 systemd[371878]: Removed slice User Application Slice.
Jan 31 03:52:17 np0005603621 systemd[371878]: Reached target Shutdown.
Jan 31 03:52:17 np0005603621 systemd[371878]: Finished Exit the Session.
Jan 31 03:52:17 np0005603621 systemd[371878]: Reached target Exit the Session.
Jan 31 03:52:17 np0005603621 systemd[1]: user@42436.service: Deactivated successfully.
Jan 31 03:52:17 np0005603621 systemd[1]: Stopped User Manager for UID 42436.
Jan 31 03:52:17 np0005603621 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 31 03:52:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:52:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623593070' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:52:17 np0005603621 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 31 03:52:17 np0005603621 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 31 03:52:17 np0005603621 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 31 03:52:17 np0005603621 systemd[1]: Removed slice User Slice of UID 42436.
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.470 247403 DEBUG oslo_concurrency.processutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.504 247403 DEBUG oslo_concurrency.processutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.741 247403 DEBUG nova.network.neutron [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updated VIF entry in instance network info cache for port 5d9caf6f-4602-4589-9c08-43b1a20a9c34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.742 247403 DEBUG nova.network.neutron [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating instance_info_cache with network_info: [{"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.772 247403 DEBUG oslo_concurrency.lockutils [req-f32b06b3-0fcb-41b5-9bfd-6323310c51b5 req-32cc343d-1115-4313-aa94-e7ab2f649ca9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:52:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:52:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550058782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.933 247403 DEBUG oslo_concurrency.processutils [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.934 247403 DEBUG nova.virt.libvirt.vif [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1107028276',display_name='tempest-TestNetworkAdvancedServerOps-server-1107028276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1107028276',id=170,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhAjak4EaSbAMsOQHzT/F+YD+mdwqaKmT1b8Pkiv6vPXQSXr3cEJwaMw5cOEGrpti6B+hT6jgk8eQer/fm3Y87ortF0Suf8ZM3a30yTbd7sIeUbybs0ERwtVLRyiyyflg==',key_name='tempest-TestNetworkAdvancedServerOps-1864875451',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-g52p6b46',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:52:11Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--2122556826", "vif_mac": "fa:16:3e:ed:8d:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.935 247403 DEBUG nova.network.os_vif_util [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--2122556826", "vif_mac": "fa:16:3e:ed:8d:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.935 247403 DEBUG nova.network.os_vif_util [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.937 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <uuid>e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8</uuid>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <name>instance-000000aa</name>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <memory>196608</memory>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1107028276</nova:name>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:52:16</nova:creationTime>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.micro">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:memory>192</nova:memory>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <nova:port uuid="5d9caf6f-4602-4589-9c08-43b1a20a9c34">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <entry name="serial">e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8</entry>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <entry name="uuid">e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8</entry>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8_disk">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8_disk.config">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:ed:8d:1c"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <target dev="tap5d9caf6f-46"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8/console.log" append="off"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:52:17 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:52:17 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:52:17 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:52:17 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.938 247403 DEBUG nova.virt.libvirt.vif [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1107028276',display_name='tempest-TestNetworkAdvancedServerOps-server-1107028276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1107028276',id=170,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhAjak4EaSbAMsOQHzT/F+YD+mdwqaKmT1b8Pkiv6vPXQSXr3cEJwaMw5cOEGrpti6B+hT6jgk8eQer/fm3Y87ortF0Suf8ZM3a30yTbd7sIeUbybs0ERwtVLRyiyyflg==',key_name='tempest-TestNetworkAdvancedServerOps-1864875451',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:51:40Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-g52p6b46',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:52:11Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--2122556826", "vif_mac": "fa:16:3e:ed:8d:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.938 247403 DEBUG nova.network.os_vif_util [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--2122556826", "vif_mac": "fa:16:3e:ed:8d:1c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.939 247403 DEBUG nova.network.os_vif_util [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.939 247403 DEBUG os_vif [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.939 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.940 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.940 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.942 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d9caf6f-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.943 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d9caf6f-46, col_values=(('external_ids', {'iface-id': '5d9caf6f-4602-4589-9c08-43b1a20a9c34', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:8d:1c', 'vm-uuid': 'e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.944 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:17 np0005603621 NetworkManager[49013]: <info>  [1769849537.9451] manager: (tap5d9caf6f-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.950 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:17 np0005603621 nova_compute[247399]: 2026-01-31 08:52:17.950 247403 INFO os_vif [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46')#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.030 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.030 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.030 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:ed:8d:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.031 247403 INFO nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Using config drive#033[00m
Jan 31 03:52:18 np0005603621 kernel: tap5d9caf6f-46: entered promiscuous mode
Jan 31 03:52:18 np0005603621 NetworkManager[49013]: <info>  [1769849538.0883] manager: (tap5d9caf6f-46): new Tun device (/org/freedesktop/NetworkManager/Devices/324)
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.089 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:18Z|00721|binding|INFO|Claiming lport 5d9caf6f-4602-4589-9c08-43b1a20a9c34 for this chassis.
Jan 31 03:52:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:18Z|00722|binding|INFO|5d9caf6f-4602-4589-9c08-43b1a20a9c34: Claiming fa:16:3e:ed:8d:1c 10.100.0.13
Jan 31 03:52:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:18Z|00723|binding|INFO|Setting lport 5d9caf6f-4602-4589-9c08-43b1a20a9c34 ovn-installed in OVS
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.098 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:18 np0005603621 systemd-machined[212769]: New machine qemu-87-instance-000000aa.
Jan 31 03:52:18 np0005603621 systemd-udevd[372275]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:52:18 np0005603621 NetworkManager[49013]: <info>  [1769849538.1183] device (tap5d9caf6f-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:52:18 np0005603621 NetworkManager[49013]: <info>  [1769849538.1187] device (tap5d9caf6f-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:52:18 np0005603621 systemd[1]: Started Virtual Machine qemu-87-instance-000000aa.
Jan 31 03:52:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:18Z|00724|binding|INFO|Setting lport 5d9caf6f-4602-4589-9c08-43b1a20a9c34 up in Southbound
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.121 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:8d:1c 10.100.0.13'], port_security=['fa:16:3e:ed:8d:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f94c6ed-71ba-4114-a483-6969c923a169', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '823dcd1e-c848-4b0b-b209-d3c49f7c199a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2593ca0c-7180-414a-b815-a3d2e8e5bf3e, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5d9caf6f-4602-4589-9c08-43b1a20a9c34) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.122 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5d9caf6f-4602-4589-9c08-43b1a20a9c34 in datapath 1f94c6ed-71ba-4114-a483-6969c923a169 bound to our chassis#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.124 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f94c6ed-71ba-4114-a483-6969c923a169#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.131 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cf02685b-22e3-43de-b001-295340425531]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.132 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1f94c6ed-71 in ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.134 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1f94c6ed-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.134 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3a07b676-d9db-4266-93ab-19166b8dd153]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.134 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dc35e2-ada5-430c-9a04-50b9e0f3836a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.143 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[760f0837-6a61-40c4-89f7-4c1df3238a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.152 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c735b39c-9133-45f1-9e01-71837f3a9ac8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.176 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4aeb46-ae89-431e-8029-07b0dc06bb05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.181 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[be674957-617f-4aca-840a-3920c1623c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 NetworkManager[49013]: <info>  [1769849538.1826] manager: (tap1f94c6ed-70): new Veth device (/org/freedesktop/NetworkManager/Devices/325)
Jan 31 03:52:18 np0005603621 systemd-udevd[372277]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.201 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b12fffd8-8de1-4596-9925-f4eeb769d745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.205 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2a410ca9-3de0-4f37-9444-7cf2ce94b8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 NetworkManager[49013]: <info>  [1769849538.2221] device (tap1f94c6ed-70): carrier: link connected
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.226 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8e639ee7-0723-40e0-9889-4a6a94954f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.240 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[13466e15-0f43-4f57-af04-512185a4c295]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f94c6ed-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884373, 'reachable_time': 35172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372309, 'error': None, 'target': 'ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.254 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[41b49929-3190-43ad-a18b-a1a2fd633c47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefc:2d70'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884373, 'tstamp': 884373}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372310, 'error': None, 'target': 'ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.266 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8d52bc-88c3-4178-b666-5a63acf0be09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f94c6ed-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fc:2d:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884373, 'reachable_time': 35172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372311, 'error': None, 'target': 'ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.290 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[def4e968-d35b-4b65-b7fa-eaf274b66fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.294 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.329 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[35aa8a86-b993-4c90-ba41-9e97e413bccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.331 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f94c6ed-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.331 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.331 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f94c6ed-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.332 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:18 np0005603621 kernel: tap1f94c6ed-70: entered promiscuous mode
Jan 31 03:52:18 np0005603621 NetworkManager[49013]: <info>  [1769849538.3350] manager: (tap1f94c6ed-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.336 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f94c6ed-70, col_values=(('external_ids', {'iface-id': 'b0f245d6-f62b-4b38-8c6a-366768516e3e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:18 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:18Z|00725|binding|INFO|Releasing lport b0f245d6-f62b-4b38-8c6a-366768516e3e from this chassis (sb_readonly=0)
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.339 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1f94c6ed-71ba-4114-a483-6969c923a169.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1f94c6ed-71ba-4114-a483-6969c923a169.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.340 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[694dc254-10af-44f7-9995-22c9d8cf562f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.341 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-1f94c6ed-71ba-4114-a483-6969c923a169
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/1f94c6ed-71ba-4114-a483-6969c923a169.pid.haproxy
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 1f94c6ed-71ba-4114-a483-6969c923a169
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:52:18 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:18.341 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169', 'env', 'PROCESS_TAG=haproxy-1f94c6ed-71ba-4114-a483-6969c923a169', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1f94c6ed-71ba-4114-a483-6969c923a169.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.343 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:18.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:18.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.610 247403 DEBUG nova.compute.manager [req-82c057ee-b492-4fbb-b35e-01d1eee60c2e req-b762cb2a-a76c-463c-81f5-439bbccd6ea3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.610 247403 DEBUG oslo_concurrency.lockutils [req-82c057ee-b492-4fbb-b35e-01d1eee60c2e req-b762cb2a-a76c-463c-81f5-439bbccd6ea3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.610 247403 DEBUG oslo_concurrency.lockutils [req-82c057ee-b492-4fbb-b35e-01d1eee60c2e req-b762cb2a-a76c-463c-81f5-439bbccd6ea3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.610 247403 DEBUG oslo_concurrency.lockutils [req-82c057ee-b492-4fbb-b35e-01d1eee60c2e req-b762cb2a-a76c-463c-81f5-439bbccd6ea3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.610 247403 DEBUG nova.compute.manager [req-82c057ee-b492-4fbb-b35e-01d1eee60c2e req-b762cb2a-a76c-463c-81f5-439bbccd6ea3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] No waiting events found dispatching network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.611 247403 WARNING nova.compute.manager [req-82c057ee-b492-4fbb-b35e-01d1eee60c2e req-b762cb2a-a76c-463c-81f5-439bbccd6ea3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received unexpected event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.699 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849538.6994622, e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.700 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.702 247403 DEBUG nova.compute.manager [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.706 247403 INFO nova.virt.libvirt.driver [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Instance running successfully.#033[00m
Jan 31 03:52:18 np0005603621 virtqemud[247123]: argument unsupported: QEMU guest agent is not configured
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.707 247403 DEBUG nova.virt.libvirt.guest [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.707 247403 DEBUG nova.virt.libvirt.driver [None req-2ad6268d-fc06-4bb2-bbeb-fb778b5feddf f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 31 03:52:18 np0005603621 podman[372378]: 2026-01-31 08:52:18.629363377 +0000 UTC m=+0.025136764 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.724 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.727 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:52:18 np0005603621 podman[372378]: 2026-01-31 08:52:18.760407875 +0000 UTC m=+0.156181242 container create bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.763 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.764 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849538.7016232, e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.764 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] VM Started (Lifecycle Event)#033[00m
Jan 31 03:52:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 671 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 322 op/s
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.793 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.796 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.857 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 31 03:52:18 np0005603621 systemd[1]: Started libpod-conmon-bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35.scope.
Jan 31 03:52:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac3f57788658e12246addf68e67d8fb5b535ddb337fdff79fe300cdb670360f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.939 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:18 np0005603621 nova_compute[247399]: 2026-01-31 08:52:18.939 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:19 np0005603621 podman[372378]: 2026-01-31 08:52:19.002712245 +0000 UTC m=+0.398485642 container init bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:52:19 np0005603621 podman[372378]: 2026-01-31 08:52:19.007950751 +0000 UTC m=+0.403724118 container start bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 03:52:19 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [NOTICE]   (372404) : New worker (372406) forked
Jan 31 03:52:19 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [NOTICE]   (372404) : Loading success.
Jan 31 03:52:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:52:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Jan 31 03:52:20 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Jan 31 03:52:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:20.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 652 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 293 KiB/s wr, 243 op/s
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.854 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.915 247403 DEBUG nova.compute.manager [req-52a4bc47-63ce-4889-886f-d41b6381d910 req-6f54317f-70d4-4055-ba33-ab4f1b319961 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.915 247403 DEBUG oslo_concurrency.lockutils [req-52a4bc47-63ce-4889-886f-d41b6381d910 req-6f54317f-70d4-4055-ba33-ab4f1b319961 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.916 247403 DEBUG oslo_concurrency.lockutils [req-52a4bc47-63ce-4889-886f-d41b6381d910 req-6f54317f-70d4-4055-ba33-ab4f1b319961 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.916 247403 DEBUG oslo_concurrency.lockutils [req-52a4bc47-63ce-4889-886f-d41b6381d910 req-6f54317f-70d4-4055-ba33-ab4f1b319961 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.916 247403 DEBUG nova.compute.manager [req-52a4bc47-63ce-4889-886f-d41b6381d910 req-6f54317f-70d4-4055-ba33-ab4f1b319961 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] No waiting events found dispatching network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:20 np0005603621 nova_compute[247399]: 2026-01-31 08:52:20.916 247403 WARNING nova.compute.manager [req-52a4bc47-63ce-4889-886f-d41b6381d910 req-6f54317f-70d4-4055-ba33-ab4f1b319961 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received unexpected event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 for instance with vm_state resized and task_state None.#033[00m
Jan 31 03:52:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:22.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 661 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.8 MiB/s wr, 333 op/s
Jan 31 03:52:22 np0005603621 nova_compute[247399]: 2026-01-31 08:52:22.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:24.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 669 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.8 MiB/s wr, 393 op/s
Jan 31 03:52:25 np0005603621 nova_compute[247399]: 2026-01-31 08:52:25.741 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849530.7393272, cabe126e-f3af-4113-9463-6dde2833448e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:25 np0005603621 nova_compute[247399]: 2026-01-31 08:52:25.742 247403 INFO nova.compute.manager [-] [instance: cabe126e-f3af-4113-9463-6dde2833448e] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:52:25 np0005603621 nova_compute[247399]: 2026-01-31 08:52:25.785 247403 DEBUG nova.compute.manager [None req-9994e774-23ed-4109-b37b-fc61a1d2d7eb - - - - - -] [instance: cabe126e-f3af-4113-9463-6dde2833448e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:25 np0005603621 nova_compute[247399]: 2026-01-31 08:52:25.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:26.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:26.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 669 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.2 MiB/s wr, 315 op/s
Jan 31 03:52:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Jan 31 03:52:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Jan 31 03:52:27 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Jan 31 03:52:27 np0005603621 nova_compute[247399]: 2026-01-31 08:52:27.948 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:28.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 676 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.0 MiB/s wr, 243 op/s
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137091602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:52:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137091602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:52:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:52:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/469916413' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:52:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:52:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/469916413' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:52:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:30.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:30.533 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:30.533 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:30.534 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 680 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 139 op/s
Jan 31 03:52:30 np0005603621 nova_compute[247399]: 2026-01-31 08:52:30.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:31Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:8d:1c 10.100.0.13
Jan 31 03:52:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:32.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 864 KiB/s rd, 467 KiB/s wr, 185 op/s
Jan 31 03:52:32 np0005603621 nova_compute[247399]: 2026-01-31 08:52:32.950 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:34.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 469 KiB/s wr, 211 op/s
Jan 31 03:52:35 np0005603621 nova_compute[247399]: 2026-01-31 08:52:35.901 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.292 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.292 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.293 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.293 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.293 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.294 247403 INFO nova.compute.manager [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Terminating instance#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.295 247403 DEBUG nova.compute.manager [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:52:36 np0005603621 kernel: tapfbe66833-82 (unregistering): left promiscuous mode
Jan 31 03:52:36 np0005603621 NetworkManager[49013]: <info>  [1769849556.3891] device (tapfbe66833-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:36Z|00726|binding|INFO|Releasing lport fbe66833-82a6-4f72-9b11-a4732140845a from this chassis (sb_readonly=0)
Jan 31 03:52:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:36Z|00727|binding|INFO|Setting lport fbe66833-82a6-4f72-9b11-a4732140845a down in Southbound
Jan 31 03:52:36 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:36Z|00728|binding|INFO|Removing iface tapfbe66833-82 ovn-installed in OVS
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.430 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:4d:37 10.100.0.6'], port_security=['fa:16:3e:d6:4d:37 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c215327f-37ad-41a7-a883-3dbb23334df6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8397e0fed04b4dabb57148d0924de2dc', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'd636f3a4-efef-465a-ac59-8182d61336f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbd2578f-ff6e-4dc3-bc49-93cbf023edc5, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fbe66833-82a6-4f72-9b11-a4732140845a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.431 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fbe66833-82a6-4f72-9b11-a4732140845a in datapath 3afaf607-43a1-4d65-95fc-0a22b5c901d0 unbound from our chassis#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.433 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3afaf607-43a1-4d65-95fc-0a22b5c901d0#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.443 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[58113e21-b056-450c-a469-3421bac9cdb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:36 np0005603621 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Jan 31 03:52:36 np0005603621 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000a3.scope: Consumed 17.729s CPU time.
Jan 31 03:52:36 np0005603621 systemd-machined[212769]: Machine qemu-83-instance-000000a3 terminated.
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.459 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[14ce4128-c618-4f6e-8801-bc590d258f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.462 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6b05e1-cb89-4aab-a524-11740bcf3d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:36.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.477 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7442ee60-ffea-45a4-841e-88c86d36fd0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.487 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e91133-e470-4432-9132-ec4a59ae4c7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3afaf607-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:84:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851951, 'reachable_time': 30558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372435, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.496 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[51146443-248f-462c-bd6c-dde5f195b001]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851960, 'tstamp': 851960}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372436, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3afaf607-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 851962, 'tstamp': 851962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372436, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.497 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3afaf607-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.503 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.503 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3afaf607-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.504 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.504 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3afaf607-40, col_values=(('external_ids', {'iface-id': '0ed76a0a-650c-4ec7-a4d4-0e745236b047'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:36 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:36.505 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.509 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.514 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:36.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.525 247403 INFO nova.virt.libvirt.driver [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Instance destroyed successfully.#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.526 247403 DEBUG nova.objects.instance [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'resources' on Instance uuid c215327f-37ad-41a7-a883-3dbb23334df6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.625 247403 DEBUG nova.virt.libvirt.vif [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=163,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGvV4tGHwFrQ7+1WPmMS3fGcrpcMKpLQBFiD2ZG0NedKq4jaCN6oHf8RWlX+X72Ff/PSGJSQ5nqRPZm+CDMr01vn3vAMra9m4dZ/R1d2vwh+NDFwu298PivPHJQkyuCpg==',key_name='tempest-keypair-600650673',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:50:06Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-libm6dxn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',
image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:50:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a498364761ef428b99cac3f92e603385',uuid=c215327f-37ad-41a7-a883-3dbb23334df6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.625 247403 DEBUG nova.network.os_vif_util [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "fbe66833-82a6-4f72-9b11-a4732140845a", "address": "fa:16:3e:d6:4d:37", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfbe66833-82", "ovs_interfaceid": "fbe66833-82a6-4f72-9b11-a4732140845a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.627 247403 DEBUG nova.network.os_vif_util [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.627 247403 DEBUG os_vif [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.630 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbe66833-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.631 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:36 np0005603621 nova_compute[247399]: 2026-01-31 08:52:36.635 247403 INFO os_vif [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:4d:37,bridge_name='br-int',has_traffic_filtering=True,id=fbe66833-82a6-4f72-9b11-a4732140845a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfbe66833-82')#033[00m
Jan 31 03:52:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 599 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 803 KiB/s rd, 40 KiB/s wr, 134 op/s
Jan 31 03:52:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Jan 31 03:52:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Jan 31 03:52:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.298 247403 DEBUG nova.compute.manager [req-e0a2d47d-7fb6-4de5-9822-a539e0040abd req-4d93f575-b894-48a0-983f-41275ebff431 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-unplugged-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.298 247403 DEBUG oslo_concurrency.lockutils [req-e0a2d47d-7fb6-4de5-9822-a539e0040abd req-4d93f575-b894-48a0-983f-41275ebff431 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.298 247403 DEBUG oslo_concurrency.lockutils [req-e0a2d47d-7fb6-4de5-9822-a539e0040abd req-4d93f575-b894-48a0-983f-41275ebff431 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.299 247403 DEBUG oslo_concurrency.lockutils [req-e0a2d47d-7fb6-4de5-9822-a539e0040abd req-4d93f575-b894-48a0-983f-41275ebff431 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.299 247403 DEBUG nova.compute.manager [req-e0a2d47d-7fb6-4de5-9822-a539e0040abd req-4d93f575-b894-48a0-983f-41275ebff431 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] No waiting events found dispatching network-vif-unplugged-fbe66833-82a6-4f72-9b11-a4732140845a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.299 247403 DEBUG nova.compute.manager [req-e0a2d47d-7fb6-4de5-9822-a539e0040abd req-4d93f575-b894-48a0-983f-41275ebff431 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-unplugged-fbe66833-82a6-4f72-9b11-a4732140845a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.360 247403 INFO nova.virt.libvirt.driver [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Deleting instance files /var/lib/nova/instances/c215327f-37ad-41a7-a883-3dbb23334df6_del#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.361 247403 INFO nova.virt.libvirt.driver [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Deletion of /var/lib/nova/instances/c215327f-37ad-41a7-a883-3dbb23334df6_del complete#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.466 247403 INFO nova.compute.manager [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.466 247403 DEBUG oslo.service.loopingcall [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.467 247403 DEBUG nova.compute.manager [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.467 247403 DEBUG nova.network.neutron [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.942 247403 INFO nova.compute.manager [None req-959ee776-9986-4c1b-8142-d196d47fd658 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Get console output#033[00m
Jan 31 03:52:37 np0005603621 nova_compute[247399]: 2026-01-31 08:52:37.947 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:52:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:38.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:52:38
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'backups', '.mgr', 'images', 'default.rgw.meta', '.rgw.root']
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:52:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 560 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 736 KiB/s rd, 51 KiB/s wr, 141 op/s
Jan 31 03:52:38 np0005603621 nova_compute[247399]: 2026-01-31 08:52:38.789 247403 DEBUG nova.network.neutron [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:38 np0005603621 nova_compute[247399]: 2026-01-31 08:52:38.929 247403 INFO nova.compute.manager [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Took 1.46 seconds to deallocate network for instance.#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.048 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.048 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.051 247403 DEBUG nova.compute.manager [req-a882b143-0ca4-423f-9638-2cfad9b760cf req-6d74600e-3e22-4628-b8a9-c3343dfdd379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-deleted-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:52:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.208 247403 DEBUG oslo_concurrency.processutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.471 247403 DEBUG nova.compute.manager [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-changed-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.471 247403 DEBUG nova.compute.manager [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Refreshing instance network info cache due to event network-changed-5d9caf6f-4602-4589-9c08-43b1a20a9c34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.472 247403 DEBUG oslo_concurrency.lockutils [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.472 247403 DEBUG oslo_concurrency.lockutils [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.472 247403 DEBUG nova.network.neutron [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Refreshing network info cache for port 5d9caf6f-4602-4589-9c08-43b1a20a9c34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.481 247403 DEBUG nova.compute.manager [req-544fa3e4-e4cb-4e81-aaed-ceba29a97c98 req-b5d860b1-882e-4f3e-bf78-1c2ee6c8ae50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.482 247403 DEBUG oslo_concurrency.lockutils [req-544fa3e4-e4cb-4e81-aaed-ceba29a97c98 req-b5d860b1-882e-4f3e-bf78-1c2ee6c8ae50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.482 247403 DEBUG oslo_concurrency.lockutils [req-544fa3e4-e4cb-4e81-aaed-ceba29a97c98 req-b5d860b1-882e-4f3e-bf78-1c2ee6c8ae50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.482 247403 DEBUG oslo_concurrency.lockutils [req-544fa3e4-e4cb-4e81-aaed-ceba29a97c98 req-b5d860b1-882e-4f3e-bf78-1c2ee6c8ae50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.482 247403 DEBUG nova.compute.manager [req-544fa3e4-e4cb-4e81-aaed-ceba29a97c98 req-b5d860b1-882e-4f3e-bf78-1c2ee6c8ae50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] No waiting events found dispatching network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.483 247403 WARNING nova.compute.manager [req-544fa3e4-e4cb-4e81-aaed-ceba29a97c98 req-b5d860b1-882e-4f3e-bf78-1c2ee6c8ae50 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Received unexpected event network-vif-plugged-fbe66833-82a6-4f72-9b11-a4732140845a for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.676 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.677 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.677 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.677 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.677 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2333145183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.679 247403 INFO nova.compute.manager [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Terminating instance#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.680 247403 DEBUG nova.compute.manager [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.696 247403 DEBUG oslo_concurrency.processutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.703 247403 DEBUG nova.compute.provider_tree [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.726 247403 DEBUG nova.scheduler.client.report [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.794 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:39 np0005603621 kernel: tap5d9caf6f-46 (unregistering): left promiscuous mode
Jan 31 03:52:39 np0005603621 NetworkManager[49013]: <info>  [1769849559.8350] device (tap5d9caf6f-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:39Z|00729|binding|INFO|Releasing lport 5d9caf6f-4602-4589-9c08-43b1a20a9c34 from this chassis (sb_readonly=0)
Jan 31 03:52:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:39Z|00730|binding|INFO|Setting lport 5d9caf6f-4602-4589-9c08-43b1a20a9c34 down in Southbound
Jan 31 03:52:39 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:39Z|00731|binding|INFO|Removing iface tap5d9caf6f-46 ovn-installed in OVS
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.843 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.848 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000aa.scope: Deactivated successfully.
Jan 31 03:52:39 np0005603621 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000aa.scope: Consumed 12.577s CPU time.
Jan 31 03:52:39 np0005603621 systemd-machined[212769]: Machine qemu-87-instance-000000aa terminated.
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.897 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.899 247403 INFO nova.scheduler.client.report [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Deleted allocations for instance c215327f-37ad-41a7-a883-3dbb23334df6#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.901 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:39.903 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:8d:1c 10.100.0.13'], port_security=['fa:16:3e:ed:8d:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f94c6ed-71ba-4114-a483-6969c923a169', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '8', 'neutron:security_group_ids': '823dcd1e-c848-4b0b-b209-d3c49f7c199a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2593ca0c-7180-414a-b815-a3d2e8e5bf3e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5d9caf6f-4602-4589-9c08-43b1a20a9c34) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:39.904 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5d9caf6f-4602-4589-9c08-43b1a20a9c34 in datapath 1f94c6ed-71ba-4114-a483-6969c923a169 unbound from our chassis#033[00m
Jan 31 03:52:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:39.906 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1f94c6ed-71ba-4114-a483-6969c923a169, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:52:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:39.907 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8376b99d-9c80-49a0-8e54-bb1d4125d3a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:39.907 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169 namespace which is not needed anymore#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.916 247403 INFO nova.virt.libvirt.driver [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Instance destroyed successfully.#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.917 247403 DEBUG nova.objects.instance [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.982 247403 DEBUG nova.virt.libvirt.vif [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:51:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1107028276',display_name='tempest-TestNetworkAdvancedServerOps-server-1107028276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1107028276',id=170,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMhAjak4EaSbAMsOQHzT/F+YD+mdwqaKmT1b8Pkiv6vPXQSXr3cEJwaMw5cOEGrpti6B+hT6jgk8eQer/fm3Y87ortF0Suf8ZM3a30yTbd7sIeUbybs0ERwtVLRyiyyflg==',key_name='tempest-TestNetworkAdvancedServerOps-1864875451',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:52:18Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-g52p6b46',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:52:30Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.984 247403 DEBUG nova.network.os_vif_util [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.985 247403 DEBUG nova.network.os_vif_util [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.985 247403 DEBUG os_vif [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.987 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.988 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d9caf6f-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:39 np0005603621 nova_compute[247399]: 2026-01-31 08:52:39.994 247403 INFO os_vif [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:8d:1c,bridge_name='br-int',has_traffic_filtering=True,id=5d9caf6f-4602-4589-9c08-43b1a20a9c34,network=Network(1f94c6ed-71ba-4114-a483-6969c923a169),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d9caf6f-46')#033[00m
Jan 31 03:52:40 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [NOTICE]   (372404) : haproxy version is 2.8.14-c23fe91
Jan 31 03:52:40 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [NOTICE]   (372404) : path to executable is /usr/sbin/haproxy
Jan 31 03:52:40 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [WARNING]  (372404) : Exiting Master process...
Jan 31 03:52:40 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [ALERT]    (372404) : Current worker (372406) exited with code 143 (Terminated)
Jan 31 03:52:40 np0005603621 neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169[372400]: [WARNING]  (372404) : All workers exited. Exiting... (0)
Jan 31 03:52:40 np0005603621 systemd[1]: libpod-bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35.scope: Deactivated successfully.
Jan 31 03:52:40 np0005603621 podman[372776]: 2026-01-31 08:52:40.031573555 +0000 UTC m=+0.050736093 container died bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 03:52:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35-userdata-shm.mount: Deactivated successfully.
Jan 31 03:52:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9ac3f57788658e12246addf68e67d8fb5b535ddb337fdff79fe300cdb670360f-merged.mount: Deactivated successfully.
Jan 31 03:52:40 np0005603621 podman[372776]: 2026-01-31 08:52:40.096810965 +0000 UTC m=+0.115973503 container cleanup bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:52:40 np0005603621 systemd[1]: libpod-conmon-bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35.scope: Deactivated successfully.
Jan 31 03:52:40 np0005603621 podman[372852]: 2026-01-31 08:52:40.173189946 +0000 UTC m=+0.061024998 container remove bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.176 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[43005383-802e-4593-b60d-6e106d41c237]: (4, ('Sat Jan 31 08:52:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169 (bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35)\nbd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35\nSat Jan 31 08:52:40 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169 (bd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35)\nbd3ce3d81b60206a20f1f1e5ffc445857eb5fe4120920d89a6f4a2fe16651d35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.177 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9a40a216-e8ea-4aec-8013-5e300f950fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.178 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f94c6ed-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:40 np0005603621 nova_compute[247399]: 2026-01-31 08:52:40.212 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:40 np0005603621 kernel: tap1f94c6ed-70: left promiscuous mode
Jan 31 03:52:40 np0005603621 nova_compute[247399]: 2026-01-31 08:52:40.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.223 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[60d55416-3be3-45dc-8157-0dcbeaba796e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.234 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[66f06742-c125-449e-8e2c-558d43c399ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.235 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ef996234-1edb-442c-9f3b-7453ea2ed2f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.253 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c5241d1a-b699-4c3a-83ed-6f988fb14d4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884368, 'reachable_time': 44152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372886, 'error': None, 'target': 'ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 systemd[1]: run-netns-ovnmeta\x2d1f94c6ed\x2d71ba\x2d4114\x2da483\x2d6969c923a169.mount: Deactivated successfully.
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.256 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1f94c6ed-71ba-4114-a483-6969c923a169 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:52:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:40.256 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e6d81d9e-a53b-4cd7-b307-c2674f715885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:52:40 np0005603621 nova_compute[247399]: 2026-01-31 08:52:40.305 247403 DEBUG oslo_concurrency.lockutils [None req-3e058f4f-dfca-47d6-b191-bd744662f267 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "c215327f-37ad-41a7-a883-3dbb23334df6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:52:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:40.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:52:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 545 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 699 KiB/s rd, 46 KiB/s wr, 139 op/s
Jan 31 03:52:40 np0005603621 nova_compute[247399]: 2026-01-31 08:52:40.905 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:41 np0005603621 nova_compute[247399]: 2026-01-31 08:52:41.233 247403 DEBUG nova.compute.manager [req-708f8f41-df23-4263-9097-fc534f4e1328 req-7aa35bf5-1edd-4378-a910-6870dd9fbfab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-unplugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:41 np0005603621 nova_compute[247399]: 2026-01-31 08:52:41.234 247403 DEBUG oslo_concurrency.lockutils [req-708f8f41-df23-4263-9097-fc534f4e1328 req-7aa35bf5-1edd-4378-a910-6870dd9fbfab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:41 np0005603621 nova_compute[247399]: 2026-01-31 08:52:41.234 247403 DEBUG oslo_concurrency.lockutils [req-708f8f41-df23-4263-9097-fc534f4e1328 req-7aa35bf5-1edd-4378-a910-6870dd9fbfab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:41 np0005603621 nova_compute[247399]: 2026-01-31 08:52:41.234 247403 DEBUG oslo_concurrency.lockutils [req-708f8f41-df23-4263-9097-fc534f4e1328 req-7aa35bf5-1edd-4378-a910-6870dd9fbfab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:41 np0005603621 nova_compute[247399]: 2026-01-31 08:52:41.234 247403 DEBUG nova.compute.manager [req-708f8f41-df23-4263-9097-fc534f4e1328 req-7aa35bf5-1edd-4378-a910-6870dd9fbfab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] No waiting events found dispatching network-vif-unplugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:41 np0005603621 nova_compute[247399]: 2026-01-31 08:52:41.235 247403 DEBUG nova.compute.manager [req-708f8f41-df23-4263-9097-fc534f4e1328 req-7aa35bf5-1edd-4378-a910-6870dd9fbfab fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-unplugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:52:41 np0005603621 podman[372888]: 2026-01-31 08:52:41.492108979 +0000 UTC m=+0.043767582 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:52:41 np0005603621 podman[372889]: 2026-01-31 08:52:41.51685065 +0000 UTC m=+0.068315217 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 31 03:52:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:52:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:52:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.187 247403 INFO nova.virt.libvirt.driver [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Deleting instance files /var/lib/nova/instances/e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8_del#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.188 247403 INFO nova.virt.libvirt.driver [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Deletion of /var/lib/nova/instances/e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8_del complete#033[00m
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a2f6abee-7102-4844-8d61-392349893d0b does not exist
Jan 31 03:52:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 84e2a964-28c8-4839-b53c-e69156afa5da does not exist
Jan 31 03:52:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e365915f-371f-4eda-af2d-623102ce979a does not exist
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.303 247403 DEBUG nova.network.neutron [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updated VIF entry in instance network info cache for port 5d9caf6f-4602-4589-9c08-43b1a20a9c34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.303 247403 DEBUG nova.network.neutron [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating instance_info_cache with network_info: [{"id": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "address": "fa:16:3e:ed:8d:1c", "network": {"id": "1f94c6ed-71ba-4114-a483-6969c923a169", "bridge": "br-int", "label": "tempest-network-smoke--2122556826", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d9caf6f-46", "ovs_interfaceid": "5d9caf6f-4602-4589-9c08-43b1a20a9c34", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.383 247403 DEBUG oslo_concurrency.lockutils [req-057fe2ea-5ecc-4485-832b-bd64f25d3447 req-18733d57-f35e-43f5-b649-10a082e4e6e7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.395 247403 INFO nova.compute.manager [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Took 2.72 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.396 247403 DEBUG oslo.service.loopingcall [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.396 247403 DEBUG nova.compute.manager [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.396 247403 DEBUG nova.network.neutron [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:52:42 np0005603621 nova_compute[247399]: 2026-01-31 08:52:42.454 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:42.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:42.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:52:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 428 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 242 KiB/s rd, 18 KiB/s wr, 104 op/s
Jan 31 03:52:42 np0005603621 podman[373076]: 2026-01-31 08:52:42.838529021 +0000 UTC m=+0.048216743 container create fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:52:42 np0005603621 systemd[1]: Started libpod-conmon-fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41.scope.
Jan 31 03:52:42 np0005603621 podman[373076]: 2026-01-31 08:52:42.815568187 +0000 UTC m=+0.025255939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:52:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:42 np0005603621 podman[373076]: 2026-01-31 08:52:42.944649381 +0000 UTC m=+0.154337123 container init fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 03:52:42 np0005603621 podman[373076]: 2026-01-31 08:52:42.952243992 +0000 UTC m=+0.161931724 container start fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:52:42 np0005603621 podman[373076]: 2026-01-31 08:52:42.955635129 +0000 UTC m=+0.165322841 container attach fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 03:52:42 np0005603621 nifty_wiles[373093]: 167 167
Jan 31 03:52:42 np0005603621 systemd[1]: libpod-fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41.scope: Deactivated successfully.
Jan 31 03:52:42 np0005603621 podman[373076]: 2026-01-31 08:52:42.967023278 +0000 UTC m=+0.176710990 container died fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:52:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-51c25ab916cb77e550922c522fa2d7c4040786f3f28e5a122def79657c8ccf94-merged.mount: Deactivated successfully.
Jan 31 03:52:43 np0005603621 podman[373076]: 2026-01-31 08:52:43.011374429 +0000 UTC m=+0.221062161 container remove fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:52:43 np0005603621 systemd[1]: libpod-conmon-fe51a1cd3d118067d79286bc9976d7851fa94fd8ea33d4fd4a60c648a8a71b41.scope: Deactivated successfully.
Jan 31 03:52:43 np0005603621 podman[373117]: 2026-01-31 08:52:43.168686325 +0000 UTC m=+0.044806986 container create 51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hofstadter, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:52:43 np0005603621 systemd[1]: Started libpod-conmon-51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6.scope.
Jan 31 03:52:43 np0005603621 podman[373117]: 2026-01-31 08:52:43.144423339 +0000 UTC m=+0.020544030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:52:43 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e2ec41c05bf53d50c2d42c9d2c982b2a38ad8639d0450801580f0c673210ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e2ec41c05bf53d50c2d42c9d2c982b2a38ad8639d0450801580f0c673210ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e2ec41c05bf53d50c2d42c9d2c982b2a38ad8639d0450801580f0c673210ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e2ec41c05bf53d50c2d42c9d2c982b2a38ad8639d0450801580f0c673210ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:43 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e2ec41c05bf53d50c2d42c9d2c982b2a38ad8639d0450801580f0c673210ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:43 np0005603621 podman[373117]: 2026-01-31 08:52:43.279433132 +0000 UTC m=+0.155553793 container init 51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hofstadter, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:52:43 np0005603621 podman[373117]: 2026-01-31 08:52:43.292900727 +0000 UTC m=+0.169021368 container start 51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:52:43 np0005603621 podman[373117]: 2026-01-31 08:52:43.299813196 +0000 UTC m=+0.175933917 container attach 51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hofstadter, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:52:43 np0005603621 nova_compute[247399]: 2026-01-31 08:52:43.380 247403 DEBUG nova.compute.manager [req-3930fec9-9222-498f-8aa4-e68a9d14f219 req-bca15194-cc52-4259-a0c8-df6213992784 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:43 np0005603621 nova_compute[247399]: 2026-01-31 08:52:43.381 247403 DEBUG oslo_concurrency.lockutils [req-3930fec9-9222-498f-8aa4-e68a9d14f219 req-bca15194-cc52-4259-a0c8-df6213992784 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:43 np0005603621 nova_compute[247399]: 2026-01-31 08:52:43.382 247403 DEBUG oslo_concurrency.lockutils [req-3930fec9-9222-498f-8aa4-e68a9d14f219 req-bca15194-cc52-4259-a0c8-df6213992784 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:43 np0005603621 nova_compute[247399]: 2026-01-31 08:52:43.382 247403 DEBUG oslo_concurrency.lockutils [req-3930fec9-9222-498f-8aa4-e68a9d14f219 req-bca15194-cc52-4259-a0c8-df6213992784 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:43 np0005603621 nova_compute[247399]: 2026-01-31 08:52:43.382 247403 DEBUG nova.compute.manager [req-3930fec9-9222-498f-8aa4-e68a9d14f219 req-bca15194-cc52-4259-a0c8-df6213992784 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] No waiting events found dispatching network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:43 np0005603621 nova_compute[247399]: 2026-01-31 08:52:43.382 247403 WARNING nova.compute.manager [req-3930fec9-9222-498f-8aa4-e68a9d14f219 req-bca15194-cc52-4259-a0c8-df6213992784 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received unexpected event network-vif-plugged-5d9caf6f-4602-4589-9c08-43b1a20a9c34 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:52:44 np0005603621 upbeat_hofstadter[373134]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:52:44 np0005603621 upbeat_hofstadter[373134]: --> relative data size: 1.0
Jan 31 03:52:44 np0005603621 upbeat_hofstadter[373134]: --> All data devices are unavailable
Jan 31 03:52:44 np0005603621 systemd[1]: libpod-51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6.scope: Deactivated successfully.
Jan 31 03:52:44 np0005603621 podman[373117]: 2026-01-31 08:52:44.058878822 +0000 UTC m=+0.934999463 container died 51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 03:52:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-88e2ec41c05bf53d50c2d42c9d2c982b2a38ad8639d0450801580f0c673210ed-merged.mount: Deactivated successfully.
Jan 31 03:52:44 np0005603621 podman[373117]: 2026-01-31 08:52:44.119659802 +0000 UTC m=+0.995780443 container remove 51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:52:44 np0005603621 systemd[1]: libpod-conmon-51a799ad01345820a15954c9fcbc255f078a2c4a3329c108bea51d2f24430bb6.scope: Deactivated successfully.
Jan 31 03:52:44 np0005603621 nova_compute[247399]: 2026-01-31 08:52:44.323 247403 DEBUG nova.network.neutron [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:44 np0005603621 nova_compute[247399]: 2026-01-31 08:52:44.382 247403 INFO nova.compute.manager [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Took 1.99 seconds to deallocate network for instance.#033[00m
Jan 31 03:52:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:44.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:44.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.604079287 +0000 UTC m=+0.047948515 container create 4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:52:44 np0005603621 systemd[1]: Started libpod-conmon-4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37.scope.
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.578275202 +0000 UTC m=+0.022144410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:52:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.69950175 +0000 UTC m=+0.143370958 container init 4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.704568299 +0000 UTC m=+0.148437507 container start 4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.709142723 +0000 UTC m=+0.153011931 container attach 4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:52:44 np0005603621 happy_mayer[373318]: 167 167
Jan 31 03:52:44 np0005603621 systemd[1]: libpod-4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37.scope: Deactivated successfully.
Jan 31 03:52:44 np0005603621 conmon[373318]: conmon 4eb617b7151a53664b69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37.scope/container/memory.events
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.711934862 +0000 UTC m=+0.155804060 container died 4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:52:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-754e642e57f0b44666f964cb1d4ac0b485be39b351604d8e2827de4221517905-merged.mount: Deactivated successfully.
Jan 31 03:52:44 np0005603621 podman[373302]: 2026-01-31 08:52:44.757588323 +0000 UTC m=+0.201457511 container remove 4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:52:44 np0005603621 systemd[1]: libpod-conmon-4eb617b7151a53664b69a5d9cbe939ba2210e957931a5933dcbb640eb5739e37.scope: Deactivated successfully.
Jan 31 03:52:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 374 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 18 KiB/s wr, 111 op/s
Jan 31 03:52:44 np0005603621 nova_compute[247399]: 2026-01-31 08:52:44.839 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:44 np0005603621 nova_compute[247399]: 2026-01-31 08:52:44.840 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:44 np0005603621 podman[373341]: 2026-01-31 08:52:44.876767936 +0000 UTC m=+0.033900292 container create b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:52:44 np0005603621 systemd[1]: Started libpod-conmon-b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e.scope.
Jan 31 03:52:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffa7d1bc58f815e56ed6c903a99b897ed37fec7514be1b1ad054d868f4513ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffa7d1bc58f815e56ed6c903a99b897ed37fec7514be1b1ad054d868f4513ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffa7d1bc58f815e56ed6c903a99b897ed37fec7514be1b1ad054d868f4513ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ffa7d1bc58f815e56ed6c903a99b897ed37fec7514be1b1ad054d868f4513ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:44 np0005603621 podman[373341]: 2026-01-31 08:52:44.956925077 +0000 UTC m=+0.114057453 container init b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 03:52:44 np0005603621 podman[373341]: 2026-01-31 08:52:44.861027619 +0000 UTC m=+0.018159995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:52:44 np0005603621 nova_compute[247399]: 2026-01-31 08:52:44.959 247403 DEBUG oslo_concurrency.processutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:44 np0005603621 podman[373341]: 2026-01-31 08:52:44.963419582 +0000 UTC m=+0.120551948 container start b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:52:44 np0005603621 podman[373341]: 2026-01-31 08:52:44.969165923 +0000 UTC m=+0.126298309 container attach b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:52:44 np0005603621 nova_compute[247399]: 2026-01-31 08:52:44.991 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956740215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:45 np0005603621 nova_compute[247399]: 2026-01-31 08:52:45.389 247403 DEBUG oslo_concurrency.processutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:45 np0005603621 nova_compute[247399]: 2026-01-31 08:52:45.395 247403 DEBUG nova.compute.provider_tree [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:45 np0005603621 nova_compute[247399]: 2026-01-31 08:52:45.627 247403 DEBUG nova.scheduler.client.report [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]: {
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:    "0": [
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:        {
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "devices": [
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "/dev/loop3"
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            ],
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "lv_name": "ceph_lv0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "lv_size": "7511998464",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "name": "ceph_lv0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "tags": {
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.cluster_name": "ceph",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.crush_device_class": "",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.encrypted": "0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.osd_id": "0",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.type": "block",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:                "ceph.vdo": "0"
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            },
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "type": "block",
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:            "vg_name": "ceph_vg0"
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:        }
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]:    ]
Jan 31 03:52:45 np0005603621 xenodochial_euler[373358]: }
Jan 31 03:52:45 np0005603621 systemd[1]: libpod-b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e.scope: Deactivated successfully.
Jan 31 03:52:45 np0005603621 podman[373341]: 2026-01-31 08:52:45.741370574 +0000 UTC m=+0.898502940 container died b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:52:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3ffa7d1bc58f815e56ed6c903a99b897ed37fec7514be1b1ad054d868f4513ac-merged.mount: Deactivated successfully.
Jan 31 03:52:45 np0005603621 podman[373341]: 2026-01-31 08:52:45.79854395 +0000 UTC m=+0.955676306 container remove b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:52:45 np0005603621 systemd[1]: libpod-conmon-b10c386ba927adb62918feeffdc369d9e969eb2fc9463fc5a8b2db53a7870f8e.scope: Deactivated successfully.
Jan 31 03:52:45 np0005603621 nova_compute[247399]: 2026-01-31 08:52:45.938 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:45 np0005603621 nova_compute[247399]: 2026-01-31 08:52:45.949 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:46 np0005603621 nova_compute[247399]: 2026-01-31 08:52:46.051 247403 INFO nova.scheduler.client.report [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Deleted allocations for instance e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8#033[00m
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.33125 +0000 UTC m=+0.035764020 container create 468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:52:46 np0005603621 systemd[1]: Started libpod-conmon-468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7.scope.
Jan 31 03:52:46 np0005603621 nova_compute[247399]: 2026-01-31 08:52:46.373 247403 DEBUG oslo_concurrency.lockutils [None req-a1cb4667-9b92-4acb-8c06-04c40f311075 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.315328927 +0000 UTC m=+0.019842967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.418091102 +0000 UTC m=+0.122605162 container init 468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.424415532 +0000 UTC m=+0.128929552 container start 468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:52:46 np0005603621 elegant_pascal[373558]: 167 167
Jan 31 03:52:46 np0005603621 systemd[1]: libpod-468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7.scope: Deactivated successfully.
Jan 31 03:52:46 np0005603621 conmon[373558]: conmon 468826a8df4ee3562af0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7.scope/container/memory.events
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.43736812 +0000 UTC m=+0.141882160 container attach 468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.437999571 +0000 UTC m=+0.142513591 container died 468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:52:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:46.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cf281bcde1682cdd218b5778a7e821b30c0b3b232fac329987457f2c790c0b47-merged.mount: Deactivated successfully.
Jan 31 03:52:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:46.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:46 np0005603621 podman[373541]: 2026-01-31 08:52:46.563385649 +0000 UTC m=+0.267899669 container remove 468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:52:46 np0005603621 systemd[1]: libpod-conmon-468826a8df4ee3562af0fd38ee50193168a96204abb6cf358e3ef2b839ae53b7.scope: Deactivated successfully.
Jan 31 03:52:46 np0005603621 podman[373582]: 2026-01-31 08:52:46.726711026 +0000 UTC m=+0.083232789 container create a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:52:46 np0005603621 podman[373582]: 2026-01-31 08:52:46.665936397 +0000 UTC m=+0.022458200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:52:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 374 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 18 KiB/s wr, 111 op/s
Jan 31 03:52:46 np0005603621 systemd[1]: Started libpod-conmon-a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31.scope.
Jan 31 03:52:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:52:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31512bd9d9e602fba56742fee22f745d250ebe4ce47c0eca80e5ec4c72296de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31512bd9d9e602fba56742fee22f745d250ebe4ce47c0eca80e5ec4c72296de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31512bd9d9e602fba56742fee22f745d250ebe4ce47c0eca80e5ec4c72296de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31512bd9d9e602fba56742fee22f745d250ebe4ce47c0eca80e5ec4c72296de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:52:46 np0005603621 podman[373582]: 2026-01-31 08:52:46.926037539 +0000 UTC m=+0.282559302 container init a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_napier, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:52:46 np0005603621 podman[373582]: 2026-01-31 08:52:46.931164641 +0000 UTC m=+0.287686404 container start a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_napier, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:52:46 np0005603621 nova_compute[247399]: 2026-01-31 08:52:46.988 247403 DEBUG nova.compute.manager [req-5b9b68ed-6022-42c8-b853-680c591b41ef req-50191de8-97a9-4574-991c-05aeaafe0fea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Received event network-vif-deleted-5d9caf6f-4602-4589-9c08-43b1a20a9c34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:46 np0005603621 podman[373582]: 2026-01-31 08:52:46.992026523 +0000 UTC m=+0.348548286 container attach a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_napier, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:52:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]: {
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:        "osd_id": 0,
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:        "type": "bluestore"
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]:    }
Jan 31 03:52:47 np0005603621 dreamy_napier[373598]: }
Jan 31 03:52:47 np0005603621 systemd[1]: libpod-a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31.scope: Deactivated successfully.
Jan 31 03:52:47 np0005603621 podman[373582]: 2026-01-31 08:52:47.727426432 +0000 UTC m=+1.083948195 container died a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 03:52:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b31512bd9d9e602fba56742fee22f745d250ebe4ce47c0eca80e5ec4c72296de-merged.mount: Deactivated successfully.
Jan 31 03:52:48 np0005603621 podman[373582]: 2026-01-31 08:52:48.094271065 +0000 UTC m=+1.450792828 container remove a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:52:48 np0005603621 systemd[1]: libpod-conmon-a5944b6c07763106b9aab84d1c10d8bf4c087fd2ce2d25d466f848408c300e31.scope: Deactivated successfully.
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d5d26a28-5d3f-4f05-9778-f478a2333ca9 does not exist
Jan 31 03:52:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b7b3d242-cf48-497d-9587-fe38642c7569 does not exist
Jan 31 03:52:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 91cd349d-5f49-4ec9-871a-90bc2e1aec2a does not exist
Jan 31 03:52:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:48.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:52:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.561425) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849568561550, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2215, "num_deletes": 256, "total_data_size": 3755273, "memory_usage": 3808624, "flush_reason": "Manual Compaction"}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849568614433, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3666358, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66193, "largest_seqno": 68407, "table_properties": {"data_size": 3656276, "index_size": 6383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21711, "raw_average_key_size": 20, "raw_value_size": 3635875, "raw_average_value_size": 3506, "num_data_blocks": 276, "num_entries": 1037, "num_filter_entries": 1037, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849379, "oldest_key_time": 1769849379, "file_creation_time": 1769849568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 53037 microseconds, and 9455 cpu microseconds.
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.614484) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3666358 bytes OK
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.614507) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.631128) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.631186) EVENT_LOG_v1 {"time_micros": 1769849568631175, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.631218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3746040, prev total WAL file size 3746040, number of live WAL files 2.
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.632420) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3580KB)], [152(10MB)]
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849568632488, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 14706996, "oldest_snapshot_seqno": -1}
Jan 31 03:52:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 320 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 6.5 KiB/s wr, 123 op/s
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9568 keys, 12769707 bytes, temperature: kUnknown
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849568871279, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12769707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12707396, "index_size": 37302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 251375, "raw_average_key_size": 26, "raw_value_size": 12539453, "raw_average_value_size": 1310, "num_data_blocks": 1427, "num_entries": 9568, "num_filter_entries": 9568, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.871709) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12769707 bytes
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.872992) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.6 rd, 53.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 10099, records dropped: 531 output_compression: NoCompression
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.873007) EVENT_LOG_v1 {"time_micros": 1769849568873000, "job": 94, "event": "compaction_finished", "compaction_time_micros": 238849, "compaction_time_cpu_micros": 23079, "output_level": 6, "num_output_files": 1, "total_output_size": 12769707, "num_input_records": 10099, "num_output_records": 9568, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849568873452, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849568874518, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.632267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.874559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.874564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.874565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.874567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:52:48 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:52:48.874569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:52:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033962491857435326 of space, bias 1.0, pg target 1.0188747557230597 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004325192222687512 of space, bias 1.0, pg target 1.2932324745835662 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:52:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:52:49 np0005603621 nova_compute[247399]: 2026-01-31 08:52:49.993 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.438 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.438 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.438 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.438 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.439 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.439 247403 INFO nova.compute.manager [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Terminating instance#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.440 247403 DEBUG nova.compute.manager [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:52:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:50.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:50 np0005603621 kernel: tapdf3fc295-9a (unregistering): left promiscuous mode
Jan 31 03:52:50 np0005603621 NetworkManager[49013]: <info>  [1769849570.5628] device (tapdf3fc295-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:52:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:50Z|00732|binding|INFO|Releasing lport df3fc295-9afc-49a0-87f8-9dda757af02a from this chassis (sb_readonly=0)
Jan 31 03:52:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:50Z|00733|binding|INFO|Setting lport df3fc295-9afc-49a0-87f8-9dda757af02a down in Southbound
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.569 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:52:50Z|00734|binding|INFO|Removing iface tapdf3fc295-9a ovn-installed in OVS
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.571 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.579 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.595 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:6b:61 10.100.0.5'], port_security=['fa:16:3e:00:6b:61 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '3308d345-19b7-4fbb-bd81-631135649e7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8397e0fed04b4dabb57148d0924de2dc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd636f3a4-efef-465a-ac59-8182d61336f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dbd2578f-ff6e-4dc3-bc49-93cbf023edc5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=df3fc295-9afc-49a0-87f8-9dda757af02a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.596 159734 INFO neutron.agent.ovn.metadata.agent [-] Port df3fc295-9afc-49a0-87f8-9dda757af02a in datapath 3afaf607-43a1-4d65-95fc-0a22b5c901d0 unbound from our chassis#033[00m
Jan 31 03:52:50 np0005603621 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.597 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3afaf607-43a1-4d65-95fc-0a22b5c901d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:52:50 np0005603621 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d0000009e.scope: Consumed 24.632s CPU time.
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.598 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bb21dd60-c3e6-45c5-bedc-0f127a1b5ee6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.599 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0 namespace which is not needed anymore#033[00m
Jan 31 03:52:50 np0005603621 systemd-machined[212769]: Machine qemu-78-instance-0000009e terminated.
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.672 247403 INFO nova.virt.libvirt.driver [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Instance destroyed successfully.#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.673 247403 DEBUG nova.objects.instance [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lazy-loading 'resources' on Instance uuid 3308d345-19b7-4fbb-bd81-631135649e7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:52:50 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [NOTICE]   (362367) : haproxy version is 2.8.14-c23fe91
Jan 31 03:52:50 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [NOTICE]   (362367) : path to executable is /usr/sbin/haproxy
Jan 31 03:52:50 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [WARNING]  (362367) : Exiting Master process...
Jan 31 03:52:50 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [ALERT]    (362367) : Current worker (362369) exited with code 143 (Terminated)
Jan 31 03:52:50 np0005603621 neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0[362363]: [WARNING]  (362367) : All workers exited. Exiting... (0)
Jan 31 03:52:50 np0005603621 systemd[1]: libpod-946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f.scope: Deactivated successfully.
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.706 247403 DEBUG nova.virt.libvirt.vif [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:46:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=158,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGvV4tGHwFrQ7+1WPmMS3fGcrpcMKpLQBFiD2ZG0NedKq4jaCN6oHf8RWlX+X72Ff/PSGJSQ5nqRPZm+CDMr01vn3vAMra9m4dZ/R1d2vwh+NDFwu298PivPHJQkyuCpg==',key_name='tempest-keypair-600650673',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:46:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8397e0fed04b4dabb57148d0924de2dc',ramdisk_id='',reservation_id='r-qz7ydqvm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',
image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-1931311941',owner_user_name='tempest-AttachVolumeMultiAttachTest-1931311941-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:46:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a498364761ef428b99cac3f92e603385',uuid=3308d345-19b7-4fbb-bd81-631135649e7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.706 247403 DEBUG nova.network.os_vif_util [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converting VIF {"id": "df3fc295-9afc-49a0-87f8-9dda757af02a", "address": "fa:16:3e:00:6b:61", "network": {"id": "3afaf607-43a1-4d65-95fc-0a22b5c901d0", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1596089959-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8397e0fed04b4dabb57148d0924de2dc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf3fc295-9a", "ovs_interfaceid": "df3fc295-9afc-49a0-87f8-9dda757af02a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.707 247403 DEBUG nova.network.os_vif_util [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.707 247403 DEBUG os_vif [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:52:50 np0005603621 podman[373710]: 2026-01-31 08:52:50.708060402 +0000 UTC m=+0.043225285 container died 946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.709 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.710 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf3fc295-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.711 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.714 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.716 247403 INFO os_vif [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:6b:61,bridge_name='br-int',has_traffic_filtering=True,id=df3fc295-9afc-49a0-87f8-9dda757af02a,network=Network(3afaf607-43a1-4d65-95fc-0a22b5c901d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf3fc295-9a')#033[00m
Jan 31 03:52:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f-userdata-shm.mount: Deactivated successfully.
Jan 31 03:52:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-aec0079c84a7539f4a6b717c1c6c2ced5ce5ba40165ce0587c6f288b921f805f-merged.mount: Deactivated successfully.
Jan 31 03:52:50 np0005603621 podman[373710]: 2026-01-31 08:52:50.755269764 +0000 UTC m=+0.090434647 container cleanup 946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:52:50 np0005603621 systemd[1]: libpod-conmon-946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f.scope: Deactivated successfully.
Jan 31 03:52:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 4.7 KiB/s wr, 108 op/s
Jan 31 03:52:50 np0005603621 podman[373767]: 2026-01-31 08:52:50.811769017 +0000 UTC m=+0.042415299 container remove 946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.815 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b72fea7c-47a1-41d8-a4a1-1f282eb1f895]: (4, ('Sat Jan 31 08:52:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0 (946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f)\n946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f\nSat Jan 31 08:52:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0 (946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f)\n946a634219c29966e3f11655f71023b1eae12f33cf130a0997b8e726ef606a8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.817 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea73e99-2929-47ad-b0b6-48d1ebddcc77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.818 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3afaf607-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.819 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 kernel: tap3afaf607-40: left promiscuous mode
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.821 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.823 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[47ce19c5-5e04-4075-8e7b-27f91c3f1313]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.826 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.836 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2de4bfb-5eee-4e87-aeab-ec9d3297d77a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.837 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d2177e-4aa2-4733-bf26-53d996384243]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.847 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29f86f38-d199-46e1-ad40-4e7bf6b102ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 851945, 'reachable_time': 40004, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373782, 'error': None, 'target': 'ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.849 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3afaf607-43a1-4d65-95fc-0a22b5c901d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:52:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:52:50.849 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[93e7e69c-0b51-414c-93a0-d53c1baa257a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:52:50 np0005603621 systemd[1]: run-netns-ovnmeta\x2d3afaf607\x2d43a1\x2d4d65\x2d95fc\x2d0a22b5c901d0.mount: Deactivated successfully.
Jan 31 03:52:50 np0005603621 nova_compute[247399]: 2026-01-31 08:52:50.951 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.408 247403 DEBUG nova.compute.manager [req-35c0ccd3-73a3-4336-a44d-3c0ea506886d req-387796e6-7a48-4155-a7cb-c61316e36c93 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-vif-unplugged-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.409 247403 DEBUG oslo_concurrency.lockutils [req-35c0ccd3-73a3-4336-a44d-3c0ea506886d req-387796e6-7a48-4155-a7cb-c61316e36c93 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.409 247403 DEBUG oslo_concurrency.lockutils [req-35c0ccd3-73a3-4336-a44d-3c0ea506886d req-387796e6-7a48-4155-a7cb-c61316e36c93 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.409 247403 DEBUG oslo_concurrency.lockutils [req-35c0ccd3-73a3-4336-a44d-3c0ea506886d req-387796e6-7a48-4155-a7cb-c61316e36c93 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.409 247403 DEBUG nova.compute.manager [req-35c0ccd3-73a3-4336-a44d-3c0ea506886d req-387796e6-7a48-4155-a7cb-c61316e36c93 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] No waiting events found dispatching network-vif-unplugged-df3fc295-9afc-49a0-87f8-9dda757af02a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.410 247403 DEBUG nova.compute.manager [req-35c0ccd3-73a3-4336-a44d-3c0ea506886d req-387796e6-7a48-4155-a7cb-c61316e36c93 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-vif-unplugged-df3fc295-9afc-49a0-87f8-9dda757af02a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.524 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849556.5229514, c215327f-37ad-41a7-a883-3dbb23334df6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.524 247403 INFO nova.compute.manager [-] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:52:51 np0005603621 nova_compute[247399]: 2026-01-31 08:52:51.690 247403 DEBUG nova.compute.manager [None req-abbec0d5-7a86-4e3c-935f-7819fdc50a10 - - - - - -] [instance: c215327f-37ad-41a7-a883-3dbb23334df6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:52 np0005603621 nova_compute[247399]: 2026-01-31 08:52:52.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:52:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/320880322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:52:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:52:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/320880322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:52:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:52.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 84 KiB/s rd, 4.2 KiB/s wr, 120 op/s
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.186 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.757 247403 DEBUG nova.compute.manager [req-af6748fb-e214-43d9-88ef-31c9631566fc req-01c7fc52-fce7-458d-a8fd-166c33154963 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.758 247403 DEBUG oslo_concurrency.lockutils [req-af6748fb-e214-43d9-88ef-31c9631566fc req-01c7fc52-fce7-458d-a8fd-166c33154963 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.758 247403 DEBUG oslo_concurrency.lockutils [req-af6748fb-e214-43d9-88ef-31c9631566fc req-01c7fc52-fce7-458d-a8fd-166c33154963 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.759 247403 DEBUG oslo_concurrency.lockutils [req-af6748fb-e214-43d9-88ef-31c9631566fc req-01c7fc52-fce7-458d-a8fd-166c33154963 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.759 247403 DEBUG nova.compute.manager [req-af6748fb-e214-43d9-88ef-31c9631566fc req-01c7fc52-fce7-458d-a8fd-166c33154963 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] No waiting events found dispatching network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:52:53 np0005603621 nova_compute[247399]: 2026-01-31 08:52:53.759 247403 WARNING nova.compute.manager [req-af6748fb-e214-43d9-88ef-31c9631566fc req-01c7fc52-fce7-458d-a8fd-166c33154963 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received unexpected event network-vif-plugged-df3fc295-9afc-49a0-87f8-9dda757af02a for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:52:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:54.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:54.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 4.0 KiB/s wr, 90 op/s
Jan 31 03:52:54 np0005603621 nova_compute[247399]: 2026-01-31 08:52:54.861 247403 INFO nova.virt.libvirt.driver [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Deleting instance files /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d_del#033[00m
Jan 31 03:52:54 np0005603621 nova_compute[247399]: 2026-01-31 08:52:54.862 247403 INFO nova.virt.libvirt.driver [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Deletion of /var/lib/nova/instances/3308d345-19b7-4fbb-bd81-631135649e7d_del complete#033[00m
Jan 31 03:52:54 np0005603621 nova_compute[247399]: 2026-01-31 08:52:54.915 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849559.9133816, e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:52:54 np0005603621 nova_compute[247399]: 2026-01-31 08:52:54.915 247403 INFO nova.compute.manager [-] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:52:54 np0005603621 nova_compute[247399]: 2026-01-31 08:52:54.984 247403 DEBUG nova.compute.manager [None req-8065d1b9-9236-4942-9a15-a5748bc18ed8 - - - - - -] [instance: e8aff7bc-7d3c-4761-b7a3-849c89e7f1c8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:52:55 np0005603621 nova_compute[247399]: 2026-01-31 08:52:55.030 247403 INFO nova.compute.manager [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Took 4.59 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:52:55 np0005603621 nova_compute[247399]: 2026-01-31 08:52:55.031 247403 DEBUG oslo.service.loopingcall [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:52:55 np0005603621 nova_compute[247399]: 2026-01-31 08:52:55.031 247403 DEBUG nova.compute.manager [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:52:55 np0005603621 nova_compute[247399]: 2026-01-31 08:52:55.031 247403 DEBUG nova.network.neutron [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:52:55 np0005603621 nova_compute[247399]: 2026-01-31 08:52:55.713 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:55 np0005603621 nova_compute[247399]: 2026-01-31 08:52:55.952 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:52:56 np0005603621 nova_compute[247399]: 2026-01-31 08:52:56.410 247403 DEBUG nova.network.neutron [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:52:56 np0005603621 nova_compute[247399]: 2026-01-31 08:52:56.478 247403 INFO nova.compute.manager [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Took 1.45 seconds to deallocate network for instance.#033[00m
Jan 31 03:52:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:56.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:56.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:56 np0005603621 nova_compute[247399]: 2026-01-31 08:52:56.540 247403 DEBUG nova.compute.manager [req-476b2bea-1354-42d9-a0c4-3b7a34f11f85 req-104d64c7-5bbd-4666-b6c2-b916f5215e46 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Received event network-vif-deleted-df3fc295-9afc-49a0-87f8-9dda757af02a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:52:56 np0005603621 nova_compute[247399]: 2026-01-31 08:52:56.673 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:52:56 np0005603621 nova_compute[247399]: 2026-01-31 08:52:56.673 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:52:56 np0005603621 nova_compute[247399]: 2026-01-31 08:52:56.747 247403 DEBUG oslo_concurrency.processutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:52:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 2.7 KiB/s wr, 67 op/s
Jan 31 03:52:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:52:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570886801' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:52:57 np0005603621 nova_compute[247399]: 2026-01-31 08:52:57.147 247403 DEBUG oslo_concurrency.processutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:52:57 np0005603621 nova_compute[247399]: 2026-01-31 08:52:57.153 247403 DEBUG nova.compute.provider_tree [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:52:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:52:57 np0005603621 nova_compute[247399]: 2026-01-31 08:52:57.338 247403 DEBUG nova.scheduler.client.report [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:52:57 np0005603621 nova_compute[247399]: 2026-01-31 08:52:57.838 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:52:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:52:58.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:58 np0005603621 nova_compute[247399]: 2026-01-31 08:52:58.505 247403 INFO nova.scheduler.client.report [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Deleted allocations for instance 3308d345-19b7-4fbb-bd81-631135649e7d#033[00m
Jan 31 03:52:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:52:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:52:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:52:58.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:52:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 3.1 KiB/s wr, 80 op/s
Jan 31 03:53:00 np0005603621 nova_compute[247399]: 2026-01-31 08:53:00.157 247403 DEBUG oslo_concurrency.lockutils [None req-beb3afdf-c7e6-4864-93d8-e854e62361d2 a498364761ef428b99cac3f92e603385 8397e0fed04b4dabb57148d0924de2dc - - default default] Lock "3308d345-19b7-4fbb-bd81-631135649e7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:00.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:00 np0005603621 nova_compute[247399]: 2026-01-31 08:53:00.715 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.2 KiB/s wr, 58 op/s
Jan 31 03:53:00 np0005603621 nova_compute[247399]: 2026-01-31 08:53:00.953 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:01 np0005603621 nova_compute[247399]: 2026-01-31 08:53:01.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:02.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.7 KiB/s wr, 56 op/s
Jan 31 03:53:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:53:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1843678875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:53:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:53:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1843678875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:53:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:04.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.9 KiB/s wr, 44 op/s
Jan 31 03:53:05 np0005603621 nova_compute[247399]: 2026-01-31 08:53:05.672 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849570.6709208, 3308d345-19b7-4fbb-bd81-631135649e7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:05 np0005603621 nova_compute[247399]: 2026-01-31 08:53:05.672 247403 INFO nova.compute.manager [-] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:53:05 np0005603621 nova_compute[247399]: 2026-01-31 08:53:05.715 247403 DEBUG nova.compute.manager [None req-95f72117-987d-4123-8705-a32dfcbeabe2 - - - - - -] [instance: 3308d345-19b7-4fbb-bd81-631135649e7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:05 np0005603621 nova_compute[247399]: 2026-01-31 08:53:05.718 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:05 np0005603621 nova_compute[247399]: 2026-01-31 08:53:05.954 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 852 B/s wr, 32 op/s
Jan 31 03:53:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:53:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:53:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:08.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.2 KiB/s wr, 44 op/s
Jan 31 03:53:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:10.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:10 np0005603621 nova_compute[247399]: 2026-01-31 08:53:10.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Jan 31 03:53:10 np0005603621 nova_compute[247399]: 2026-01-31 08:53:10.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:12 np0005603621 nova_compute[247399]: 2026-01-31 08:53:12.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:12 np0005603621 nova_compute[247399]: 2026-01-31 08:53:12.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:12 np0005603621 nova_compute[247399]: 2026-01-31 08:53:12.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:53:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:12 np0005603621 podman[373868]: 2026-01-31 08:53:12.494494492 +0000 UTC m=+0.049896276 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 03:53:12 np0005603621 podman[373869]: 2026-01-31 08:53:12.508316129 +0000 UTC m=+0.065701875 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 31 03:53:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:12.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:12.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 222 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 766 KiB/s wr, 36 op/s
Jan 31 03:53:14 np0005603621 nova_compute[247399]: 2026-01-31 08:53:14.196 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:14 np0005603621 nova_compute[247399]: 2026-01-31 08:53:14.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:14 np0005603621 nova_compute[247399]: 2026-01-31 08:53:14.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:53:14 np0005603621 nova_compute[247399]: 2026-01-31 08:53:14.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:53:14 np0005603621 nova_compute[247399]: 2026-01-31 08:53:14.217 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195868107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3195868107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:53:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:14.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2549863062' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:53:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2549863062' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:53:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Jan 31 03:53:15 np0005603621 nova_compute[247399]: 2026-01-31 08:53:15.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:15 np0005603621 nova_compute[247399]: 2026-01-31 08:53:15.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:16.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:16.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 217 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.3 MiB/s wr, 39 op/s
Jan 31 03:53:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:16.976 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:53:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:16.977 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:53:16 np0005603621 nova_compute[247399]: 2026-01-31 08:53:16.977 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:16.978 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.244 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.245 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.245 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.245 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.246 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:53:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2724482955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.701 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.864 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.866 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=20.967548370361328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.866 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.867 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.952 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.952 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:53:17 np0005603621 nova_compute[247399]: 2026-01-31 08:53:17.992 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:53:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416414895' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:53:18 np0005603621 nova_compute[247399]: 2026-01-31 08:53:18.401 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:18 np0005603621 nova_compute[247399]: 2026-01-31 08:53:18.406 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:53:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:18.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:18 np0005603621 nova_compute[247399]: 2026-01-31 08:53:18.658 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:53:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 03:53:18 np0005603621 nova_compute[247399]: 2026-01-31 08:53:18.830 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:53:18 np0005603621 nova_compute[247399]: 2026-01-31 08:53:18.831 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:20.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:20.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:20 np0005603621 nova_compute[247399]: 2026-01-31 08:53:20.727 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 31 03:53:20 np0005603621 nova_compute[247399]: 2026-01-31 08:53:20.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:21 np0005603621 nova_compute[247399]: 2026-01-31 08:53:21.832 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:22 np0005603621 nova_compute[247399]: 2026-01-31 08:53:22.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:22.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:22.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 31 03:53:24 np0005603621 nova_compute[247399]: 2026-01-31 08:53:24.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:53:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:24.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:24.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.0 MiB/s wr, 29 op/s
Jan 31 03:53:25 np0005603621 nova_compute[247399]: 2026-01-31 08:53:25.730 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:25 np0005603621 nova_compute[247399]: 2026-01-31 08:53:25.962 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:26.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:26.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.8 KiB/s rd, 517 KiB/s wr, 13 op/s
Jan 31 03:53:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:28.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 518 KiB/s wr, 66 op/s
Jan 31 03:53:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:30.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:30.533 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:30.534 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:30.534 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:30 np0005603621 nova_compute[247399]: 2026-01-31 08:53:30.733 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 31 03:53:30 np0005603621 nova_compute[247399]: 2026-01-31 08:53:30.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:53:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:53:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 120 op/s
Jan 31 03:53:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 163 op/s
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.600 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.601 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.676 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.849 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.850 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.856 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.857 247403 INFO nova.compute.claims [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:53:35 np0005603621 nova_compute[247399]: 2026-01-31 08:53:35.965 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.129 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:53:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2266039910' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.532 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 03:53:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:36.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.537 247403 DEBUG nova.compute.provider_tree [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.576 247403 DEBUG nova.scheduler.client.report [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:53:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.742 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.743 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:53:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 167 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 163 op/s
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.993 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:53:36 np0005603621 nova_compute[247399]: 2026-01-31 08:53:36.994 247403 DEBUG nova.network.neutron [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.074 247403 INFO nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.162 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:53:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.327 247403 DEBUG nova.policy [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1c6e7eff11b435a81429826a682b32f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bfe11bd9d694684b527666e2c378eed', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.453 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.454 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.454 247403 INFO nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Creating image(s)#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.478 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.502 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.531 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.536 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.594 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.595 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.596 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.596 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.618 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.622 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.880 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:37 np0005603621 nova_compute[247399]: 2026-01-31 08:53:37.947 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] resizing rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.095 247403 DEBUG nova.objects.instance [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'migration_context' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.121 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.122 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Ensure instance console log exists: /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.123 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.123 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.123 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:53:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:38.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:53:38
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', 'vms', 'default.rgw.log', '.rgw.root']
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:53:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 180 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 257 op/s
Jan 31 03:53:38 np0005603621 nova_compute[247399]: 2026-01-31 08:53:38.844 247403 DEBUG nova.network.neutron [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Successfully created port: e18b94cd-d887-479a-ad93-c2c11ee4a451 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:53:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:53:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:40.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:40 np0005603621 nova_compute[247399]: 2026-01-31 08:53:40.738 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 210 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 2.7 MiB/s wr, 265 op/s
Jan 31 03:53:40 np0005603621 nova_compute[247399]: 2026-01-31 08:53:40.967 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.353 247403 DEBUG nova.network.neutron [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Successfully updated port: e18b94cd-d887-479a-ad93-c2c11ee4a451 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.546 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.547 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.547 247403 DEBUG nova.network.neutron [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.655 247403 DEBUG nova.compute.manager [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-changed-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.656 247403 DEBUG nova.compute.manager [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Refreshing instance network info cache due to event network-changed-e18b94cd-d887-479a-ad93-c2c11ee4a451. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.656 247403 DEBUG oslo_concurrency.lockutils [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:53:41 np0005603621 nova_compute[247399]: 2026-01-31 08:53:41.847 247403 DEBUG nova.network.neutron [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:53:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:53:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077205124' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:53:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:53:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1077205124' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:53:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 540 KiB/s rd, 3.9 MiB/s wr, 308 op/s
Jan 31 03:53:43 np0005603621 podman[374262]: 2026-01-31 08:53:43.48560314 +0000 UTC m=+0.043872306 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:53:43 np0005603621 podman[374263]: 2026-01-31 08:53:43.540408681 +0000 UTC m=+0.097939944 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.400 247403 DEBUG nova.network.neutron [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.437 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.438 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance network_info: |[{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.438 247403 DEBUG oslo_concurrency.lockutils [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.438 247403 DEBUG nova.network.neutron [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Refreshing network info cache for port e18b94cd-d887-479a-ad93-c2c11ee4a451 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.441 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Start _get_guest_xml network_info=[{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.446 247403 WARNING nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.452 247403 DEBUG nova.virt.libvirt.host [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.453 247403 DEBUG nova.virt.libvirt.host [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.458 247403 DEBUG nova.virt.libvirt.host [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.458 247403 DEBUG nova.virt.libvirt.host [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.459 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.460 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.460 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.460 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.460 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.461 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.461 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.461 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.461 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.462 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.462 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.462 247403 DEBUG nova.virt.hardware [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.464 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:44.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 492 KiB/s rd, 3.9 MiB/s wr, 267 op/s
Jan 31 03:53:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:53:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1474543742' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.895 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.923 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:44 np0005603621 nova_compute[247399]: 2026-01-31 08:53:44.926 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:53:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840252459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.347 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.349 247403 DEBUG nova.virt.libvirt.vif [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-221845287',display_name='tempest-TestNetworkAdvancedServerOps-server-221845287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-221845287',id=175,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtau6exAZl+mwRMeJFkXhJB67n1cV0P1ThdOm4bYsqLjnyb4XcP9L1040/96tZWXYEBSGeRghmnPpuQ0fVpMkCuMLB4eVu5B9uCfJR4fo7dCLE6dAf4F6fC26WbSWjqXw==',key_name='tempest-TestNetworkAdvancedServerOps-1492106170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-lmsd7way',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:53:37Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=d2b5bab2-3aab-4bdf-94a8-115368b4ee97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.349 247403 DEBUG nova.network.os_vif_util [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.350 247403 DEBUG nova.network.os_vif_util [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.351 247403 DEBUG nova.objects.instance [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.668 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <uuid>d2b5bab2-3aab-4bdf-94a8-115368b4ee97</uuid>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <name>instance-000000af</name>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-221845287</nova:name>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:53:44</nova:creationTime>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <nova:port uuid="e18b94cd-d887-479a-ad93-c2c11ee4a451">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <entry name="serial">d2b5bab2-3aab-4bdf-94a8-115368b4ee97</entry>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <entry name="uuid">d2b5bab2-3aab-4bdf-94a8-115368b4ee97</entry>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:14:96:4e"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <target dev="tape18b94cd-d8"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/console.log" append="off"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:53:45 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:53:45 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:53:45 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:53:45 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.670 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Preparing to wait for external event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.671 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.671 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.671 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.672 247403 DEBUG nova.virt.libvirt.vif [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-221845287',display_name='tempest-TestNetworkAdvancedServerOps-server-221845287',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-221845287',id=175,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtau6exAZl+mwRMeJFkXhJB67n1cV0P1ThdOm4bYsqLjnyb4XcP9L1040/96tZWXYEBSGeRghmnPpuQ0fVpMkCuMLB4eVu5B9uCfJR4fo7dCLE6dAf4F6fC26WbSWjqXw==',key_name='tempest-TestNetworkAdvancedServerOps-1492106170',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-lmsd7way',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:53:37Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=d2b5bab2-3aab-4bdf-94a8-115368b4ee97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.672 247403 DEBUG nova.network.os_vif_util [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.673 247403 DEBUG nova.network.os_vif_util [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.673 247403 DEBUG os_vif [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.674 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.674 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.675 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.677 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.677 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape18b94cd-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.678 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape18b94cd-d8, col_values=(('external_ids', {'iface-id': 'e18b94cd-d887-479a-ad93-c2c11ee4a451', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:96:4e', 'vm-uuid': 'd2b5bab2-3aab-4bdf-94a8-115368b4ee97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.679 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:45 np0005603621 NetworkManager[49013]: <info>  [1769849625.6810] manager: (tape18b94cd-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/327)
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.682 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.686 247403 INFO os_vif [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8')#033[00m
Jan 31 03:53:45 np0005603621 nova_compute[247399]: 2026-01-31 08:53:45.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:46 np0005603621 nova_compute[247399]: 2026-01-31 08:53:46.067 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:53:46 np0005603621 nova_compute[247399]: 2026-01-31 08:53:46.067 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:53:46 np0005603621 nova_compute[247399]: 2026-01-31 08:53:46.068 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] No VIF found with MAC fa:16:3e:14:96:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:53:46 np0005603621 nova_compute[247399]: 2026-01-31 08:53:46.068 247403 INFO nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Using config drive#033[00m
Jan 31 03:53:46 np0005603621 nova_compute[247399]: 2026-01-31 08:53:46.091 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:46.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:46.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 3.9 MiB/s wr, 223 op/s
Jan 31 03:53:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:48.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:48.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 3.9 MiB/s wr, 224 op/s
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.355 247403 INFO nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Creating config drive at /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/disk.config#033[00m
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.359 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8e4wz2oc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.483 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8e4wz2oc" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.521 247403 DEBUG nova.storage.rbd_utils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] rbd image d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.527 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/disk.config d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003157970116787642 of space, bias 1.0, pg target 0.9473910350362926 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:53:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.705 247403 DEBUG oslo_concurrency.processutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/disk.config d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.706 247403 INFO nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Deleting local config drive /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/disk.config because it was imported into RBD.#033[00m
Jan 31 03:53:49 np0005603621 kernel: tape18b94cd-d8: entered promiscuous mode
Jan 31 03:53:49 np0005603621 NetworkManager[49013]: <info>  [1769849629.7454] manager: (tape18b94cd-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/328)
Jan 31 03:53:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:53:49Z|00735|binding|INFO|Claiming lport e18b94cd-d887-479a-ad93-c2c11ee4a451 for this chassis.
Jan 31 03:53:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:53:49Z|00736|binding|INFO|e18b94cd-d887-479a-ad93-c2c11ee4a451: Claiming fa:16:3e:14:96:4e 10.100.0.14
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.747 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.755 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:49 np0005603621 systemd-udevd[374696]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:53:49 np0005603621 systemd-machined[212769]: New machine qemu-88-instance-000000af.
Jan 31 03:53:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:53:49Z|00737|binding|INFO|Setting lport e18b94cd-d887-479a-ad93-c2c11ee4a451 ovn-installed in OVS
Jan 31 03:53:49 np0005603621 systemd[1]: Started Virtual Machine qemu-88-instance-000000af.
Jan 31 03:53:49 np0005603621 NetworkManager[49013]: <info>  [1769849629.7785] device (tape18b94cd-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:53:49 np0005603621 nova_compute[247399]: 2026-01-31 08:53:49.778 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:49 np0005603621 NetworkManager[49013]: <info>  [1769849629.7797] device (tape18b94cd-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:53:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:53:49 np0005603621 ovn_controller[149152]: 2026-01-31T08:53:49Z|00738|binding|INFO|Setting lport e18b94cd-d887-479a-ad93-c2c11ee4a451 up in Southbound
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.813 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:96:4e 10.100.0.14'], port_security=['fa:16:3e:14:96:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2b5bab2-3aab-4bdf-94a8-115368b4ee97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e4de7d-2f09-4145-a904-553981394e44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2c433d56-bbd7-483b-aba2-0ec9ae720225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6b712d9-577d-47ef-ab43-83cc481a0ef2, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e18b94cd-d887-479a-ad93-c2c11ee4a451) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.814 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e18b94cd-d887-479a-ad93-c2c11ee4a451 in datapath 14e4de7d-2f09-4145-a904-553981394e44 bound to our chassis#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.815 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e4de7d-2f09-4145-a904-553981394e44#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.824 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a737f6c7-3958-4e5d-810f-d1a0ced1e914]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.825 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e4de7d-21 in ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.828 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e4de7d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.828 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a733c88b-d637-4c0a-bd7f-a450f0195050]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.829 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5b00cd43-d4ba-4e8a-b2e1-e441ff7895c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.840 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[9487636e-c04f-41a7-b9bb-a0c49146547b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.852 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf49c7a-98d5-41bb-bccd-3198cb5a269e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.874 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[46a4937a-8be5-4e47-b10a-9332a802c5fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:53:49 np0005603621 NetworkManager[49013]: <info>  [1769849629.8816] manager: (tap14e4de7d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/329)
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.880 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[95c7dd07-dc62-44fb-bbf4-0928ce675284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 podman[374721]: 2026-01-31 08:53:49.901414203 +0000 UTC m=+0.051104494 container create 52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 31 03:53:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.904 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb9dccd-2ed0-4a84-97a8-7c85c15acd66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.908 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbb42c0-835f-44ef-82cb-ff24b23216c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 NetworkManager[49013]: <info>  [1769849629.9264] device (tap14e4de7d-20): carrier: link connected
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.933 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb62159-33b0-454b-9858-e1e940072ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 systemd[1]: Started libpod-conmon-52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543.scope.
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.951 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[268d9d2d-5624-4671-8e8e-21bf554f4105]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e4de7d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:d8:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893544, 'reachable_time': 35715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374758, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.962 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9213ac52-8927-466b-86f3-8b8c0488f2e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:d8b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 893544, 'tstamp': 893544}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374762, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 podman[374721]: 2026-01-31 08:53:49.869251577 +0000 UTC m=+0.018941888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:49 np0005603621 podman[374721]: 2026-01-31 08:53:49.966845699 +0000 UTC m=+0.116535990 container init 52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:53:49 np0005603621 podman[374721]: 2026-01-31 08:53:49.972617411 +0000 UTC m=+0.122307702 container start 52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:53:49 np0005603621 focused_engelbart[374759]: 167 167
Jan 31 03:53:49 np0005603621 systemd[1]: libpod-52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543.scope: Deactivated successfully.
Jan 31 03:53:49 np0005603621 podman[374721]: 2026-01-31 08:53:49.979003873 +0000 UTC m=+0.128694164 container attach 52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_engelbart, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:53:49 np0005603621 podman[374721]: 2026-01-31 08:53:49.97954946 +0000 UTC m=+0.129239751 container died 52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_engelbart, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.978 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b20b0b62-d59b-4d71-ba22-d911295045f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e4de7d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:d8:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893544, 'reachable_time': 35715, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374764, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:49.998 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c2909cf1-6332-43b6-9716-9a9489baeb7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-49c149226b47f9eeb1d214b20ea22c33a53938a068b9694f59d0d57c318faac7-merged.mount: Deactivated successfully.
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.038 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2d26ad4f-7af6-4ec9-959c-c56bbcc25d8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.039 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e4de7d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.040 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.040 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e4de7d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:50 np0005603621 kernel: tap14e4de7d-20: entered promiscuous mode
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603621 NetworkManager[49013]: <info>  [1769849630.0424] manager: (tap14e4de7d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.043 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.047 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e4de7d-20, col_values=(('external_ids', {'iface-id': '84f05e7d-3c61-45d4-a1fe-635c2b300d01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603621 ovn_controller[149152]: 2026-01-31T08:53:50Z|00739|binding|INFO|Releasing lport 84f05e7d-3c61-45d4-a1fe-635c2b300d01 from this chassis (sb_readonly=0)
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.050 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e4de7d-2f09-4145-a904-553981394e44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e4de7d-2f09-4145-a904-553981394e44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.050 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fe1704-c5d0-4ea7-b6b6-473c1e8f7927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.051 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-14e4de7d-2f09-4145-a904-553981394e44
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/14e4de7d-2f09-4145-a904-553981394e44.pid.haproxy
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 14e4de7d-2f09-4145-a904-553981394e44
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.052 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:53:50.052 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'env', 'PROCESS_TAG=haproxy-14e4de7d-2f09-4145-a904-553981394e44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e4de7d-2f09-4145-a904-553981394e44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:53:50 np0005603621 podman[374721]: 2026-01-31 08:53:50.105541758 +0000 UTC m=+0.255232049 container remove 52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_engelbart, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:53:50 np0005603621 systemd[1]: libpod-conmon-52e8f002211e4c0deb199216d6a12db41f9f939ca3ef64c156a1b76664849543.scope: Deactivated successfully.
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.147 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849630.147089, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.148 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Started (Lifecycle Event)#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.267 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.273 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849630.1499546, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.273 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:53:50 np0005603621 podman[374837]: 2026-01-31 08:53:50.218474643 +0000 UTC m=+0.021102446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.322 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.325 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:53:50 np0005603621 podman[374837]: 2026-01-31 08:53:50.341797197 +0000 UTC m=+0.144424980 container create 81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.372 247403 DEBUG nova.network.neutron [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updated VIF entry in instance network info cache for port e18b94cd-d887-479a-ad93-c2c11ee4a451. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.373 247403 DEBUG nova.network.neutron [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:53:50 np0005603621 systemd[1]: Started libpod-conmon-81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7.scope.
Jan 31 03:53:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b195b0165b269aa1e66b8df58c0ce2c2d5c065251a6e398de7f48ab1b423e6ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b195b0165b269aa1e66b8df58c0ce2c2d5c065251a6e398de7f48ab1b423e6ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b195b0165b269aa1e66b8df58c0ce2c2d5c065251a6e398de7f48ab1b423e6ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b195b0165b269aa1e66b8df58c0ce2c2d5c065251a6e398de7f48ab1b423e6ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:50 np0005603621 podman[374837]: 2026-01-31 08:53:50.465692749 +0000 UTC m=+0.268320552 container init 81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:53:50 np0005603621 podman[374837]: 2026-01-31 08:53:50.47077673 +0000 UTC m=+0.273404513 container start 81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:53:50 np0005603621 podman[374837]: 2026-01-31 08:53:50.485861746 +0000 UTC m=+0.288489549 container attach 81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 03:53:50 np0005603621 podman[374874]: 2026-01-31 08:53:50.42200843 +0000 UTC m=+0.022927085 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:53:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:50.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:50 np0005603621 podman[374874]: 2026-01-31 08:53:50.556820187 +0000 UTC m=+0.157738822 container create edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.561 247403 DEBUG oslo_concurrency.lockutils [req-58b4f5c1-0826-4c30-b7dd-1cd8794eb2a2 req-c1eb4e74-13d5-47c1-95f1-f3eae4f09815 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:53:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:50.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.590 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:53:50 np0005603621 systemd[1]: Started libpod-conmon-edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07.scope.
Jan 31 03:53:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9078a78eb6f9b262c0d1688160a51f861c83ea92ac0d0a434a7360164eaeb47/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:50 np0005603621 podman[374874]: 2026-01-31 08:53:50.672178909 +0000 UTC m=+0.273097554 container init edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:53:50 np0005603621 podman[374874]: 2026-01-31 08:53:50.676891258 +0000 UTC m=+0.277809883 container start edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.680 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:50 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [NOTICE]   (374898) : New worker (374900) forked
Jan 31 03:53:50 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [NOTICE]   (374898) : Loading success.
Jan 31 03:53:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 2.6 MiB/s wr, 130 op/s
Jan 31 03:53:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:50 np0005603621 nova_compute[247399]: 2026-01-31 08:53:50.969 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.293 247403 DEBUG nova.compute.manager [req-25f4c4e3-6abe-4cd8-a87b-e7c325fb0ad4 req-3eddc69b-9059-441f-a5e9-a615677b35b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.294 247403 DEBUG oslo_concurrency.lockutils [req-25f4c4e3-6abe-4cd8-a87b-e7c325fb0ad4 req-3eddc69b-9059-441f-a5e9-a615677b35b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.295 247403 DEBUG oslo_concurrency.lockutils [req-25f4c4e3-6abe-4cd8-a87b-e7c325fb0ad4 req-3eddc69b-9059-441f-a5e9-a615677b35b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.295 247403 DEBUG oslo_concurrency.lockutils [req-25f4c4e3-6abe-4cd8-a87b-e7c325fb0ad4 req-3eddc69b-9059-441f-a5e9-a615677b35b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.295 247403 DEBUG nova.compute.manager [req-25f4c4e3-6abe-4cd8-a87b-e7c325fb0ad4 req-3eddc69b-9059-441f-a5e9-a615677b35b6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Processing event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.296 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.299 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849631.2996333, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.300 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.302 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.305 247403 INFO nova.virt.libvirt.driver [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance spawned successfully.#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.305 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.331 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.333 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.351 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.351 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.352 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.352 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.353 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.353 247403 DEBUG nova.virt.libvirt.driver [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.362 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]: [
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:    {
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "available": false,
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "ceph_device": false,
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "lsm_data": {},
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "lvs": [],
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "path": "/dev/sr0",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "rejected_reasons": [
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "Has a FileSystem",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "Insufficient space (<5GB)"
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        ],
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        "sys_api": {
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "actuators": null,
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "device_nodes": "sr0",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "devname": "sr0",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "human_readable_size": "482.00 KB",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "id_bus": "ata",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "model": "QEMU DVD-ROM",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "nr_requests": "2",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "parent": "/dev/sr0",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "partitions": {},
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "path": "/dev/sr0",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "removable": "1",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "rev": "2.5+",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "ro": "0",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "rotational": "1",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "sas_address": "",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "sas_device_handle": "",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "scheduler_mode": "mq-deadline",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "sectors": 0,
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "sectorsize": "2048",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "size": 493568.0,
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "support_discard": "2048",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "type": "disk",
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:            "vendor": "QEMU"
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:        }
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]:    }
Jan 31 03:53:51 np0005603621 admiring_mcnulty[374880]: ]
Jan 31 03:53:51 np0005603621 systemd[1]: libpod-81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7.scope: Deactivated successfully.
Jan 31 03:53:51 np0005603621 podman[374837]: 2026-01-31 08:53:51.485070575 +0000 UTC m=+1.287698358 container died 81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.488 247403 INFO nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Took 14.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.488 247403 DEBUG nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:53:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b195b0165b269aa1e66b8df58c0ce2c2d5c065251a6e398de7f48ab1b423e6ff-merged.mount: Deactivated successfully.
Jan 31 03:53:51 np0005603621 podman[374837]: 2026-01-31 08:53:51.647196554 +0000 UTC m=+1.449824337 container remove 81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:53:51 np0005603621 systemd[1]: libpod-conmon-81d0ff8e684a7bc2e9a9ff6f0a718d538df6f47f5ee44ec92c6e2ff5d12223e7.scope: Deactivated successfully.
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.658 247403 INFO nova.compute.manager [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Took 15.86 seconds to build instance.#033[00m
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:53:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:51 np0005603621 nova_compute[247399]: 2026-01-31 08:53:51.805 247403 DEBUG oslo_concurrency.lockutils [None req-d79b181b-9662-4de0-b7fa-c2415f4d9885 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a1fe9dbc-9900-4dd8-a9dc-a9796ee2964d does not exist
Jan 31 03:53:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 960f6ec3-03d7-4b5a-b075-2e6e2f46ce8a does not exist
Jan 31 03:53:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 06127d72-7a7a-464a-a20b-856528a86ec6 does not exist
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:53:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:52.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:53:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 938 KiB/s rd, 1.2 MiB/s wr, 96 op/s
Jan 31 03:53:52 np0005603621 podman[376159]: 2026-01-31 08:53:52.84809228 +0000 UTC m=+0.039210779 container create e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 03:53:52 np0005603621 systemd[1]: Started libpod-conmon-e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f.scope.
Jan 31 03:53:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:52 np0005603621 podman[376159]: 2026-01-31 08:53:52.831203767 +0000 UTC m=+0.022322276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:52 np0005603621 podman[376159]: 2026-01-31 08:53:52.931470443 +0000 UTC m=+0.122588962 container init e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:53:52 np0005603621 podman[376159]: 2026-01-31 08:53:52.939767075 +0000 UTC m=+0.130885574 container start e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:53:52 np0005603621 beautiful_babbage[376176]: 167 167
Jan 31 03:53:52 np0005603621 systemd[1]: libpod-e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f.scope: Deactivated successfully.
Jan 31 03:53:52 np0005603621 podman[376159]: 2026-01-31 08:53:52.948297634 +0000 UTC m=+0.139416153 container attach e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 03:53:52 np0005603621 podman[376159]: 2026-01-31 08:53:52.949922766 +0000 UTC m=+0.141041255 container died e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 03:53:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-662fffd676c1014ba4280978af912f4f84036b017f6763aff45a2fc81a06ef14-merged.mount: Deactivated successfully.
Jan 31 03:53:53 np0005603621 podman[376159]: 2026-01-31 08:53:53.003052363 +0000 UTC m=+0.194170862 container remove e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:53:53 np0005603621 systemd[1]: libpod-conmon-e9ea4f899530948952eeff0b5c52548f8bc186eb37736d562b2064f7d693723f.scope: Deactivated successfully.
Jan 31 03:53:53 np0005603621 podman[376203]: 2026-01-31 08:53:53.143287451 +0000 UTC m=+0.050971320 container create fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:53:53 np0005603621 systemd[1]: Started libpod-conmon-fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937.scope.
Jan 31 03:53:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97984e597e4af8947d05c06dd3e06cd3cfb59677916271fd842627b5caf15b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:53 np0005603621 podman[376203]: 2026-01-31 08:53:53.119661335 +0000 UTC m=+0.027345244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97984e597e4af8947d05c06dd3e06cd3cfb59677916271fd842627b5caf15b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97984e597e4af8947d05c06dd3e06cd3cfb59677916271fd842627b5caf15b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97984e597e4af8947d05c06dd3e06cd3cfb59677916271fd842627b5caf15b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e97984e597e4af8947d05c06dd3e06cd3cfb59677916271fd842627b5caf15b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:53 np0005603621 podman[376203]: 2026-01-31 08:53:53.226317172 +0000 UTC m=+0.134001091 container init fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 03:53:53 np0005603621 podman[376203]: 2026-01-31 08:53:53.235359738 +0000 UTC m=+0.143043607 container start fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:53:53 np0005603621 podman[376203]: 2026-01-31 08:53:53.244201858 +0000 UTC m=+0.151885727 container attach fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:53:53 np0005603621 nova_compute[247399]: 2026-01-31 08:53:53.609 247403 DEBUG nova.compute.manager [req-b8cf5e43-5672-4376-83c4-03e168027156 req-3a9eef2b-bda8-4d46-8e27-cf7e94cbe295 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:53:53 np0005603621 nova_compute[247399]: 2026-01-31 08:53:53.610 247403 DEBUG oslo_concurrency.lockutils [req-b8cf5e43-5672-4376-83c4-03e168027156 req-3a9eef2b-bda8-4d46-8e27-cf7e94cbe295 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:53:53 np0005603621 nova_compute[247399]: 2026-01-31 08:53:53.610 247403 DEBUG oslo_concurrency.lockutils [req-b8cf5e43-5672-4376-83c4-03e168027156 req-3a9eef2b-bda8-4d46-8e27-cf7e94cbe295 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:53:53 np0005603621 nova_compute[247399]: 2026-01-31 08:53:53.610 247403 DEBUG oslo_concurrency.lockutils [req-b8cf5e43-5672-4376-83c4-03e168027156 req-3a9eef2b-bda8-4d46-8e27-cf7e94cbe295 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:53:53 np0005603621 nova_compute[247399]: 2026-01-31 08:53:53.610 247403 DEBUG nova.compute.manager [req-b8cf5e43-5672-4376-83c4-03e168027156 req-3a9eef2b-bda8-4d46-8e27-cf7e94cbe295 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 03:53:53 np0005603621 nova_compute[247399]: 2026-01-31 08:53:53.610 247403 WARNING nova.compute.manager [req-b8cf5e43-5672-4376-83c4-03e168027156 req-3a9eef2b-bda8-4d46-8e27-cf7e94cbe295 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received unexpected event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with vm_state active and task_state None.
Jan 31 03:53:53 np0005603621 hardcore_heisenberg[376220]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:53:53 np0005603621 hardcore_heisenberg[376220]: --> relative data size: 1.0
Jan 31 03:53:53 np0005603621 hardcore_heisenberg[376220]: --> All data devices are unavailable
Jan 31 03:53:54 np0005603621 systemd[1]: libpod-fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937.scope: Deactivated successfully.
Jan 31 03:53:54 np0005603621 podman[376203]: 2026-01-31 08:53:54.004783632 +0000 UTC m=+0.912467501 container died fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:53:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e97984e597e4af8947d05c06dd3e06cd3cfb59677916271fd842627b5caf15b4-merged.mount: Deactivated successfully.
Jan 31 03:53:54 np0005603621 podman[376203]: 2026-01-31 08:53:54.045093255 +0000 UTC m=+0.952777124 container remove fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 03:53:54 np0005603621 systemd[1]: libpod-conmon-fd43ec19ac736e532abdc66d28983f36b28568554705664af23196b0e2283937.scope: Deactivated successfully.
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.517680416 +0000 UTC m=+0.034766499 container create 129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:53:54 np0005603621 systemd[1]: Started libpod-conmon-129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7.scope.
Jan 31 03:53:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:54.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.585355733 +0000 UTC m=+0.102441816 container init 129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.59063805 +0000 UTC m=+0.107724133 container start 129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.593995766 +0000 UTC m=+0.111081859 container attach 129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:53:54 np0005603621 laughing_curie[376401]: 167 167
Jan 31 03:53:54 np0005603621 systemd[1]: libpod-129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7.scope: Deactivated successfully.
Jan 31 03:53:54 np0005603621 conmon[376401]: conmon 129c2d19355f10dd4606 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7.scope/container/memory.events
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.596411902 +0000 UTC m=+0.113497985 container died 129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:53:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:54.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.502658451 +0000 UTC m=+0.019744564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-190abb4e68f85cf889481b7e66b91d15071dae20e2f85a7def4d6f9ce4d70667-merged.mount: Deactivated successfully.
Jan 31 03:53:54 np0005603621 podman[376385]: 2026-01-31 08:53:54.62899488 +0000 UTC m=+0.146080963 container remove 129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_curie, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:53:54 np0005603621 systemd[1]: libpod-conmon-129c2d19355f10dd4606fbbcfc29d7bf5e6b50fec8c5f908083fa783f5b68ef7.scope: Deactivated successfully.
Jan 31 03:53:54 np0005603621 podman[376425]: 2026-01-31 08:53:54.750207898 +0000 UTC m=+0.036330738 container create 648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:53:54 np0005603621 systemd[1]: Started libpod-conmon-648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4.scope.
Jan 31 03:53:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 29 KiB/s wr, 67 op/s
Jan 31 03:53:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad19011adbb3a8c6310abe82c8852bbdb69943d964a3bb32e6f5fc6430ad9ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad19011adbb3a8c6310abe82c8852bbdb69943d964a3bb32e6f5fc6430ad9ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad19011adbb3a8c6310abe82c8852bbdb69943d964a3bb32e6f5fc6430ad9ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:54 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ad19011adbb3a8c6310abe82c8852bbdb69943d964a3bb32e6f5fc6430ad9ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:54 np0005603621 podman[376425]: 2026-01-31 08:53:54.82628989 +0000 UTC m=+0.112412740 container init 648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swirles, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:53:54 np0005603621 podman[376425]: 2026-01-31 08:53:54.734078798 +0000 UTC m=+0.020201668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:54 np0005603621 podman[376425]: 2026-01-31 08:53:54.833522328 +0000 UTC m=+0.119645168 container start 648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:53:54 np0005603621 podman[376425]: 2026-01-31 08:53:54.84182767 +0000 UTC m=+0.127950530 container attach 648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]: {
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:    "0": [
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:        {
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "devices": [
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "/dev/loop3"
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            ],
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "lv_name": "ceph_lv0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "lv_size": "7511998464",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "name": "ceph_lv0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "tags": {
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.cluster_name": "ceph",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.crush_device_class": "",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.encrypted": "0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.osd_id": "0",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.type": "block",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:                "ceph.vdo": "0"
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            },
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "type": "block",
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:            "vg_name": "ceph_vg0"
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:        }
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]:    ]
Jan 31 03:53:55 np0005603621 stoic_swirles[376439]: }
Jan 31 03:53:55 np0005603621 systemd[1]: libpod-648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4.scope: Deactivated successfully.
Jan 31 03:53:55 np0005603621 podman[376425]: 2026-01-31 08:53:55.621829858 +0000 UTC m=+0.907952718 container died 648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swirles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 03:53:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9ad19011adbb3a8c6310abe82c8852bbdb69943d964a3bb32e6f5fc6430ad9ae-merged.mount: Deactivated successfully.
Jan 31 03:53:55 np0005603621 podman[376425]: 2026-01-31 08:53:55.669798752 +0000 UTC m=+0.955921592 container remove 648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_swirles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:53:55 np0005603621 nova_compute[247399]: 2026-01-31 08:53:55.681 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:55 np0005603621 systemd[1]: libpod-conmon-648730a35b1a1d2c5d8ee4e30cda1e83a0e84acdd152c3c39f94231c016b4aa4.scope: Deactivated successfully.
Jan 31 03:53:55 np0005603621 nova_compute[247399]: 2026-01-31 08:53:55.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.143560031 +0000 UTC m=+0.035129240 container create e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:53:56 np0005603621 systemd[1]: Started libpod-conmon-e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59.scope.
Jan 31 03:53:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.196193052 +0000 UTC m=+0.087762281 container init e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.200468288 +0000 UTC m=+0.092037497 container start e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 03:53:56 np0005603621 festive_chebyshev[376619]: 167 167
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.203216605 +0000 UTC m=+0.094785824 container attach e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 03:53:56 np0005603621 systemd[1]: libpod-e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59.scope: Deactivated successfully.
Jan 31 03:53:56 np0005603621 conmon[376619]: conmon e047a5f725f7c93b4f05 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59.scope/container/memory.events
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.204521185 +0000 UTC m=+0.096090394 container died e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.126884034 +0000 UTC m=+0.018453273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2a053cdb8a06ac9eae83ae1f0c77180207af825cfc2237eef4fbddb0c53bacca-merged.mount: Deactivated successfully.
Jan 31 03:53:56 np0005603621 podman[376601]: 2026-01-31 08:53:56.234350638 +0000 UTC m=+0.125919847 container remove e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:53:56 np0005603621 systemd[1]: libpod-conmon-e047a5f725f7c93b4f05b83db9f2cac65469a6913af4134752d777b86bdd2d59.scope: Deactivated successfully.
Jan 31 03:53:56 np0005603621 podman[376642]: 2026-01-31 08:53:56.349432821 +0000 UTC m=+0.032210738 container create fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:53:56 np0005603621 systemd[1]: Started libpod-conmon-fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a.scope.
Jan 31 03:53:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:53:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96ca348b8f375247997d9436530bcfd663b5c880c23dabccd775474bb090dae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96ca348b8f375247997d9436530bcfd663b5c880c23dabccd775474bb090dae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96ca348b8f375247997d9436530bcfd663b5c880c23dabccd775474bb090dae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e96ca348b8f375247997d9436530bcfd663b5c880c23dabccd775474bb090dae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:53:56 np0005603621 podman[376642]: 2026-01-31 08:53:56.408984171 +0000 UTC m=+0.091762118 container init fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:53:56 np0005603621 podman[376642]: 2026-01-31 08:53:56.414766964 +0000 UTC m=+0.097544881 container start fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:53:56 np0005603621 podman[376642]: 2026-01-31 08:53:56.418626806 +0000 UTC m=+0.101404723 container attach fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 31 03:53:56 np0005603621 podman[376642]: 2026-01-31 08:53:56.33610646 +0000 UTC m=+0.018884377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:53:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:56.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:56.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 28 KiB/s wr, 62 op/s
Jan 31 03:53:57 np0005603621 practical_cohen[376659]: {
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:        "osd_id": 0,
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:        "type": "bluestore"
Jan 31 03:53:57 np0005603621 practical_cohen[376659]:    }
Jan 31 03:53:57 np0005603621 practical_cohen[376659]: }
Jan 31 03:53:57 np0005603621 systemd[1]: libpod-fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a.scope: Deactivated successfully.
Jan 31 03:53:57 np0005603621 podman[376642]: 2026-01-31 08:53:57.246789864 +0000 UTC m=+0.929567771 container died fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:53:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:53:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e96ca348b8f375247997d9436530bcfd663b5c880c23dabccd775474bb090dae-merged.mount: Deactivated successfully.
Jan 31 03:53:57 np0005603621 podman[376642]: 2026-01-31 08:53:57.394036683 +0000 UTC m=+1.076814600 container remove fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:53:57 np0005603621 systemd[1]: libpod-conmon-fdf1fb2fee3a5b442d572aff1e9c18dd2c6abffa2314f1ac6c14617f2380221a.scope: Deactivated successfully.
Jan 31 03:53:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:53:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:53:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f7ec6f39-1470-41a2-b027-62bd359ff056 does not exist
Jan 31 03:53:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cca26cdf-47bd-4431-b99d-14a7cd24704e does not exist
Jan 31 03:53:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 89e2e93e-63df-4787-a863-5c15693715d3 does not exist
Jan 31 03:53:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:53:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:53:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:53:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:53:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:53:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:53:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:53:58.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:53:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 75 op/s
Jan 31 03:54:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:00 np0005603621 nova_compute[247399]: 2026-01-31 08:54:00.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 31 03:54:00 np0005603621 nova_compute[247399]: 2026-01-31 08:54:00.973 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 31 03:54:03 np0005603621 nova_compute[247399]: 2026-01-31 08:54:03.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:04 np0005603621 nova_compute[247399]: 2026-01-31 08:54:04.039 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:04Z|00740|binding|INFO|Releasing lport 84f05e7d-3c61-45d4-a1fe-635c2b300d01 from this chassis (sb_readonly=0)
Jan 31 03:54:04 np0005603621 NetworkManager[49013]: <info>  [1769849644.0400] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Jan 31 03:54:04 np0005603621 NetworkManager[49013]: <info>  [1769849644.0416] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 31 03:54:04 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:04Z|00741|binding|INFO|Releasing lport 84f05e7d-3c61-45d4-a1fe-635c2b300d01 from this chassis (sb_readonly=0)
Jan 31 03:54:04 np0005603621 nova_compute[247399]: 2026-01-31 08:54:04.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:04 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 31 03:54:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:04.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:04.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 252 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 658 KiB/s wr, 75 op/s
Jan 31 03:54:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:05Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:96:4e 10.100.0.14
Jan 31 03:54:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:05Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:96:4e 10.100.0.14
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.278 247403 DEBUG nova.compute.manager [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-changed-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.279 247403 DEBUG nova.compute.manager [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Refreshing instance network info cache due to event network-changed-e18b94cd-d887-479a-ad93-c2c11ee4a451. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.280 247403 DEBUG oslo_concurrency.lockutils [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.281 247403 DEBUG oslo_concurrency.lockutils [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.281 247403 DEBUG nova.network.neutron [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Refreshing network info cache for port e18b94cd-d887-479a-ad93-c2c11ee4a451 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.691 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:05 np0005603621 nova_compute[247399]: 2026-01-31 08:54:05.976 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:06.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 252 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 601 KiB/s rd, 657 KiB/s wr, 40 op/s
Jan 31 03:54:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:08 np0005603621 nova_compute[247399]: 2026-01-31 08:54:08.257 247403 DEBUG nova.network.neutron [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updated VIF entry in instance network info cache for port e18b94cd-d887-479a-ad93-c2c11ee4a451. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:54:08 np0005603621 nova_compute[247399]: 2026-01-31 08:54:08.258 247403 DEBUG nova.network.neutron [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:54:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:08.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:08.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:08 np0005603621 nova_compute[247399]: 2026-01-31 08:54:08.761 247403 DEBUG oslo_concurrency.lockutils [req-16ac500e-b016-478e-a37a-d7aefed23084 req-048015b0-2f95-4629-825d-406e81092118 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 708 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 31 03:54:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:10.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:10 np0005603621 nova_compute[247399]: 2026-01-31 08:54:10.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:10 np0005603621 nova_compute[247399]: 2026-01-31 08:54:10.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 03:54:10 np0005603621 nova_compute[247399]: 2026-01-31 08:54:10.979 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:12 np0005603621 nova_compute[247399]: 2026-01-31 08:54:12.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.316128) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849652316308, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1087, "num_deletes": 256, "total_data_size": 1681413, "memory_usage": 1710208, "flush_reason": "Manual Compaction"}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849652325826, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1641451, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68408, "largest_seqno": 69494, "table_properties": {"data_size": 1636203, "index_size": 2643, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11631, "raw_average_key_size": 19, "raw_value_size": 1625574, "raw_average_value_size": 2755, "num_data_blocks": 114, "num_entries": 590, "num_filter_entries": 590, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849569, "oldest_key_time": 1769849569, "file_creation_time": 1769849652, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 9604 microseconds, and 3265 cpu microseconds.
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.325860) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1641451 bytes OK
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.325875) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.327969) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.327982) EVENT_LOG_v1 {"time_micros": 1769849652327978, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.328001) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1676380, prev total WAL file size 1676380, number of live WAL files 2.
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.328685) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1602KB)], [155(12MB)]
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849652328714, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 14411158, "oldest_snapshot_seqno": -1}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9629 keys, 14255369 bytes, temperature: kUnknown
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849652485109, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 14255369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14190849, "index_size": 39319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24133, "raw_key_size": 253611, "raw_average_key_size": 26, "raw_value_size": 14020166, "raw_average_value_size": 1456, "num_data_blocks": 1510, "num_entries": 9629, "num_filter_entries": 9629, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849652, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.485369) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14255369 bytes
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.502164) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.1 rd, 91.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.2 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(17.5) write-amplify(8.7) OK, records in: 10158, records dropped: 529 output_compression: NoCompression
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.502215) EVENT_LOG_v1 {"time_micros": 1769849652502198, "job": 96, "event": "compaction_finished", "compaction_time_micros": 156472, "compaction_time_cpu_micros": 24013, "output_level": 6, "num_output_files": 1, "total_output_size": 14255369, "num_input_records": 10158, "num_output_records": 9629, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849652503352, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849652505057, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.328574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.505149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.505154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.505156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.505157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:12.505159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:12.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:12.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 03:54:12 np0005603621 nova_compute[247399]: 2026-01-31 08:54:12.853 247403 INFO nova.compute.manager [None req-f19c3bbe-8aeb-4091-942c-7fc838d48ec7 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Get console output#033[00m
Jan 31 03:54:12 np0005603621 nova_compute[247399]: 2026-01-31 08:54:12.858 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.692 247403 DEBUG oslo_concurrency.lockutils [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.692 247403 DEBUG oslo_concurrency.lockutils [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.693 247403 DEBUG nova.compute.manager [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.697 247403 DEBUG nova.compute.manager [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.697 247403 DEBUG nova.objects.instance [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'flavor' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:13 np0005603621 nova_compute[247399]: 2026-01-31 08:54:13.753 247403 DEBUG nova.virt.libvirt.driver [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 31 03:54:14 np0005603621 podman[376800]: 2026-01-31 08:54:14.492868607 +0000 UTC m=+0.050019480 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:54:14 np0005603621 podman[376801]: 2026-01-31 08:54:14.515424489 +0000 UTC m=+0.072211702 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:54:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:14.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:14.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Jan 31 03:54:15 np0005603621 nova_compute[247399]: 2026-01-31 08:54:15.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:15 np0005603621 nova_compute[247399]: 2026-01-31 08:54:15.694 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:15 np0005603621 nova_compute[247399]: 2026-01-31 08:54:15.981 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 kernel: tape18b94cd-d8 (unregistering): left promiscuous mode
Jan 31 03:54:16 np0005603621 NetworkManager[49013]: <info>  [1769849656.0033] device (tape18b94cd-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:54:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:16Z|00742|binding|INFO|Releasing lport e18b94cd-d887-479a-ad93-c2c11ee4a451 from this chassis (sb_readonly=0)
Jan 31 03:54:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:16Z|00743|binding|INFO|Setting lport e18b94cd-d887-479a-ad93-c2c11ee4a451 down in Southbound
Jan 31 03:54:16 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:16Z|00744|binding|INFO|Removing iface tape18b94cd-d8 ovn-installed in OVS
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.023 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000af.scope: Deactivated successfully.
Jan 31 03:54:16 np0005603621 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000af.scope: Consumed 13.145s CPU time.
Jan 31 03:54:16 np0005603621 systemd-machined[212769]: Machine qemu-88-instance-000000af terminated.
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.229 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.231 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.523 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:96:4e 10.100.0.14'], port_security=['fa:16:3e:14:96:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2b5bab2-3aab-4bdf-94a8-115368b4ee97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e4de7d-2f09-4145-a904-553981394e44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2c433d56-bbd7-483b-aba2-0ec9ae720225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6b712d9-577d-47ef-ab43-83cc481a0ef2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e18b94cd-d887-479a-ad93-c2c11ee4a451) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.524 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e18b94cd-d887-479a-ad93-c2c11ee4a451 in datapath 14e4de7d-2f09-4145-a904-553981394e44 unbound from our chassis#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.526 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e4de7d-2f09-4145-a904-553981394e44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.528 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[50939648-8a68-4fe2-85c7-02acb79d4f00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.528 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 namespace which is not needed anymore#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.563 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.564 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.564 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.564 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:16.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:16 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [NOTICE]   (374898) : haproxy version is 2.8.14-c23fe91
Jan 31 03:54:16 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [NOTICE]   (374898) : path to executable is /usr/sbin/haproxy
Jan 31 03:54:16 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [WARNING]  (374898) : Exiting Master process...
Jan 31 03:54:16 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [WARNING]  (374898) : Exiting Master process...
Jan 31 03:54:16 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [ALERT]    (374898) : Current worker (374900) exited with code 143 (Terminated)
Jan 31 03:54:16 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[374894]: [WARNING]  (374898) : All workers exited. Exiting... (0)
Jan 31 03:54:16 np0005603621 systemd[1]: libpod-edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07.scope: Deactivated successfully.
Jan 31 03:54:16 np0005603621 podman[376878]: 2026-01-31 08:54:16.647019471 +0000 UTC m=+0.044114704 container died edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:54:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07-userdata-shm.mount: Deactivated successfully.
Jan 31 03:54:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b9078a78eb6f9b262c0d1688160a51f861c83ea92ac0d0a434a7360164eaeb47-merged.mount: Deactivated successfully.
Jan 31 03:54:16 np0005603621 podman[376878]: 2026-01-31 08:54:16.6888163 +0000 UTC m=+0.085911533 container cleanup edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:54:16 np0005603621 systemd[1]: libpod-conmon-edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07.scope: Deactivated successfully.
Jan 31 03:54:16 np0005603621 podman[376909]: 2026-01-31 08:54:16.739644395 +0000 UTC m=+0.035868903 container remove edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.742 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b1692c-9152-4d07-b3d1-7f9f04849a45]: (4, ('Sat Jan 31 08:54:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 (edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07)\nedf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07\nSat Jan 31 08:54:16 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 (edf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07)\nedf417aa3e797831975c5db8fba7668ab7b568300b9cb4dcc34f7d3b6dd18b07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.744 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[639f6985-5375-41fd-8f77-86b8b250da54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.745 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e4de7d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.779 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 kernel: tap14e4de7d-20: left promiscuous mode
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.782 247403 INFO nova.virt.libvirt.driver [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance shutdown successfully after 3 seconds.#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.790 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5d6091-c730-4dad-8f91-dcd809ff5e30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.789 247403 INFO nova.virt.libvirt.driver [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance destroyed successfully.#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.789 247403 DEBUG nova.objects.instance [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'numa_topology' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.804 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[889c9e75-9619-4b54-9b8c-eab78a3d7786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.805 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[58dbd805-8fcf-4d4a-bb4a-3015ead6fb1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 1.5 MiB/s wr, 38 op/s
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.819 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea40972-4002-49d2-bb2d-fb091f1e4234]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 893538, 'reachable_time': 18885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376927, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 systemd[1]: run-netns-ovnmeta\x2d14e4de7d\x2d2f09\x2d4145\x2da904\x2d553981394e44.mount: Deactivated successfully.
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.822 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:54:16 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:16.823 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[83a74ba2-6d1f-41d1-b538-2572e45f3183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:16 np0005603621 nova_compute[247399]: 2026-01-31 08:54:16.963 247403 DEBUG nova.compute.manager [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:17 np0005603621 nova_compute[247399]: 2026-01-31 08:54:17.514 247403 DEBUG oslo_concurrency.lockutils [None req-066ec05f-d2c7-49dd-9848-106213b15850 f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:18.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 1.5 MiB/s wr, 39 op/s
Jan 31 03:54:19 np0005603621 nova_compute[247399]: 2026-01-31 08:54:19.028 247403 DEBUG nova.compute.manager [req-786162bc-6277-4e42-9ffd-c187996801ec req-a2549037-9a22-428a-ae97-7b2ad8c5d160 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-unplugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:19 np0005603621 nova_compute[247399]: 2026-01-31 08:54:19.028 247403 DEBUG oslo_concurrency.lockutils [req-786162bc-6277-4e42-9ffd-c187996801ec req-a2549037-9a22-428a-ae97-7b2ad8c5d160 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:19 np0005603621 nova_compute[247399]: 2026-01-31 08:54:19.028 247403 DEBUG oslo_concurrency.lockutils [req-786162bc-6277-4e42-9ffd-c187996801ec req-a2549037-9a22-428a-ae97-7b2ad8c5d160 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:19 np0005603621 nova_compute[247399]: 2026-01-31 08:54:19.029 247403 DEBUG oslo_concurrency.lockutils [req-786162bc-6277-4e42-9ffd-c187996801ec req-a2549037-9a22-428a-ae97-7b2ad8c5d160 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:19 np0005603621 nova_compute[247399]: 2026-01-31 08:54:19.029 247403 DEBUG nova.compute.manager [req-786162bc-6277-4e42-9ffd-c187996801ec req-a2549037-9a22-428a-ae97-7b2ad8c5d160 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-unplugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:19 np0005603621 nova_compute[247399]: 2026-01-31 08:54:19.029 247403 WARNING nova.compute.manager [req-786162bc-6277-4e42-9ffd-c187996801ec req-a2549037-9a22-428a-ae97-7b2ad8c5d160 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received unexpected event network-vif-unplugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:54:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:20.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:20 np0005603621 nova_compute[247399]: 2026-01-31 08:54:20.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 106 KiB/s wr, 13 op/s
Jan 31 03:54:20 np0005603621 nova_compute[247399]: 2026-01-31 08:54:20.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.205 247403 INFO nova.compute.manager [None req-f969e09b-b1a1-43e1-8e9c-3d123e5724ba f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Get console output#033[00m
Jan 31 03:54:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:21.364 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:21.365 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:54:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:21.366 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.443 247403 DEBUG nova.compute.manager [req-1cc9cd0b-f8de-4fc7-a717-3ea58628c08e req-a4ec125c-2384-48c2-b477-17ae7c5c0241 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.444 247403 DEBUG oslo_concurrency.lockutils [req-1cc9cd0b-f8de-4fc7-a717-3ea58628c08e req-a4ec125c-2384-48c2-b477-17ae7c5c0241 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.444 247403 DEBUG oslo_concurrency.lockutils [req-1cc9cd0b-f8de-4fc7-a717-3ea58628c08e req-a4ec125c-2384-48c2-b477-17ae7c5c0241 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.444 247403 DEBUG oslo_concurrency.lockutils [req-1cc9cd0b-f8de-4fc7-a717-3ea58628c08e req-a4ec125c-2384-48c2-b477-17ae7c5c0241 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.445 247403 DEBUG nova.compute.manager [req-1cc9cd0b-f8de-4fc7-a717-3ea58628c08e req-a4ec125c-2384-48c2-b477-17ae7c5c0241 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:21 np0005603621 nova_compute[247399]: 2026-01-31 08:54:21.445 247403 WARNING nova.compute.manager [req-1cc9cd0b-f8de-4fc7-a717-3ea58628c08e req-a4ec125c-2384-48c2-b477-17ae7c5c0241 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received unexpected event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with vm_state stopped and task_state None.#033[00m
Jan 31 03:54:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:22 np0005603621 nova_compute[247399]: 2026-01-31 08:54:22.335 247403 DEBUG nova.objects.instance [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'flavor' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:22 np0005603621 nova_compute[247399]: 2026-01-31 08:54:22.413 247403 DEBUG oslo_concurrency.lockutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:22.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.1 KiB/s rd, 35 KiB/s wr, 10 op/s
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.567 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.654 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.655 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.655 247403 DEBUG oslo_concurrency.lockutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquired lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.655 247403 DEBUG nova.network.neutron [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.655 247403 DEBUG nova.objects.instance [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'info_cache' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.656 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.657 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.657 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.847 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.848 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.848 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.848 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 03:54:23 np0005603621 nova_compute[247399]: 2026-01-31 08:54:23.849 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:54:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500496898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.274 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:54:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:24.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.778 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.778 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:54:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 347 KiB/s wr, 22 op/s
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.891 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.892 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4152MB free_disk=20.895076751708984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.893 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:54:24 np0005603621 nova_compute[247399]: 2026-01-31 08:54:24.893 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:54:25 np0005603621 nova_compute[247399]: 2026-01-31 08:54:25.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:54:25 np0005603621 nova_compute[247399]: 2026-01-31 08:54:25.883 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance d2b5bab2-3aab-4bdf-94a8-115368b4ee97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 31 03:54:25 np0005603621 nova_compute[247399]: 2026-01-31 08:54:25.884 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 03:54:25 np0005603621 nova_compute[247399]: 2026-01-31 08:54:25.884 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 03:54:25 np0005603621 nova_compute[247399]: 2026-01-31 08:54:25.984 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:54:26 np0005603621 nova_compute[247399]: 2026-01-31 08:54:26.075 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 31 03:54:26 np0005603621 nova_compute[247399]: 2026-01-31 08:54:26.118 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 31 03:54:26 np0005603621 nova_compute[247399]: 2026-01-31 08:54:26.118 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 31 03:54:26 np0005603621 nova_compute[247399]: 2026-01-31 08:54:26.153 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 31 03:54:26 np0005603621 nova_compute[247399]: 2026-01-31 08:54:26.431 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 31 03:54:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:26.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:26 np0005603621 nova_compute[247399]: 2026-01-31 08:54:26.588 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:54:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:26.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 318 KiB/s wr, 19 op/s
Jan 31 03:54:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330909780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:27 np0005603621 nova_compute[247399]: 2026-01-31 08:54:27.024 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:54:27 np0005603621 nova_compute[247399]: 2026-01-31 08:54:27.030 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:54:27 np0005603621 nova_compute[247399]: 2026-01-31 08:54:27.097 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:54:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:27 np0005603621 nova_compute[247399]: 2026-01-31 08:54:27.483 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 03:54:27 np0005603621 nova_compute[247399]: 2026-01-31 08:54:27.484 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:54:28 np0005603621 nova_compute[247399]: 2026-01-31 08:54:28.024 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:54:28 np0005603621 nova_compute[247399]: 2026-01-31 08:54:28.098 247403 DEBUG nova.network.neutron [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:54:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:28.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:28.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:28 np0005603621 nova_compute[247399]: 2026-01-31 08:54:28.818 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:54:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.033 247403 DEBUG oslo_concurrency.lockutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Releasing lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.515 247403 INFO nova.virt.libvirt.driver [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance destroyed successfully.
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.515 247403 DEBUG nova.objects.instance [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'numa_topology' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.642 247403 DEBUG nova.objects.instance [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.718 247403 DEBUG nova.virt.libvirt.vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-221845287',display_name='tempest-TestNetworkAdvancedServerOps-server-221845287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-221845287',id=175,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtau6exAZl+mwRMeJFkXhJB67n1cV0P1ThdOm4bYsqLjnyb4XcP9L1040/96tZWXYEBSGeRghmnPpuQ0fVpMkCuMLB4eVu5B9uCfJR4fo7dCLE6dAf4F6fC26WbSWjqXw==',key_name='tempest-TestNetworkAdvancedServerOps-1492106170',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:53:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-lmsd7way',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:54:17Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=d2b5bab2-3aab-4bdf-94a8-115368b4ee97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.718 247403 DEBUG nova.network.os_vif_util [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.719 247403 DEBUG nova.network.os_vif_util [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.719 247403 DEBUG os_vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.721 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape18b94cd-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.723 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.727 247403 INFO os_vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8')
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.733 247403 DEBUG nova.virt.libvirt.driver [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Start _get_guest_xml network_info=[{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.736 247403 WARNING nova.virt.libvirt.driver [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.743 247403 DEBUG nova.virt.libvirt.host [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.743 247403 DEBUG nova.virt.libvirt.host [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.746 247403 DEBUG nova.virt.libvirt.host [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.747 247403 DEBUG nova.virt.libvirt.host [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.748 247403 DEBUG nova.virt.libvirt.driver [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.748 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.749 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.749 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.749 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.749 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.749 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.750 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.750 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.750 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.750 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.751 247403 DEBUG nova.virt.hardware [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.751 247403 DEBUG nova.objects.instance [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'vcpu_model' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:29 np0005603621 nova_compute[247399]: 2026-01-31 08:54:29.771 247403 DEBUG oslo_concurrency.processutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:54:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1194703683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.197 247403 DEBUG oslo_concurrency.processutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.231 247403 DEBUG oslo_concurrency.processutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.535 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.535 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.535 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:54:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:30.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:54:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3606358954' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.656 247403 DEBUG oslo_concurrency.processutils [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.657 247403 DEBUG nova.virt.libvirt.vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-221845287',display_name='tempest-TestNetworkAdvancedServerOps-server-221845287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-221845287',id=175,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtau6exAZl+mwRMeJFkXhJB67n1cV0P1ThdOm4bYsqLjnyb4XcP9L1040/96tZWXYEBSGeRghmnPpuQ0fVpMkCuMLB4eVu5B9uCfJR4fo7dCLE6dAf4F6fC26WbSWjqXw==',key_name='tempest-TestNetworkAdvancedServerOps-1492106170',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:53:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-lmsd7way',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:54:17Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=d2b5bab2-3aab-4bdf-94a8-115368b4ee97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.658 247403 DEBUG nova.network.os_vif_util [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.659 247403 DEBUG nova.network.os_vif_util [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.660 247403 DEBUG nova.objects.instance [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'pci_devices' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.872 247403 DEBUG nova.virt.libvirt.driver [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <uuid>d2b5bab2-3aab-4bdf-94a8-115368b4ee97</uuid>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <name>instance-000000af</name>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-221845287</nova:name>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:54:29</nova:creationTime>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:user uuid="f1c6e7eff11b435a81429826a682b32f">tempest-TestNetworkAdvancedServerOps-840410497-project-member</nova:user>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:project uuid="0bfe11bd9d694684b527666e2c378eed">tempest-TestNetworkAdvancedServerOps-840410497</nova:project>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <nova:port uuid="e18b94cd-d887-479a-ad93-c2c11ee4a451">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <entry name="serial">d2b5bab2-3aab-4bdf-94a8-115368b4ee97</entry>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <entry name="uuid">d2b5bab2-3aab-4bdf-94a8-115368b4ee97</entry>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/d2b5bab2-3aab-4bdf-94a8-115368b4ee97_disk.config">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:14:96:4e"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <target dev="tape18b94cd-d8"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97/console.log" append="off"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:54:30 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:54:30 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:54:30 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:54:30 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.874 247403 DEBUG nova.virt.libvirt.driver [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.874 247403 DEBUG nova.virt.libvirt.driver [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.875 247403 DEBUG nova.virt.libvirt.vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-221845287',display_name='tempest-TestNetworkAdvancedServerOps-server-221845287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-221845287',id=175,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtau6exAZl+mwRMeJFkXhJB67n1cV0P1ThdOm4bYsqLjnyb4XcP9L1040/96tZWXYEBSGeRghmnPpuQ0fVpMkCuMLB4eVu5B9uCfJR4fo7dCLE6dAf4F6fC26WbSWjqXw==',key_name='tempest-TestNetworkAdvancedServerOps-1492106170',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:53:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-lmsd7way',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:54:17Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=d2b5bab2-3aab-4bdf-94a8-115368b4ee97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.876 247403 DEBUG nova.network.os_vif_util [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.876 247403 DEBUG nova.network.os_vif_util [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.877 247403 DEBUG os_vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.878 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.878 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.881 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.881 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape18b94cd-d8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.881 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape18b94cd-d8, col_values=(('external_ids', {'iface-id': 'e18b94cd-d887-479a-ad93-c2c11ee4a451', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:96:4e', 'vm-uuid': 'd2b5bab2-3aab-4bdf-94a8-115368b4ee97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 NetworkManager[49013]: <info>  [1769849670.8840] manager: (tape18b94cd-d8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.886 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.888 247403 INFO os_vif [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8')#033[00m
Jan 31 03:54:30 np0005603621 kernel: tape18b94cd-d8: entered promiscuous mode
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:30Z|00745|binding|INFO|Claiming lport e18b94cd-d887-479a-ad93-c2c11ee4a451 for this chassis.
Jan 31 03:54:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:30Z|00746|binding|INFO|e18b94cd-d887-479a-ad93-c2c11ee4a451: Claiming fa:16:3e:14:96:4e 10.100.0.14
Jan 31 03:54:30 np0005603621 NetworkManager[49013]: <info>  [1769849670.9378] manager: (tape18b94cd-d8): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:30Z|00747|binding|INFO|Setting lport e18b94cd-d887-479a-ad93-c2c11ee4a451 ovn-installed in OVS
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.945 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 systemd-udevd[377103]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:54:30 np0005603621 NetworkManager[49013]: <info>  [1769849670.9711] device (tape18b94cd-d8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:54:30 np0005603621 NetworkManager[49013]: <info>  [1769849670.9723] device (tape18b94cd-d8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:54:30 np0005603621 systemd-machined[212769]: New machine qemu-89-instance-000000af.
Jan 31 03:54:30 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:30Z|00748|binding|INFO|Setting lport e18b94cd-d887-479a-ad93-c2c11ee4a451 up in Southbound
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.984 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:96:4e 10.100.0.14'], port_security=['fa:16:3e:14:96:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2b5bab2-3aab-4bdf-94a8-115368b4ee97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e4de7d-2f09-4145-a904-553981394e44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2c433d56-bbd7-483b-aba2-0ec9ae720225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6b712d9-577d-47ef-ab43-83cc481a0ef2, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e18b94cd-d887-479a-ad93-c2c11ee4a451) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.986 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e18b94cd-d887-479a-ad93-c2c11ee4a451 in datapath 14e4de7d-2f09-4145-a904-553981394e44 bound to our chassis#033[00m
Jan 31 03:54:30 np0005603621 nova_compute[247399]: 2026-01-31 08:54:30.986 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.988 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14e4de7d-2f09-4145-a904-553981394e44#033[00m
Jan 31 03:54:30 np0005603621 systemd[1]: Started Virtual Machine qemu-89-instance-000000af.
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.994 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1c723b1d-d9da-486d-8b27-7f7b48f89ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.995 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14e4de7d-21 in ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.996 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14e4de7d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.996 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f54bbf54-a741-46c7-8430-e2080c09bd2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:30.997 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e002d748-aae0-49f9-a203-7f7ff00aef3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.004 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b402c646-cd99-48b3-9c0a-c2083a42021d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.012 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[45a671b0-522a-47a4-9a4a-a8934fba185a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.029 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2b47a078-62cc-431b-8928-f36e630e5a67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.033 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9a1fb1-a136-49bd-9c40-ebada45cca46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 NetworkManager[49013]: <info>  [1769849671.0342] manager: (tap14e4de7d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/335)
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.058 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[da042d33-6c4f-45d6-9ee7-84adc5301875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.060 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[36aeeade-a990-486b-88b9-20b6453d023a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 NetworkManager[49013]: <info>  [1769849671.0779] device (tap14e4de7d-20): carrier: link connected
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.081 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed68d85-dd33-4c0f-b5c5-f9c714e36e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.094 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9698e6a2-cef8-4c65-b213-b41a73293c01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e4de7d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:d8:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897659, 'reachable_time': 44760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377139, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.104 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7e782b35-96b5-4174-b052-b551c7255bdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:d8b6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 897659, 'tstamp': 897659}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377140, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.116 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0640d4a2-7848-4350-ad4f-798d1315efa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14e4de7d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:d8:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897659, 'reachable_time': 44760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377141, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.138 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8f9342-3e3f-41ab-b020-1b140de32c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.187 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b7b1bb-fa6b-4e57-8466-cad40da08431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.189 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e4de7d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.189 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.190 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14e4de7d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.192 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603621 NetworkManager[49013]: <info>  [1769849671.1930] manager: (tap14e4de7d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 31 03:54:31 np0005603621 kernel: tap14e4de7d-20: entered promiscuous mode
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.195 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14e4de7d-20, col_values=(('external_ids', {'iface-id': '84f05e7d-3c61-45d4-a1fe-635c2b300d01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:31 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:31Z|00749|binding|INFO|Releasing lport 84f05e7d-3c61-45d4-a1fe-635c2b300d01 from this chassis (sb_readonly=0)
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.196 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.198 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14e4de7d-2f09-4145-a904-553981394e44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14e4de7d-2f09-4145-a904-553981394e44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.199 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[494b331b-b792-4b9d-b00e-f3e57b47e304]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.200 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-14e4de7d-2f09-4145-a904-553981394e44
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/14e4de7d-2f09-4145-a904-553981394e44.pid.haproxy
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 14e4de7d-2f09-4145-a904-553981394e44
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:54:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:31.201 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'env', 'PROCESS_TAG=haproxy-14e4de7d-2f09-4145-a904-553981394e44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14e4de7d-2f09-4145-a904-553981394e44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.250 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849656.2487175, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.251 247403 INFO nova.compute.manager [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.501 247403 DEBUG nova.virt.libvirt.host [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Removed pending event for d2b5bab2-3aab-4bdf-94a8-115368b4ee97 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.502 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849671.501153, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.503 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.505 247403 DEBUG nova.compute.manager [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.510 247403 DEBUG nova.compute.manager [None req-4ee45cf6-e4ef-4d94-8901-e44b6586981b - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.511 247403 INFO nova.virt.libvirt.driver [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance rebooted successfully.#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.512 247403 DEBUG nova.compute.manager [None req-a25f47e8-f014-4042-bbf9-fca9a43b811a f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:31 np0005603621 podman[377214]: 2026-01-31 08:54:31.525309365 +0000 UTC m=+0.047035165 container create 4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:54:31 np0005603621 systemd[1]: Started libpod-conmon-4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973.scope.
Jan 31 03:54:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:54:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3b5c469ff7e3c43b55f13488910509cb76ecad400d9a2783fc7885db4b6432/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:31 np0005603621 podman[377214]: 2026-01-31 08:54:31.500406149 +0000 UTC m=+0.022131969 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:54:31 np0005603621 podman[377214]: 2026-01-31 08:54:31.602558874 +0000 UTC m=+0.124284674 container init 4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 31 03:54:31 np0005603621 podman[377214]: 2026-01-31 08:54:31.607887892 +0000 UTC m=+0.129613692 container start 4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:54:31 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [NOTICE]   (377234) : New worker (377236) forked
Jan 31 03:54:31 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [NOTICE]   (377234) : Loading success.
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.708 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.712 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.847 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849671.502013, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:54:31 np0005603621 nova_compute[247399]: 2026-01-31 08:54:31.847 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Started (Lifecycle Event)#033[00m
Jan 31 03:54:32 np0005603621 nova_compute[247399]: 2026-01-31 08:54:32.005 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:54:32 np0005603621 nova_compute[247399]: 2026-01-31 08:54:32.008 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:54:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:32.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 794 KiB/s rd, 3.3 MiB/s wr, 79 op/s
Jan 31 03:54:34 np0005603621 nova_compute[247399]: 2026-01-31 08:54:34.151 247403 DEBUG nova.compute.manager [req-8ceee05f-8cbb-432a-9bc7-3dc8f59cf359 req-7d93e967-3cc1-4cf0-993e-880b877557a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:34 np0005603621 nova_compute[247399]: 2026-01-31 08:54:34.152 247403 DEBUG oslo_concurrency.lockutils [req-8ceee05f-8cbb-432a-9bc7-3dc8f59cf359 req-7d93e967-3cc1-4cf0-993e-880b877557a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:34 np0005603621 nova_compute[247399]: 2026-01-31 08:54:34.152 247403 DEBUG oslo_concurrency.lockutils [req-8ceee05f-8cbb-432a-9bc7-3dc8f59cf359 req-7d93e967-3cc1-4cf0-993e-880b877557a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:34 np0005603621 nova_compute[247399]: 2026-01-31 08:54:34.153 247403 DEBUG oslo_concurrency.lockutils [req-8ceee05f-8cbb-432a-9bc7-3dc8f59cf359 req-7d93e967-3cc1-4cf0-993e-880b877557a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:34 np0005603621 nova_compute[247399]: 2026-01-31 08:54:34.153 247403 DEBUG nova.compute.manager [req-8ceee05f-8cbb-432a-9bc7-3dc8f59cf359 req-7d93e967-3cc1-4cf0-993e-880b877557a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:34 np0005603621 nova_compute[247399]: 2026-01-31 08:54:34.153 247403 WARNING nova.compute.manager [req-8ceee05f-8cbb-432a-9bc7-3dc8f59cf359 req-7d93e967-3cc1-4cf0-993e-880b877557a2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received unexpected event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:54:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:34.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.5 MiB/s wr, 106 op/s
Jan 31 03:54:35 np0005603621 nova_compute[247399]: 2026-01-31 08:54:35.884 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:35 np0005603621 nova_compute[247399]: 2026-01-31 08:54:35.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:36.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:36.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.2 MiB/s wr, 93 op/s
Jan 31 03:54:37 np0005603621 nova_compute[247399]: 2026-01-31 08:54:37.301 247403 DEBUG nova.compute.manager [req-f1a36b8e-73b2-497c-bad3-491be60ba8f9 req-eeeefdec-7a03-4e77-b0de-e7cdb67a596e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:37 np0005603621 nova_compute[247399]: 2026-01-31 08:54:37.301 247403 DEBUG oslo_concurrency.lockutils [req-f1a36b8e-73b2-497c-bad3-491be60ba8f9 req-eeeefdec-7a03-4e77-b0de-e7cdb67a596e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:37 np0005603621 nova_compute[247399]: 2026-01-31 08:54:37.302 247403 DEBUG oslo_concurrency.lockutils [req-f1a36b8e-73b2-497c-bad3-491be60ba8f9 req-eeeefdec-7a03-4e77-b0de-e7cdb67a596e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:37 np0005603621 nova_compute[247399]: 2026-01-31 08:54:37.302 247403 DEBUG oslo_concurrency.lockutils [req-f1a36b8e-73b2-497c-bad3-491be60ba8f9 req-eeeefdec-7a03-4e77-b0de-e7cdb67a596e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:37 np0005603621 nova_compute[247399]: 2026-01-31 08:54:37.302 247403 DEBUG nova.compute.manager [req-f1a36b8e-73b2-497c-bad3-491be60ba8f9 req-eeeefdec-7a03-4e77-b0de-e7cdb67a596e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:37 np0005603621 nova_compute[247399]: 2026-01-31 08:54:37.302 247403 WARNING nova.compute.manager [req-f1a36b8e-73b2-497c-bad3-491be60ba8f9 req-eeeefdec-7a03-4e77-b0de-e7cdb67a596e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received unexpected event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:54:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:54:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:38.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:38.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:54:38
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta']
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:54:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 106 op/s
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:54:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:54:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:40.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 31 03:54:40 np0005603621 nova_compute[247399]: 2026-01-31 08:54:40.888 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:40 np0005603621 nova_compute[247399]: 2026-01-31 08:54:40.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:42.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:42.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 31 03:54:43 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:43Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:96:4e 10.100.0.14
Jan 31 03:54:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:44.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:44.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 276 KiB/s wr, 146 op/s
Jan 31 03:54:44 np0005603621 nova_compute[247399]: 2026-01-31 08:54:44.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:45 np0005603621 podman[377304]: 2026-01-31 08:54:45.510174259 +0000 UTC m=+0.069850656 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:54:45 np0005603621 podman[377303]: 2026-01-31 08:54:45.51082493 +0000 UTC m=+0.070501037 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:54:45 np0005603621 nova_compute[247399]: 2026-01-31 08:54:45.890 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:46 np0005603621 nova_compute[247399]: 2026-01-31 08:54:46.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:46.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 35 KiB/s wr, 112 op/s
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.341453) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849687341495, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 579, "num_deletes": 252, "total_data_size": 620571, "memory_usage": 630560, "flush_reason": "Manual Compaction"}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849687347562, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 427060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69495, "largest_seqno": 70073, "table_properties": {"data_size": 424340, "index_size": 691, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7578, "raw_average_key_size": 20, "raw_value_size": 418653, "raw_average_value_size": 1128, "num_data_blocks": 31, "num_entries": 371, "num_filter_entries": 371, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849652, "oldest_key_time": 1769849652, "file_creation_time": 1769849687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 6144 microseconds, and 1577 cpu microseconds.
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.347597) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 427060 bytes OK
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.347615) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.351045) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.351128) EVENT_LOG_v1 {"time_micros": 1769849687351112, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.351171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 617418, prev total WAL file size 617418, number of live WAL files 2.
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.352132) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353034' seq:72057594037927935, type:22 .. '6D6772737461740032373537' seq:0, type:0; will stop at (end)
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(417KB)], [158(13MB)]
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849687352219, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 14682429, "oldest_snapshot_seqno": -1}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9499 keys, 10991636 bytes, temperature: kUnknown
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849687489707, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 10991636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10932500, "index_size": 34255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 251105, "raw_average_key_size": 26, "raw_value_size": 10768590, "raw_average_value_size": 1133, "num_data_blocks": 1299, "num_entries": 9499, "num_filter_entries": 9499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.489955) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10991636 bytes
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.493887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.7 rd, 79.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(60.1) write-amplify(25.7) OK, records in: 10000, records dropped: 501 output_compression: NoCompression
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.493907) EVENT_LOG_v1 {"time_micros": 1769849687493898, "job": 98, "event": "compaction_finished", "compaction_time_micros": 137562, "compaction_time_cpu_micros": 21496, "output_level": 6, "num_output_files": 1, "total_output_size": 10991636, "num_input_records": 10000, "num_output_records": 9499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849687494073, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849687495314, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.351946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.495617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.495628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.495629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.495631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:47 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:54:47.495633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:54:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:48.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:48.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:48 np0005603621 nova_compute[247399]: 2026-01-31 08:54:48.775 247403 INFO nova.compute.manager [None req-87cbdd7b-c51a-4a40-ac26-725aa67e1a2e f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Get console output#033[00m
Jan 31 03:54:48 np0005603621 nova_compute[247399]: 2026-01-31 08:54:48.780 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:54:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 48 KiB/s wr, 148 op/s
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006336841906290713 of space, bias 1.0, pg target 1.9010525718872138 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6469151312116136 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:54:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:54:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:50.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:50.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:50 np0005603621 nova_compute[247399]: 2026-01-31 08:54:50.815 247403 DEBUG nova.compute.manager [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-changed-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:50 np0005603621 nova_compute[247399]: 2026-01-31 08:54:50.815 247403 DEBUG nova.compute.manager [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Refreshing instance network info cache due to event network-changed-e18b94cd-d887-479a-ad93-c2c11ee4a451. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:54:50 np0005603621 nova_compute[247399]: 2026-01-31 08:54:50.815 247403 DEBUG oslo_concurrency.lockutils [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:54:50 np0005603621 nova_compute[247399]: 2026-01-31 08:54:50.816 247403 DEBUG oslo_concurrency.lockutils [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:54:50 np0005603621 nova_compute[247399]: 2026-01-31 08:54:50.816 247403 DEBUG nova.network.neutron [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Refreshing network info cache for port e18b94cd-d887-479a-ad93-c2c11ee4a451 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:54:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 48 KiB/s wr, 147 op/s
Jan 31 03:54:50 np0005603621 nova_compute[247399]: 2026-01-31 08:54:50.894 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.062 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.063 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.063 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.063 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.063 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.065 247403 INFO nova.compute.manager [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Terminating instance#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.065 247403 DEBUG nova.compute.manager [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:54:51 np0005603621 kernel: tape18b94cd-d8 (unregistering): left promiscuous mode
Jan 31 03:54:51 np0005603621 NetworkManager[49013]: <info>  [1769849691.3209] device (tape18b94cd-d8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.324 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:51Z|00750|binding|INFO|Releasing lport e18b94cd-d887-479a-ad93-c2c11ee4a451 from this chassis (sb_readonly=0)
Jan 31 03:54:51 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:51Z|00751|binding|INFO|Setting lport e18b94cd-d887-479a-ad93-c2c11ee4a451 down in Southbound
Jan 31 03:54:51 np0005603621 ovn_controller[149152]: 2026-01-31T08:54:51Z|00752|binding|INFO|Removing iface tape18b94cd-d8 ovn-installed in OVS
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.336 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000af.scope: Deactivated successfully.
Jan 31 03:54:51 np0005603621 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000af.scope: Consumed 12.517s CPU time.
Jan 31 03:54:51 np0005603621 systemd-machined[212769]: Machine qemu-89-instance-000000af terminated.
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.371 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:96:4e 10.100.0.14'], port_security=['fa:16:3e:14:96:4e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd2b5bab2-3aab-4bdf-94a8-115368b4ee97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14e4de7d-2f09-4145-a904-553981394e44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bfe11bd9d694684b527666e2c378eed', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2c433d56-bbd7-483b-aba2-0ec9ae720225', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6b712d9-577d-47ef-ab43-83cc481a0ef2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=e18b94cd-d887-479a-ad93-c2c11ee4a451) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.372 159734 INFO neutron.agent.ovn.metadata.agent [-] Port e18b94cd-d887-479a-ad93-c2c11ee4a451 in datapath 14e4de7d-2f09-4145-a904-553981394e44 unbound from our chassis#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.375 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14e4de7d-2f09-4145-a904-553981394e44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.375 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef3c04e-c2f9-4f87-a8b7-2297c2ec390f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.376 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 namespace which is not needed anymore#033[00m
Jan 31 03:54:51 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [NOTICE]   (377234) : haproxy version is 2.8.14-c23fe91
Jan 31 03:54:51 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [NOTICE]   (377234) : path to executable is /usr/sbin/haproxy
Jan 31 03:54:51 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [WARNING]  (377234) : Exiting Master process...
Jan 31 03:54:51 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [ALERT]    (377234) : Current worker (377236) exited with code 143 (Terminated)
Jan 31 03:54:51 np0005603621 neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44[377230]: [WARNING]  (377234) : All workers exited. Exiting... (0)
Jan 31 03:54:51 np0005603621 systemd[1]: libpod-4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973.scope: Deactivated successfully.
Jan 31 03:54:51 np0005603621 podman[377377]: 2026-01-31 08:54:51.476410276 +0000 UTC m=+0.037554286 container died 4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 03:54:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973-userdata-shm.mount: Deactivated successfully.
Jan 31 03:54:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0d3b5c469ff7e3c43b55f13488910509cb76ecad400d9a2783fc7885db4b6432-merged.mount: Deactivated successfully.
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.507 247403 INFO nova.virt.libvirt.driver [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Instance destroyed successfully.#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.508 247403 DEBUG nova.objects.instance [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lazy-loading 'resources' on Instance uuid d2b5bab2-3aab-4bdf-94a8-115368b4ee97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:54:51 np0005603621 podman[377377]: 2026-01-31 08:54:51.513523498 +0000 UTC m=+0.074667508 container cleanup 4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 03:54:51 np0005603621 systemd[1]: libpod-conmon-4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973.scope: Deactivated successfully.
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.555 247403 DEBUG nova.virt.libvirt.vif [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-221845287',display_name='tempest-TestNetworkAdvancedServerOps-server-221845287',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-221845287',id=175,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtau6exAZl+mwRMeJFkXhJB67n1cV0P1ThdOm4bYsqLjnyb4XcP9L1040/96tZWXYEBSGeRghmnPpuQ0fVpMkCuMLB4eVu5B9uCfJR4fo7dCLE6dAf4F6fC26WbSWjqXw==',key_name='tempest-TestNetworkAdvancedServerOps-1492106170',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:53:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bfe11bd9d694684b527666e2c378eed',ramdisk_id='',reservation_id='r-lmsd7way',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-840410497',owner_user_name='tempest-TestNetworkAdvancedServerOps-840410497-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:54:31Z,user_data=None,user_id='f1c6e7eff11b435a81429826a682b32f',uuid=d2b5bab2-3aab-4bdf-94a8-115368b4ee97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.556 247403 DEBUG nova.network.os_vif_util [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converting VIF {"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.556 247403 DEBUG nova.network.os_vif_util [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.558 247403 DEBUG os_vif [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.559 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.560 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape18b94cd-d8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.562 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.564 247403 INFO os_vif [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:96:4e,bridge_name='br-int',has_traffic_filtering=True,id=e18b94cd-d887-479a-ad93-c2c11ee4a451,network=Network(14e4de7d-2f09-4145-a904-553981394e44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape18b94cd-d8')#033[00m
Jan 31 03:54:51 np0005603621 podman[377416]: 2026-01-31 08:54:51.565302313 +0000 UTC m=+0.037911548 container remove 4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.569 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a406c491-1cca-4648-8097-1a7540fc1c7f]: (4, ('Sat Jan 31 08:54:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 (4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973)\n4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973\nSat Jan 31 08:54:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 (4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973)\n4e7d0d8425455898275eb7bca01ff5a75d08fef7ac1eb1088225234fe869f973\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.571 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6ae06c52-7ed0-490c-b1f7-2b7f68fccbfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.572 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14e4de7d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:54:51 np0005603621 kernel: tap14e4de7d-20: left promiscuous mode
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.578 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[48cf52fd-42c8-4f07-95f1-6d0328818a1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.588 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 nova_compute[247399]: 2026-01-31 08:54:51.618 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.624 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[48263ea9-a53f-48d3-a6b7-56a8e66f8f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.625 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fb97226b-fad7-4db1-a126-cb2c7e998b85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.634 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bc316f75-ca11-4ea6-bd9f-074e3525325b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 897654, 'reachable_time': 32735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377446, 'error': None, 'target': 'ovnmeta-14e4de7d-2f09-4145-a904-553981394e44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.636 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14e4de7d-2f09-4145-a904-553981394e44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:54:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:54:51.636 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6809bb20-a16a-4fd4-aad5-01f8edbdbcc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:54:51 np0005603621 systemd[1]: run-netns-ovnmeta\x2d14e4de7d\x2d2f09\x2d4145\x2da904\x2d553981394e44.mount: Deactivated successfully.
Jan 31 03:54:52 np0005603621 nova_compute[247399]: 2026-01-31 08:54:52.035 247403 DEBUG nova.compute.manager [req-40b01b90-26d3-4e47-b520-117f062574b1 req-c772f289-53ac-4805-b4b6-9343dd4e2284 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-unplugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:52 np0005603621 nova_compute[247399]: 2026-01-31 08:54:52.035 247403 DEBUG oslo_concurrency.lockutils [req-40b01b90-26d3-4e47-b520-117f062574b1 req-c772f289-53ac-4805-b4b6-9343dd4e2284 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:52 np0005603621 nova_compute[247399]: 2026-01-31 08:54:52.036 247403 DEBUG oslo_concurrency.lockutils [req-40b01b90-26d3-4e47-b520-117f062574b1 req-c772f289-53ac-4805-b4b6-9343dd4e2284 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:52 np0005603621 nova_compute[247399]: 2026-01-31 08:54:52.036 247403 DEBUG oslo_concurrency.lockutils [req-40b01b90-26d3-4e47-b520-117f062574b1 req-c772f289-53ac-4805-b4b6-9343dd4e2284 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:52 np0005603621 nova_compute[247399]: 2026-01-31 08:54:52.036 247403 DEBUG nova.compute.manager [req-40b01b90-26d3-4e47-b520-117f062574b1 req-c772f289-53ac-4805-b4b6-9343dd4e2284 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-unplugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:52 np0005603621 nova_compute[247399]: 2026-01-31 08:54:52.036 247403 DEBUG nova.compute.manager [req-40b01b90-26d3-4e47-b520-117f062574b1 req-c772f289-53ac-4805-b4b6-9343dd4e2284 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-unplugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:54:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:52.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:52.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 380 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 456 KiB/s wr, 215 op/s
Jan 31 03:54:53 np0005603621 nova_compute[247399]: 2026-01-31 08:54:53.585 247403 DEBUG nova.network.neutron [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updated VIF entry in instance network info cache for port e18b94cd-d887-479a-ad93-c2c11ee4a451. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:54:53 np0005603621 nova_compute[247399]: 2026-01-31 08:54:53.586 247403 DEBUG nova.network.neutron [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [{"id": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "address": "fa:16:3e:14:96:4e", "network": {"id": "14e4de7d-2f09-4145-a904-553981394e44", "bridge": "br-int", "label": "tempest-network-smoke--1587573877", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bfe11bd9d694684b527666e2c378eed", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape18b94cd-d8", "ovs_interfaceid": "e18b94cd-d887-479a-ad93-c2c11ee4a451", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:53 np0005603621 nova_compute[247399]: 2026-01-31 08:54:53.710 247403 DEBUG oslo_concurrency.lockutils [req-8b3f3d2f-dbcd-48ff-803b-1449ccd98f4a req-19c19b94-711f-4fb5-9183-8c5e9053151b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-d2b5bab2-3aab-4bdf-94a8-115368b4ee97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:54:53 np0005603621 nova_compute[247399]: 2026-01-31 08:54:53.961 247403 INFO nova.virt.libvirt.driver [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Deleting instance files /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97_del#033[00m
Jan 31 03:54:53 np0005603621 nova_compute[247399]: 2026-01-31 08:54:53.962 247403 INFO nova.virt.libvirt.driver [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Deletion of /var/lib/nova/instances/d2b5bab2-3aab-4bdf-94a8-115368b4ee97_del complete#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.137 247403 INFO nova.compute.manager [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Took 3.07 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.138 247403 DEBUG oslo.service.loopingcall [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.138 247403 DEBUG nova.compute.manager [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.139 247403 DEBUG nova.network.neutron [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.209 247403 DEBUG nova.compute.manager [req-abc6dc59-fadc-401b-9b0a-5ef7c12873b5 req-04101533-27e2-422c-9b4c-7fe51e4bcf96 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.209 247403 DEBUG oslo_concurrency.lockutils [req-abc6dc59-fadc-401b-9b0a-5ef7c12873b5 req-04101533-27e2-422c-9b4c-7fe51e4bcf96 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.210 247403 DEBUG oslo_concurrency.lockutils [req-abc6dc59-fadc-401b-9b0a-5ef7c12873b5 req-04101533-27e2-422c-9b4c-7fe51e4bcf96 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.210 247403 DEBUG oslo_concurrency.lockutils [req-abc6dc59-fadc-401b-9b0a-5ef7c12873b5 req-04101533-27e2-422c-9b4c-7fe51e4bcf96 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.210 247403 DEBUG nova.compute.manager [req-abc6dc59-fadc-401b-9b0a-5ef7c12873b5 req-04101533-27e2-422c-9b4c-7fe51e4bcf96 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] No waiting events found dispatching network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:54:54 np0005603621 nova_compute[247399]: 2026-01-31 08:54:54.210 247403 WARNING nova.compute.manager [req-abc6dc59-fadc-401b-9b0a-5ef7c12873b5 req-04101533-27e2-422c-9b4c-7fe51e4bcf96 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received unexpected event network-vif-plugged-e18b94cd-d887-479a-ad93-c2c11ee4a451 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:54:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:54.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:54:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:54.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:54:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 364 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.7 MiB/s wr, 202 op/s
Jan 31 03:54:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Jan 31 03:54:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Jan 31 03:54:55 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Jan 31 03:54:55 np0005603621 nova_compute[247399]: 2026-01-31 08:54:55.998 247403 DEBUG nova.network.neutron [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.473 247403 INFO nova.compute.manager [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Took 2.33 seconds to deallocate network for instance.#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.555 247403 DEBUG nova.compute.manager [req-70147aa7-d3eb-4cd3-a7c7-cae0b68502ac req-8028dd72-4820-4871-a109-bb3539b67b61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Received event network-vif-deleted-e18b94cd-d887-479a-ad93-c2c11ee4a451 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.556 247403 INFO nova.compute.manager [req-70147aa7-d3eb-4cd3-a7c7-cae0b68502ac req-8028dd72-4820-4871-a109-bb3539b67b61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Neutron deleted interface e18b94cd-d887-479a-ad93-c2c11ee4a451; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.556 247403 DEBUG nova.network.neutron [req-70147aa7-d3eb-4cd3-a7c7-cae0b68502ac req-8028dd72-4820-4871-a109-bb3539b67b61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.562 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:56.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 364 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.874 247403 DEBUG nova.compute.manager [req-70147aa7-d3eb-4cd3-a7c7-cae0b68502ac req-8028dd72-4820-4871-a109-bb3539b67b61 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Detach interface failed, port_id=e18b94cd-d887-479a-ad93-c2c11ee4a451, reason: Instance d2b5bab2-3aab-4bdf-94a8-115368b4ee97 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.917 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.918 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:54:56 np0005603621 nova_compute[247399]: 2026-01-31 08:54:56.979 247403 DEBUG oslo_concurrency.processutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:54:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:54:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:54:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2166473015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:54:57 np0005603621 nova_compute[247399]: 2026-01-31 08:54:57.406 247403 DEBUG oslo_concurrency.processutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:54:57 np0005603621 nova_compute[247399]: 2026-01-31 08:54:57.411 247403 DEBUG nova.compute.provider_tree [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:54:57 np0005603621 nova_compute[247399]: 2026-01-31 08:54:57.918 247403 DEBUG nova.scheduler.client.report [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:54:58 np0005603621 nova_compute[247399]: 2026-01-31 08:54:58.095 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:58 np0005603621 nova_compute[247399]: 2026-01-31 08:54:58.204 247403 INFO nova.scheduler.client.report [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Deleted allocations for instance d2b5bab2-3aab-4bdf-94a8-115368b4ee97#033[00m
Jan 31 03:54:58 np0005603621 nova_compute[247399]: 2026-01-31 08:54:58.362 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:54:58 np0005603621 nova_compute[247399]: 2026-01-31 08:54:58.451 247403 DEBUG oslo_concurrency.lockutils [None req-7f36e902-8f10-4ba1-a8d9-c0e8707d5eaa f1c6e7eff11b435a81429826a682b32f 0bfe11bd9d694684b527666e2c378eed - - default default] Lock "d2b5bab2-3aab-4bdf-94a8-115368b4ee97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:54:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 61d987ff-310b-4d8c-94a8-808ed5e8b3de does not exist
Jan 31 03:54:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cd2b2c3e-709e-49b4-b83c-efc88f767e67 does not exist
Jan 31 03:54:58 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e1d0f3a0-f3f4-45bb-b4e1-fc7329f79eee does not exist
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:54:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:54:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:54:58.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:54:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:54:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:54:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:54:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 336 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 192 op/s
Jan 31 03:54:58 np0005603621 podman[377798]: 2026-01-31 08:54:58.92669846 +0000 UTC m=+0.032477446 container create 43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:54:58 np0005603621 systemd[1]: Started libpod-conmon-43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af.scope.
Jan 31 03:54:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:54:58 np0005603621 podman[377798]: 2026-01-31 08:54:58.981603294 +0000 UTC m=+0.087382300 container init 43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:54:58 np0005603621 podman[377798]: 2026-01-31 08:54:58.986276222 +0000 UTC m=+0.092055208 container start 43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 03:54:58 np0005603621 podman[377798]: 2026-01-31 08:54:58.989497454 +0000 UTC m=+0.095276470 container attach 43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 03:54:58 np0005603621 eloquent_bartik[377814]: 167 167
Jan 31 03:54:58 np0005603621 podman[377798]: 2026-01-31 08:54:58.990836755 +0000 UTC m=+0.096615741 container died 43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:54:58 np0005603621 systemd[1]: libpod-43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af.scope: Deactivated successfully.
Jan 31 03:54:59 np0005603621 podman[377798]: 2026-01-31 08:54:58.912329367 +0000 UTC m=+0.018108393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:54:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9eedf6649b27575f8769901d6d557f71e9ed7534cb4c267707612557452df488-merged.mount: Deactivated successfully.
Jan 31 03:54:59 np0005603621 podman[377798]: 2026-01-31 08:54:59.031461259 +0000 UTC m=+0.137240245 container remove 43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bartik, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:54:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:54:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:54:59 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:54:59 np0005603621 systemd[1]: libpod-conmon-43f63b389b3f24c3a43825988419cbe8a8fd4931f0df663a289b797865e460af.scope: Deactivated successfully.
Jan 31 03:54:59 np0005603621 podman[377836]: 2026-01-31 08:54:59.143214997 +0000 UTC m=+0.035004357 container create edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 03:54:59 np0005603621 systemd[1]: Started libpod-conmon-edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72.scope.
Jan 31 03:54:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:54:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfb6613b1511b562d25df2c3a1382537dc542b4c2582c16e82afe33d8313415/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfb6613b1511b562d25df2c3a1382537dc542b4c2582c16e82afe33d8313415/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfb6613b1511b562d25df2c3a1382537dc542b4c2582c16e82afe33d8313415/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfb6613b1511b562d25df2c3a1382537dc542b4c2582c16e82afe33d8313415/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfb6613b1511b562d25df2c3a1382537dc542b4c2582c16e82afe33d8313415/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:54:59 np0005603621 podman[377836]: 2026-01-31 08:54:59.215296853 +0000 UTC m=+0.107086263 container init edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:54:59 np0005603621 podman[377836]: 2026-01-31 08:54:59.220846378 +0000 UTC m=+0.112635748 container start edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:54:59 np0005603621 podman[377836]: 2026-01-31 08:54:59.127576213 +0000 UTC m=+0.019365603 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:54:59 np0005603621 podman[377836]: 2026-01-31 08:54:59.226523367 +0000 UTC m=+0.118312737 container attach edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 03:55:00 np0005603621 zen_liskov[377852]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:55:00 np0005603621 zen_liskov[377852]: --> relative data size: 1.0
Jan 31 03:55:00 np0005603621 zen_liskov[377852]: --> All data devices are unavailable
Jan 31 03:55:00 np0005603621 systemd[1]: libpod-edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72.scope: Deactivated successfully.
Jan 31 03:55:00 np0005603621 podman[377836]: 2026-01-31 08:55:00.033581099 +0000 UTC m=+0.925370499 container died edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:55:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fdfb6613b1511b562d25df2c3a1382537dc542b4c2582c16e82afe33d8313415-merged.mount: Deactivated successfully.
Jan 31 03:55:00 np0005603621 podman[377836]: 2026-01-31 08:55:00.084894809 +0000 UTC m=+0.976684179 container remove edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 31 03:55:00 np0005603621 systemd[1]: libpod-conmon-edcdc1c9dd623aa8bb84f4903293083e07f96280ad1b88ae1ea7ff4378034b72.scope: Deactivated successfully.
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.580046903 +0000 UTC m=+0.032044193 container create e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:55:00 np0005603621 systemd[1]: Started libpod-conmon-e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a.scope.
Jan 31 03:55:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:00.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.642946509 +0000 UTC m=+0.094943809 container init e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.649083673 +0000 UTC m=+0.101080963 container start e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.652398438 +0000 UTC m=+0.104395728 container attach e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:55:00 np0005603621 epic_herschel[378039]: 167 167
Jan 31 03:55:00 np0005603621 systemd[1]: libpod-e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a.scope: Deactivated successfully.
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.655543976 +0000 UTC m=+0.107541266 container died e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.566702722 +0000 UTC m=+0.018700042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:55:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:00.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-08424aa389f33793813aa811852ef9c43b68a610ab5722a8c5d04ace995f1f61-merged.mount: Deactivated successfully.
Jan 31 03:55:00 np0005603621 podman[378022]: 2026-01-31 08:55:00.704352828 +0000 UTC m=+0.156350108 container remove e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_herschel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:55:00 np0005603621 systemd[1]: libpod-conmon-e87da40e4e929a380a93e52cbe29bcb6cdcbf057c5820226c3203d4689f9377a.scope: Deactivated successfully.
Jan 31 03:55:00 np0005603621 podman[378063]: 2026-01-31 08:55:00.808774005 +0000 UTC m=+0.032986312 container create 158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:55:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 353 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 209 op/s
Jan 31 03:55:00 np0005603621 systemd[1]: Started libpod-conmon-158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268.scope.
Jan 31 03:55:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:55:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c230e9124c5062880d665aa300df7ead62a71f6b3028ddf0d15d98f513c149b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c230e9124c5062880d665aa300df7ead62a71f6b3028ddf0d15d98f513c149b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c230e9124c5062880d665aa300df7ead62a71f6b3028ddf0d15d98f513c149b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:00 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c230e9124c5062880d665aa300df7ead62a71f6b3028ddf0d15d98f513c149b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:00 np0005603621 podman[378063]: 2026-01-31 08:55:00.872458436 +0000 UTC m=+0.096670763 container init 158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ritchie, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:55:00 np0005603621 podman[378063]: 2026-01-31 08:55:00.878858048 +0000 UTC m=+0.103070355 container start 158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ritchie, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 03:55:00 np0005603621 podman[378063]: 2026-01-31 08:55:00.882039458 +0000 UTC m=+0.106251775 container attach 158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:55:00 np0005603621 podman[378063]: 2026-01-31 08:55:00.796156806 +0000 UTC m=+0.020369113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:55:01 np0005603621 nova_compute[247399]: 2026-01-31 08:55:01.016 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:01 np0005603621 nova_compute[247399]: 2026-01-31 08:55:01.564 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]: {
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:    "0": [
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:        {
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "devices": [
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "/dev/loop3"
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            ],
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "lv_name": "ceph_lv0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "lv_size": "7511998464",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "name": "ceph_lv0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "tags": {
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.cluster_name": "ceph",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.crush_device_class": "",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.encrypted": "0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.osd_id": "0",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.type": "block",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:                "ceph.vdo": "0"
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            },
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "type": "block",
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:            "vg_name": "ceph_vg0"
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:        }
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]:    ]
Jan 31 03:55:01 np0005603621 compassionate_ritchie[378079]: }
Jan 31 03:55:01 np0005603621 systemd[1]: libpod-158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268.scope: Deactivated successfully.
Jan 31 03:55:01 np0005603621 podman[378063]: 2026-01-31 08:55:01.651615127 +0000 UTC m=+0.875827434 container died 158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ritchie, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:55:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c230e9124c5062880d665aa300df7ead62a71f6b3028ddf0d15d98f513c149b6-merged.mount: Deactivated successfully.
Jan 31 03:55:01 np0005603621 podman[378063]: 2026-01-31 08:55:01.710396733 +0000 UTC m=+0.934609040 container remove 158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:55:01 np0005603621 systemd[1]: libpod-conmon-158507004b51916ccba85cfeeffa2948ff275452a203a6b27e74a6b78f727268.scope: Deactivated successfully.
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.169500468 +0000 UTC m=+0.031025930 container create 8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:55:02 np0005603621 systemd[1]: Started libpod-conmon-8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a.scope.
Jan 31 03:55:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.234242373 +0000 UTC m=+0.095767865 container init 8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.240662855 +0000 UTC m=+0.102188347 container start 8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.245516899 +0000 UTC m=+0.107042371 container attach 8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:55:02 np0005603621 ecstatic_noyce[378260]: 167 167
Jan 31 03:55:02 np0005603621 systemd[1]: libpod-8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a.scope: Deactivated successfully.
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.247368457 +0000 UTC m=+0.108893929 container died 8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.157249471 +0000 UTC m=+0.018774963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:55:02 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dc2cc313b793944fafaa948ca9e35f03c5aaac7cbc841352d19b011ccdfeaa18-merged.mount: Deactivated successfully.
Jan 31 03:55:02 np0005603621 podman[378243]: 2026-01-31 08:55:02.28612085 +0000 UTC m=+0.147646322 container remove 8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:55:02 np0005603621 systemd[1]: libpod-conmon-8d257500af8a177ab56150eb2f7cc57f916d7d886bdd4bbeab28df6744c8400a.scope: Deactivated successfully.
Jan 31 03:55:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:02 np0005603621 podman[378283]: 2026-01-31 08:55:02.3963054 +0000 UTC m=+0.034291084 container create 504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:55:02 np0005603621 systemd[1]: Started libpod-conmon-504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931.scope.
Jan 31 03:55:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:55:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9aae597a4870418dcf9d5457e8a7cc3510458bbca49ae9ffe0377951ba3e13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9aae597a4870418dcf9d5457e8a7cc3510458bbca49ae9ffe0377951ba3e13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9aae597a4870418dcf9d5457e8a7cc3510458bbca49ae9ffe0377951ba3e13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9aae597a4870418dcf9d5457e8a7cc3510458bbca49ae9ffe0377951ba3e13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:02 np0005603621 podman[378283]: 2026-01-31 08:55:02.38049307 +0000 UTC m=+0.018478774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:55:02 np0005603621 podman[378283]: 2026-01-31 08:55:02.480943781 +0000 UTC m=+0.118929475 container init 504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:55:02 np0005603621 podman[378283]: 2026-01-31 08:55:02.486236709 +0000 UTC m=+0.124222393 container start 504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:55:02 np0005603621 podman[378283]: 2026-01-31 08:55:02.490678359 +0000 UTC m=+0.128664043 container attach 504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:55:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:02.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:02.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 378 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 814 KiB/s rd, 6.6 MiB/s wr, 193 op/s
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]: {
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:        "osd_id": 0,
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:        "type": "bluestore"
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]:    }
Jan 31 03:55:03 np0005603621 focused_keldysh[378299]: }
Jan 31 03:55:03 np0005603621 systemd[1]: libpod-504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931.scope: Deactivated successfully.
Jan 31 03:55:03 np0005603621 podman[378283]: 2026-01-31 08:55:03.264638136 +0000 UTC m=+0.902623840 container died 504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:55:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1b9aae597a4870418dcf9d5457e8a7cc3510458bbca49ae9ffe0377951ba3e13-merged.mount: Deactivated successfully.
Jan 31 03:55:03 np0005603621 podman[378283]: 2026-01-31 08:55:03.30339265 +0000 UTC m=+0.941378344 container remove 504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 31 03:55:03 np0005603621 systemd[1]: libpod-conmon-504e82840cf648785bb9d9f24ed943a3fb3ad1c0583b1e1db0d9a83754878931.scope: Deactivated successfully.
Jan 31 03:55:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:55:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:55:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:55:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:55:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0b88ac41-2add-4022-ab71-3ce2fe84d17c does not exist
Jan 31 03:55:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 98701fcb-1d07-4467-9ade-40d9a96de00a does not exist
Jan 31 03:55:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 391c19d1-a779-4d51-a269-42fccaaf6174 does not exist
Jan 31 03:55:04 np0005603621 nova_compute[247399]: 2026-01-31 08:55:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:55:04 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:55:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:04.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 566 KiB/s rd, 5.1 MiB/s wr, 146 op/s
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.019 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.029 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:55:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2447356570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.506 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849691.5044634, d2b5bab2-3aab-4bdf-94a8-115368b4ee97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.506 247403 INFO nova.compute.manager [-] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.566 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:06 np0005603621 nova_compute[247399]: 2026-01-31 08:55:06.601 247403 DEBUG nova.compute.manager [None req-1d0d85bf-1ed1-46ab-ae34-abfead506faf - - - - - -] [instance: d2b5bab2-3aab-4bdf-94a8-115368b4ee97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:55:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:06.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 4.7 MiB/s wr, 134 op/s
Jan 31 03:55:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:55:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Jan 31 03:55:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:10.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 379 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.9 MiB/s wr, 94 op/s
Jan 31 03:55:11 np0005603621 nova_compute[247399]: 2026-01-31 08:55:11.021 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:11 np0005603621 nova_compute[247399]: 2026-01-31 08:55:11.568 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 321 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 95 op/s
Jan 31 03:55:14 np0005603621 nova_compute[247399]: 2026-01-31 08:55:14.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:14 np0005603621 nova_compute[247399]: 2026-01-31 08:55:14.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:14 np0005603621 nova_compute[247399]: 2026-01-31 08:55:14.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:55:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 300 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 89 KiB/s wr, 52 op/s
Jan 31 03:55:15 np0005603621 nova_compute[247399]: 2026-01-31 08:55:15.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:16 np0005603621 nova_compute[247399]: 2026-01-31 08:55:16.022 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:16 np0005603621 podman[378392]: 2026-01-31 08:55:16.506618384 +0000 UTC m=+0.058530550 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:55:16 np0005603621 podman[378393]: 2026-01-31 08:55:16.529502546 +0000 UTC m=+0.080120000 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 31 03:55:16 np0005603621 nova_compute[247399]: 2026-01-31 08:55:16.569 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 300 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 47 op/s
Jan 31 03:55:17 np0005603621 nova_compute[247399]: 2026-01-31 08:55:17.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:17 np0005603621 nova_compute[247399]: 2026-01-31 08:55:17.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:55:17 np0005603621 nova_compute[247399]: 2026-01-31 08:55:17.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:55:17 np0005603621 nova_compute[247399]: 2026-01-31 08:55:17.225 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 03:55:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.244 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.244 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:18.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:55:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179766494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.663 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.826 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.827 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4120MB free_disk=20.89702606201172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.828 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.828 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 300 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 35 KiB/s wr, 60 op/s
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.979 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:55:18 np0005603621 nova_compute[247399]: 2026-01-31 08:55:18.980 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:55:19 np0005603621 nova_compute[247399]: 2026-01-31 08:55:19.008 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:55:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25285103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:55:19 np0005603621 nova_compute[247399]: 2026-01-31 08:55:19.408 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.400s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:19 np0005603621 nova_compute[247399]: 2026-01-31 08:55:19.412 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:55:19 np0005603621 nova_compute[247399]: 2026-01-31 08:55:19.437 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:55:19 np0005603621 nova_compute[247399]: 2026-01-31 08:55:19.484 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:55:19 np0005603621 nova_compute[247399]: 2026-01-31 08:55:19.485 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:20.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 318 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 31 03:55:21 np0005603621 nova_compute[247399]: 2026-01-31 08:55:21.024 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:21 np0005603621 nova_compute[247399]: 2026-01-31 08:55:21.571 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:22.416 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:55:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:22.417 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:55:22 np0005603621 nova_compute[247399]: 2026-01-31 08:55:22.417 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:22 np0005603621 nova_compute[247399]: 2026-01-31 08:55:22.485 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:22 np0005603621 nova_compute[247399]: 2026-01-31 08:55:22.486 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:55:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:22.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 31 03:55:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:24.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:24.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 303 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.4 MiB/s wr, 138 op/s
Jan 31 03:55:25 np0005603621 nova_compute[247399]: 2026-01-31 08:55:25.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.026 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.564 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.564 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.573 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:55:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:26.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.678 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 31 03:55:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 303 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.4 MiB/s wr, 127 op/s
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.950 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.950 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.958 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 31 03:55:26 np0005603621 nova_compute[247399]: 2026-01-31 08:55:26.958 247403 INFO nova.compute.claims [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Claim successful on node compute-0.ctlplane.example.com
Jan 31 03:55:27 np0005603621 nova_compute[247399]: 2026-01-31 08:55:27.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 03:55:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:27 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:27.420 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 03:55:28 np0005603621 nova_compute[247399]: 2026-01-31 08:55:28.142 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:55:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:55:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1865344643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:55:28 np0005603621 nova_compute[247399]: 2026-01-31 08:55:28.577 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:55:28 np0005603621 nova_compute[247399]: 2026-01-31 08:55:28.581 247403 DEBUG nova.compute.provider_tree [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 03:55:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:28.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:28 np0005603621 nova_compute[247399]: 2026-01-31 08:55:28.671 247403 DEBUG nova.scheduler.client.report [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 03:55:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:28.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:28 np0005603621 nova_compute[247399]: 2026-01-31 08:55:28.823 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:55:28 np0005603621 nova_compute[247399]: 2026-01-31 08:55:28.824 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 03:55:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 327 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.0 MiB/s wr, 196 op/s
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.040 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.041 247403 DEBUG nova.network.neutron [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.161 247403 INFO nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.243 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.424 247403 DEBUG nova.policy [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'cfaebb011a374541b083e772a6c83f25', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06b5fc9cfd4c49abb2d8b9f2f8a82c1f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.442 247403 INFO nova.virt.block_device [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Booting with volume 26d9ea85-55bc-4b3b-b5ad-d60d188212c2 at /dev/vda
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.697 247403 DEBUG os_brick.utils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.698 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.707 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.707 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[784454f1-3dcb-418a-b8a3-4fb9d5a2170c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.709 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.714 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.715 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[0b55b4fd-f737-4c39-b5ec-a73364e7e7ed]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.716 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.721 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.722 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcef800-715a-4c21-931b-e8f15760315b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.723 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[613e3630-6b21-409e-a52e-41420dc2dffe]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.723 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.746 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.748 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.748 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.748 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.748 247403 DEBUG os_brick.utils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] <== get_connector_properties: return (51ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 31 03:55:29 np0005603621 nova_compute[247399]: 2026-01-31 08:55:29.749 247403 DEBUG nova.virt.block_device [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating existing volume attachment record: d748cdef-8b51-4810-ba5d-5cc0bc71c23f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 31 03:55:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:30.536 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:55:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:30.537 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:55:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:30.537 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:55:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:30 np0005603621 nova_compute[247399]: 2026-01-31 08:55:30.814 247403 DEBUG nova.network.neutron [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Successfully created port: a686c587-a94d-4875-a040-48d5b193a20a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 03:55:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 344 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.5 MiB/s wr, 193 op/s
Jan 31 03:55:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:55:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3679704786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.028 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.574 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.697 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.698 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.699 247403 INFO nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Creating image(s)
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.699 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.699 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Ensure instance console log exists: /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.699 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.700 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 03:55:31 np0005603621 nova_compute[247399]: 2026-01-31 08:55:31.700 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 03:55:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:32.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 181 op/s
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.134 247403 DEBUG nova.network.neutron [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Successfully updated port: a686c587-a94d-4875-a040-48d5b193a20a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.154 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.154 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.154 247403 DEBUG nova.network.neutron [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.298 247403 DEBUG nova.compute.manager [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.298 247403 DEBUG nova.compute.manager [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.298 247403 DEBUG oslo_concurrency.lockutils [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:55:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:34.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:34.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:34 np0005603621 nova_compute[247399]: 2026-01-31 08:55:34.729 247403 DEBUG nova.network.neutron [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:55:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:55:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3310656352' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:55:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 144 op/s
Jan 31 03:55:36 np0005603621 nova_compute[247399]: 2026-01-31 08:55:36.033 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:36 np0005603621 nova_compute[247399]: 2026-01-31 08:55:36.576 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:36.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.235 247403 DEBUG nova.network.neutron [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.273 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.273 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Instance network_info: |[{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.274 247403 DEBUG oslo_concurrency.lockutils [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.274 247403 DEBUG nova.network.neutron [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.277 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Start _get_guest_xml network_info=[{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-26d9ea85-55bc-4b3b-b5ad-d60d188212c2', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '26d9ea85-55bc-4b3b-b5ad-d60d188212c2', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c91674d0-7f78-4e09-b54e-e46f7fbd65a3', 'attached_at': '', 'detached_at': '', 'volume_id': '26d9ea85-55bc-4b3b-b5ad-d60d188212c2', 'serial': '26d9ea85-55bc-4b3b-b5ad-d60d188212c2'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': 'd748cdef-8b51-4810-ba5d-5cc0bc71c23f', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.283 247403 WARNING nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.288 247403 DEBUG nova.virt.libvirt.host [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.289 247403 DEBUG nova.virt.libvirt.host [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.293 247403 DEBUG nova.virt.libvirt.host [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.293 247403 DEBUG nova.virt.libvirt.host [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.294 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.295 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.295 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.295 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.295 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.295 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.296 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.296 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.296 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.296 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.296 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.297 247403 DEBUG nova.virt.hardware [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.324 247403 DEBUG nova.storage.rbd_utils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] rbd image c91674d0-7f78-4e09-b54e-e46f7fbd65a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.328 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:55:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755729750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.770 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.811 247403 DEBUG nova.virt.libvirt.vif [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1909699072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1909699072',id=178,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXzK6zUN8P2oqgqYwcegkodZ7bCeyyyhmYXIteBKXOhNEu+drS3qyKalg8BzkpjD3Rc/+FviAhlBApTbimNmOyPmM7IztIR2VGri6qDWFeRA0jXOdg2vS/Kgt0ALKH9cg==',key_name='tempest-TestInstancesWithCinderVolumes-176277168',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06b5fc9cfd4c49abb2d8b9f2f8a82c1f',ramdisk_id='',reservation_id='r-g26td7gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-2132464628',owner_user_name='tempest-TestInstancesWithCinderVolumes-2132464628-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:55:29Z,user_data=None,user_id='cfaebb011a374541b083e772a6c83f25',uuid=c91674d0-7f78-4e09-b54e-e46f7fbd65a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.812 247403 DEBUG nova.network.os_vif_util [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Converting VIF {"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.812 247403 DEBUG nova.network.os_vif_util [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.814 247403 DEBUG nova.objects.instance [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'pci_devices' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.864 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <uuid>c91674d0-7f78-4e09-b54e-e46f7fbd65a3</uuid>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <name>instance-000000b2</name>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1909699072</nova:name>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:55:37</nova:creationTime>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:user uuid="cfaebb011a374541b083e772a6c83f25">tempest-TestInstancesWithCinderVolumes-2132464628-project-member</nova:user>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:project uuid="06b5fc9cfd4c49abb2d8b9f2f8a82c1f">tempest-TestInstancesWithCinderVolumes-2132464628</nova:project>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <nova:port uuid="a686c587-a94d-4875-a040-48d5b193a20a">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <entry name="serial">c91674d0-7f78-4e09-b54e-e46f7fbd65a3</entry>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <entry name="uuid">c91674d0-7f78-4e09-b54e-e46f7fbd65a3</entry>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c91674d0-7f78-4e09-b54e-e46f7fbd65a3_disk.config">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-26d9ea85-55bc-4b3b-b5ad-d60d188212c2">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <serial>26d9ea85-55bc-4b3b-b5ad-d60d188212c2</serial>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:58:25:74"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <target dev="tapa686c587-a9"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/console.log" append="off"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:55:37 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:55:37 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:55:37 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:55:37 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.864 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Preparing to wait for external event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.864 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.865 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.865 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.865 247403 DEBUG nova.virt.libvirt.vif [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1909699072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1909699072',id=178,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXzK6zUN8P2oqgqYwcegkodZ7bCeyyyhmYXIteBKXOhNEu+drS3qyKalg8BzkpjD3Rc/+FviAhlBApTbimNmOyPmM7IztIR2VGri6qDWFeRA0jXOdg2vS/Kgt0ALKH9cg==',key_name='tempest-TestInstancesWithCinderVolumes-176277168',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06b5fc9cfd4c49abb2d8b9f2f8a82c1f',ramdisk_id='',reservation_id='r-g26td7gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True
',owner_project_name='tempest-TestInstancesWithCinderVolumes-2132464628',owner_user_name='tempest-TestInstancesWithCinderVolumes-2132464628-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:55:29Z,user_data=None,user_id='cfaebb011a374541b083e772a6c83f25',uuid=c91674d0-7f78-4e09-b54e-e46f7fbd65a3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.866 247403 DEBUG nova.network.os_vif_util [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Converting VIF {"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.866 247403 DEBUG nova.network.os_vif_util [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.866 247403 DEBUG os_vif [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.867 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.867 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.867 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.870 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa686c587-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.871 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa686c587-a9, col_values=(('external_ids', {'iface-id': 'a686c587-a94d-4875-a040-48d5b193a20a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:25:74', 'vm-uuid': 'c91674d0-7f78-4e09-b54e-e46f7fbd65a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:37 np0005603621 NetworkManager[49013]: <info>  [1769849737.8735] manager: (tapa686c587-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/337)
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.875 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.880 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.881 247403 INFO os_vif [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9')#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.953 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.953 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.954 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No VIF found with MAC fa:16:3e:58:25:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.954 247403 INFO nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Using config drive#033[00m
Jan 31 03:55:37 np0005603621 nova_compute[247399]: 2026-01-31 08:55:37.980 247403 DEBUG nova.storage.rbd_utils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] rbd image c91674d0-7f78-4e09-b54e-e46f7fbd65a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:55:38
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'volumes', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:55:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:38.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:38.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 125 op/s
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:55:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:55:39 np0005603621 nova_compute[247399]: 2026-01-31 08:55:39.995 247403 INFO nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Creating config drive at /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/disk.config#033[00m
Jan 31 03:55:39 np0005603621 nova_compute[247399]: 2026-01-31 08:55:39.998 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp625htikm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.122 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp625htikm" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.149 247403 DEBUG nova.storage.rbd_utils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] rbd image c91674d0-7f78-4e09-b54e-e46f7fbd65a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.153 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/disk.config c91674d0-7f78-4e09-b54e-e46f7fbd65a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.311 247403 DEBUG oslo_concurrency.processutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/disk.config c91674d0-7f78-4e09-b54e-e46f7fbd65a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.312 247403 INFO nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Deleting local config drive /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3/disk.config because it was imported into RBD.#033[00m
Jan 31 03:55:40 np0005603621 kernel: tapa686c587-a9: entered promiscuous mode
Jan 31 03:55:40 np0005603621 NetworkManager[49013]: <info>  [1769849740.3530] manager: (tapa686c587-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/338)
Jan 31 03:55:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:40Z|00753|binding|INFO|Claiming lport a686c587-a94d-4875-a040-48d5b193a20a for this chassis.
Jan 31 03:55:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:40Z|00754|binding|INFO|a686c587-a94d-4875-a040-48d5b193a20a: Claiming fa:16:3e:58:25:74 10.100.0.8
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.352 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.354 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.357 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 systemd-machined[212769]: New machine qemu-90-instance-000000b2.
Jan 31 03:55:40 np0005603621 systemd-udevd[378733]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.376 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 systemd[1]: Started Virtual Machine qemu-90-instance-000000b2.
Jan 31 03:55:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:40Z|00755|binding|INFO|Setting lport a686c587-a94d-4875-a040-48d5b193a20a ovn-installed in OVS
Jan 31 03:55:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:40Z|00756|binding|INFO|Setting lport a686c587-a94d-4875-a040-48d5b193a20a up in Southbound
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.380 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.378 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:25:74 10.100.0.8'], port_security=['fa:16:3e:58:25:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c91674d0-7f78-4e09-b54e-e46f7fbd65a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06b5fc9cfd4c49abb2d8b9f2f8a82c1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d7b4c6b-30ca-4a01-b275-d4aa9d87b845', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe6e8b31-5a27-4e0f-b157-3b33899fa37b, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a686c587-a94d-4875-a040-48d5b193a20a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.379 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a686c587-a94d-4875-a040-48d5b193a20a in datapath 405bd95c-1bad-49fb-83bf-a97a0c66786e bound to our chassis#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.380 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 405bd95c-1bad-49fb-83bf-a97a0c66786e#033[00m
Jan 31 03:55:40 np0005603621 NetworkManager[49013]: <info>  [1769849740.3875] device (tapa686c587-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:55:40 np0005603621 NetworkManager[49013]: <info>  [1769849740.3881] device (tapa686c587-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.388 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4e55f2-80b6-48fd-8364-b9275fced5f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.389 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap405bd95c-11 in ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.392 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap405bd95c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.392 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bde390c4-6627-4f25-a743-2084291eff60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.393 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd97136-aa86-49df-8412-a06445fb06b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.403 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[760893cd-8445-44ac-bdee-4fd1b6b22585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.415 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0abb59f4-a8ea-46cb-a9be-3cf827941130]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.441 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2ffbde-5063-443d-bc16-0dcc26945cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 systemd-udevd[378736]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.446 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ddad6398-a3be-48f1-902d-b68e1c19ac76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 NetworkManager[49013]: <info>  [1769849740.4477] manager: (tap405bd95c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/339)
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.470 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[58417637-a1ef-4507-83c0-f29db208fef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.473 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4bea6d-be15-4548-8678-5c0b321da00a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 NetworkManager[49013]: <info>  [1769849740.4896] device (tap405bd95c-10): carrier: link connected
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.493 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff259af-7c72-47a7-93e8-0afc7752114f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.505 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[15b06f09-adac-47d5-a0da-5548d8f9318d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap405bd95c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:d8:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904600, 'reachable_time': 38032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378766, 'error': None, 'target': 'ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.512 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ac648a-2c84-423a-b376-9cd0e4dc6f25]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:d880'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 904600, 'tstamp': 904600}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378767, 'error': None, 'target': 'ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.523 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb7580b-14df-43b5-aede-97956f69e13b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap405bd95c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:d8:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904600, 'reachable_time': 38032, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378768, 'error': None, 'target': 'ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.540 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3124ecf6-0158-420a-a5b6-f83d3be1c2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.586 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e9ffae-b6e9-4e87-8a4e-acd80f57720b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.588 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap405bd95c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.588 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.588 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap405bd95c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.590 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 NetworkManager[49013]: <info>  [1769849740.5909] manager: (tap405bd95c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Jan 31 03:55:40 np0005603621 kernel: tap405bd95c-10: entered promiscuous mode
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.595 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap405bd95c-10, col_values=(('external_ids', {'iface-id': '5a0136e3-84ab-4495-80ff-8006a0a74934'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:55:40 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:40Z|00757|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=1)
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.596 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.597 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.597 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/405bd95c-1bad-49fb-83bf-a97a0c66786e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/405bd95c-1bad-49fb-83bf-a97a0c66786e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.601 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[61b9e2bf-3e44-4cbf-bc3d-67c0c58e438b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.602 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.602 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-405bd95c-1bad-49fb-83bf-a97a0c66786e
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/405bd95c-1bad-49fb-83bf-a97a0c66786e.pid.haproxy
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 405bd95c-1bad-49fb-83bf-a97a0c66786e
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:55:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:55:40.603 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'env', 'PROCESS_TAG=haproxy-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/405bd95c-1bad-49fb-83bf-a97a0c66786e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:55:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:40.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:40.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.762 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849740.7618978, c91674d0-7f78-4e09-b54e-e46f7fbd65a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.762 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] VM Started (Lifecycle Event)#033[00m
Jan 31 03:55:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 359 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 541 KiB/s rd, 1.3 MiB/s wr, 62 op/s
Jan 31 03:55:40 np0005603621 podman[378840]: 2026-01-31 08:55:40.912768878 +0000 UTC m=+0.041557273 container create c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.939 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.943 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849740.7646594, c91674d0-7f78-4e09-b54e-e46f7fbd65a3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:55:40 np0005603621 nova_compute[247399]: 2026-01-31 08:55:40.943 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:55:40 np0005603621 systemd[1]: Started libpod-conmon-c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6.scope.
Jan 31 03:55:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:55:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd44d71d67bc624f15fb2220569b0a4cd6aaf6c7c9916825697c3b0a90286ea0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:55:40 np0005603621 podman[378840]: 2026-01-31 08:55:40.890356991 +0000 UTC m=+0.019145426 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:55:40 np0005603621 podman[378840]: 2026-01-31 08:55:40.98758461 +0000 UTC m=+0.116373015 container init c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:55:40 np0005603621 podman[378840]: 2026-01-31 08:55:40.991470143 +0000 UTC m=+0.120258548 container start c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 03:55:41 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [NOTICE]   (378859) : New worker (378861) forked
Jan 31 03:55:41 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [NOTICE]   (378859) : Loading success.
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.031 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.033 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.035 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.070 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.388 247403 DEBUG nova.compute.manager [req-d7e3f285-2d78-49a9-9d3d-2b0d25b7e816 req-640ec882-1951-4f00-94ff-9f15c1c56a48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.388 247403 DEBUG oslo_concurrency.lockutils [req-d7e3f285-2d78-49a9-9d3d-2b0d25b7e816 req-640ec882-1951-4f00-94ff-9f15c1c56a48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.389 247403 DEBUG oslo_concurrency.lockutils [req-d7e3f285-2d78-49a9-9d3d-2b0d25b7e816 req-640ec882-1951-4f00-94ff-9f15c1c56a48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.389 247403 DEBUG oslo_concurrency.lockutils [req-d7e3f285-2d78-49a9-9d3d-2b0d25b7e816 req-640ec882-1951-4f00-94ff-9f15c1c56a48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.389 247403 DEBUG nova.compute.manager [req-d7e3f285-2d78-49a9-9d3d-2b0d25b7e816 req-640ec882-1951-4f00-94ff-9f15c1c56a48 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Processing event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.390 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.394 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849741.394069, c91674d0-7f78-4e09-b54e-e46f7fbd65a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.395 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.398 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.402 247403 INFO nova.virt.libvirt.driver [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Instance spawned successfully.#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.403 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.461 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.464 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.499 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.502 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.503 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.503 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.504 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.504 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.504 247403 DEBUG nova.virt.libvirt.driver [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.574 247403 INFO nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Took 9.88 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.574 247403 DEBUG nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.615 247403 DEBUG nova.network.neutron [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.616 247403 DEBUG nova.network.neutron [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.639 247403 DEBUG oslo_concurrency.lockutils [req-2232f132-2c37-411b-bca2-fedbc79dce28 req-4e19b204-79da-4a0f-91f4-520ef5c326d7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.676 247403 INFO nova.compute.manager [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Took 14.78 seconds to build instance.#033[00m
Jan 31 03:55:41 np0005603621 nova_compute[247399]: 2026-01-31 08:55:41.738 247403 DEBUG oslo_concurrency.lockutils [None req-fd62ec30-2d9c-4ce7-9d04-e1d4a2898428 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:42.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:42.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 361 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 906 KiB/s rd, 836 KiB/s wr, 70 op/s
Jan 31 03:55:42 np0005603621 nova_compute[247399]: 2026-01-31 08:55:42.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.663 247403 DEBUG nova.compute.manager [req-b25cca6d-dc86-4b73-a43a-ecc154677ef3 req-6628a408-db3c-4c83-91ad-d759d91f59ce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.664 247403 DEBUG oslo_concurrency.lockutils [req-b25cca6d-dc86-4b73-a43a-ecc154677ef3 req-6628a408-db3c-4c83-91ad-d759d91f59ce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.664 247403 DEBUG oslo_concurrency.lockutils [req-b25cca6d-dc86-4b73-a43a-ecc154677ef3 req-6628a408-db3c-4c83-91ad-d759d91f59ce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.664 247403 DEBUG oslo_concurrency.lockutils [req-b25cca6d-dc86-4b73-a43a-ecc154677ef3 req-6628a408-db3c-4c83-91ad-d759d91f59ce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.665 247403 DEBUG nova.compute.manager [req-b25cca6d-dc86-4b73-a43a-ecc154677ef3 req-6628a408-db3c-4c83-91ad-d759d91f59ce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] No waiting events found dispatching network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.665 247403 WARNING nova.compute.manager [req-b25cca6d-dc86-4b73-a43a-ecc154677ef3 req-6628a408-db3c-4c83-91ad-d759d91f59ce fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received unexpected event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:55:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:55:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2857698816' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.899 247403 DEBUG oslo_concurrency.lockutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.901 247403 DEBUG oslo_concurrency.lockutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:43 np0005603621 nova_compute[247399]: 2026-01-31 08:55:43.932 247403 DEBUG nova.objects.instance [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'flavor' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.005 247403 DEBUG oslo_concurrency.lockutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.417 247403 DEBUG oslo_concurrency.lockutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.418 247403 DEBUG oslo_concurrency.lockutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.418 247403 INFO nova.compute.manager [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attaching volume 9bc3cb04-a48c-40bd-81ea-100007bee62c to /dev/vdb#033[00m
Jan 31 03:55:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:55:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:44.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:55:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:44.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.743 247403 DEBUG os_brick.utils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.745 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.755 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.756 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[84a2ae8b-507b-4542-8b2a-af629ef7c3fb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.757 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.763 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.763 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[903b4575-b568-42bb-8dbf-c6be0a7b8ed3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.765 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.771 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.772 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f09f34-40e2-485b-b708-10032316416f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.774 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[aa8c92a1-c49e-4e62-b225-e200e2a0057f]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.775 247403 DEBUG oslo_concurrency.processutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.798 247403 DEBUG oslo_concurrency.processutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.799 247403 DEBUG os_brick.initiator.connectors.lightos [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.800 247403 DEBUG os_brick.initiator.connectors.lightos [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.800 247403 DEBUG os_brick.initiator.connectors.lightos [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.800 247403 DEBUG os_brick.utils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:55:44 np0005603621 nova_compute[247399]: 2026-01-31 08:55:44.800 247403 DEBUG nova.virt.block_device [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating existing volume attachment record: 28be5535-fab0-4ea0-8c5d-e062c6ff531d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:55:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 374 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 482 KiB/s wr, 115 op/s
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.013 247403 DEBUG nova.objects.instance [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'flavor' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.042 247403 DEBUG nova.virt.libvirt.driver [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attempting to attach volume 9bc3cb04-a48c-40bd-81ea-100007bee62c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.045 247403 DEBUG nova.virt.libvirt.guest [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-9bc3cb04-a48c-40bd-81ea-100007bee62c">
Jan 31 03:55:46 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:55:46 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:55:46 np0005603621 nova_compute[247399]:  <serial>9bc3cb04-a48c-40bd-81ea-100007bee62c</serial>
Jan 31 03:55:46 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:55:46 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.182 247403 DEBUG nova.virt.libvirt.driver [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.183 247403 DEBUG nova.virt.libvirt.driver [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.183 247403 DEBUG nova.virt.libvirt.driver [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.184 247403 DEBUG nova.virt.libvirt.driver [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No VIF found with MAC fa:16:3e:58:25:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:55:46 np0005603621 nova_compute[247399]: 2026-01-31 08:55:46.536 247403 DEBUG oslo_concurrency.lockutils [None req-35896200-7e9c-4b53-a7ca-c6c81ea46950 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:46.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:46.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 374 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 471 KiB/s wr, 97 op/s
Jan 31 03:55:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:47 np0005603621 podman[378901]: 2026-01-31 08:55:47.486809525 +0000 UTC m=+0.043316318 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 03:55:47 np0005603621 podman[378902]: 2026-01-31 08:55:47.554401069 +0000 UTC m=+0.110620733 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 03:55:47 np0005603621 nova_compute[247399]: 2026-01-31 08:55:47.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:48 np0005603621 nova_compute[247399]: 2026-01-31 08:55:48.626 247403 DEBUG oslo_concurrency.lockutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:48 np0005603621 nova_compute[247399]: 2026-01-31 08:55:48.627 247403 DEBUG oslo_concurrency.lockutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:48 np0005603621 nova_compute[247399]: 2026-01-31 08:55:48.654 247403 DEBUG nova.objects.instance [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'flavor' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:48.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:48.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:48 np0005603621 nova_compute[247399]: 2026-01-31 08:55:48.731 247403 DEBUG oslo_concurrency.lockutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 408 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.125 247403 DEBUG oslo_concurrency.lockutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.126 247403 DEBUG oslo_concurrency.lockutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.127 247403 INFO nova.compute.manager [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attaching volume f694d315-eb2a-436c-b06d-0d95676e69e3 to /dev/vdc#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.367 247403 DEBUG os_brick.utils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.368 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.375 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.375 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf3d16f-ad7e-4843-83c4-ad94cb107bf2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.376 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.381 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.381 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[a20b8d6c-bb89-44ab-af15-16d47d68f09a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.383 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.387 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.388 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ba0ff-7f08-4e57-bf20-5f3612deb6b1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.389 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[e88fafbd-341f-4feb-9f40-6f3a21fc75d6]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.390 247403 DEBUG oslo_concurrency.processutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.411 247403 DEBUG oslo_concurrency.processutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.413 247403 DEBUG os_brick.initiator.connectors.lightos [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.413 247403 DEBUG os_brick.initiator.connectors.lightos [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.413 247403 DEBUG os_brick.initiator.connectors.lightos [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.414 247403 DEBUG os_brick.utils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] <== get_connector_properties: return (46ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 03:55:49 np0005603621 nova_compute[247399]: 2026-01-31 08:55:49.414 247403 DEBUG nova.virt.block_device [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating existing volume attachment record: fbbdb19b-b3ec-44d3-97b3-614efba043dd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00317941705053043 of space, bias 1.0, pg target 0.953825115159129 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005132360296854643 of space, bias 1.0, pg target 1.539708089056393 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:55:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:55:50 np0005603621 nova_compute[247399]: 2026-01-31 08:55:50.646 247403 DEBUG nova.objects.instance [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'flavor' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:55:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:50.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:50 np0005603621 nova_compute[247399]: 2026-01-31 08:55:50.692 247403 DEBUG nova.virt.libvirt.driver [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attempting to attach volume f694d315-eb2a-436c-b06d-0d95676e69e3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 03:55:50 np0005603621 nova_compute[247399]: 2026-01-31 08:55:50.694 247403 DEBUG nova.virt.libvirt.guest [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-f694d315-eb2a-436c-b06d-0d95676e69e3">
Jan 31 03:55:50 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 03:55:50 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  </auth>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:55:50 np0005603621 nova_compute[247399]:  <serial>f694d315-eb2a-436c-b06d-0d95676e69e3</serial>
Jan 31 03:55:50 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:55:50 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:55:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:55:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:50.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:55:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 408 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.149 247403 DEBUG nova.virt.libvirt.driver [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.150 247403 DEBUG nova.virt.libvirt.driver [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.150 247403 DEBUG nova.virt.libvirt.driver [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.150 247403 DEBUG nova.virt.libvirt.driver [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.150 247403 DEBUG nova.virt.libvirt.driver [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] No VIF found with MAC fa:16:3e:58:25:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:55:51 np0005603621 nova_compute[247399]: 2026-01-31 08:55:51.652 247403 DEBUG oslo_concurrency.lockutils [None req-a935baa8-1923-4ed0-bd45-da2ec5fe4ef6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:55:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:52.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:52.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 408 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Jan 31 03:55:52 np0005603621 nova_compute[247399]: 2026-01-31 08:55:52.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:54.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:54 np0005603621 nova_compute[247399]: 2026-01-31 08:55:54.839 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:54 np0005603621 NetworkManager[49013]: <info>  [1769849754.8405] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/341)
Jan 31 03:55:54 np0005603621 NetworkManager[49013]: <info>  [1769849754.8415] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 31 03:55:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 418 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 179 op/s
Jan 31 03:55:54 np0005603621 nova_compute[247399]: 2026-01-31 08:55:54.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:54 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:54Z|00758|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:55:55 np0005603621 nova_compute[247399]: 2026-01-31 08:55:55.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:55Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:25:74 10.100.0.8
Jan 31 03:55:55 np0005603621 ovn_controller[149152]: 2026-01-31T08:55:55Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:25:74 10.100.0.8
Jan 31 03:55:56 np0005603621 nova_compute[247399]: 2026-01-31 08:55:56.074 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:56 np0005603621 nova_compute[247399]: 2026-01-31 08:55:56.213 247403 DEBUG nova.compute.manager [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:55:56 np0005603621 nova_compute[247399]: 2026-01-31 08:55:56.213 247403 DEBUG nova.compute.manager [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:55:56 np0005603621 nova_compute[247399]: 2026-01-31 08:55:56.214 247403 DEBUG oslo_concurrency.lockutils [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:55:56 np0005603621 nova_compute[247399]: 2026-01-31 08:55:56.214 247403 DEBUG oslo_concurrency.lockutils [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:55:56 np0005603621 nova_compute[247399]: 2026-01-31 08:55:56.214 247403 DEBUG nova.network.neutron [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:55:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:56.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 418 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 31 03:55:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:55:57 np0005603621 nova_compute[247399]: 2026-01-31 08:55:57.882 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:55:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:55:58.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:55:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:55:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:55:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:55:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 449 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.4 MiB/s wr, 348 op/s
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.803 247403 DEBUG nova.network.neutron [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.803 247403 DEBUG nova.network.neutron [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.866 247403 DEBUG oslo_concurrency.lockutils [req-5c81cd85-f975-40fa-a08b-b15a24a5e32d req-20085064-7a6f-4bf9-a78d-4a62cfd43495 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.946 247403 DEBUG nova.compute.manager [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.947 247403 DEBUG nova.compute.manager [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.947 247403 DEBUG oslo_concurrency.lockutils [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.947 247403 DEBUG oslo_concurrency.lockutils [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:55:59 np0005603621 nova_compute[247399]: 2026-01-31 08:55:59.948 247403 DEBUG nova.network.neutron [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:56:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:00.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 474 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 4.3 MiB/s wr, 364 op/s
Jan 31 03:56:01 np0005603621 nova_compute[247399]: 2026-01-31 08:56:01.528 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.543666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849761543703, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 946, "num_deletes": 251, "total_data_size": 1316224, "memory_usage": 1334864, "flush_reason": "Manual Compaction"}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849761561664, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1301000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70074, "largest_seqno": 71019, "table_properties": {"data_size": 1296511, "index_size": 2076, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10572, "raw_average_key_size": 19, "raw_value_size": 1287175, "raw_average_value_size": 2419, "num_data_blocks": 92, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849688, "oldest_key_time": 1769849688, "file_creation_time": 1769849761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 18042 microseconds, and 2997 cpu microseconds.
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.561707) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1301000 bytes OK
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.561725) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.565837) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.565853) EVENT_LOG_v1 {"time_micros": 1769849761565848, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.565877) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1311751, prev total WAL file size 1311751, number of live WAL files 2.
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.566349) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1270KB)], [161(10MB)]
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849761566379, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12292636, "oldest_snapshot_seqno": -1}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9512 keys, 10423496 bytes, temperature: kUnknown
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849761711652, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10423496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10364826, "index_size": 33789, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 252107, "raw_average_key_size": 26, "raw_value_size": 10201118, "raw_average_value_size": 1072, "num_data_blocks": 1274, "num_entries": 9512, "num_filter_entries": 9512, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849761, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.711888) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10423496 bytes
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.714388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.6 rd, 71.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(17.5) write-amplify(8.0) OK, records in: 10031, records dropped: 519 output_compression: NoCompression
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.714406) EVENT_LOG_v1 {"time_micros": 1769849761714397, "job": 100, "event": "compaction_finished", "compaction_time_micros": 145344, "compaction_time_cpu_micros": 19175, "output_level": 6, "num_output_files": 1, "total_output_size": 10423496, "num_input_records": 10031, "num_output_records": 9512, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849761714634, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849761715423, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.566265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.715485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.715490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.715492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.715494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:56:01 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:56:01.715495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:56:02 np0005603621 nova_compute[247399]: 2026-01-31 08:56:02.184 247403 DEBUG nova.compute.manager [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:02 np0005603621 nova_compute[247399]: 2026-01-31 08:56:02.184 247403 DEBUG nova.compute.manager [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:56:02 np0005603621 nova_compute[247399]: 2026-01-31 08:56:02.185 247403 DEBUG oslo_concurrency.lockutils [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:02.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 474 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 349 op/s
Jan 31 03:56:02 np0005603621 nova_compute[247399]: 2026-01-31 08:56:02.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 03:56:04 np0005603621 nova_compute[247399]: 2026-01-31 08:56:04.266 247403 DEBUG nova.network.neutron [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:56:04 np0005603621 nova_compute[247399]: 2026-01-31 08:56:04.267 247403 DEBUG nova.network.neutron [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:04 np0005603621 nova_compute[247399]: 2026-01-31 08:56:04.323 247403 DEBUG oslo_concurrency.lockutils [req-58386757-7209-464a-bc2e-2a2ef3d5da59 req-8ee89672-5005-486f-8850-51d71dbcd498 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:04 np0005603621 nova_compute[247399]: 2026-01-31 08:56:04.324 247403 DEBUG oslo_concurrency.lockutils [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:04 np0005603621 nova_compute[247399]: 2026-01-31 08:56:04.324 247403 DEBUG nova.network.neutron [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:04.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:56:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 474 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 344 op/s
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 03:56:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:56:06 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:56:06 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:56:06 np0005603621 nova_compute[247399]: 2026-01-31 08:56:06.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6c0fc24a-6cfc-4369-b9af-434a506d3ae2 does not exist
Jan 31 03:56:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ad6947fd-ca52-4f87-ae33-9649ebe824d1 does not exist
Jan 31 03:56:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 894a0df3-90ca-446b-b1d6-077f23fdd244 does not exist
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:56:06 np0005603621 nova_compute[247399]: 2026-01-31 08:56:06.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:06.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:06.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:56:06 np0005603621 podman[379423]: 2026-01-31 08:56:06.741188178 +0000 UTC m=+0.019742464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:56:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 474 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.5 MiB/s wr, 333 op/s
Jan 31 03:56:06 np0005603621 podman[379423]: 2026-01-31 08:56:06.879501455 +0000 UTC m=+0.158055721 container create 22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 03:56:07 np0005603621 systemd[1]: Started libpod-conmon-22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21.scope.
Jan 31 03:56:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:07 np0005603621 nova_compute[247399]: 2026-01-31 08:56:07.102 247403 DEBUG nova.network.neutron [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:56:07 np0005603621 nova_compute[247399]: 2026-01-31 08:56:07.102 247403 DEBUG nova.network.neutron [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:07 np0005603621 nova_compute[247399]: 2026-01-31 08:56:07.153 247403 DEBUG oslo_concurrency.lockutils [req-494122e9-b041-423c-8cc7-eb0fa34fef00 req-ccf5c0eb-6c07-46c7-b3d9-c1e5900318fd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:07 np0005603621 podman[379423]: 2026-01-31 08:56:07.19732341 +0000 UTC m=+0.475877686 container init 22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:56:07 np0005603621 podman[379423]: 2026-01-31 08:56:07.204414584 +0000 UTC m=+0.482968870 container start 22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mendeleev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:56:07 np0005603621 blissful_mendeleev[379439]: 167 167
Jan 31 03:56:07 np0005603621 systemd[1]: libpod-22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21.scope: Deactivated successfully.
Jan 31 03:56:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:07 np0005603621 podman[379423]: 2026-01-31 08:56:07.358349074 +0000 UTC m=+0.636903340 container attach 22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:56:07 np0005603621 podman[379423]: 2026-01-31 08:56:07.359416878 +0000 UTC m=+0.637971184 container died 22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mendeleev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 03:56:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-257bc26bbf21975926e258c67ed45bf49439cc7f2acb87f4beed7f505bf14ce2-merged.mount: Deactivated successfully.
Jan 31 03:56:07 np0005603621 nova_compute[247399]: 2026-01-31 08:56:07.912 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:08 np0005603621 podman[379423]: 2026-01-31 08:56:08.06133563 +0000 UTC m=+1.339889926 container remove 22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 03:56:08 np0005603621 systemd[1]: libpod-conmon-22b4ad79f287f228d736b14ab8c7475fc323b3f29b517d5669e79beb3cf73b21.scope: Deactivated successfully.
Jan 31 03:56:08 np0005603621 podman[379466]: 2026-01-31 08:56:08.233642811 +0000 UTC m=+0.091353516 container create fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 03:56:08 np0005603621 podman[379466]: 2026-01-31 08:56:08.165151838 +0000 UTC m=+0.022862553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:56:08 np0005603621 systemd[1]: Started libpod-conmon-fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa.scope.
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:56:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354eccca882927f5c9d7e8a7128fefe04d331201984abab4945985d27f86d40c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354eccca882927f5c9d7e8a7128fefe04d331201984abab4945985d27f86d40c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354eccca882927f5c9d7e8a7128fefe04d331201984abab4945985d27f86d40c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354eccca882927f5c9d7e8a7128fefe04d331201984abab4945985d27f86d40c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/354eccca882927f5c9d7e8a7128fefe04d331201984abab4945985d27f86d40c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:08 np0005603621 podman[379466]: 2026-01-31 08:56:08.667082066 +0000 UTC m=+0.524792801 container init fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:56:08 np0005603621 podman[379466]: 2026-01-31 08:56:08.67261143 +0000 UTC m=+0.530322145 container start fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:56:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:08.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:08 np0005603621 podman[379466]: 2026-01-31 08:56:08.733779572 +0000 UTC m=+0.591490307 container attach fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 03:56:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:08.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 478 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.0 MiB/s wr, 370 op/s
Jan 31 03:56:08 np0005603621 nova_compute[247399]: 2026-01-31 08:56:08.950 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:08 np0005603621 nova_compute[247399]: 2026-01-31 08:56:08.951 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:08 np0005603621 nova_compute[247399]: 2026-01-31 08:56:08.972 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.147 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.147 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.156 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.156 247403 INFO nova.compute.claims [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.341 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:09 np0005603621 exciting_bouman[379485]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:56:09 np0005603621 exciting_bouman[379485]: --> relative data size: 1.0
Jan 31 03:56:09 np0005603621 exciting_bouman[379485]: --> All data devices are unavailable
Jan 31 03:56:09 np0005603621 systemd[1]: libpod-fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa.scope: Deactivated successfully.
Jan 31 03:56:09 np0005603621 podman[379466]: 2026-01-31 08:56:09.420640849 +0000 UTC m=+1.278351554 container died fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.629 247403 DEBUG nova.compute.manager [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.629 247403 DEBUG nova.compute.manager [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.629 247403 DEBUG oslo_concurrency.lockutils [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.630 247403 DEBUG oslo_concurrency.lockutils [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.630 247403 DEBUG nova.network.neutron [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:56:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-354eccca882927f5c9d7e8a7128fefe04d331201984abab4945985d27f86d40c-merged.mount: Deactivated successfully.
Jan 31 03:56:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:56:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862325408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.930 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.936 247403 DEBUG nova.compute.provider_tree [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:56:09 np0005603621 nova_compute[247399]: 2026-01-31 08:56:09.971 247403 DEBUG nova.scheduler.client.report [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.017 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.018 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.082 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.083 247403 DEBUG nova.network.neutron [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.109 247403 INFO nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.138 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:56:10 np0005603621 podman[379466]: 2026-01-31 08:56:10.174662025 +0000 UTC m=+2.032372720 container remove fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bouman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:56:10 np0005603621 systemd[1]: libpod-conmon-fd9ac49dfa88bad48ec71f581a57cbbca51fab6edefd7394bac9cb275d022bfa.scope: Deactivated successfully.
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.304 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.305 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.306 247403 INFO nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Creating image(s)#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.337 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.364 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.397 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.404 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.445 247403 DEBUG nova.policy [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.483 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.484 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.485 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.485 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.514 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.519 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.691885167 +0000 UTC m=+0.041245604 container create 063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:56:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:10.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:10 np0005603621 systemd[1]: Started libpod-conmon-063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f.scope.
Jan 31 03:56:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:10.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.672702291 +0000 UTC m=+0.022062888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.777880822 +0000 UTC m=+0.127241279 container init 063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.783949864 +0000 UTC m=+0.133310301 container start 063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:56:10 np0005603621 recursing_mcclintock[379782]: 167 167
Jan 31 03:56:10 np0005603621 systemd[1]: libpod-063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f.scope: Deactivated successfully.
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.835454739 +0000 UTC m=+0.184815176 container attach 063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.835892073 +0000 UTC m=+0.185252510 container died 063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mcclintock, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 03:56:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 500 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.5 MiB/s wr, 185 op/s
Jan 31 03:56:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-394391f8c479f4a41a4f662f7032c285b84d80bfe858ed5b373e3de57db89cda-merged.mount: Deactivated successfully.
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.883 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:10 np0005603621 podman[379762]: 2026-01-31 08:56:10.900593216 +0000 UTC m=+0.249953643 container remove 063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:56:10 np0005603621 systemd[1]: libpod-conmon-063b77eeef3c7f832ab3afde976670e280cf680db45705647295596c7935d92f.scope: Deactivated successfully.
Jan 31 03:56:10 np0005603621 nova_compute[247399]: 2026-01-31 08:56:10.959 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.018315123 +0000 UTC m=+0.032966982 container create 9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 03:56:11 np0005603621 systemd[1]: Started libpod-conmon-9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205.scope.
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.061 247403 DEBUG nova.objects.instance [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc48ee89c5340fe2871ae0f4fdb83ca15609e2372bca221e6bb95b01bed313fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc48ee89c5340fe2871ae0f4fdb83ca15609e2372bca221e6bb95b01bed313fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc48ee89c5340fe2871ae0f4fdb83ca15609e2372bca221e6bb95b01bed313fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc48ee89c5340fe2871ae0f4fdb83ca15609e2372bca221e6bb95b01bed313fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.082 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.082 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Ensure instance console log exists: /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.082 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.083 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.083 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.086856408 +0000 UTC m=+0.101508277 container init 9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.091547175 +0000 UTC m=+0.106199034 container start 9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.095416788 +0000 UTC m=+0.110068657 container attach 9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.004726794 +0000 UTC m=+0.019378673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.534 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:11 np0005603621 cool_shaw[379896]: {
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:    "0": [
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:        {
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "devices": [
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "/dev/loop3"
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            ],
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "lv_name": "ceph_lv0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "lv_size": "7511998464",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "name": "ceph_lv0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "tags": {
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.cluster_name": "ceph",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.crush_device_class": "",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.encrypted": "0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.osd_id": "0",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.type": "block",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:                "ceph.vdo": "0"
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            },
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "type": "block",
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:            "vg_name": "ceph_vg0"
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:        }
Jan 31 03:56:11 np0005603621 cool_shaw[379896]:    ]
Jan 31 03:56:11 np0005603621 cool_shaw[379896]: }
Jan 31 03:56:11 np0005603621 systemd[1]: libpod-9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205.scope: Deactivated successfully.
Jan 31 03:56:11 np0005603621 conmon[379896]: conmon 9e108efc30f8414ae7fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205.scope/container/memory.events
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.825057145 +0000 UTC m=+0.839709034 container died 9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.857 247403 DEBUG nova.network.neutron [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.858 247403 DEBUG nova.network.neutron [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dc48ee89c5340fe2871ae0f4fdb83ca15609e2372bca221e6bb95b01bed313fc-merged.mount: Deactivated successfully.
Jan 31 03:56:11 np0005603621 podman[379859]: 2026-01-31 08:56:11.8901165 +0000 UTC m=+0.904768359 container remove 9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 03:56:11 np0005603621 systemd[1]: libpod-conmon-9e108efc30f8414ae7fbb0c8376ed69f63a5d658d085d2f20e8334b57cb9f205.scope: Deactivated successfully.
Jan 31 03:56:11 np0005603621 nova_compute[247399]: 2026-01-31 08:56:11.927 247403 DEBUG oslo_concurrency.lockutils [req-97d2d07c-2a88-447a-a28f-ddf868e4d23f req-d943c5cf-41eb-46d3-b06e-701f509a9845 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.365557291 +0000 UTC m=+0.041174851 container create 6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:56:12 np0005603621 systemd[1]: Started libpod-conmon-6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0.scope.
Jan 31 03:56:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.428979943 +0000 UTC m=+0.104597533 container init 6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_napier, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.434041673 +0000 UTC m=+0.109659233 container start 6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:56:12 np0005603621 dazzling_napier[380076]: 167 167
Jan 31 03:56:12 np0005603621 systemd[1]: libpod-6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0.scope: Deactivated successfully.
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.441198039 +0000 UTC m=+0.116815599 container attach 6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_napier, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.347119348 +0000 UTC m=+0.022736908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.441466058 +0000 UTC m=+0.117083618 container died 6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_napier, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:56:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8f8d98a635ac07bbe81901846707d0ce24f93194217bf2ecf953a2c4cf71a841-merged.mount: Deactivated successfully.
Jan 31 03:56:12 np0005603621 podman[380060]: 2026-01-31 08:56:12.48050576 +0000 UTC m=+0.156123320 container remove 6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 03:56:12 np0005603621 systemd[1]: libpod-conmon-6b176c628672ea22460fc3cac1b95fca4b64a1cb111ca2ef3bbe87f11d0fa6d0.scope: Deactivated successfully.
Jan 31 03:56:12 np0005603621 nova_compute[247399]: 2026-01-31 08:56:12.535 247403 DEBUG nova.network.neutron [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Successfully created port: 2a244101-ce3d-48b5-be3c-d1d2063b883a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:56:12 np0005603621 podman[380100]: 2026-01-31 08:56:12.638816868 +0000 UTC m=+0.044858677 container create 96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:56:12 np0005603621 systemd[1]: Started libpod-conmon-96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a.scope.
Jan 31 03:56:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:12 np0005603621 podman[380100]: 2026-01-31 08:56:12.616772812 +0000 UTC m=+0.022814641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:56:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a493fa3cbc13a84b772fd1f19632165d28749df64937e1f790240a286fa3497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a493fa3cbc13a84b772fd1f19632165d28749df64937e1f790240a286fa3497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a493fa3cbc13a84b772fd1f19632165d28749df64937e1f790240a286fa3497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a493fa3cbc13a84b772fd1f19632165d28749df64937e1f790240a286fa3497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:12.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:12 np0005603621 podman[380100]: 2026-01-31 08:56:12.747531041 +0000 UTC m=+0.153572900 container init 96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_perlman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:56:12 np0005603621 podman[380100]: 2026-01-31 08:56:12.75607171 +0000 UTC m=+0.162113489 container start 96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_perlman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:56:12 np0005603621 podman[380100]: 2026-01-31 08:56:12.761822222 +0000 UTC m=+0.167864021 container attach 96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_perlman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 03:56:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 548 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.2 MiB/s wr, 206 op/s
Jan 31 03:56:12 np0005603621 nova_compute[247399]: 2026-01-31 08:56:12.958 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]: {
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:        "osd_id": 0,
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:        "type": "bluestore"
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]:    }
Jan 31 03:56:13 np0005603621 sweet_perlman[380116]: }
Jan 31 03:56:13 np0005603621 systemd[1]: libpod-96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a.scope: Deactivated successfully.
Jan 31 03:56:13 np0005603621 podman[380100]: 2026-01-31 08:56:13.569553836 +0000 UTC m=+0.975595615 container died 96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:56:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8a493fa3cbc13a84b772fd1f19632165d28749df64937e1f790240a286fa3497-merged.mount: Deactivated successfully.
Jan 31 03:56:13 np0005603621 podman[380100]: 2026-01-31 08:56:13.610442876 +0000 UTC m=+1.016484655 container remove 96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_perlman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:56:13 np0005603621 systemd[1]: libpod-conmon-96701c70b81ae998452dcba71f43db1785e3e333e9a8a8648e295270a6ddd96a.scope: Deactivated successfully.
Jan 31 03:56:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:56:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:56:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 216f9478-956a-4f6a-a788-a98d3a95a994 does not exist
Jan 31 03:56:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 34e8240b-0c33-41ee-85b4-f960df875c73 does not exist
Jan 31 03:56:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 32502bda-5c2b-4a5c-bc32-bfaa4cd32601 does not exist
Jan 31 03:56:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.056 247403 DEBUG oslo_concurrency.lockutils [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.057 247403 DEBUG oslo_concurrency.lockutils [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.193 247403 INFO nova.compute.manager [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Detaching volume 9bc3cb04-a48c-40bd-81ea-100007bee62c#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.286 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.428 247403 INFO nova.virt.block_device [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attempting to driver detach volume 9bc3cb04-a48c-40bd-81ea-100007bee62c from mountpoint /dev/vdb#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.437 247403 DEBUG nova.virt.libvirt.driver [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Attempting to detach device vdb from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.438 247403 DEBUG nova.virt.libvirt.guest [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-9bc3cb04-a48c-40bd-81ea-100007bee62c">
Jan 31 03:56:14 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <serial>9bc3cb04-a48c-40bd-81ea-100007bee62c</serial>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:56:14 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.445 247403 INFO nova.virt.libvirt.driver [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Successfully detached device vdb from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the persistent domain config.#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.446 247403 DEBUG nova.virt.libvirt.driver [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.447 247403 DEBUG nova.virt.libvirt.guest [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-9bc3cb04-a48c-40bd-81ea-100007bee62c">
Jan 31 03:56:14 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <serial>9bc3cb04-a48c-40bd-81ea-100007bee62c</serial>
Jan 31 03:56:14 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 03:56:14 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:56:14 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.482 247403 DEBUG nova.network.neutron [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Successfully updated port: 2a244101-ce3d-48b5-be3c-d1d2063b883a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.552 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849774.5522773, c91674d0-7f78-4e09-b54e-e46f7fbd65a3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.554 247403 DEBUG nova.virt.libvirt.driver [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.557 247403 INFO nova.virt.libvirt.driver [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Successfully detached device vdb from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the live domain config.#033[00m
Jan 31 03:56:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:14.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:14.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.2 MiB/s wr, 199 op/s
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.947 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.948 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.948 247403 DEBUG nova.network.neutron [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.973 247403 DEBUG nova.compute.manager [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-changed-2a244101-ce3d-48b5-be3c-d1d2063b883a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.973 247403 DEBUG nova.compute.manager [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing instance network info cache due to event network-changed-2a244101-ce3d-48b5-be3c-d1d2063b883a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:56:14 np0005603621 nova_compute[247399]: 2026-01-31 08:56:14.973 247403 DEBUG oslo_concurrency.lockutils [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:15 np0005603621 nova_compute[247399]: 2026-01-31 08:56:15.281 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:16 np0005603621 nova_compute[247399]: 2026-01-31 08:56:16.133 247403 DEBUG nova.objects.instance [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'flavor' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:16 np0005603621 nova_compute[247399]: 2026-01-31 08:56:16.157 247403 DEBUG nova.network.neutron [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:56:16 np0005603621 nova_compute[247399]: 2026-01-31 08:56:16.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:16 np0005603621 nova_compute[247399]: 2026-01-31 08:56:16.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:56:16 np0005603621 nova_compute[247399]: 2026-01-31 08:56:16.316 247403 DEBUG oslo_concurrency.lockutils [None req-87aef88b-fe68-4c45-be20-1323ab10ac0c cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:16 np0005603621 nova_compute[247399]: 2026-01-31 08:56:16.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:16.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
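The radosgw `beast:` access lines above follow a fixed field order: request pointer, client address, user, timestamp, quoted request line, HTTP status, byte count, then a trailing latency. A small parser written under the assumption that this layout holds (the field order is inferred from the lines above, not taken from a documented radosgw log schema):

```python
import re

# Field layout inferred from the beast access lines in this log.
BEAST_RE = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of fields from one beast access-log line, or None."""
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None

rec = parse_beast('beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous '
                  '[31/Jan/2026:08:56:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
                  'latency=0.001000032s')
```

Such a parser makes it straightforward to aggregate, say, latency by client across the repeated health-check HEAD requests visible here.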
Jan 31 03:56:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 582 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.2 MiB/s wr, 199 op/s
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.258 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 03:56:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.480 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.480 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.480 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.481 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:17 np0005603621 nova_compute[247399]: 2026-01-31 08:56:17.961 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.017 247403 DEBUG nova.network.neutron [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
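The instance_info_cache update above embeds the port's network_info as nested JSON (port → network → subnets → ips). A small helper that walks that structure to list fixed IPs per port, run here against a trimmed excerpt of the logged entry (only the fields the helper touches are kept):

```python
import json

# Trimmed excerpt of the instance_info_cache entry logged above.
network_info = json.loads('''
[{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a",
  "address": "fa:16:3e:2e:85:a1",
  "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.4", "type": "fixed"}]}]}}]
''')

def fixed_ips(nw_info):
    """Yield (port_id, mac, ip) for every fixed IP in a network_info list."""
    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip.get("type") == "fixed":
                    yield vif["id"], vif["address"], ip["address"]

rows = list(fixed_ips(network_info))
```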
Jan 31 03:56:18 np0005603621 podman[380227]: 2026-01-31 08:56:18.213243553 +0000 UTC m=+0.070118425 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:56:18 np0005603621 podman[380226]: 2026-01-31 08:56:18.213258764 +0000 UTC m=+0.073447910 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.279 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.280 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Instance network_info: |[{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.280 247403 DEBUG oslo_concurrency.lockutils [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.280 247403 DEBUG nova.network.neutron [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing network info cache for port 2a244101-ce3d-48b5-be3c-d1d2063b883a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.283 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Start _get_guest_xml network_info=[{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.288 247403 WARNING nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.297 247403 DEBUG nova.virt.libvirt.host [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.298 247403 DEBUG nova.virt.libvirt.host [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.301 247403 DEBUG nova.virt.libvirt.host [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.301 247403 DEBUG nova.virt.libvirt.host [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.302 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.303 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.303 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.303 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.304 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.304 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.304 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.304 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.304 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.305 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.305 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.305 247403 DEBUG nova.virt.hardware [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
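The topology search above reports limits of sockets=65536, cores=65536, threads=65536 and, for this 1-vCPU flavor, exactly one candidate, VirtCPUTopology(cores=1,sockets=1,threads=1). The underlying constraint is that sockets × cores × threads must equal the vCPU count, subject to the per-axis limits. A simplified sketch of that enumeration (an illustration of the constraint, not nova's actual _get_possible_cpu_topologies):

```python
import itertools

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) triples whose product equals vcpus,
    within the given per-axis limits (defaults mirror the limits logged above)."""
    topos = []
    for s, c, t in itertools.product(range(1, min(max_sockets, vcpus) + 1),
                                     range(1, min(max_cores, vcpus) + 1),
                                     range(1, min(max_threads, vcpus) + 1)):
        if s * c * t == vcpus:
            topos.append((s, c, t))
    return topos
```

For 1 vCPU the only solution is (1, 1, 1), matching the single possible topology in the log; for 4 vCPUs there are six ordered factorizations.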
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.308 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:18.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:56:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656617891' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.733 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:18.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.763 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:18 np0005603621 nova_compute[247399]: 2026-01-31 08:56:18.767 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:56:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60662481' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:56:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:56:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/60662481' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:56:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 6.2 MiB/s wr, 236 op/s
Jan 31 03:56:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:56:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032068566' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.194 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.196 247403 DEBUG nova.virt.libvirt.vif [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:56:10Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.197 247403 DEBUG nova.network.os_vif_util [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.198 247403 DEBUG nova.network.os_vif_util [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.200 247403 DEBUG nova.objects.instance [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.238 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <uuid>f3adcdf0-ca43-48d8-95a3-8f530868aee2</uuid>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <name>instance-000000b6</name>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:56:18</nova:creationTime>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <entry name="serial">f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <entry name="uuid">f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:2e:85:a1"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <target dev="tap2a244101-ce"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log" append="off"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:56:19 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:56:19 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:56:19 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:56:19 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.238 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Preparing to wait for external event network-vif-plugged-2a244101-ce3d-48b5-be3c-d1d2063b883a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.239 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.239 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.239 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.240 247403 DEBUG nova.virt.libvirt.vif [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:56:10Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.240 247403 DEBUG nova.network.os_vif_util [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.241 247403 DEBUG nova.network.os_vif_util [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.241 247403 DEBUG os_vif [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.242 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.242 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.244 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.245 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a244101-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.245 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a244101-ce, col_values=(('external_ids', {'iface-id': '2a244101-ce3d-48b5-be3c-d1d2063b883a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:85:a1', 'vm-uuid': 'f3adcdf0-ca43-48d8-95a3-8f530868aee2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:19 np0005603621 NetworkManager[49013]: <info>  [1769849779.2476] manager: (tap2a244101-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.249 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.252 247403 INFO os_vif [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce')#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.725 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.725 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.726 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:2e:85:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.726 247403 INFO nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Using config drive#033[00m
Jan 31 03:56:19 np0005603621 nova_compute[247399]: 2026-01-31 08:56:19.751 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.669 247403 DEBUG oslo_concurrency.lockutils [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.669 247403 DEBUG oslo_concurrency.lockutils [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.689 247403 INFO nova.compute.manager [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Detaching volume f694d315-eb2a-436c-b06d-0d95676e69e3#033[00m
Jan 31 03:56:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:20.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 506 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.8 MiB/s wr, 213 op/s
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.868 247403 INFO nova.virt.block_device [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Attempting to driver detach volume f694d315-eb2a-436c-b06d-0d95676e69e3 from mountpoint /dev/vdc#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.878 247403 DEBUG nova.virt.libvirt.driver [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Attempting to detach device vdc from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.879 247403 DEBUG nova.virt.libvirt.guest [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-f694d315-eb2a-436c-b06d-0d95676e69e3">
Jan 31 03:56:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <serial>f694d315-eb2a-436c-b06d-0d95676e69e3</serial>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:56:20 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.884 247403 INFO nova.virt.libvirt.driver [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Successfully detached device vdc from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the persistent domain config.#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.885 247403 DEBUG nova.virt.libvirt.driver [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.885 247403 DEBUG nova.virt.libvirt.guest [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-f694d315-eb2a-436c-b06d-0d95676e69e3">
Jan 31 03:56:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  </source>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <target dev="vdc" bus="virtio"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <serial>f694d315-eb2a-436c-b06d-0d95676e69e3</serial>
Jan 31 03:56:20 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 31 03:56:20 np0005603621 nova_compute[247399]: </disk>
Jan 31 03:56:20 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.988 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849780.9881895, c91674d0-7f78-4e09-b54e-e46f7fbd65a3 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.989 247403 DEBUG nova.virt.libvirt.driver [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:56:20 np0005603621 nova_compute[247399]: 2026-01-31 08:56:20.992 247403 INFO nova.virt.libvirt.driver [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Successfully detached device vdc from instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 from the live domain config.#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.074 247403 INFO nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Creating config drive at /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/disk.config#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.078 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpibhb7dlu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.208 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpibhb7dlu" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.232 247403 DEBUG nova.storage.rbd_utils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.235 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/disk.config f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.254 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.373 247403 DEBUG oslo_concurrency.processutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/disk.config f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.374 247403 INFO nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Deleting local config drive /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/disk.config because it was imported into RBD.#033[00m
Jan 31 03:56:21 np0005603621 NetworkManager[49013]: <info>  [1769849781.4084] manager: (tap2a244101-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Jan 31 03:56:21 np0005603621 kernel: tap2a244101-ce: entered promiscuous mode
Jan 31 03:56:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:21Z|00759|binding|INFO|Claiming lport 2a244101-ce3d-48b5-be3c-d1d2063b883a for this chassis.
Jan 31 03:56:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:21Z|00760|binding|INFO|2a244101-ce3d-48b5-be3c-d1d2063b883a: Claiming fa:16:3e:2e:85:a1 10.100.0.4
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.411 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.417 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.417 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.418 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.419 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:21Z|00761|binding|INFO|Setting lport 2a244101-ce3d-48b5-be3c-d1d2063b883a ovn-installed in OVS
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.426 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:21 np0005603621 systemd-udevd[380434]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:56:21 np0005603621 systemd-machined[212769]: New machine qemu-91-instance-000000b6.
Jan 31 03:56:21 np0005603621 NetworkManager[49013]: <info>  [1769849781.4428] device (tap2a244101-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:56:21 np0005603621 NetworkManager[49013]: <info>  [1769849781.4435] device (tap2a244101-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:56:21 np0005603621 systemd[1]: Started Virtual Machine qemu-91-instance-000000b6.
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.516 247403 DEBUG nova.objects.instance [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'flavor' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.538 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.622 247403 DEBUG nova.network.neutron [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updated VIF entry in instance network info cache for port 2a244101-ce3d-48b5-be3c-d1d2063b883a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.623 247403 DEBUG nova.network.neutron [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:21Z|00762|binding|INFO|Setting lport 2a244101-ce3d-48b5-be3c-d1d2063b883a up in Southbound
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.627 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:85:a1 10.100.0.4'], port_security=['fa:16:3e:2e:85:a1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f3adcdf0-ca43-48d8-95a3-8f530868aee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07480b93-f6d1-448c-a034-284ec264fb0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5857c371-e2b6-47a4-a6ea-533a4985822c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e9e42e-3b81-49cc-a8dd-864c96a6e232, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=2a244101-ce3d-48b5-be3c-d1d2063b883a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.629 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 2a244101-ce3d-48b5-be3c-d1d2063b883a in datapath 07480b93-f6d1-448c-a034-284ec264fb0a bound to our chassis#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.631 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 07480b93-f6d1-448c-a034-284ec264fb0a#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.638 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e4eb4878-f87c-4e5a-8ca5-9d19a0d4122c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.639 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap07480b93-f1 in ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.641 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap07480b93-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.641 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aa497176-3a88-4765-bcf8-11fcc03359d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.642 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0ead660e-4bd7-413d-a36b-b2c048e57259]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.653 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[535841c6-8f01-40b3-be5f-c8a67ea1c177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.665 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6b37682f-6992-4cdd-8e3e-c93db3df8e13]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.685 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb126d2-7e8d-4fb1-baa2-36c927ceec84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 NetworkManager[49013]: <info>  [1769849781.6906] manager: (tap07480b93-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/345)
Jan 31 03:56:21 np0005603621 systemd-udevd[380437]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.691 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[38fee98a-2014-4424-ba4f-36c06acf3a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.716 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[11862d50-875e-44ed-b011-b8e8bd400ea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.720 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b49c4d77-355f-43ff-b1ee-560dd55b3d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 NetworkManager[49013]: <info>  [1769849781.7334] device (tap07480b93-f0): carrier: link connected
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.738 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[384608bb-7e28-4cb3-a1e7-87fe30f30feb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.750 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d92870e-a510-480a-83bb-96e5021508ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07480b93-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:24:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908724, 'reachable_time': 16152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380468, 'error': None, 'target': 'ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.763 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39a65983-9bd5-49b4-b3db-760ce265726f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:24f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 908724, 'tstamp': 908724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380469, 'error': None, 'target': 'ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.778 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[af54ba5b-0332-46af-a2de-1c74d22a69a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap07480b93-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:24:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908724, 'reachable_time': 16152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380470, 'error': None, 'target': 'ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.802 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9556a28e-aeb0-42e2-86ea-b8524b308c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.820 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.821 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.821 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.821 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.822 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.846 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2359abdb-b839-4a2b-8e1c-f22053bebcda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.847 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07480b93-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.848 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.848 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07480b93-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.849 247403 DEBUG oslo_concurrency.lockutils [req-391746e8-ca0d-4da5-a80f-343e00e74166 req-87a45133-8587-47e3-864f-2b69862e16eb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:21 np0005603621 kernel: tap07480b93-f0: entered promiscuous mode
Jan 31 03:56:21 np0005603621 NetworkManager[49013]: <info>  [1769849781.8519] manager: (tap07480b93-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.852 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.854 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap07480b93-f0, col_values=(('external_ids', {'iface-id': 'd0f05c75-f95e-41bc-ada2-2185ab4deb30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:21Z|00763|binding|INFO|Releasing lport d0f05c75-f95e-41bc-ada2-2185ab4deb30 from this chassis (sb_readonly=0)
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.855 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.857 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/07480b93-f6d1-448c-a034-284ec264fb0a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/07480b93-f6d1-448c-a034-284ec264fb0a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.858 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[40597a0c-e42a-4708-b965-28cb258f8113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.859 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-07480b93-f6d1-448c-a034-284ec264fb0a
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/07480b93-f6d1-448c-a034-284ec264fb0a.pid.haproxy
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 07480b93-f6d1-448c-a034-284ec264fb0a
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:56:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:21.860 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a', 'env', 'PROCESS_TAG=haproxy-07480b93-f6d1-448c-a034-284ec264fb0a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/07480b93-f6d1-448c-a034-284ec264fb0a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.862 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.898 247403 DEBUG oslo_concurrency.lockutils [None req-ed96bc84-22ef-4547-b128-bae44e7c20d6 cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.965 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849781.9644976, f3adcdf0-ca43-48d8-95a3-8f530868aee2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:56:21 np0005603621 nova_compute[247399]: 2026-01-31 08:56:21.965 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] VM Started (Lifecycle Event)#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.004 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.008 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849781.9646034, f3adcdf0-ca43-48d8-95a3-8f530868aee2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.008 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.037 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.040 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.164 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:56:22 np0005603621 podman[380561]: 2026-01-31 08:56:22.173871006 +0000 UTC m=+0.039489468 container create 5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:56:22 np0005603621 systemd[1]: Started libpod-conmon-5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527.scope.
Jan 31 03:56:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:56:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/91788746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:56:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:56:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5867dc910bc34aa68752d01dc857f6add0c443eb1fd03fac8db5c35a9202829c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.243 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:22 np0005603621 podman[380561]: 2026-01-31 08:56:22.248322157 +0000 UTC m=+0.113940619 container init 5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:56:22 np0005603621 podman[380561]: 2026-01-31 08:56:22.152803451 +0000 UTC m=+0.018421933 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:56:22 np0005603621 podman[380561]: 2026-01-31 08:56:22.254700648 +0000 UTC m=+0.120319110 container start 5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:56:22 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [NOTICE]   (380583) : New worker (380586) forked
Jan 31 03:56:22 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [NOTICE]   (380583) : Loading success.
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.337 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.338 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000b6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.342 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.342 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:56:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.422 247403 DEBUG nova.compute.manager [req-1955fce6-8276-4d5c-8e7d-af64daa8710a req-130c7dc7-9402-4b8b-8a9e-9a8b7e4fba81 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-plugged-2a244101-ce3d-48b5-be3c-d1d2063b883a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.422 247403 DEBUG oslo_concurrency.lockutils [req-1955fce6-8276-4d5c-8e7d-af64daa8710a req-130c7dc7-9402-4b8b-8a9e-9a8b7e4fba81 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.423 247403 DEBUG oslo_concurrency.lockutils [req-1955fce6-8276-4d5c-8e7d-af64daa8710a req-130c7dc7-9402-4b8b-8a9e-9a8b7e4fba81 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.423 247403 DEBUG oslo_concurrency.lockutils [req-1955fce6-8276-4d5c-8e7d-af64daa8710a req-130c7dc7-9402-4b8b-8a9e-9a8b7e4fba81 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.423 247403 DEBUG nova.compute.manager [req-1955fce6-8276-4d5c-8e7d-af64daa8710a req-130c7dc7-9402-4b8b-8a9e-9a8b7e4fba81 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Processing event network-vif-plugged-2a244101-ce3d-48b5-be3c-d1d2063b883a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.423 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.426 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849782.4266443, f3adcdf0-ca43-48d8-95a3-8f530868aee2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.427 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.429 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.432 247403 INFO nova.virt.libvirt.driver [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Instance spawned successfully.#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.433 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.475 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.476 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.476 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.477 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.477 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.477 247403 DEBUG nova.virt.libvirt.driver [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.487 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.490 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.503 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.504 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3845MB free_disk=20.92169189453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.504 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.505 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.617 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.678 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.679 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance f3adcdf0-ca43-48d8-95a3-8f530868aee2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.679 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.679 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:56:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:22.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.787 247403 INFO nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Took 12.48 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.788 247403 DEBUG nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.799 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:56:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 730 KiB/s rd, 4.2 MiB/s wr, 202 op/s
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.915 247403 INFO nova.compute.manager [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Took 13.82 seconds to build instance.#033[00m
Jan 31 03:56:22 np0005603621 nova_compute[247399]: 2026-01-31 08:56:22.971 247403 DEBUG oslo_concurrency.lockutils [None req-879496e8-18ed-4717-bfc0-535a2dc1a07b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:56:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/241075575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:56:23 np0005603621 nova_compute[247399]: 2026-01-31 08:56:23.219 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:56:23 np0005603621 nova_compute[247399]: 2026-01-31 08:56:23.222 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:56:23 np0005603621 nova_compute[247399]: 2026-01-31 08:56:23.263 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:56:23 np0005603621 nova_compute[247399]: 2026-01-31 08:56:23.317 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:56:23 np0005603621 nova_compute[247399]: 2026-01-31 08:56:23.318 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:23.854 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:56:23 np0005603621 nova_compute[247399]: 2026-01-31 08:56:23.855 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:23 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:23.856 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.657 247403 DEBUG nova.compute.manager [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-plugged-2a244101-ce3d-48b5-be3c-d1d2063b883a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.657 247403 DEBUG oslo_concurrency.lockutils [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.658 247403 DEBUG oslo_concurrency.lockutils [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.658 247403 DEBUG oslo_concurrency.lockutils [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.658 247403 DEBUG nova.compute.manager [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] No waiting events found dispatching network-vif-plugged-2a244101-ce3d-48b5-be3c-d1d2063b883a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.658 247403 WARNING nova.compute.manager [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received unexpected event network-vif-plugged-2a244101-ce3d-48b5-be3c-d1d2063b883a for instance with vm_state active and task_state None.#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.658 247403 DEBUG nova.compute.manager [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.658 247403 DEBUG nova.compute.manager [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.659 247403 DEBUG oslo_concurrency.lockutils [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.659 247403 DEBUG oslo_concurrency.lockutils [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:24 np0005603621 nova_compute[247399]: 2026-01-31 08:56:24.659 247403 DEBUG nova.network.neutron [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:56:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:24.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 128 op/s
Jan 31 03:56:26 np0005603621 nova_compute[247399]: 2026-01-31 08:56:26.097 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:26 np0005603621 nova_compute[247399]: 2026-01-31 08:56:26.097 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:26 np0005603621 nova_compute[247399]: 2026-01-31 08:56:26.541 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:56:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:26.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:56:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:26.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 276 KiB/s wr, 107 op/s
Jan 31 03:56:27 np0005603621 nova_compute[247399]: 2026-01-31 08:56:27.350 247403 DEBUG nova.network.neutron [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:56:27 np0005603621 nova_compute[247399]: 2026-01-31 08:56:27.350 247403 DEBUG nova.network.neutron [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:27 np0005603621 nova_compute[247399]: 2026-01-31 08:56:27.811 247403 DEBUG oslo_concurrency.lockutils [req-e7c350b8-de6e-4da1-aec2-c00137bf3bbf req-bc662886-bf6b-44a0-bb4c-a2ed65920653 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 03:56:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:28.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:28.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:28.858 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:56:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 280 KiB/s wr, 154 op/s
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.940 247403 DEBUG nova.compute.manager [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-changed-2a244101-ce3d-48b5-be3c-d1d2063b883a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.940 247403 DEBUG nova.compute.manager [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing instance network info cache due to event network-changed-2a244101-ce3d-48b5-be3c-d1d2063b883a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.941 247403 DEBUG oslo_concurrency.lockutils [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.941 247403 DEBUG oslo_concurrency.lockutils [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:28 np0005603621 nova_compute[247399]: 2026-01-31 08:56:28.941 247403 DEBUG nova.network.neutron [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing network info cache for port 2a244101-ce3d-48b5-be3c-d1d2063b883a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:56:29 np0005603621 nova_compute[247399]: 2026-01-31 08:56:29.285 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:30.538 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:30.538 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:56:30.539 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:56:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 227 KiB/s wr, 119 op/s
Jan 31 03:56:31 np0005603621 nova_compute[247399]: 2026-01-31 08:56:31.544 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:32 np0005603621 nova_compute[247399]: 2026-01-31 08:56:32.498 247403 DEBUG nova.network.neutron [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updated VIF entry in instance network info cache for port 2a244101-ce3d-48b5-be3c-d1d2063b883a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:56:32 np0005603621 nova_compute[247399]: 2026-01-31 08:56:32.499 247403 DEBUG nova.network.neutron [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:56:32 np0005603621 nova_compute[247399]: 2026-01-31 08:56:32.610 247403 DEBUG oslo_concurrency.lockutils [req-f87d096e-2564-4319-8ada-7ce66f211d01 req-71ec89e5-d0f2-41da-9344-0a0746a32e3c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:56:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 213 KiB/s wr, 110 op/s
Jan 31 03:56:34 np0005603621 nova_compute[247399]: 2026-01-31 08:56:34.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:34.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:34.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 510 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1010 KiB/s wr, 96 op/s
Jan 31 03:56:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:35Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:85:a1 10.100.0.4
Jan 31 03:56:35 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:35Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:85:a1 10.100.0.4
Jan 31 03:56:36 np0005603621 nova_compute[247399]: 2026-01-31 08:56:36.547 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:36.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:36.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 510 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1006 KiB/s wr, 63 op/s
Jan 31 03:56:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:56:38
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.log', '.mgr', 'default.rgw.control', 'backups', 'vms', '.rgw.root', 'default.rgw.meta']
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:56:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:38.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:38.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 482 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 142 op/s
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:56:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:56:39 np0005603621 nova_compute[247399]: 2026-01-31 08:56:39.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:40.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:40.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 31 03:56:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:56:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3280806946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:56:41 np0005603621 nova_compute[247399]: 2026-01-31 08:56:41.548 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:41 np0005603621 nova_compute[247399]: 2026-01-31 08:56:41.826 247403 INFO nova.compute.manager [None req-5f893d72-543e-48aa-a1f3-3b54627c6f1b d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Get console output#033[00m
Jan 31 03:56:41 np0005603621 nova_compute[247399]: 2026-01-31 08:56:41.837 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 03:56:42 np0005603621 nova_compute[247399]: 2026-01-31 08:56:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:56:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:42.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 477 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 360 KiB/s rd, 2.7 MiB/s wr, 115 op/s
Jan 31 03:56:44 np0005603621 nova_compute[247399]: 2026-01-31 08:56:44.336 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 497 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 3.7 MiB/s wr, 112 op/s
Jan 31 03:56:46 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:46Z|00764|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:56:46 np0005603621 ovn_controller[149152]: 2026-01-31T08:56:46Z|00765|binding|INFO|Releasing lport d0f05c75-f95e-41bc-ada2-2185ab4deb30 from this chassis (sb_readonly=0)
Jan 31 03:56:46 np0005603621 nova_compute[247399]: 2026-01-31 08:56:46.444 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:46 np0005603621 nova_compute[247399]: 2026-01-31 08:56:46.583 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:46.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:46.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 497 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 243 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Jan 31 03:56:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:48 np0005603621 podman[380681]: 2026-01-31 08:56:48.486783421 +0000 UTC m=+0.045537358 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 03:56:48 np0005603621 podman[380682]: 2026-01-31 08:56:48.509602792 +0000 UTC m=+0.067618726 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:56:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:48.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:48.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 253 KiB/s rd, 2.9 MiB/s wr, 113 op/s
Jan 31 03:56:49 np0005603621 nova_compute[247399]: 2026-01-31 08:56:49.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003179962311557789 of space, bias 1.0, pg target 0.9539886934673366 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008663107202680067 of space, bias 1.0, pg target 2.59893216080402 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:56:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:56:50 np0005603621 nova_compute[247399]: 2026-01-31 08:56:50.204 247403 DEBUG oslo_concurrency.lockutils [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "interface-f3adcdf0-ca43-48d8-95a3-8f530868aee2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:56:50 np0005603621 nova_compute[247399]: 2026-01-31 08:56:50.205 247403 DEBUG oslo_concurrency.lockutils [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-f3adcdf0-ca43-48d8-95a3-8f530868aee2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:56:50 np0005603621 nova_compute[247399]: 2026-01-31 08:56:50.206 247403 DEBUG nova.objects.instance [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'flavor' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:50.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:50.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 31 03:56:50 np0005603621 nova_compute[247399]: 2026-01-31 08:56:50.994 247403 DEBUG nova.objects.instance [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_requests' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:56:51 np0005603621 nova_compute[247399]: 2026-01-31 08:56:51.241 247403 DEBUG nova.network.neutron [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:56:51 np0005603621 nova_compute[247399]: 2026-01-31 08:56:51.491 247403 DEBUG nova.policy [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:56:51 np0005603621 nova_compute[247399]: 2026-01-31 08:56:51.585 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:52.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:52.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 31 03:56:54 np0005603621 nova_compute[247399]: 2026-01-31 08:56:54.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:54.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1.2 MiB/s wr, 21 op/s
Jan 31 03:56:56 np0005603621 nova_compute[247399]: 2026-01-31 08:56:56.401 247403 DEBUG nova.network.neutron [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Successfully created port: 126b3679-04bf-4842-a972-0b628159b523 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:56:56 np0005603621 nova_compute[247399]: 2026-01-31 08:56:56.586 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:56:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:56.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:56.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 284 KiB/s wr, 19 op/s
Jan 31 03:56:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:56:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:56:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:56:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:56:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:56:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:56:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:56:58.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:56:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 973 KiB/s rd, 285 KiB/s wr, 54 op/s
Jan 31 03:56:58 np0005603621 nova_compute[247399]: 2026-01-31 08:56:58.893 247403 DEBUG nova.network.neutron [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Successfully updated port: 126b3679-04bf-4842-a972-0b628159b523 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:56:58 np0005603621 nova_compute[247399]: 2026-01-31 08:56:58.929 247403 DEBUG oslo_concurrency.lockutils [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:58 np0005603621 nova_compute[247399]: 2026-01-31 08:56:58.929 247403 DEBUG oslo_concurrency.lockutils [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:56:58 np0005603621 nova_compute[247399]: 2026-01-31 08:56:58.930 247403 DEBUG nova.network.neutron [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:56:59 np0005603621 nova_compute[247399]: 2026-01-31 08:56:59.143 247403 DEBUG nova.compute.manager [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-changed-126b3679-04bf-4842-a972-0b628159b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:56:59 np0005603621 nova_compute[247399]: 2026-01-31 08:56:59.144 247403 DEBUG nova.compute.manager [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing instance network info cache due to event network-changed-126b3679-04bf-4842-a972-0b628159b523. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:56:59 np0005603621 nova_compute[247399]: 2026-01-31 08:56:59.144 247403 DEBUG oslo_concurrency.lockutils [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:56:59 np0005603621 nova_compute[247399]: 2026-01-31 08:56:59.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:00.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:00.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Jan 31 03:57:01 np0005603621 nova_compute[247399]: 2026-01-31 08:57:01.588 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:02.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 75 op/s
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.102 247403 DEBUG nova.network.neutron [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.131 247403 DEBUG oslo_concurrency.lockutils [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.132 247403 DEBUG oslo_concurrency.lockutils [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.132 247403 DEBUG nova.network.neutron [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing network info cache for port 126b3679-04bf-4842-a972-0b628159b523 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.135 247403 DEBUG nova.virt.libvirt.vif [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.135 247403 DEBUG nova.network.os_vif_util [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.136 247403 DEBUG nova.network.os_vif_util [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.137 247403 DEBUG os_vif [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.137 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.137 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.138 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.141 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap126b3679-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.141 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap126b3679-04, col_values=(('external_ids', {'iface-id': '126b3679-04bf-4842-a972-0b628159b523', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:0f:0c', 'vm-uuid': 'f3adcdf0-ca43-48d8-95a3-8f530868aee2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.142 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.1432] manager: (tap126b3679-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.149 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.150 247403 INFO os_vif [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04')#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.151 247403 DEBUG nova.virt.libvirt.vif [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.151 247403 DEBUG nova.network.os_vif_util [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.152 247403 DEBUG nova.network.os_vif_util [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.154 247403 DEBUG nova.virt.libvirt.guest [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] attach device xml: <interface type="ethernet">
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:dc:0f:0c"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <target dev="tap126b3679-04"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:57:03 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 03:57:03 np0005603621 kernel: tap126b3679-04: entered promiscuous mode
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.1645] manager: (tap126b3679-04): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.165 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:03Z|00766|binding|INFO|Claiming lport 126b3679-04bf-4842-a972-0b628159b523 for this chassis.
Jan 31 03:57:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:03Z|00767|binding|INFO|126b3679-04bf-4842-a972-0b628159b523: Claiming fa:16:3e:dc:0f:0c 10.100.0.30
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.168 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.176 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:0f:0c 10.100.0.30'], port_security=['fa:16:3e:dc:0f:0c 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'f3adcdf0-ca43-48d8-95a3-8f530868aee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74db905e-fa9d-4aed-b533-40cafba0b81b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=606bf152-67c8-4b8f-8605-e543d471433f, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=126b3679-04bf-4842-a972-0b628159b523) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.178 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 126b3679-04bf-4842-a972-0b628159b523 in datapath 74db905e-fa9d-4aed-b533-40cafba0b81b bound to our chassis#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.180 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74db905e-fa9d-4aed-b533-40cafba0b81b#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.181 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:03Z|00768|binding|INFO|Setting lport 126b3679-04bf-4842-a972-0b628159b523 ovn-installed in OVS
Jan 31 03:57:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:03Z|00769|binding|INFO|Setting lport 126b3679-04bf-4842-a972-0b628159b523 up in Southbound
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.185 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 systemd-udevd[380789]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.189 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[21b36215-eee7-47d7-8e72-c18fc32f5d35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.191 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74db905e-f1 in ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.2049] device (tap126b3679-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.2053] device (tap126b3679-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.252 247403 DEBUG nova.virt.libvirt.driver [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.253 247403 DEBUG nova.virt.libvirt.driver [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.253 247403 DEBUG nova.virt.libvirt.driver [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:2e:85:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.253 247403 DEBUG nova.virt.libvirt.driver [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:dc:0f:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.284 247403 DEBUG nova.virt.libvirt.guest [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:03</nova:creationTime>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:03 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    <nova:port uuid="126b3679-04bf-4842-a972-0b628159b523">
Jan 31 03:57:03 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:03 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:03 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:03 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.285 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74db905e-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.285 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[be646e7d-90e6-4ef8-8c80-13d2b918b03d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.286 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0f4387-aecb-4b65-b3c1-82b19aaa0d4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.296 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6f2632-7301-44a5-b6f5-b4d7c2ff5183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.307 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b8654459-57fb-4e36-af38-89d07b0333a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.310 247403 DEBUG oslo_concurrency.lockutils [None req-8a5f8c1c-b862-42b6-9663-579cdd128309 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-f3adcdf0-ca43-48d8-95a3-8f530868aee2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.325 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8216e772-5bd0-4ce5-9ce0-3a1eb3e35a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.330 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[92a4a4be-f9c6-4250-ba07-f81f03e79ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.3309] manager: (tap74db905e-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/349)
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.351 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8f52e7e1-a8d7-40fd-b311-8d0528d0427c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.354 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[05d08ded-84a7-452a-bb18-677b1ed0f1ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.3713] device (tap74db905e-f0): carrier: link connected
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.375 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[325f0cba-282e-4a20-9275-9b9918dc832c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.388 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ea882ce8-ec81-4494-8e6e-6b1e95ee877d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74db905e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:66:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912888, 'reachable_time': 16444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380815, 'error': None, 'target': 'ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.399 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa6c06b-e14a-498a-a8bf-5686ee7676ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:6697'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 912888, 'tstamp': 912888}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380816, 'error': None, 'target': 'ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.413 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4b3c57-896e-4538-af18-3a57dda907ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74db905e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:66:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912888, 'reachable_time': 16444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380817, 'error': None, 'target': 'ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.438 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[45a15725-0e40-45d5-beb9-e822eecb9648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.481 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4757cfa8-6714-4842-8969-36f536918970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.483 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74db905e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.483 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.483 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74db905e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:03 np0005603621 NetworkManager[49013]: <info>  [1769849823.5342] manager: (tap74db905e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 kernel: tap74db905e-f0: entered promiscuous mode
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.535 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.538 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74db905e-f0, col_values=(('external_ids', {'iface-id': 'af31a41e-3028-45e5-8b17-4edec26770fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:03 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:03Z|00770|binding|INFO|Releasing lport af31a41e-3028-45e5-8b17-4edec26770fb from this chassis (sb_readonly=0)
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.539 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.541 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74db905e-fa9d-4aed-b533-40cafba0b81b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74db905e-fa9d-4aed-b533-40cafba0b81b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.542 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[509096af-0479-4eb5-9801-3847f5ce7f6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.543 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-74db905e-fa9d-4aed-b533-40cafba0b81b
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/74db905e-fa9d-4aed-b533-40cafba0b81b.pid.haproxy
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 74db905e-fa9d-4aed-b533-40cafba0b81b
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:57:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:03.543 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b', 'env', 'PROCESS_TAG=haproxy-74db905e-fa9d-4aed-b533-40cafba0b81b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74db905e-fa9d-4aed-b533-40cafba0b81b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:57:03 np0005603621 nova_compute[247399]: 2026-01-31 08:57:03.546 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:03 np0005603621 podman[380849]: 2026-01-31 08:57:03.838147583 +0000 UTC m=+0.044511727 container create 29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:57:03 np0005603621 systemd[1]: Started libpod-conmon-29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e.scope.
Jan 31 03:57:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad84ff1af538c4fae08d685cc047fa11744ac4a515421edc22549c650cec7abb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:03 np0005603621 podman[380849]: 2026-01-31 08:57:03.813519835 +0000 UTC m=+0.019884009 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:57:03 np0005603621 podman[380849]: 2026-01-31 08:57:03.910261569 +0000 UTC m=+0.116625733 container init 29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 03:57:03 np0005603621 podman[380849]: 2026-01-31 08:57:03.914618388 +0000 UTC m=+0.120982532 container start 29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 03:57:03 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [NOTICE]   (380869) : New worker (380871) forked
Jan 31 03:57:03 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [NOTICE]   (380869) : Loading success.
Jan 31 03:57:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:04.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:04.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 9.2 KiB/s wr, 69 op/s
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.433 247403 DEBUG nova.compute.manager [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.433 247403 DEBUG oslo_concurrency.lockutils [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.433 247403 DEBUG oslo_concurrency.lockutils [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.434 247403 DEBUG oslo_concurrency.lockutils [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.434 247403 DEBUG nova.compute.manager [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] No waiting events found dispatching network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.434 247403 WARNING nova.compute.manager [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received unexpected event network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.434 247403 DEBUG nova.compute.manager [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.434 247403 DEBUG oslo_concurrency.lockutils [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.434 247403 DEBUG oslo_concurrency.lockutils [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.435 247403 DEBUG oslo_concurrency.lockutils [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.435 247403 DEBUG nova.compute.manager [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] No waiting events found dispatching network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.435 247403 WARNING nova.compute.manager [req-af501dcb-dc6b-44c5-86a1-25f14eac421c req-908d4c5d-064b-478b-96a8-e1736268c438 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received unexpected event network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.803 247403 DEBUG oslo_concurrency.lockutils [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "interface-f3adcdf0-ca43-48d8-95a3-8f530868aee2-126b3679-04bf-4842-a972-0b628159b523" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.804 247403 DEBUG oslo_concurrency.lockutils [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-f3adcdf0-ca43-48d8-95a3-8f530868aee2-126b3679-04bf-4842-a972-0b628159b523" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.823 247403 DEBUG nova.objects.instance [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'flavor' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.859 247403 DEBUG nova.virt.libvirt.vif [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.860 247403 DEBUG nova.network.os_vif_util [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.861 247403 DEBUG nova.network.os_vif_util [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.864 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.866 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.868 247403 DEBUG nova.virt.libvirt.driver [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Attempting to detach device tap126b3679-04 from instance f3adcdf0-ca43-48d8-95a3-8f530868aee2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.868 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:dc:0f:0c"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <target dev="tap126b3679-04"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.872 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.875 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <name>instance-000000b6</name>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <uuid>f3adcdf0-ca43-48d8-95a3-8f530868aee2</uuid>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:03</nova:creationTime>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <nova:port uuid="126b3679-04bf-4842-a972-0b628159b523">
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <resource>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </resource>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <entry name='serial'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <entry name='uuid'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk' index='2'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config' index='1'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:2e:85:a1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target dev='tap2a244101-ce'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:dc:0f:0c'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target dev='tap126b3679-04'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='net1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      </target>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/1'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </console>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c144,c752</label>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c752</imagelabel>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.877 247403 INFO nova.virt.libvirt.driver [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully detached device tap126b3679-04 from instance f3adcdf0-ca43-48d8-95a3-8f530868aee2 from the persistent domain config.#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.877 247403 DEBUG nova.virt.libvirt.driver [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] (1/8): Attempting to detach device tap126b3679-04 with device alias net1 from instance f3adcdf0-ca43-48d8-95a3-8f530868aee2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.878 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] detach device xml: <interface type="ethernet">
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:dc:0f:0c"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]:  <target dev="tap126b3679-04"/>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: </interface>
Jan 31 03:57:05 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.916 247403 DEBUG nova.network.neutron [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updated VIF entry in instance network info cache for port 126b3679-04bf-4842-a972-0b628159b523. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.916 247403 DEBUG nova.network.neutron [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.937 247403 DEBUG oslo_concurrency.lockutils [req-8f65f0df-7931-4d27-aa5b-43928ca5f486 req-d65d6bfc-9bb5-41a7-af12-4bd48ead4f5d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:05 np0005603621 kernel: tap126b3679-04 (unregistering): left promiscuous mode
Jan 31 03:57:05 np0005603621 NetworkManager[49013]: <info>  [1769849825.9814] device (tap126b3679-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:57:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:05Z|00771|binding|INFO|Releasing lport 126b3679-04bf-4842-a972-0b628159b523 from this chassis (sb_readonly=0)
Jan 31 03:57:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:05Z|00772|binding|INFO|Setting lport 126b3679-04bf-4842-a972-0b628159b523 down in Southbound
Jan 31 03:57:05 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:05Z|00773|binding|INFO|Removing iface tap126b3679-04 ovn-installed in OVS
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.995 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:05.995 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:0f:0c 10.100.0.30'], port_security=['fa:16:3e:dc:0f:0c 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'f3adcdf0-ca43-48d8-95a3-8f530868aee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74db905e-fa9d-4aed-b533-40cafba0b81b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=606bf152-67c8-4b8f-8605-e543d471433f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=126b3679-04bf-4842-a972-0b628159b523) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.996 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769849825.995934, f3adcdf0-ca43-48d8-95a3-8f530868aee2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 03:57:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:05.997 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 126b3679-04bf-4842-a972-0b628159b523 in datapath 74db905e-fa9d-4aed-b533-40cafba0b81b unbound from our chassis#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.998 247403 DEBUG nova.virt.libvirt.driver [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Start waiting for the detach event from libvirt for device tap126b3679-04 with device alias net1 for instance f3adcdf0-ca43-48d8-95a3-8f530868aee2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 03:57:05 np0005603621 nova_compute[247399]: 2026-01-31 08:57:05.998 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:57:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:05.999 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74db905e-fa9d-4aed-b533-40cafba0b81b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.000 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[81c5069c-325c-4119-b7c8-aed26e837647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.000 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b namespace which is not needed anymore#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.003 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <name>instance-000000b6</name>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <uuid>f3adcdf0-ca43-48d8-95a3-8f530868aee2</uuid>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:03</nova:creationTime>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:port uuid="126b3679-04bf-4842-a972-0b628159b523">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:06 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <resource>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </resource>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <entry name='serial'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <entry name='uuid'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk' index='2'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config' index='1'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:2e:85:a1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target dev='tap2a244101-ce'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      </target>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/1'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </console>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c144,c752</label>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c752</imagelabel>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:06 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:57:06 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.004 247403 INFO nova.virt.libvirt.driver [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully detached device tap126b3679-04 from instance f3adcdf0-ca43-48d8-95a3-8f530868aee2 from the live domain config.#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.005 247403 DEBUG nova.virt.libvirt.vif [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.005 247403 DEBUG nova.network.os_vif_util [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.006 247403 DEBUG nova.network.os_vif_util [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.006 247403 DEBUG os_vif [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.008 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.008 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126b3679-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.012 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.015 247403 INFO os_vif [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04')#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.015 247403 DEBUG nova.virt.libvirt.guest [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:06</nova:creationTime>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:06 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:06 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:06 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:06 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:57:06 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [NOTICE]   (380869) : haproxy version is 2.8.14-c23fe91
Jan 31 03:57:06 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [NOTICE]   (380869) : path to executable is /usr/sbin/haproxy
Jan 31 03:57:06 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [WARNING]  (380869) : Exiting Master process...
Jan 31 03:57:06 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [ALERT]    (380869) : Current worker (380871) exited with code 143 (Terminated)
Jan 31 03:57:06 np0005603621 neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b[380865]: [WARNING]  (380869) : All workers exited. Exiting... (0)
Jan 31 03:57:06 np0005603621 systemd[1]: libpod-29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e.scope: Deactivated successfully.
Jan 31 03:57:06 np0005603621 podman[380902]: 2026-01-31 08:57:06.10453947 +0000 UTC m=+0.045203077 container died 29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e-userdata-shm.mount: Deactivated successfully.
Jan 31 03:57:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ad84ff1af538c4fae08d685cc047fa11744ac4a515421edc22549c650cec7abb-merged.mount: Deactivated successfully.
Jan 31 03:57:06 np0005603621 podman[380902]: 2026-01-31 08:57:06.137610785 +0000 UTC m=+0.078274392 container cleanup 29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:57:06 np0005603621 systemd[1]: libpod-conmon-29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e.scope: Deactivated successfully.
Jan 31 03:57:06 np0005603621 podman[380932]: 2026-01-31 08:57:06.180590352 +0000 UTC m=+0.030317058 container remove 29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.183 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[51f137cc-437f-4d0b-9b00-02f719d73c23]: (4, ('Sat Jan 31 08:57:06 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b (29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e)\n29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e\nSat Jan 31 08:57:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b (29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e)\n29efffbe4f417135304c9e517a2c50729e77a580ecafeace697b23fbb509d53e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.185 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b60b7a33-b6cd-4894-8344-36093ad4fc2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.186 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74db905e-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:06 np0005603621 kernel: tap74db905e-f0: left promiscuous mode
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.196 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[827ad228-9e09-4cf8-8092-3d2f46ee943b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.210 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4fd44b-6008-4a91-9dbb-546b3334902a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.211 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[64644c3b-5ea2-4b89-88a9-97d232182bc5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.226 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a441f832-49c4-4d56-ba80-c2fe450024f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 912883, 'reachable_time': 25451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380946, 'error': None, 'target': 'ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.229 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74db905e-fa9d-4aed-b533-40cafba0b81b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:57:06 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:06.229 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[a35e61c1-b8a8-4e65-b6d7-db24414b08a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:06 np0005603621 systemd[1]: run-netns-ovnmeta\x2d74db905e\x2dfa9d\x2d4aed\x2db533\x2d40cafba0b81b.mount: Deactivated successfully.
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:06 np0005603621 nova_compute[247399]: 2026-01-31 08:57:06.590 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:06.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:06.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 516 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 827 KiB/s wr, 90 op/s
Jan 31 03:57:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.546 247403 DEBUG nova.compute.manager [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-unplugged-126b3679-04bf-4842-a972-0b628159b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.547 247403 DEBUG oslo_concurrency.lockutils [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.547 247403 DEBUG oslo_concurrency.lockutils [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.547 247403 DEBUG oslo_concurrency.lockutils [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.547 247403 DEBUG nova.compute.manager [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] No waiting events found dispatching network-vif-unplugged-126b3679-04bf-4842-a972-0b628159b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.547 247403 WARNING nova.compute.manager [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received unexpected event network-vif-unplugged-126b3679-04bf-4842-a972-0b628159b523 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.547 247403 DEBUG nova.compute.manager [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.548 247403 DEBUG oslo_concurrency.lockutils [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.548 247403 DEBUG oslo_concurrency.lockutils [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.548 247403 DEBUG oslo_concurrency.lockutils [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.548 247403 DEBUG nova.compute.manager [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] No waiting events found dispatching network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:57:07 np0005603621 nova_compute[247399]: 2026-01-31 08:57:07.548 247403 WARNING nova.compute.manager [req-c79b1dc6-4f1b-4a37-9506-54d265327600 req-9c0da3d5-9df5-4b9e-92ab-0495b48935da fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received unexpected event network-vif-plugged-126b3679-04bf-4842-a972-0b628159b523 for instance with vm_state active and task_state None.#033[00m
Jan 31 03:57:08 np0005603621 nova_compute[247399]: 2026-01-31 08:57:08.054 247403 DEBUG oslo_concurrency.lockutils [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:08 np0005603621 nova_compute[247399]: 2026-01-31 08:57:08.055 247403 DEBUG oslo_concurrency.lockutils [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:08 np0005603621 nova_compute[247399]: 2026-01-31 08:57:08.055 247403 DEBUG nova.network.neutron [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:57:08 np0005603621 nova_compute[247399]: 2026-01-31 08:57:08.288 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:57:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:08.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:08.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 529 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.658 247403 DEBUG nova.compute.manager [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-deleted-126b3679-04bf-4842-a972-0b628159b523 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.658 247403 INFO nova.compute.manager [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Neutron deleted interface 126b3679-04bf-4842-a972-0b628159b523; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.659 247403 DEBUG nova.network.neutron [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.687 247403 DEBUG nova.objects.instance [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lazy-loading 'system_metadata' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.714 247403 DEBUG nova.objects.instance [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lazy-loading 'flavor' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.928 247403 DEBUG nova.virt.libvirt.vif [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.928 247403 DEBUG nova.network.os_vif_util [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converting VIF {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.929 247403 DEBUG nova.network.os_vif_util [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.932 247403 DEBUG nova.virt.libvirt.guest [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.934 247403 DEBUG nova.virt.libvirt.guest [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <name>instance-000000b6</name>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <uuid>f3adcdf0-ca43-48d8-95a3-8f530868aee2</uuid>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:06</nova:creationTime>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <resource>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </resource>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='serial'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='uuid'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk' index='2'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config' index='1'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:2e:85:a1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target dev='tap2a244101-ce'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </target>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/1'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </console>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c144,c752</label>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c752</imagelabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.935 247403 DEBUG nova.virt.libvirt.guest [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.938 247403 DEBUG nova.virt.libvirt.guest [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:dc:0f:0c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap126b3679-04"/></interface>not found in domain: <domain type='kvm' id='91'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <name>instance-000000b6</name>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <uuid>f3adcdf0-ca43-48d8-95a3-8f530868aee2</uuid>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:06</nova:creationTime>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <resource>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </resource>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='serial'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='uuid'>f3adcdf0-ca43-48d8-95a3-8f530868aee2</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk' index='2'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/f3adcdf0-ca43-48d8-95a3-8f530868aee2_disk.config' index='1'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </controller>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:2e:85:a1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target dev='tap2a244101-ce'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      </target>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/1'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <source path='/dev/pts/1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2/console.log' append='off'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </console>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </input>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c144,c752</label>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c144,c752</imagelabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.939 247403 WARNING nova.virt.libvirt.driver [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Detaching interface fa:16:3e:dc:0f:0c failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap126b3679-04' not found.#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.941 247403 DEBUG nova.virt.libvirt.vif [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.941 247403 DEBUG nova.network.os_vif_util [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converting VIF {"id": "126b3679-04bf-4842-a972-0b628159b523", "address": "fa:16:3e:dc:0f:0c", "network": {"id": "74db905e-fa9d-4aed-b533-40cafba0b81b", "bridge": "br-int", "label": "tempest-network-smoke--1136710194", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap126b3679-04", "ovs_interfaceid": "126b3679-04bf-4842-a972-0b628159b523", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.942 247403 DEBUG nova.network.os_vif_util [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.942 247403 DEBUG os_vif [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.943 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.943 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap126b3679-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.944 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.946 247403 INFO os_vif [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:0f:0c,bridge_name='br-int',has_traffic_filtering=True,id=126b3679-04bf-4842-a972-0b628159b523,network=Network(74db905e-fa9d-4aed-b533-40cafba0b81b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap126b3679-04')#033[00m
Jan 31 03:57:09 np0005603621 nova_compute[247399]: 2026-01-31 08:57:09.947 247403 DEBUG nova.virt.libvirt.guest [req-c329f45f-0b6c-454d-9130-5c182a93c292 req-cab99d35-a331-4f0a-a103-81da9cc95608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-151731571</nova:name>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 08:57:09</nova:creationTime>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    <nova:port uuid="2a244101-ce3d-48b5-be3c-d1d2063b883a">
Jan 31 03:57:09 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 03:57:09 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 03:57:09 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 03:57:10 np0005603621 nova_compute[247399]: 2026-01-31 08:57:10.214 247403 INFO nova.network.neutron [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Port 126b3679-04bf-4842-a972-0b628159b523 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 03:57:10 np0005603621 nova_compute[247399]: 2026-01-31 08:57:10.215 247403 DEBUG nova.network.neutron [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:10 np0005603621 nova_compute[247399]: 2026-01-31 08:57:10.274 247403 DEBUG oslo_concurrency.lockutils [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:10 np0005603621 nova_compute[247399]: 2026-01-31 08:57:10.318 247403 DEBUG oslo_concurrency.lockutils [None req-64efd25f-390c-42bc-9215-652f58a830af d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-f3adcdf0-ca43-48d8-95a3-8f530868aee2-126b3679-04bf-4842-a972-0b628159b523" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:10.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:10.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.3 MiB/s wr, 109 op/s
Jan 31 03:57:11 np0005603621 nova_compute[247399]: 2026-01-31 08:57:11.011 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:11Z|00774|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:57:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:11Z|00775|binding|INFO|Releasing lport d0f05c75-f95e-41bc-ada2-2185ab4deb30 from this chassis (sb_readonly=0)
Jan 31 03:57:11 np0005603621 nova_compute[247399]: 2026-01-31 08:57:11.252 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:11 np0005603621 nova_compute[247399]: 2026-01-31 08:57:11.593 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.437 247403 DEBUG nova.compute.manager [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-changed-2a244101-ce3d-48b5-be3c-d1d2063b883a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.437 247403 DEBUG nova.compute.manager [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing instance network info cache due to event network-changed-2a244101-ce3d-48b5-be3c-d1d2063b883a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.437 247403 DEBUG oslo_concurrency.lockutils [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.437 247403 DEBUG oslo_concurrency.lockutils [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.437 247403 DEBUG nova.network.neutron [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Refreshing network info cache for port 2a244101-ce3d-48b5-be3c-d1d2063b883a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.543 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.544 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.544 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.545 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.545 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.546 247403 INFO nova.compute.manager [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Terminating instance#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.547 247403 DEBUG nova.compute.manager [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:57:12 np0005603621 kernel: tap2a244101-ce (unregistering): left promiscuous mode
Jan 31 03:57:12 np0005603621 NetworkManager[49013]: <info>  [1769849832.6018] device (tap2a244101-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:57:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:12Z|00776|binding|INFO|Releasing lport 2a244101-ce3d-48b5-be3c-d1d2063b883a from this chassis (sb_readonly=0)
Jan 31 03:57:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:12Z|00777|binding|INFO|Setting lport 2a244101-ce3d-48b5-be3c-d1d2063b883a down in Southbound
Jan 31 03:57:12 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:12Z|00778|binding|INFO|Removing iface tap2a244101-ce ovn-installed in OVS
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.611 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.613 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.620 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:85:a1 10.100.0.4'], port_security=['fa:16:3e:2e:85:a1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f3adcdf0-ca43-48d8-95a3-8f530868aee2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-07480b93-f6d1-448c-a034-284ec264fb0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5857c371-e2b6-47a4-a6ea-533a4985822c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1e9e42e-3b81-49cc-a8dd-864c96a6e232, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=2a244101-ce3d-48b5-be3c-d1d2063b883a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.619 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.622 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 2a244101-ce3d-48b5-be3c-d1d2063b883a in datapath 07480b93-f6d1-448c-a034-284ec264fb0a unbound from our chassis#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.625 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 07480b93-f6d1-448c-a034-284ec264fb0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.626 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dd23ee18-b448-48ab-8096-4e6c700e665e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.626 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a namespace which is not needed anymore#033[00m
Jan 31 03:57:12 np0005603621 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b6.scope: Deactivated successfully.
Jan 31 03:57:12 np0005603621 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000b6.scope: Consumed 13.841s CPU time.
Jan 31 03:57:12 np0005603621 systemd-machined[212769]: Machine qemu-91-instance-000000b6 terminated.
Jan 31 03:57:12 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [NOTICE]   (380583) : haproxy version is 2.8.14-c23fe91
Jan 31 03:57:12 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [NOTICE]   (380583) : path to executable is /usr/sbin/haproxy
Jan 31 03:57:12 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [WARNING]  (380583) : Exiting Master process...
Jan 31 03:57:12 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [ALERT]    (380583) : Current worker (380586) exited with code 143 (Terminated)
Jan 31 03:57:12 np0005603621 neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a[380577]: [WARNING]  (380583) : All workers exited. Exiting... (0)
Jan 31 03:57:12 np0005603621 systemd[1]: libpod-5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527.scope: Deactivated successfully.
Jan 31 03:57:12 np0005603621 podman[380977]: 2026-01-31 08:57:12.741250747 +0000 UTC m=+0.039827929 container died 5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.763 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.766 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527-userdata-shm.mount: Deactivated successfully.
Jan 31 03:57:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5867dc910bc34aa68752d01dc857f6add0c443eb1fd03fac8db5c35a9202829c-merged.mount: Deactivated successfully.
Jan 31 03:57:12 np0005603621 podman[380977]: 2026-01-31 08:57:12.774766065 +0000 UTC m=+0.073343247 container cleanup 5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.776 247403 INFO nova.virt.libvirt.driver [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Instance destroyed successfully.#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.777 247403 DEBUG nova.objects.instance [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid f3adcdf0-ca43-48d8-95a3-8f530868aee2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:12 np0005603621 systemd[1]: libpod-conmon-5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527.scope: Deactivated successfully.
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.803 247403 DEBUG nova.virt.libvirt.vif [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:56:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-151731571',display_name='tempest-TestNetworkBasicOps-server-151731571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-151731571',id=182,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXMepLp9SLQEbd95Wsik5gxFTJRFxV+gD8zvfcOSMvaFwFGXd1w+LlhR3ZMGeiwTRf2ZJsiwpUnDxHsWvOolmSeFJwQUGZS6SB8xYT5WZ3mOW8/qmTkj6Ws6faIw62r2A==',key_name='tempest-TestNetworkBasicOps-1659557236',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:56:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-e5mh4f6v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:56:22Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=f3adcdf0-ca43-48d8-95a3-8f530868aee2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.804 247403 DEBUG nova.network.os_vif_util [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.805 247403 DEBUG nova.network.os_vif_util [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.805 247403 DEBUG os_vif [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.806 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.806 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a244101-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.807 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.809 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.811 247403 INFO os_vif [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:85:a1,bridge_name='br-int',has_traffic_filtering=True,id=2a244101-ce3d-48b5-be3c-d1d2063b883a,network=Network(07480b93-f6d1-448c-a034-284ec264fb0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a244101-ce')#033[00m
Jan 31 03:57:12 np0005603621 podman[381016]: 2026-01-31 08:57:12.831754324 +0000 UTC m=+0.039344692 container remove 5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.835 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3be66ae6-0281-4db6-af0e-65e338b04816]: (4, ('Sat Jan 31 08:57:12 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a (5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527)\n5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527\nSat Jan 31 08:57:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a (5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527)\n5c713cacdd6d9d2fbe999d61b6e1f2d93678ce5c711abc5087a07d2b4abf7527\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:57:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.836 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[540a2c84-b318-49f6-b82e-03a1f592d678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.837 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07480b93-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.838 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 kernel: tap07480b93-f0: left promiscuous mode
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.841 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.844 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7febd7cd-559f-4146-9ac4-6def68932caa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 nova_compute[247399]: 2026-01-31 08:57:12.846 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.856 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[61f313a6-2d84-44b8-8840-13bea7bfcf83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.857 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[369b7ea6-47ce-4a0d-86b1-59afeedcd29b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.869 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7edb526e-5226-4371-88cf-134ef96c4f69]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908719, 'reachable_time': 32841, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381050, 'error': None, 'target': 'ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.872 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07480b93-f6d1-448c-a034-284ec264fb0a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:57:12 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:12.872 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[18bd03fe-8bc8-4b55-9935-20bd54987753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:12 np0005603621 systemd[1]: run-netns-ovnmeta\x2d07480b93\x2df6d1\x2d448c\x2da034\x2d284ec264fb0a.mount: Deactivated successfully.
Jan 31 03:57:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 538 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 469 KiB/s rd, 2.3 MiB/s wr, 91 op/s
Jan 31 03:57:13 np0005603621 nova_compute[247399]: 2026-01-31 08:57:13.461 247403 INFO nova.virt.libvirt.driver [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Deleting instance files /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2_del#033[00m
Jan 31 03:57:13 np0005603621 nova_compute[247399]: 2026-01-31 08:57:13.463 247403 INFO nova.virt.libvirt.driver [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Deletion of /var/lib/nova/instances/f3adcdf0-ca43-48d8-95a3-8f530868aee2_del complete#033[00m
Jan 31 03:57:13 np0005603621 nova_compute[247399]: 2026-01-31 08:57:13.535 247403 INFO nova.compute.manager [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:57:13 np0005603621 nova_compute[247399]: 2026-01-31 08:57:13.536 247403 DEBUG oslo.service.loopingcall [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:57:13 np0005603621 nova_compute[247399]: 2026-01-31 08:57:13.536 247403 DEBUG nova.compute.manager [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:57:13 np0005603621 nova_compute[247399]: 2026-01-31 08:57:13.537 247403 DEBUG nova.network.neutron [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2774907845' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2774907845' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:57:14 np0005603621 nova_compute[247399]: 2026-01-31 08:57:14.265 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:57:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c1c17a45-5143-47eb-8c7d-fce4fc1e74ae does not exist
Jan 31 03:57:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 711e7cb8-6eef-4a83-a268-aec33d154757 does not exist
Jan 31 03:57:14 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 49a0f7e6-b376-40a9-b04b-1e912b068996 does not exist
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:57:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:57:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:14.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 477 KiB/s rd, 2.5 MiB/s wr, 103 op/s
Jan 31 03:57:14 np0005603621 nova_compute[247399]: 2026-01-31 08:57:14.928 247403 DEBUG nova.network.neutron [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:14 np0005603621 nova_compute[247399]: 2026-01-31 08:57:14.957 247403 INFO nova.compute.manager [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Took 1.42 seconds to deallocate network for instance.#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.037 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.037 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.136 247403 DEBUG oslo_concurrency.processutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.168 247403 DEBUG nova.network.neutron [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updated VIF entry in instance network info cache for port 2a244101-ce3d-48b5-be3c-d1d2063b883a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.168 247403 DEBUG nova.network.neutron [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Updating instance_info_cache with network_info: [{"id": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "address": "fa:16:3e:2e:85:a1", "network": {"id": "07480b93-f6d1-448c-a034-284ec264fb0a", "bridge": "br-int", "label": "tempest-network-smoke--978544581", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a244101-ce", "ovs_interfaceid": "2a244101-ce3d-48b5-be3c-d1d2063b883a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.192337097 +0000 UTC m=+0.032690053 container create 9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.216 247403 DEBUG nova.compute.manager [req-9fd65dae-4ac7-4b85-8cbf-7d00746b2a2b req-19397df2-4839-4176-bb9a-4fe2ad701c0c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Received event network-vif-deleted-2a244101-ce3d-48b5-be3c-d1d2063b883a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:15 np0005603621 systemd[1]: Started libpod-conmon-9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b.scope.
Jan 31 03:57:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.265640961 +0000 UTC m=+0.105993937 container init 9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.271869878 +0000 UTC m=+0.112222834 container start 9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.178228111 +0000 UTC m=+0.018581097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.275336638 +0000 UTC m=+0.115689614 container attach 9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:57:15 np0005603621 sleepy_keldysh[381340]: 167 167
Jan 31 03:57:15 np0005603621 systemd[1]: libpod-9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b.scope: Deactivated successfully.
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.276942208 +0000 UTC m=+0.117295164 container died 9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 03:57:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1530e686d77e4463d46f6fffeb1eaefb0533a403ee4c12ab39a7e1369d69c77f-merged.mount: Deactivated successfully.
Jan 31 03:57:15 np0005603621 podman[381324]: 2026-01-31 08:57:15.310636082 +0000 UTC m=+0.150989038 container remove 9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:57:15 np0005603621 systemd[1]: libpod-conmon-9e4ccf85212d14b48e87acbf2c3e2a8b4dc63c433d24b62bae576fcf34a5496b.scope: Deactivated successfully.
Jan 31 03:57:15 np0005603621 podman[381382]: 2026-01-31 08:57:15.425065775 +0000 UTC m=+0.034958254 container create 378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 03:57:15 np0005603621 systemd[1]: Started libpod-conmon-378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc.scope.
Jan 31 03:57:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b092e01ae9be7b28aca00086bd0ed2336b1c7326db6c6e726fe4d4e3b64b15ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b092e01ae9be7b28aca00086bd0ed2336b1c7326db6c6e726fe4d4e3b64b15ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b092e01ae9be7b28aca00086bd0ed2336b1c7326db6c6e726fe4d4e3b64b15ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b092e01ae9be7b28aca00086bd0ed2336b1c7326db6c6e726fe4d4e3b64b15ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:15 np0005603621 podman[381382]: 2026-01-31 08:57:15.409811034 +0000 UTC m=+0.019703523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:57:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b092e01ae9be7b28aca00086bd0ed2336b1c7326db6c6e726fe4d4e3b64b15ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:15 np0005603621 podman[381382]: 2026-01-31 08:57:15.515401187 +0000 UTC m=+0.125293696 container init 378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 31 03:57:15 np0005603621 podman[381382]: 2026-01-31 08:57:15.520449007 +0000 UTC m=+0.130341496 container start 378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:57:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822582912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:15 np0005603621 podman[381382]: 2026-01-31 08:57:15.52401858 +0000 UTC m=+0.133911059 container attach 378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.530 247403 DEBUG oslo_concurrency.lockutils [req-5a057cb2-1647-4260-bac4-0f382fb6718f req-00611430-d884-4975-92f5-6002dcbcd294 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-f3adcdf0-ca43-48d8-95a3-8f530868aee2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.539 247403 DEBUG oslo_concurrency.processutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.544 247403 DEBUG nova.compute.provider_tree [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.560 247403 DEBUG nova.scheduler.client.report [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.596 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.627 247403 INFO nova.scheduler.client.report [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance f3adcdf0-ca43-48d8-95a3-8f530868aee2#033[00m
Jan 31 03:57:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:57:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:57:15 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:57:15 np0005603621 nova_compute[247399]: 2026-01-31 08:57:15.732 247403 DEBUG oslo_concurrency.lockutils [None req-eb30e5a2-f812-476d-8bd3-03bf5789c2e1 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "f3adcdf0-ca43-48d8-95a3-8f530868aee2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:16 np0005603621 nova_compute[247399]: 2026-01-31 08:57:16.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:16 np0005603621 nova_compute[247399]: 2026-01-31 08:57:16.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:57:16 np0005603621 zealous_faraday[381399]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:57:16 np0005603621 zealous_faraday[381399]: --> relative data size: 1.0
Jan 31 03:57:16 np0005603621 zealous_faraday[381399]: --> All data devices are unavailable
Jan 31 03:57:16 np0005603621 systemd[1]: libpod-378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc.scope: Deactivated successfully.
Jan 31 03:57:16 np0005603621 podman[381382]: 2026-01-31 08:57:16.251386925 +0000 UTC m=+0.861279454 container died 378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:57:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b092e01ae9be7b28aca00086bd0ed2336b1c7326db6c6e726fe4d4e3b64b15ad-merged.mount: Deactivated successfully.
Jan 31 03:57:16 np0005603621 podman[381382]: 2026-01-31 08:57:16.307708974 +0000 UTC m=+0.917601463 container remove 378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:16 np0005603621 systemd[1]: libpod-conmon-378c7455c48149ebb187e5b999ab69630eca1ff6172abc8f3a464c2d6e3e75dc.scope: Deactivated successfully.
Jan 31 03:57:16 np0005603621 nova_compute[247399]: 2026-01-31 08:57:16.596 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:16.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:16.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.854265001 +0000 UTC m=+0.034175531 container create f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:57:16 np0005603621 systemd[1]: Started libpod-conmon-f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205.scope.
Jan 31 03:57:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 488 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 478 KiB/s rd, 2.5 MiB/s wr, 105 op/s
Jan 31 03:57:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.908878104 +0000 UTC m=+0.088788654 container init f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.914052878 +0000 UTC m=+0.093963408 container start f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.916727092 +0000 UTC m=+0.096637732 container attach f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:57:16 np0005603621 agitated_buck[381588]: 167 167
Jan 31 03:57:16 np0005603621 systemd[1]: libpod-f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205.scope: Deactivated successfully.
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.918593962 +0000 UTC m=+0.098504492 container died f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.838546914 +0000 UTC m=+0.018457464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:57:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a6578bd3dbb7aedb6fa88c3ee8eb89f16fd96aff0b983f76c5b816907c5dd9d5-merged.mount: Deactivated successfully.
Jan 31 03:57:16 np0005603621 podman[381573]: 2026-01-31 08:57:16.951203611 +0000 UTC m=+0.131114141 container remove f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:57:16 np0005603621 systemd[1]: libpod-conmon-f837df0c557fe21b2f6c7c731bff2b657f00cca1d7e58e07176c8dfdea6a7205.scope: Deactivated successfully.
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.064298642 +0000 UTC m=+0.035365458 container create 5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:57:17 np0005603621 systemd[1]: Started libpod-conmon-5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a.scope.
Jan 31 03:57:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f3df2dac8ee5a2faec8bcc5ccca2400445d0ccaeb0ecd67dd727b37ec351a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f3df2dac8ee5a2faec8bcc5ccca2400445d0ccaeb0ecd67dd727b37ec351a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f3df2dac8ee5a2faec8bcc5ccca2400445d0ccaeb0ecd67dd727b37ec351a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f3df2dac8ee5a2faec8bcc5ccca2400445d0ccaeb0ecd67dd727b37ec351a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.126782724 +0000 UTC m=+0.097849570 container init 5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.132281168 +0000 UTC m=+0.103347974 container start 5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.136164031 +0000 UTC m=+0.107230847 container attach 5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.048479612 +0000 UTC m=+0.019546448 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:57:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.465 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.465 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.465 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.465 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:17 np0005603621 nova_compute[247399]: 2026-01-31 08:57:17.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]: {
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:    "0": [
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:        {
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "devices": [
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "/dev/loop3"
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            ],
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "lv_name": "ceph_lv0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "lv_size": "7511998464",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "name": "ceph_lv0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "tags": {
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.cluster_name": "ceph",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.crush_device_class": "",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.encrypted": "0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.osd_id": "0",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.type": "block",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:                "ceph.vdo": "0"
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            },
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "type": "block",
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:            "vg_name": "ceph_vg0"
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:        }
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]:    ]
Jan 31 03:57:17 np0005603621 crazy_shockley[381630]: }
Jan 31 03:57:17 np0005603621 systemd[1]: libpod-5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a.scope: Deactivated successfully.
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.868249845 +0000 UTC m=+0.839316661 container died 5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:57:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-45f3df2dac8ee5a2faec8bcc5ccca2400445d0ccaeb0ecd67dd727b37ec351a1-merged.mount: Deactivated successfully.
Jan 31 03:57:17 np0005603621 podman[381614]: 2026-01-31 08:57:17.915992813 +0000 UTC m=+0.887059649 container remove 5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:17 np0005603621 systemd[1]: libpod-conmon-5a6c304d641f0dde71da204a53daeda4bf5aa99de22b73f135599d04f6d7345a.scope: Deactivated successfully.
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.399598483 +0000 UTC m=+0.036123892 container create 69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:57:18 np0005603621 systemd[1]: Started libpod-conmon-69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456.scope.
Jan 31 03:57:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.463718897 +0000 UTC m=+0.100244346 container init 69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_engelbart, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.468930651 +0000 UTC m=+0.105456060 container start 69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:57:18 np0005603621 youthful_engelbart[381824]: 167 167
Jan 31 03:57:18 np0005603621 systemd[1]: libpod-69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456.scope: Deactivated successfully.
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.473987561 +0000 UTC m=+0.110512980 container attach 69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_engelbart, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.474354953 +0000 UTC m=+0.110880362 container died 69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_engelbart, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.384372521 +0000 UTC m=+0.020897950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:57:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-484220871d1cb034a7a838d7f45f8b1bb32c1232a648edca14a870148f91db24-merged.mount: Deactivated successfully.
Jan 31 03:57:18 np0005603621 podman[381790]: 2026-01-31 08:57:18.504830115 +0000 UTC m=+0.141355524 container remove 69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:57:18 np0005603621 systemd[1]: libpod-conmon-69a3df49a68ee392dfd2e0ab509628138e940adf3c305b7602b58b7fe7643456.scope: Deactivated successfully.
Jan 31 03:57:18 np0005603621 podman[381872]: 2026-01-31 08:57:18.607498336 +0000 UTC m=+0.079613434 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 03:57:18 np0005603621 podman[381875]: 2026-01-31 08:57:18.60918203 +0000 UTC m=+0.080030289 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:57:18 np0005603621 podman[381917]: 2026-01-31 08:57:18.63579423 +0000 UTC m=+0.033843410 container create 9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_herschel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 03:57:18 np0005603621 systemd[1]: Started libpod-conmon-9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580.scope.
Jan 31 03:57:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5448717ef7ec54c4c16614a606cefcf4891d37d27a4a97b0da9b057b792b120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5448717ef7ec54c4c16614a606cefcf4891d37d27a4a97b0da9b057b792b120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5448717ef7ec54c4c16614a606cefcf4891d37d27a4a97b0da9b057b792b120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5448717ef7ec54c4c16614a606cefcf4891d37d27a4a97b0da9b057b792b120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:18 np0005603621 podman[381917]: 2026-01-31 08:57:18.704541531 +0000 UTC m=+0.102590731 container init 9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:57:18 np0005603621 podman[381917]: 2026-01-31 08:57:18.711015674 +0000 UTC m=+0.109064854 container start 9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:57:18 np0005603621 podman[381917]: 2026-01-31 08:57:18.714947849 +0000 UTC m=+0.112997019 container attach 9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 03:57:18 np0005603621 podman[381917]: 2026-01-31 08:57:18.622405047 +0000 UTC m=+0.020454247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:57:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:18.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 317 KiB/s rd, 1.7 MiB/s wr, 120 op/s
Jan 31 03:57:19 np0005603621 boring_herschel[381942]: {
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:        "osd_id": 0,
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:        "type": "bluestore"
Jan 31 03:57:19 np0005603621 boring_herschel[381942]:    }
Jan 31 03:57:19 np0005603621 boring_herschel[381942]: }
Jan 31 03:57:19 np0005603621 systemd[1]: libpod-9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580.scope: Deactivated successfully.
Jan 31 03:57:19 np0005603621 podman[381917]: 2026-01-31 08:57:19.481283835 +0000 UTC m=+0.879333005 container died 9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_herschel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 03:57:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c5448717ef7ec54c4c16614a606cefcf4891d37d27a4a97b0da9b057b792b120-merged.mount: Deactivated successfully.
Jan 31 03:57:19 np0005603621 podman[381917]: 2026-01-31 08:57:19.526956467 +0000 UTC m=+0.925005647 container remove 9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:57:19 np0005603621 systemd[1]: libpod-conmon-9c430e6ed4893a09a6d235712e50acfa8de96ceb9ad83fd1b1a59dc7ad2d6580.scope: Deactivated successfully.
Jan 31 03:57:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:57:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:57:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:57:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:57:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ec89b50a-0d06-4a80-b79f-212cd4bbf803 does not exist
Jan 31 03:57:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ea8d161e-e325-473b-8674-077ac1e44ea0 does not exist
Jan 31 03:57:19 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0ea51101-7a07-42a6-8e37-14d9d956f560 does not exist
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.599 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.641 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.642 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.643 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.666 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.667 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.667 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.668 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:57:19 np0005603621 nova_compute[247399]: 2026-01-31 08:57:19.668 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2695751318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.059 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.137 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.138 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000b2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.277 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.278 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3857MB free_disk=20.942283630371094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.278 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.278 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.349 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.349 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.349 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.400 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:57:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:57:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2748541595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.804 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:20 np0005603621 nova_compute[247399]: 2026-01-31 08:57:20.809 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:57:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:20.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 191 KiB/s rd, 726 KiB/s wr, 81 op/s
Jan 31 03:57:21 np0005603621 nova_compute[247399]: 2026-01-31 08:57:21.002 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:57:21 np0005603621 nova_compute[247399]: 2026-01-31 08:57:21.050 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:57:21 np0005603621 nova_compute[247399]: 2026-01-31 08:57:21.050 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:21Z|00779|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:57:21 np0005603621 nova_compute[247399]: 2026-01-31 08:57:21.544 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:21 np0005603621 nova_compute[247399]: 2026-01-31 08:57:21.601 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:21 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:21Z|00780|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:57:21 np0005603621 nova_compute[247399]: 2026-01-31 08:57:21.612 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:22.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:22 np0005603621 nova_compute[247399]: 2026-01-31 08:57:22.810 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:22.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 209 KiB/s wr, 69 op/s
Jan 31 03:57:23 np0005603621 nova_compute[247399]: 2026-01-31 08:57:23.605 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:24.488 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:24.489 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:57:24 np0005603621 nova_compute[247399]: 2026-01-31 08:57:24.489 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:24.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:24.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 204 KiB/s wr, 56 op/s
Jan 31 03:57:26 np0005603621 nova_compute[247399]: 2026-01-31 08:57:26.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:26.492 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:26 np0005603621 nova_compute[247399]: 2026-01-31 08:57:26.605 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:26.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:26.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 26 KiB/s wr, 43 op/s
Jan 31 03:57:27 np0005603621 nova_compute[247399]: 2026-01-31 08:57:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:27 np0005603621 nova_compute[247399]: 2026-01-31 08:57:27.775 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849832.7747703, f3adcdf0-ca43-48d8-95a3-8f530868aee2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:57:27 np0005603621 nova_compute[247399]: 2026-01-31 08:57:27.776 247403 INFO nova.compute.manager [-] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:57:27 np0005603621 nova_compute[247399]: 2026-01-31 08:57:27.812 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:27 np0005603621 nova_compute[247399]: 2026-01-31 08:57:27.824 247403 DEBUG nova.compute.manager [None req-ccfe669d-7f17-4f5f-a2e3-9b9d1b288142 - - - - - -] [instance: f3adcdf0-ca43-48d8-95a3-8f530868aee2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:28.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 18 KiB/s wr, 44 op/s
Jan 31 03:57:29 np0005603621 nova_compute[247399]: 2026-01-31 08:57:29.751 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603621 NetworkManager[49013]: <info>  [1769849849.7517] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Jan 31 03:57:29 np0005603621 NetworkManager[49013]: <info>  [1769849849.7527] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Jan 31 03:57:29 np0005603621 nova_compute[247399]: 2026-01-31 08:57:29.816 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:29 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:29Z|00781|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:57:29 np0005603621 nova_compute[247399]: 2026-01-31 08:57:29.843 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:30 np0005603621 nova_compute[247399]: 2026-01-31 08:57:30.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:57:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:30.538 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:30.539 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:30.540 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:30.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 5.8 KiB/s wr, 12 op/s
Jan 31 03:57:31 np0005603621 nova_compute[247399]: 2026-01-31 08:57:31.607 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:32.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:32 np0005603621 nova_compute[247399]: 2026-01-31 08:57:32.814 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:32.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 KiB/s rd, 3.7 KiB/s wr, 11 op/s
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.317 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.317 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.348 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.620 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.620 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.632 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.632 247403 INFO nova.compute.claims [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.861 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:33 np0005603621 nova_compute[247399]: 2026-01-31 08:57:33.940 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:57:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874271586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.350 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.355 247403 DEBUG nova.compute.provider_tree [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.375 247403 DEBUG nova.scheduler.client.report [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.400 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.402 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.476 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.476 247403 DEBUG nova.network.neutron [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.498 247403 INFO nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.556 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.722 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.723 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.724 247403 INFO nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Creating image(s)#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.746 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:34.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.773 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.798 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.803 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:34.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.861 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.863 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.864 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.864 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.892 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.2 KiB/s rd, 1.8 KiB/s wr, 8 op/s
Jan 31 03:57:34 np0005603621 nova_compute[247399]: 2026-01-31 08:57:34.896 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.546 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.613 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] resizing rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.761 247403 DEBUG nova.objects.instance [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lazy-loading 'migration_context' on Instance uuid c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.793 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.793 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Ensure instance console log exists: /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.794 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.794 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:35 np0005603621 nova_compute[247399]: 2026-01-31 08:57:35.794 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:36 np0005603621 nova_compute[247399]: 2026-01-31 08:57:36.411 247403 DEBUG nova.network.neutron [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Successfully created port: 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:57:36 np0005603621 nova_compute[247399]: 2026-01-31 08:57:36.609 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:36.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 452 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 337 KiB/s wr, 17 op/s
Jan 31 03:57:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:37 np0005603621 nova_compute[247399]: 2026-01-31 08:57:37.815 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:57:38
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms']
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:57:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:38.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:38.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 447 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.076 247403 DEBUG nova.network.neutron [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Successfully updated port: 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.105 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "refresh_cache-c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.106 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquired lock "refresh_cache-c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.106 247403 DEBUG nova.network.neutron [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:57:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.218 247403 DEBUG nova.compute.manager [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received event network-changed-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.218 247403 DEBUG nova.compute.manager [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Refreshing instance network info cache due to event network-changed-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.218 247403 DEBUG oslo_concurrency.lockutils [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:57:39 np0005603621 nova_compute[247399]: 2026-01-31 08:57:39.628 247403 DEBUG nova.network.neutron [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:57:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:40.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 31 03:57:41 np0005603621 nova_compute[247399]: 2026-01-31 08:57:41.611 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.320 247403 DEBUG nova.network.neutron [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Updating instance_info_cache with network_info: [{"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.356 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Releasing lock "refresh_cache-c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.357 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Instance network_info: |[{"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.357 247403 DEBUG oslo_concurrency.lockutils [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.358 247403 DEBUG nova.network.neutron [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Refreshing network info cache for port 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.360 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Start _get_guest_xml network_info=[{"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.365 247403 WARNING nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.369 247403 DEBUG nova.virt.libvirt.host [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.370 247403 DEBUG nova.virt.libvirt.host [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.374 247403 DEBUG nova.virt.libvirt.host [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.375 247403 DEBUG nova.virt.libvirt.host [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.376 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.376 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.377 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.377 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.377 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.378 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.378 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.378 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.378 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.379 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.379 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.379 247403 DEBUG nova.virt.hardware [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:57:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.382 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:42.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:57:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573196170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.805 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.828 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.832 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:42 np0005603621 nova_compute[247399]: 2026-01-31 08:57:42.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 2.9 MiB/s wr, 79 op/s
Jan 31 03:57:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:57:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811011772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.264 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.266 247403 DEBUG nova.virt.libvirt.vif [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1427637419',display_name='tempest-TestServerMultinode-server-1427637419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1427637419',id=184,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c37e7d6d634448bfb3172894ad2af105',ramdisk_id='',reservation_id='r-jtgb38q1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-893388561',owner_user_name='tempest-TestServerMultinode-8
93388561-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:57:34Z,user_data=None,user_id='4e364ad937544559bea978006e9ff229',uuid=c0bf1a5f-d393-4ffc-9582-c73a7ed1a412,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.266 247403 DEBUG nova.network.os_vif_util [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Converting VIF {"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.267 247403 DEBUG nova.network.os_vif_util [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.269 247403 DEBUG nova.objects.instance [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.294 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <uuid>c0bf1a5f-d393-4ffc-9582-c73a7ed1a412</uuid>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <name>instance-000000b8</name>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestServerMultinode-server-1427637419</nova:name>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:57:42</nova:creationTime>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:user uuid="4e364ad937544559bea978006e9ff229">tempest-TestServerMultinode-893388561-project-admin</nova:user>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:project uuid="c37e7d6d634448bfb3172894ad2af105">tempest-TestServerMultinode-893388561</nova:project>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <nova:port uuid="5cccd44e-cc05-46ea-8ad9-bb3d067eae8e">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <entry name="serial">c0bf1a5f-d393-4ffc-9582-c73a7ed1a412</entry>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <entry name="uuid">c0bf1a5f-d393-4ffc-9582-c73a7ed1a412</entry>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk.config">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:30:94:5d"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <target dev="tap5cccd44e-cc"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/console.log" append="off"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:57:43 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:57:43 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:57:43 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:57:43 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.296 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Preparing to wait for external event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.296 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.296 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.296 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.297 247403 DEBUG nova.virt.libvirt.vif [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1427637419',display_name='tempest-TestServerMultinode-server-1427637419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1427637419',id=184,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c37e7d6d634448bfb3172894ad2af105',ramdisk_id='',reservation_id='r-jtgb38q1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-893388561',owner_user_name='tempest-TestServerM
ultinode-893388561-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:57:34Z,user_data=None,user_id='4e364ad937544559bea978006e9ff229',uuid=c0bf1a5f-d393-4ffc-9582-c73a7ed1a412,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.297 247403 DEBUG nova.network.os_vif_util [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Converting VIF {"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.298 247403 DEBUG nova.network.os_vif_util [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.298 247403 DEBUG os_vif [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.299 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.299 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.302 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.302 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5cccd44e-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.303 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5cccd44e-cc, col_values=(('external_ids', {'iface-id': '5cccd44e-cc05-46ea-8ad9-bb3d067eae8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:94:5d', 'vm-uuid': 'c0bf1a5f-d393-4ffc-9582-c73a7ed1a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:43 np0005603621 NetworkManager[49013]: <info>  [1769849863.3051] manager: (tap5cccd44e-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.307 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.310 247403 INFO os_vif [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc')#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.403 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.404 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.404 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] No VIF found with MAC fa:16:3e:30:94:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.404 247403 INFO nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Using config drive#033[00m
Jan 31 03:57:43 np0005603621 nova_compute[247399]: 2026-01-31 08:57:43.425 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.604 247403 INFO nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Creating config drive at /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/disk.config#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.607 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphu45ux6_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.737 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmphu45ux6_" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.764 247403 DEBUG nova.storage.rbd_utils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] rbd image c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.769 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/disk.config c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:57:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:44.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 471 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 3.6 MiB/s wr, 84 op/s
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.921 247403 DEBUG oslo_concurrency.processutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/disk.config c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.922 247403 INFO nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Deleting local config drive /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412/disk.config because it was imported into RBD.#033[00m
Jan 31 03:57:44 np0005603621 kernel: tap5cccd44e-cc: entered promiscuous mode
Jan 31 03:57:44 np0005603621 NetworkManager[49013]: <info>  [1769849864.9689] manager: (tap5cccd44e-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Jan 31 03:57:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:44Z|00782|binding|INFO|Claiming lport 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e for this chassis.
Jan 31 03:57:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:44Z|00783|binding|INFO|5cccd44e-cc05-46ea-8ad9-bb3d067eae8e: Claiming fa:16:3e:30:94:5d 10.100.0.13
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.974 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:94:5d 10.100.0.13'], port_security=['fa:16:3e:30:94:5d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c0bf1a5f-d393-4ffc-9582-c73a7ed1a412', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c37e7d6d634448bfb3172894ad2af105', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd98fdedc-7ec4-4678-86fd-333fbe96f77f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f5a6fc0-3df3-4c2f-84cd-adc2af316a8e, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.976 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e in datapath b9195012-fef1-4e17-acdd-2b9ffc979da0 bound to our chassis#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.975 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:44 np0005603621 nova_compute[247399]: 2026-01-31 08:57:44.976 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:44Z|00784|binding|INFO|Setting lport 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e ovn-installed in OVS
Jan 31 03:57:44 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:44Z|00785|binding|INFO|Setting lport 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e up in Southbound
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.978 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b9195012-fef1-4e17-acdd-2b9ffc979da0#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.987 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b4f2ee-30b8-41f9-acfa-8ced15a99c4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.988 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb9195012-f1 in ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.989 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb9195012-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.989 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1026cf22-ce64-4ea9-a552-e49cdcea0016]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:44 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:44.990 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4a859302-7e1b-44d8-ab5b-85b8ef18ded2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:44 np0005603621 systemd-machined[212769]: New machine qemu-92-instance-000000b8.
Jan 31 03:57:45 np0005603621 systemd[1]: Started Virtual Machine qemu-92-instance-000000b8.
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.005 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[c24bbaf9-804c-43b9-b22d-ec6be462de59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 systemd-udevd[382464]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:57:45 np0005603621 NetworkManager[49013]: <info>  [1769849865.0246] device (tap5cccd44e-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:57:45 np0005603621 NetworkManager[49013]: <info>  [1769849865.0252] device (tap5cccd44e-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.030 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[90a95423-148f-47fb-a391-a0a204a8e5f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.052 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[96dda7bf-d6c0-410e-87b3-034057519755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.056 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[933d52cf-db0a-4daf-aa91-6a0f95cc25ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 NetworkManager[49013]: <info>  [1769849865.0572] manager: (tapb9195012-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.083 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c487139c-9828-4696-9297-f1dc186aecdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.085 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[395b48f5-8951-4794-bcff-d85f9dd7c3a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.091 247403 DEBUG nova.network.neutron [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Updated VIF entry in instance network info cache for port 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.091 247403 DEBUG nova.network.neutron [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Updating instance_info_cache with network_info: [{"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:57:45 np0005603621 NetworkManager[49013]: <info>  [1769849865.1069] device (tapb9195012-f0): carrier: link connected
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.109 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e130125f-a460-4fe0-9ef1-6e559b0bd4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.113 247403 DEBUG oslo_concurrency.lockutils [req-97efba84-e22d-4ab8-836f-881de289419e req-2b3981ad-5272-4c46-a59a-a215a4e6299d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.125 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f5213277-ecaf-44e8-9e68-b2b8efbddbf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9195012-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:f5:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917062, 'reachable_time': 43785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382494, 'error': None, 'target': 'ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.136 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[68bb36ff-4e17-4d60-b975-63ddeb58f8fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fede:f5e8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 917062, 'tstamp': 917062}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382495, 'error': None, 'target': 'ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.148 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f67dff33-ae60-42be-a864-ac896baa8230]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb9195012-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:de:f5:e8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917062, 'reachable_time': 43785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382496, 'error': None, 'target': 'ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.172 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[46de2257-be83-4e63-8ba3-a8b7768c7510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.223 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80f1c995-eccd-4197-812a-d7fcee6860a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.225 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9195012-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.225 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.226 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9195012-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.227 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:45 np0005603621 NetworkManager[49013]: <info>  [1769849865.2283] manager: (tapb9195012-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Jan 31 03:57:45 np0005603621 kernel: tapb9195012-f0: entered promiscuous mode
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.233 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb9195012-f0, col_values=(('external_ids', {'iface-id': '1553dad0-d27d-4162-94ad-0b8a3a359f3a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:57:45 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:45Z|00786|binding|INFO|Releasing lport 1553dad0-d27d-4162-94ad-0b8a3a359f3a from this chassis (sb_readonly=0)
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.236 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b9195012-fef1-4e17-acdd-2b9ffc979da0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b9195012-fef1-4e17-acdd-2b9ffc979da0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.237 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e76a245e-f9ec-4e4b-8cf7-62ee0c07b2b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.238 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-b9195012-fef1-4e17-acdd-2b9ffc979da0
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/b9195012-fef1-4e17-acdd-2b9ffc979da0.pid.haproxy
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID b9195012-fef1-4e17-acdd-2b9ffc979da0
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:57:45 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:57:45.238 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'env', 'PROCESS_TAG=haproxy-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b9195012-fef1-4e17-acdd-2b9ffc979da0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.242 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.495 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849865.4950283, c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.496 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] VM Started (Lifecycle Event)#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.521 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.524 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849865.4961848, c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.524 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.560 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.564 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.596 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:57:45 np0005603621 podman[382570]: 2026-01-31 08:57:45.63068469 +0000 UTC m=+0.076496866 container create 25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.664 247403 DEBUG nova.compute.manager [req-6b1d1c00-7a38-4191-b5f5-26791d86ea89 req-bf20d861-629d-4fc1-afd1-2e0a62e71045 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.664 247403 DEBUG oslo_concurrency.lockutils [req-6b1d1c00-7a38-4191-b5f5-26791d86ea89 req-bf20d861-629d-4fc1-afd1-2e0a62e71045 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.665 247403 DEBUG oslo_concurrency.lockutils [req-6b1d1c00-7a38-4191-b5f5-26791d86ea89 req-bf20d861-629d-4fc1-afd1-2e0a62e71045 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.665 247403 DEBUG oslo_concurrency.lockutils [req-6b1d1c00-7a38-4191-b5f5-26791d86ea89 req-bf20d861-629d-4fc1-afd1-2e0a62e71045 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.665 247403 DEBUG nova.compute.manager [req-6b1d1c00-7a38-4191-b5f5-26791d86ea89 req-bf20d861-629d-4fc1-afd1-2e0a62e71045 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Processing event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.666 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:57:45 np0005603621 systemd[1]: Started libpod-conmon-25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384.scope.
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.670 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849865.6698906, c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.670 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.672 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.676 247403 INFO nova.virt.libvirt.driver [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Instance spawned successfully.#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.676 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:57:45 np0005603621 podman[382570]: 2026-01-31 08:57:45.58253335 +0000 UTC m=+0.028345526 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.696 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.699 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:57:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.709 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:57:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7593fffd6d5ef676c34d56c1ab569a68c157e4f548b2e126a476db0c0bb4c058/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.710 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.711 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.711 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.712 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.712 247403 DEBUG nova.virt.libvirt.driver [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:57:45 np0005603621 podman[382570]: 2026-01-31 08:57:45.722247971 +0000 UTC m=+0.168060147 container init 25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:45 np0005603621 podman[382570]: 2026-01-31 08:57:45.726465004 +0000 UTC m=+0.172277180 container start 25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 03:57:45 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [NOTICE]   (382589) : New worker (382591) forked
Jan 31 03:57:45 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [NOTICE]   (382589) : Loading success.
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.749 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.811 247403 INFO nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Took 11.09 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.812 247403 DEBUG nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.917 247403 INFO nova.compute.manager [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Took 12.43 seconds to build instance.#033[00m
Jan 31 03:57:45 np0005603621 nova_compute[247399]: 2026-01-31 08:57:45.951 247403 DEBUG oslo_concurrency.lockutils [None req-2bf1d6f0-2d5d-460e-bb60-150aed6990ea 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:46 np0005603621 nova_compute[247399]: 2026-01-31 08:57:46.614 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:46.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:46.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 493 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 386 KiB/s rd, 4.4 MiB/s wr, 111 op/s
Jan 31 03:57:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:47 np0005603621 nova_compute[247399]: 2026-01-31 08:57:47.852 247403 DEBUG nova.compute.manager [req-e087b637-10fe-4a53-bd9d-2c56e57434a8 req-80fa8b16-c853-4521-b1af-c4974ea40158 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:57:47 np0005603621 nova_compute[247399]: 2026-01-31 08:57:47.853 247403 DEBUG oslo_concurrency.lockutils [req-e087b637-10fe-4a53-bd9d-2c56e57434a8 req-80fa8b16-c853-4521-b1af-c4974ea40158 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:57:47 np0005603621 nova_compute[247399]: 2026-01-31 08:57:47.853 247403 DEBUG oslo_concurrency.lockutils [req-e087b637-10fe-4a53-bd9d-2c56e57434a8 req-80fa8b16-c853-4521-b1af-c4974ea40158 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:57:47 np0005603621 nova_compute[247399]: 2026-01-31 08:57:47.853 247403 DEBUG oslo_concurrency.lockutils [req-e087b637-10fe-4a53-bd9d-2c56e57434a8 req-80fa8b16-c853-4521-b1af-c4974ea40158 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:57:47 np0005603621 nova_compute[247399]: 2026-01-31 08:57:47.853 247403 DEBUG nova.compute.manager [req-e087b637-10fe-4a53-bd9d-2c56e57434a8 req-80fa8b16-c853-4521-b1af-c4974ea40158 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] No waiting events found dispatching network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:57:47 np0005603621 nova_compute[247399]: 2026-01-31 08:57:47.853 247403 WARNING nova.compute.manager [req-e087b637-10fe-4a53-bd9d-2c56e57434a8 req-80fa8b16-c853-4521-b1af-c4974ea40158 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received unexpected event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e for instance with vm_state active and task_state None.#033[00m
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3376433460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3376433460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:57:48 np0005603621 nova_compute[247399]: 2026-01-31 08:57:48.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:48.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1129005817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:57:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.7 MiB/s wr, 189 op/s
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:57:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1129005817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:57:49 np0005603621 podman[382602]: 2026-01-31 08:57:49.495593149 +0000 UTC m=+0.047112968 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 03:57:49 np0005603621 podman[382603]: 2026-01-31 08:57:49.541796009 +0000 UTC m=+0.093316408 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0027161269309510516 of space, bias 1.0, pg target 0.8148380792853155 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008674194176903034 of space, bias 1.0, pg target 2.60225825307091 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:57:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 03:57:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:50.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 536 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 5.0 MiB/s wr, 216 op/s
Jan 31 03:57:51 np0005603621 nova_compute[247399]: 2026-01-31 08:57:51.615 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:52.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.7 MiB/s wr, 277 op/s
Jan 31 03:57:53 np0005603621 nova_compute[247399]: 2026-01-31 08:57:53.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:54.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 564 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 288 op/s
Jan 31 03:57:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:56Z|00787|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:57:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:56Z|00788|binding|INFO|Releasing lport 1553dad0-d27d-4162-94ad-0b8a3a359f3a from this chassis (sb_readonly=0)
Jan 31 03:57:56 np0005603621 nova_compute[247399]: 2026-01-31 08:57:56.241 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:56Z|00789|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:57:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:56Z|00790|binding|INFO|Releasing lport 1553dad0-d27d-4162-94ad-0b8a3a359f3a from this chassis (sb_readonly=0)
Jan 31 03:57:56 np0005603621 nova_compute[247399]: 2026-01-31 08:57:56.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:56 np0005603621 nova_compute[247399]: 2026-01-31 08:57:56.618 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:57:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:57:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 571 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.4 MiB/s wr, 322 op/s
Jan 31 03:57:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:57:58 np0005603621 nova_compute[247399]: 2026-01-31 08:57:58.311 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:57:58 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:58Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:94:5d 10.100.0.13
Jan 31 03:57:58 np0005603621 ovn_controller[149152]: 2026-01-31T08:57:58Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:94:5d 10.100.0.13
Jan 31 03:57:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:57:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:57:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:57:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:57:58.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:57:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 575 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 346 op/s
Jan 31 03:58:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:00.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 587 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.5 MiB/s wr, 301 op/s
Jan 31 03:58:01 np0005603621 nova_compute[247399]: 2026-01-31 08:58:01.620 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2213384817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 583 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.2 MiB/s wr, 368 op/s
Jan 31 03:58:03 np0005603621 nova_compute[247399]: 2026-01-31 08:58:03.314 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 561 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.2 MiB/s wr, 323 op/s
Jan 31 03:58:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Jan 31 03:58:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Jan 31 03:58:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Jan 31 03:58:06 np0005603621 nova_compute[247399]: 2026-01-31 08:58:06.622 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 549 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.4 MiB/s wr, 309 op/s
Jan 31 03:58:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Jan 31 03:58:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Jan 31 03:58:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.068 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.068 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.068 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.069 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.069 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.070 247403 INFO nova.compute.manager [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Terminating instance#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.071 247403 DEBUG nova.compute.manager [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:58:07 np0005603621 kernel: tap5cccd44e-cc (unregistering): left promiscuous mode
Jan 31 03:58:07 np0005603621 NetworkManager[49013]: <info>  [1769849887.1298] device (tap5cccd44e-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:58:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:07Z|00791|binding|INFO|Releasing lport 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e from this chassis (sb_readonly=0)
Jan 31 03:58:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:07Z|00792|binding|INFO|Setting lport 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e down in Southbound
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.134 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:07Z|00793|binding|INFO|Removing iface tap5cccd44e-cc ovn-installed in OVS
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.136 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.150 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:94:5d 10.100.0.13'], port_security=['fa:16:3e:30:94:5d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c0bf1a5f-d393-4ffc-9582-c73a7ed1a412', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c37e7d6d634448bfb3172894ad2af105', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd98fdedc-7ec4-4678-86fd-333fbe96f77f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f5a6fc0-3df3-4c2f-84cd-adc2af316a8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.152 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 5cccd44e-cc05-46ea-8ad9-bb3d067eae8e in datapath b9195012-fef1-4e17-acdd-2b9ffc979da0 unbound from our chassis#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.154 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b9195012-fef1-4e17-acdd-2b9ffc979da0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.155 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d1727ce1-6048-493b-a1ae-7faf5a264d94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.156 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0 namespace which is not needed anymore#033[00m
Jan 31 03:58:07 np0005603621 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Jan 31 03:58:07 np0005603621 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000b8.scope: Consumed 12.493s CPU time.
Jan 31 03:58:07 np0005603621 systemd-machined[212769]: Machine qemu-92-instance-000000b8 terminated.
Jan 31 03:58:07 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [NOTICE]   (382589) : haproxy version is 2.8.14-c23fe91
Jan 31 03:58:07 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [NOTICE]   (382589) : path to executable is /usr/sbin/haproxy
Jan 31 03:58:07 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [WARNING]  (382589) : Exiting Master process...
Jan 31 03:58:07 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [WARNING]  (382589) : Exiting Master process...
Jan 31 03:58:07 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [ALERT]    (382589) : Current worker (382591) exited with code 143 (Terminated)
Jan 31 03:58:07 np0005603621 neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0[382585]: [WARNING]  (382589) : All workers exited. Exiting... (0)
Jan 31 03:58:07 np0005603621 systemd[1]: libpod-25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384.scope: Deactivated successfully.
Jan 31 03:58:07 np0005603621 podman[382727]: 2026-01-31 08:58:07.278721221 +0000 UTC m=+0.044343811 container died 25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.299 247403 INFO nova.virt.libvirt.driver [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Instance destroyed successfully.#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.300 247403 DEBUG nova.objects.instance [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lazy-loading 'resources' on Instance uuid c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:58:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384-userdata-shm.mount: Deactivated successfully.
Jan 31 03:58:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7593fffd6d5ef676c34d56c1ab569a68c157e4f548b2e126a476db0c0bb4c058-merged.mount: Deactivated successfully.
Jan 31 03:58:07 np0005603621 podman[382727]: 2026-01-31 08:58:07.319393046 +0000 UTC m=+0.085015636 container cleanup 25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.320 247403 DEBUG nova.virt.libvirt.vif [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:57:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1427637419',display_name='tempest-TestServerMultinode-server-1427637419',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1427637419',id=184,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:57:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c37e7d6d634448bfb3172894ad2af105',ramdisk_id='',reservation_id='r-jtgb38q1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-893388561',owner_user_name='tempest-TestServerMultinode-893388561-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:57:45Z,user_data=None,user_id='4e364ad937544559bea978006e9ff229',uuid=c0bf1a5f-d393-4ffc-9582-c73a7ed1a412,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.321 247403 DEBUG nova.network.os_vif_util [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Converting VIF {"id": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "address": "fa:16:3e:30:94:5d", "network": {"id": "b9195012-fef1-4e17-acdd-2b9ffc979da0", "bridge": "br-int", "label": "tempest-TestServerMultinode-1076760224-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9ce62b246a60455e8ec83f770113c52c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5cccd44e-cc", "ovs_interfaceid": "5cccd44e-cc05-46ea-8ad9-bb3d067eae8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.321 247403 DEBUG nova.network.os_vif_util [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.322 247403 DEBUG os_vif [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.324 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.324 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5cccd44e-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:07 np0005603621 systemd[1]: libpod-conmon-25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384.scope: Deactivated successfully.
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.327 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.330 247403 INFO os_vif [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:94:5d,bridge_name='br-int',has_traffic_filtering=True,id=5cccd44e-cc05-46ea-8ad9-bb3d067eae8e,network=Network(b9195012-fef1-4e17-acdd-2b9ffc979da0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5cccd44e-cc')#033[00m
Jan 31 03:58:07 np0005603621 podman[382768]: 2026-01-31 08:58:07.375020012 +0000 UTC m=+0.038213958 container remove 25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.379 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3599844c-1d17-4022-abf9-00c355e9a6b5]: (4, ('Sat Jan 31 08:58:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0 (25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384)\n25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384\nSat Jan 31 08:58:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0 (25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384)\n25d619c29b309d66e13c72d4e5bcdcc802ea5431b23bef7110ff89b2b2877384\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.380 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[01f558ec-a71a-4870-b16b-b1db0a104617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.381 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9195012-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.383 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 kernel: tapb9195012-f0: left promiscuous mode
Jan 31 03:58:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.391 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.393 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11040c81-2cc9-40c3-8c68-815487a6f969]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.396 247403 DEBUG nova.compute.manager [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-changed-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.397 247403 DEBUG nova.compute.manager [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing instance network info cache due to event network-changed-a686c587-a94d-4875-a040-48d5b193a20a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.397 247403 DEBUG oslo_concurrency.lockutils [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.397 247403 DEBUG oslo_concurrency.lockutils [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.397 247403 DEBUG nova.network.neutron [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Refreshing network info cache for port a686c587-a94d-4875-a040-48d5b193a20a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.413 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f9277aeb-3676-4b99-871a-9bf0b94b39b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.415 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5124605e-6ef9-473d-833e-9de3ead9a440]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.426 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c916cb52-3ffa-4282-bebc-74da161f0080]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 917056, 'reachable_time': 34916, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382801, 'error': None, 'target': 'ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.429 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b9195012-fef1-4e17-acdd-2b9ffc979da0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:58:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:07.429 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[2f453606-174e-451d-8013-aac663879955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:07 np0005603621 systemd[1]: run-netns-ovnmeta\x2db9195012\x2dfef1\x2d4e17\x2dacdd\x2d2b9ffc979da0.mount: Deactivated successfully.
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.768 247403 INFO nova.virt.libvirt.driver [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Deleting instance files /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_del#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.769 247403 INFO nova.virt.libvirt.driver [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Deletion of /var/lib/nova/instances/c0bf1a5f-d393-4ffc-9582-c73a7ed1a412_del complete#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.867 247403 INFO nova.compute.manager [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.868 247403 DEBUG oslo.service.loopingcall [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.868 247403 DEBUG nova.compute.manager [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:58:07 np0005603621 nova_compute[247399]: 2026-01-31 08:58:07.868 247403 DEBUG nova.network.neutron [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:58:08 np0005603621 nova_compute[247399]: 2026-01-31 08:58:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:58:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:08.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 470 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.2 MiB/s wr, 309 op/s
Jan 31 03:58:08 np0005603621 nova_compute[247399]: 2026-01-31 08:58:08.947 247403 DEBUG nova.network.neutron [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:08 np0005603621 nova_compute[247399]: 2026-01-31 08:58:08.977 247403 INFO nova.compute.manager [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Took 1.11 seconds to deallocate network for instance.#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.050 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.051 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.160 247403 DEBUG oslo_concurrency.processutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.274 247403 DEBUG nova.compute.manager [req-d14995f5-3536-4136-b87d-e55ac991b238 req-5b94d821-320d-4ef6-9a82-fcf3c6aa4979 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received event network-vif-deleted-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.544 247403 DEBUG nova.compute.manager [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received event network-vif-unplugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.544 247403 DEBUG oslo_concurrency.lockutils [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.545 247403 DEBUG oslo_concurrency.lockutils [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.545 247403 DEBUG oslo_concurrency.lockutils [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.545 247403 DEBUG nova.compute.manager [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] No waiting events found dispatching network-vif-unplugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.545 247403 WARNING nova.compute.manager [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received unexpected event network-vif-unplugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.546 247403 DEBUG nova.compute.manager [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.546 247403 DEBUG oslo_concurrency.lockutils [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.546 247403 DEBUG oslo_concurrency.lockutils [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.546 247403 DEBUG oslo_concurrency.lockutils [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.546 247403 DEBUG nova.compute.manager [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] No waiting events found dispatching network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.547 247403 WARNING nova.compute.manager [req-735fa8ff-a7bf-47f7-a943-58324119c34c req-cef3c73e-4773-4edc-8a5a-1051835fd88c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Received unexpected event network-vif-plugged-5cccd44e-cc05-46ea-8ad9-bb3d067eae8e for instance with vm_state deleted and task_state None.#033[00m
Jan 31 03:58:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/874966171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.572 247403 DEBUG oslo_concurrency.processutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.579 247403 DEBUG nova.compute.provider_tree [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.610 247403 DEBUG nova.scheduler.client.report [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.635 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.676 247403 INFO nova.scheduler.client.report [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Deleted allocations for instance c0bf1a5f-d393-4ffc-9582-c73a7ed1a412#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.814 247403 DEBUG nova.network.neutron [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated VIF entry in instance network info cache for port a686c587-a94d-4875-a040-48d5b193a20a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.814 247403 DEBUG nova.network.neutron [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.835 247403 DEBUG oslo_concurrency.lockutils [req-8581478d-b3b0-47b2-970e-258a6e0cf859 req-d19508e2-4723-4d17-b014-568ed3d04a18 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:58:09 np0005603621 nova_compute[247399]: 2026-01-31 08:58:09.847 247403 DEBUG oslo_concurrency.lockutils [None req-fa365a89-34ec-4ff5-94da-c7dda42f21f3 4e364ad937544559bea978006e9ff229 c37e7d6d634448bfb3172894ad2af105 - - default default] Lock "c0bf1a5f-d393-4ffc-9582-c73a7ed1a412" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:10.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 450 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 686 KiB/s rd, 150 KiB/s wr, 156 op/s
Jan 31 03:58:11 np0005603621 nova_compute[247399]: 2026-01-31 08:58:11.624 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:12 np0005603621 nova_compute[247399]: 2026-01-31 08:58:12.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 455 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 622 KiB/s rd, 2.6 MiB/s wr, 235 op/s
Jan 31 03:58:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:14.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 2.6 MiB/s wr, 194 op/s
Jan 31 03:58:15 np0005603621 nova_compute[247399]: 2026-01-31 08:58:15.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:16 np0005603621 nova_compute[247399]: 2026-01-31 08:58:16.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:16 np0005603621 nova_compute[247399]: 2026-01-31 08:58:16.626 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:16.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 508 KiB/s rd, 2.6 MiB/s wr, 186 op/s
Jan 31 03:58:17 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:17Z|00794|binding|INFO|Releasing lport 5a0136e3-84ab-4495-80ff-8006a0a74934 from this chassis (sb_readonly=0)
Jan 31 03:58:17 np0005603621 nova_compute[247399]: 2026-01-31 08:58:17.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:17 np0005603621 nova_compute[247399]: 2026-01-31 08:58:17.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:58:17 np0005603621 nova_compute[247399]: 2026-01-31 08:58:17.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:17 np0005603621 nova_compute[247399]: 2026-01-31 08:58:17.328 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Jan 31 03:58:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Jan 31 03:58:17 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Jan 31 03:58:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:18.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:18.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 456 KiB/s rd, 2.6 MiB/s wr, 137 op/s
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.723 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.724 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.724 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:58:19 np0005603621 nova_compute[247399]: 2026-01-31 08:58:19.724 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:58:20 np0005603621 podman[382905]: 2026-01-31 08:58:20.035880113 +0000 UTC m=+0.051844598 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 31 03:58:20 np0005603621 podman[382906]: 2026-01-31 08:58:20.082508366 +0000 UTC m=+0.098219443 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:58:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5a6228a4-fcb0-40e6-a61c-5770fd09eed2 does not exist
Jan 31 03:58:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 35904351-cc4b-4295-b577-f2b2985c18d3 does not exist
Jan 31 03:58:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1caacb88-8060-4e27-8997-aded122ea688 does not exist
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.669 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.670 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.670 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.671 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.671 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.672 247403 INFO nova.compute.manager [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Terminating instance#033[00m
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.673 247403 DEBUG nova.compute.manager [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:58:20 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:58:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:20.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 450 KiB/s rd, 2.6 MiB/s wr, 126 op/s
Jan 31 03:58:20 np0005603621 kernel: tapa686c587-a9 (unregistering): left promiscuous mode
Jan 31 03:58:20 np0005603621 NetworkManager[49013]: <info>  [1769849900.9924] device (tapa686c587-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 03:58:20 np0005603621 nova_compute[247399]: 2026-01-31 08:58:20.998 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:20 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:20Z|00795|binding|INFO|Releasing lport a686c587-a94d-4875-a040-48d5b193a20a from this chassis (sb_readonly=0)
Jan 31 03:58:20 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:20Z|00796|binding|INFO|Setting lport a686c587-a94d-4875-a040-48d5b193a20a down in Southbound
Jan 31 03:58:20 np0005603621 ovn_controller[149152]: 2026-01-31T08:58:20Z|00797|binding|INFO|Removing iface tapa686c587-a9 ovn-installed in OVS
Jan 31 03:58:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:21.004 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:25:74 10.100.0.8'], port_security=['fa:16:3e:58:25:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c91674d0-7f78-4e09-b54e-e46f7fbd65a3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06b5fc9cfd4c49abb2d8b9f2f8a82c1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9d7b4c6b-30ca-4a01-b275-d4aa9d87b845', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe6e8b31-5a27-4e0f-b157-3b33899fa37b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a686c587-a94d-4875-a040-48d5b193a20a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:58:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:21.005 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a686c587-a94d-4875-a040-48d5b193a20a in datapath 405bd95c-1bad-49fb-83bf-a97a0c66786e unbound from our chassis#033[00m
Jan 31 03:58:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:21.006 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 405bd95c-1bad-49fb-83bf-a97a0c66786e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 03:58:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:21.008 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d5e586-d064-4957-ad79-f8d562a35a86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:21.008 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e namespace which is not needed anymore#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.009 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:21 np0005603621 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b2.scope: Deactivated successfully.
Jan 31 03:58:21 np0005603621 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000b2.scope: Consumed 18.627s CPU time.
Jan 31 03:58:21 np0005603621 systemd-machined[212769]: Machine qemu-90-instance-000000b2 terminated.
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.059516613 +0000 UTC m=+0.074414491 container create ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.100 247403 INFO nova.virt.libvirt.driver [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Instance destroyed successfully.#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.101 247403 DEBUG nova.objects.instance [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lazy-loading 'resources' on Instance uuid c91674d0-7f78-4e09-b54e-e46f7fbd65a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:58:21 np0005603621 systemd[1]: Started libpod-conmon-ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935.scope.
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.010243887 +0000 UTC m=+0.025141785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:58:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.127 247403 DEBUG nova.virt.libvirt.vif [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:55:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1909699072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1909699072',id=178,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEXzK6zUN8P2oqgqYwcegkodZ7bCeyyyhmYXIteBKXOhNEu+drS3qyKalg8BzkpjD3Rc/+FviAhlBApTbimNmOyPmM7IztIR2VGri6qDWFeRA0jXOdg2vS/Kgt0ALKH9cg==',key_name='tempest-TestInstancesWithCinderVolumes-176277168',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:55:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='06b5fc9cfd4c49abb2d8b9f2f8a82c1f',ramdisk_id='',reservation_id='r-g26td7gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-2132464628',owner_user_name='tempest-TestInstancesWithCinderVolumes-2132464628-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:55:41Z,user_data=None,user_id='cfaebb011a374541b083e772a6c83f25',uuid=c91674d0-7f78-4e09-b54e-e46f7fbd65a3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.129 247403 DEBUG nova.network.os_vif_util [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Converting VIF {"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.129 247403 DEBUG nova.network.os_vif_util [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.129 247403 DEBUG os_vif [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.131 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa686c587-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.133 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.135 247403 INFO os_vif [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:25:74,bridge_name='br-int',has_traffic_filtering=True,id=a686c587-a94d-4875-a040-48d5b193a20a,network=Network(405bd95c-1bad-49fb-83bf-a97a0c66786e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa686c587-a9')#033[00m
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.164629882 +0000 UTC m=+0.179527780 container init ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.17028459 +0000 UTC m=+0.185182468 container start ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 03:58:21 np0005603621 crazy_chatelet[383249]: 167 167
Jan 31 03:58:21 np0005603621 systemd[1]: libpod-ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935.scope: Deactivated successfully.
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.188940639 +0000 UTC m=+0.203838527 container attach ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.189676623 +0000 UTC m=+0.204574501 container died ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 03:58:21 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [NOTICE]   (378859) : haproxy version is 2.8.14-c23fe91
Jan 31 03:58:21 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [NOTICE]   (378859) : path to executable is /usr/sbin/haproxy
Jan 31 03:58:21 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [WARNING]  (378859) : Exiting Master process...
Jan 31 03:58:21 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [WARNING]  (378859) : Exiting Master process...
Jan 31 03:58:21 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [ALERT]    (378859) : Current worker (378861) exited with code 143 (Terminated)
Jan 31 03:58:21 np0005603621 neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e[378855]: [WARNING]  (378859) : All workers exited. Exiting... (0)
Jan 31 03:58:21 np0005603621 systemd[1]: libpod-c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6.scope: Deactivated successfully.
Jan 31 03:58:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-23665e39f2eec0ea9e24d95844dcbc1028d45384e06de48a381aa5d6a15f4943-merged.mount: Deactivated successfully.
Jan 31 03:58:21 np0005603621 podman[383247]: 2026-01-31 08:58:21.34605365 +0000 UTC m=+0.234617769 container died c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 31 03:58:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6-userdata-shm.mount: Deactivated successfully.
Jan 31 03:58:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fd44d71d67bc624f15fb2220569b0a4cd6aaf6c7c9916825697c3b0a90286ea0-merged.mount: Deactivated successfully.
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.529 247403 DEBUG nova.compute.manager [req-9462ca53-2d58-4617-964e-002a04bf551f req-299b36fa-826d-44f4-bb03-2abf7804a379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-vif-unplugged-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.530 247403 DEBUG oslo_concurrency.lockutils [req-9462ca53-2d58-4617-964e-002a04bf551f req-299b36fa-826d-44f4-bb03-2abf7804a379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.530 247403 DEBUG oslo_concurrency.lockutils [req-9462ca53-2d58-4617-964e-002a04bf551f req-299b36fa-826d-44f4-bb03-2abf7804a379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.530 247403 DEBUG oslo_concurrency.lockutils [req-9462ca53-2d58-4617-964e-002a04bf551f req-299b36fa-826d-44f4-bb03-2abf7804a379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.530 247403 DEBUG nova.compute.manager [req-9462ca53-2d58-4617-964e-002a04bf551f req-299b36fa-826d-44f4-bb03-2abf7804a379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] No waiting events found dispatching network-vif-unplugged-a686c587-a94d-4875-a040-48d5b193a20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.531 247403 DEBUG nova.compute.manager [req-9462ca53-2d58-4617-964e-002a04bf551f req-299b36fa-826d-44f4-bb03-2abf7804a379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-vif-unplugged-a686c587-a94d-4875-a040-48d5b193a20a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 03:58:21 np0005603621 podman[383198]: 2026-01-31 08:58:21.613787344 +0000 UTC m=+0.628685212 container remove ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_chatelet, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:58:21 np0005603621 nova_compute[247399]: 2026-01-31 08:58:21.628 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:21 np0005603621 podman[383247]: 2026-01-31 08:58:21.734410522 +0000 UTC m=+0.622974631 container cleanup c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:58:21 np0005603621 systemd[1]: libpod-conmon-c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6.scope: Deactivated successfully.
Jan 31 03:58:21 np0005603621 systemd[1]: libpod-conmon-ff9378b2334006cb521b217ed247ffa33a98cbc0add0cd71239a1a0951233935.scope: Deactivated successfully.
Jan 31 03:58:21 np0005603621 podman[383316]: 2026-01-31 08:58:21.775051245 +0000 UTC m=+0.080825243 container create e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:58:21 np0005603621 podman[383316]: 2026-01-31 08:58:21.752158552 +0000 UTC m=+0.057932570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:58:21 np0005603621 systemd[1]: Started libpod-conmon-e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1.scope.
Jan 31 03:58:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:58:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daff9ffaf40fdb8ed0454ac1677412144a0673224f6c7818e1d7b092c47c3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daff9ffaf40fdb8ed0454ac1677412144a0673224f6c7818e1d7b092c47c3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daff9ffaf40fdb8ed0454ac1677412144a0673224f6c7818e1d7b092c47c3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daff9ffaf40fdb8ed0454ac1677412144a0673224f6c7818e1d7b092c47c3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63daff9ffaf40fdb8ed0454ac1677412144a0673224f6c7818e1d7b092c47c3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:22 np0005603621 podman[383316]: 2026-01-31 08:58:22.063868614 +0000 UTC m=+0.369642632 container init e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:58:22 np0005603621 podman[383316]: 2026-01-31 08:58:22.07038968 +0000 UTC m=+0.376163678 container start e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:58:22 np0005603621 podman[383316]: 2026-01-31 08:58:22.104576379 +0000 UTC m=+0.410350407 container attach e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:58:22 np0005603621 podman[383333]: 2026-01-31 08:58:22.146698999 +0000 UTC m=+0.390672435 container remove c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.151 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6b3b76-edbe-41c2-8df0-d99a3d2c1635]: (4, ('Sat Jan 31 08:58:21 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e (c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6)\nc9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6\nSat Jan 31 08:58:21 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e (c9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6)\nc9b9fbe50cfbd84bac85bb79bc37630644d8aaaf9c1f2951c23e13fe85030df6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.153 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[92232194-6fb9-424e-88f4-de175fa9bb61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.154 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap405bd95c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:22 np0005603621 kernel: tap405bd95c-10: left promiscuous mode
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.166 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.167 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[54f4ad6f-6724-4ad2-958d-294ec35a09dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.186 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[13617756-02d4-4c63-92c2-512e54802a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.187 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4138a2-6cba-460c-b327-de79ccb3bee9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.200 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[85f4ed0d-1904-40e7-9473-5ae988ba00ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904595, 'reachable_time': 17208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383362, 'error': None, 'target': 'ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 systemd[1]: run-netns-ovnmeta\x2d405bd95c\x2d1bad\x2d49fb\x2d83bf\x2da97a0c66786e.mount: Deactivated successfully.
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.205 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-405bd95c-1bad-49fb-83bf-a97a0c66786e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 03:58:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:22.205 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[711fe064-7f43-4bb6-8a02-226d4ff77e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.297 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849887.2967095, c0bf1a5f-d393-4ffc-9582-c73a7ed1a412 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.298 247403 INFO nova.compute.manager [-] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.358 247403 DEBUG nova.compute.manager [None req-358ec0b3-520c-4377-a544-c1799a03ebb5 - - - - - -] [instance: c0bf1a5f-d393-4ffc-9582-c73a7ed1a412] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:58:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.412 247403 INFO nova.virt.libvirt.driver [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Deleting instance files /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3_del#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.412 247403 INFO nova.virt.libvirt.driver [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Deletion of /var/lib/nova/instances/c91674d0-7f78-4e09-b54e-e46f7fbd65a3_del complete#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.475 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [{"id": "a686c587-a94d-4875-a040-48d5b193a20a", "address": "fa:16:3e:58:25:74", "network": {"id": "405bd95c-1bad-49fb-83bf-a97a0c66786e", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-161168058-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06b5fc9cfd4c49abb2d8b9f2f8a82c1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa686c587-a9", "ovs_interfaceid": "a686c587-a94d-4875-a040-48d5b193a20a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.621 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-c91674d0-7f78-4e09-b54e-e46f7fbd65a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.622 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.623 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.674 247403 INFO nova.compute.manager [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Took 2.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.675 247403 DEBUG oslo.service.loopingcall [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.675 247403 DEBUG nova.compute.manager [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.675 247403 DEBUG nova.network.neutron [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.756 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.757 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.757 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.758 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:58:22 np0005603621 nova_compute[247399]: 2026-01-31 08:58:22.758 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:22 np0005603621 dreamy_keller[383353]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:58:22 np0005603621 dreamy_keller[383353]: --> relative data size: 1.0
Jan 31 03:58:22 np0005603621 dreamy_keller[383353]: --> All data devices are unavailable
Jan 31 03:58:22 np0005603621 systemd[1]: libpod-e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1.scope: Deactivated successfully.
Jan 31 03:58:22 np0005603621 podman[383316]: 2026-01-31 08:58:22.854009902 +0000 UTC m=+1.159783900 container died e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:58:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-63daff9ffaf40fdb8ed0454ac1677412144a0673224f6c7818e1d7b092c47c3f-merged.mount: Deactivated successfully.
Jan 31 03:58:22 np0005603621 podman[383316]: 2026-01-31 08:58:22.906562721 +0000 UTC m=+1.212336719 container remove e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keller, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:58:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:22 np0005603621 systemd[1]: libpod-conmon-e5336212268afc2b8ec6885384795a44c8e1d779d49d34f23de45e92837306f1.scope: Deactivated successfully.
Jan 31 03:58:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 397 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 511 KiB/s wr, 53 op/s
Jan 31 03:58:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717069906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.200 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.331 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.333 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4060MB free_disk=20.975730895996094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.333 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.333 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.391795172 +0000 UTC m=+0.037319460 container create 42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jennings, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:58:23 np0005603621 systemd[1]: Started libpod-conmon-42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b.scope.
Jan 31 03:58:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.459476758 +0000 UTC m=+0.105001056 container init 42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jennings, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.466603974 +0000 UTC m=+0.112128272 container start 42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jennings, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.370328564 +0000 UTC m=+0.015852872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.469866767 +0000 UTC m=+0.115391075 container attach 42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 03:58:23 np0005603621 tender_jennings[383562]: 167 167
Jan 31 03:58:23 np0005603621 systemd[1]: libpod-42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b.scope: Deactivated successfully.
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.470950461 +0000 UTC m=+0.116474749 container died 42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jennings, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:58:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-63828f999d1b456ced872d0d14a18a5610067502db955b4f0da22332388c4a36-merged.mount: Deactivated successfully.
Jan 31 03:58:23 np0005603621 podman[383546]: 2026-01-31 08:58:23.501559237 +0000 UTC m=+0.147083525 container remove 42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_jennings, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:58:23 np0005603621 systemd[1]: libpod-conmon-42a2eb31de73fe6067231242c7ae23c9c2dab13991d347d80df54ea8e7a7ea2b.scope: Deactivated successfully.
Jan 31 03:58:23 np0005603621 podman[383586]: 2026-01-31 08:58:23.603827347 +0000 UTC m=+0.035473661 container create 2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:58:23 np0005603621 systemd[1]: Started libpod-conmon-2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47.scope.
Jan 31 03:58:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:58:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e556907bd27cc837f01b3487b9b1e7587dae54a5cb3a441596cbaee630b9fdd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e556907bd27cc837f01b3487b9b1e7587dae54a5cb3a441596cbaee630b9fdd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e556907bd27cc837f01b3487b9b1e7587dae54a5cb3a441596cbaee630b9fdd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e556907bd27cc837f01b3487b9b1e7587dae54a5cb3a441596cbaee630b9fdd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:23 np0005603621 podman[383586]: 2026-01-31 08:58:23.587263794 +0000 UTC m=+0.018910128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:58:23 np0005603621 podman[383586]: 2026-01-31 08:58:23.682516791 +0000 UTC m=+0.114163115 container init 2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 03:58:23 np0005603621 podman[383586]: 2026-01-31 08:58:23.687730876 +0000 UTC m=+0.119377190 container start 2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mayer, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:58:23 np0005603621 podman[383586]: 2026-01-31 08:58:23.6907199 +0000 UTC m=+0.122366344 container attach 2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.857 247403 DEBUG nova.compute.manager [req-3cbbf26e-8f2c-48d4-8421-42e367c7936a req-466a8f06-66ae-4ebf-8c3a-b543333cf18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.858 247403 DEBUG oslo_concurrency.lockutils [req-3cbbf26e-8f2c-48d4-8421-42e367c7936a req-466a8f06-66ae-4ebf-8c3a-b543333cf18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.859 247403 DEBUG oslo_concurrency.lockutils [req-3cbbf26e-8f2c-48d4-8421-42e367c7936a req-466a8f06-66ae-4ebf-8c3a-b543333cf18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.859 247403 DEBUG oslo_concurrency.lockutils [req-3cbbf26e-8f2c-48d4-8421-42e367c7936a req-466a8f06-66ae-4ebf-8c3a-b543333cf18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.859 247403 DEBUG nova.compute.manager [req-3cbbf26e-8f2c-48d4-8421-42e367c7936a req-466a8f06-66ae-4ebf-8c3a-b543333cf18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] No waiting events found dispatching network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.860 247403 WARNING nova.compute.manager [req-3cbbf26e-8f2c-48d4-8421-42e367c7936a req-466a8f06-66ae-4ebf-8c3a-b543333cf18c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received unexpected event network-vif-plugged-a686c587-a94d-4875-a040-48d5b193a20a for instance with vm_state active and task_state deleting.#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.946 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.947 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.947 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:58:23 np0005603621 nova_compute[247399]: 2026-01-31 08:58:23.979 247403 DEBUG nova.network.neutron [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.016 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.112 247403 INFO nova.compute.manager [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Took 1.44 seconds to deallocate network for instance.#033[00m
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]: {
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:    "0": [
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:        {
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "devices": [
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "/dev/loop3"
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            ],
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "lv_name": "ceph_lv0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "lv_size": "7511998464",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "name": "ceph_lv0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "tags": {
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.cluster_name": "ceph",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.crush_device_class": "",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.encrypted": "0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.osd_id": "0",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.type": "block",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:                "ceph.vdo": "0"
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            },
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "type": "block",
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:            "vg_name": "ceph_vg0"
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:        }
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]:    ]
Jan 31 03:58:24 np0005603621 crazy_mayer[383602]: }
Jan 31 03:58:24 np0005603621 systemd[1]: libpod-2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47.scope: Deactivated successfully.
Jan 31 03:58:24 np0005603621 podman[383586]: 2026-01-31 08:58:24.406959634 +0000 UTC m=+0.838605948 container died 2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:58:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e556907bd27cc837f01b3487b9b1e7587dae54a5cb3a441596cbaee630b9fdd6-merged.mount: Deactivated successfully.
Jan 31 03:58:24 np0005603621 podman[383586]: 2026-01-31 08:58:24.450449797 +0000 UTC m=+0.882096111 container remove 2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:58:24 np0005603621 systemd[1]: libpod-conmon-2a12bee5c3f77ed010d4adad267c9569f3ad162d37ba1b4320737531bdb90e47.scope: Deactivated successfully.
Jan 31 03:58:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795518660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.505 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.511 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.639 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.786 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.787 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:24 np0005603621 nova_compute[247399]: 2026-01-31 08:58:24.827 247403 INFO nova.compute.manager [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Took 0.71 seconds to detach 1 volumes for instance.#033[00m
Jan 31 03:58:24 np0005603621 podman[383789]: 2026-01-31 08:58:24.895228291 +0000 UTC m=+0.034051066 container create 77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:58:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:24.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 18 KiB/s wr, 65 op/s
Jan 31 03:58:24 np0005603621 systemd[1]: Started libpod-conmon-77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83.scope.
Jan 31 03:58:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:58:24 np0005603621 podman[383789]: 2026-01-31 08:58:24.961925877 +0000 UTC m=+0.100748662 container init 77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:58:24 np0005603621 podman[383789]: 2026-01-31 08:58:24.966351296 +0000 UTC m=+0.105174051 container start 77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bassi, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:58:24 np0005603621 elegant_bassi[383806]: 167 167
Jan 31 03:58:24 np0005603621 systemd[1]: libpod-77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83.scope: Deactivated successfully.
Jan 31 03:58:24 np0005603621 podman[383789]: 2026-01-31 08:58:24.971050634 +0000 UTC m=+0.109873399 container attach 77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 03:58:24 np0005603621 podman[383789]: 2026-01-31 08:58:24.971277372 +0000 UTC m=+0.110100137 container died 77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bassi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:58:24 np0005603621 podman[383789]: 2026-01-31 08:58:24.879215935 +0000 UTC m=+0.018038730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:58:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5f21d97a8ba5c0a67d55b8ecd5b7e9624c1dfa8415d1815c272fbffad9dc545f-merged.mount: Deactivated successfully.
Jan 31 03:58:25 np0005603621 podman[383789]: 2026-01-31 08:58:25.005464461 +0000 UTC m=+0.144287226 container remove 77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_bassi, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:58:25 np0005603621 systemd[1]: libpod-conmon-77c793bf5702d7e7d46b08fb8986731af84d79435c129fa16c7eb067f262bc83.scope: Deactivated successfully.
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.030 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.032 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.083 247403 DEBUG oslo_concurrency.processutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:25 np0005603621 podman[383831]: 2026-01-31 08:58:25.13718265 +0000 UTC m=+0.035785461 container create 6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:58:25 np0005603621 systemd[1]: Started libpod-conmon-6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f.scope.
Jan 31 03:58:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:58:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b241cf3c37e452dc3c8a831aaff770b4b7208bb45e1e30017142dba0be6569fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b241cf3c37e452dc3c8a831aaff770b4b7208bb45e1e30017142dba0be6569fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b241cf3c37e452dc3c8a831aaff770b4b7208bb45e1e30017142dba0be6569fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b241cf3c37e452dc3c8a831aaff770b4b7208bb45e1e30017142dba0be6569fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:58:25 np0005603621 podman[383831]: 2026-01-31 08:58:25.123015003 +0000 UTC m=+0.021617834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:58:25 np0005603621 podman[383831]: 2026-01-31 08:58:25.225345884 +0000 UTC m=+0.123948715 container init 6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:58:25 np0005603621 podman[383831]: 2026-01-31 08:58:25.234769521 +0000 UTC m=+0.133372332 container start 6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 03:58:25 np0005603621 podman[383831]: 2026-01-31 08:58:25.237802897 +0000 UTC m=+0.136405728 container attach 6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.362 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348161018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.518 247403 DEBUG oslo_concurrency.processutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.524 247403 DEBUG nova.compute.provider_tree [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.585 247403 DEBUG nova.compute.manager [req-21aa17f1-3037-484a-944d-ed14605614cb req-e8e89a12-b0e5-45bb-b6e6-037d68efd980 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Received event network-vif-deleted-a686c587-a94d-4875-a040-48d5b193a20a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.622 247403 DEBUG nova.scheduler.client.report [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.760 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.808 247403 INFO nova.scheduler.client.report [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Deleted allocations for instance c91674d0-7f78-4e09-b54e-e46f7fbd65a3#033[00m
Jan 31 03:58:25 np0005603621 nova_compute[247399]: 2026-01-31 08:58:25.915 247403 DEBUG oslo_concurrency.lockutils [None req-d9a5acb2-741e-4530-97fb-8a4ce81e3b0f cfaebb011a374541b083e772a6c83f25 06b5fc9cfd4c49abb2d8b9f2f8a82c1f - - default default] Lock "c91674d0-7f78-4e09-b54e-e46f7fbd65a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:26 np0005603621 funny_napier[383847]: {
Jan 31 03:58:26 np0005603621 funny_napier[383847]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:58:26 np0005603621 funny_napier[383847]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:58:26 np0005603621 funny_napier[383847]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:58:26 np0005603621 funny_napier[383847]:        "osd_id": 0,
Jan 31 03:58:26 np0005603621 funny_napier[383847]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:58:26 np0005603621 funny_napier[383847]:        "type": "bluestore"
Jan 31 03:58:26 np0005603621 funny_napier[383847]:    }
Jan 31 03:58:26 np0005603621 funny_napier[383847]: }
Jan 31 03:58:26 np0005603621 systemd[1]: libpod-6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f.scope: Deactivated successfully.
Jan 31 03:58:26 np0005603621 podman[383831]: 2026-01-31 08:58:26.032762827 +0000 UTC m=+0.931365648 container died 6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:58:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b241cf3c37e452dc3c8a831aaff770b4b7208bb45e1e30017142dba0be6569fb-merged.mount: Deactivated successfully.
Jan 31 03:58:26 np0005603621 podman[383831]: 2026-01-31 08:58:26.080608048 +0000 UTC m=+0.979210859 container remove 6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_napier, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 03:58:26 np0005603621 systemd[1]: libpod-conmon-6c6620638a811d4b678e20492e5c28756c5be8d0d108f73855913afbe6bf2a8f.scope: Deactivated successfully.
Jan 31 03:58:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:58:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:58:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:58:26 np0005603621 nova_compute[247399]: 2026-01-31 08:58:26.134 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:58:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9f5624b0-899d-4b0b-9c2a-299e57339903 does not exist
Jan 31 03:58:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9afb9e95-3b03-4ce7-af03-e8d37b89d16a does not exist
Jan 31 03:58:26 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cd6645d8-0df8-40b5-81eb-fa117d6f7649 does not exist
Jan 31 03:58:26 np0005603621 nova_compute[247399]: 2026-01-31 08:58:26.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:26.248 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:58:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:26.249 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:58:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:58:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:58:26 np0005603621 nova_compute[247399]: 2026-01-31 08:58:26.629 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:58:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:58:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:26.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 44 KiB/s rd, 16 KiB/s wr, 61 op/s
Jan 31 03:58:27 np0005603621 nova_compute[247399]: 2026-01-31 08:58:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:28 np0005603621 nova_compute[247399]: 2026-01-31 08:58:28.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:58:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:28.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:28.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 14 KiB/s wr, 53 op/s
Jan 31 03:58:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:30.539 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:30.540 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:30.540 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:30.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:30.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.2 KiB/s wr, 60 op/s
Jan 31 03:58:31 np0005603621 nova_compute[247399]: 2026-01-31 08:58:31.138 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:31 np0005603621 nova_compute[247399]: 2026-01-31 08:58:31.631 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:32.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 300 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 2.7 KiB/s wr, 70 op/s
Jan 31 03:58:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:58:34.251 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:58:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:34.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 220 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 2.5 KiB/s wr, 51 op/s
Jan 31 03:58:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:34.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:36 np0005603621 nova_compute[247399]: 2026-01-31 08:58:36.100 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769849901.0990667, c91674d0-7f78-4e09-b54e-e46f7fbd65a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:58:36 np0005603621 nova_compute[247399]: 2026-01-31 08:58:36.100 247403 INFO nova.compute.manager [-] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] VM Stopped (Lifecycle Event)#033[00m
Jan 31 03:58:36 np0005603621 nova_compute[247399]: 2026-01-31 08:58:36.175 247403 DEBUG nova.compute.manager [None req-b68f06b1-367e-44d6-8bb1-3eaeaba0e7be - - - - - -] [instance: c91674d0-7f78-4e09-b54e-e46f7fbd65a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:58:36 np0005603621 nova_compute[247399]: 2026-01-31 08:58:36.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:36 np0005603621 nova_compute[247399]: 2026-01-31 08:58:36.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:36.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 197 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.4 KiB/s wr, 46 op/s
Jan 31 03:58:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:58:38
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images']
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:58:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Jan 31 03:58:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Jan 31 03:58:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:38.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:38 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Jan 31 03:58:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 179 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.7 KiB/s wr, 57 op/s
Jan 31 03:58:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:38.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:58:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:58:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:40.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 141 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 2.4 KiB/s wr, 59 op/s
Jan 31 03:58:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:40.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:41 np0005603621 nova_compute[247399]: 2026-01-31 08:58:41.182 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:41 np0005603621 nova_compute[247399]: 2026-01-31 08:58:41.635 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:42.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 164 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Jan 31 03:58:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:42.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:44.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 31 03:58:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:45 np0005603621 nova_compute[247399]: 2026-01-31 08:58:45.647 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:46 np0005603621 nova_compute[247399]: 2026-01-31 08:58:46.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:46 np0005603621 nova_compute[247399]: 2026-01-31 08:58:46.636 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:46.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 03:58:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:46.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Jan 31 03:58:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Jan 31 03:58:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Jan 31 03:58:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:48.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 03:58:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:58:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 03:58:50 np0005603621 podman[384014]: 2026-01-31 08:58:50.514609141 +0000 UTC m=+0.071716155 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 03:58:50 np0005603621 podman[384015]: 2026-01-31 08:58:50.535537922 +0000 UTC m=+0.092639926 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 03:58:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:50.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 31 03:58:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:50.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:51 np0005603621 nova_compute[247399]: 2026-01-31 08:58:51.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:51 np0005603621 nova_compute[247399]: 2026-01-31 08:58:51.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:58:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:52.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:58:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 626 KiB/s wr, 19 op/s
Jan 31 03:58:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 KiB/s rd, 15 KiB/s wr, 11 op/s
Jan 31 03:58:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:54.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:56 np0005603621 nova_compute[247399]: 2026-01-31 08:58:56.238 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:56 np0005603621 nova_compute[247399]: 2026-01-31 08:58:56.641 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:58:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:56.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 594 KiB/s rd, 15 KiB/s wr, 30 op/s
Jan 31 03:58:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:56.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.391 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.391 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.422 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.556 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.557 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.566 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.567 247403 INFO nova.compute.claims [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:58:57 np0005603621 nova_compute[247399]: 2026-01-31 08:58:57.698 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:58:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976302480' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.141 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.146 247403 DEBUG nova.compute.provider_tree [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.185 247403 DEBUG nova.scheduler.client.report [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.210 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.211 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.262 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.263 247403 DEBUG nova.network.neutron [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.290 247403 INFO nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.343 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.497 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.498 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.499 247403 INFO nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Creating image(s)#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.523 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.547 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.574 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.577 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.630 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.631 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.632 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.632 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.659 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:58:58 np0005603621 nova_compute[247399]: 2026-01-31 08:58:58.662 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:58:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:58:58.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 59 op/s
Jan 31 03:58:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:58:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:58:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:58:58.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:58:59 np0005603621 nova_compute[247399]: 2026-01-31 08:58:59.101 247403 DEBUG nova.policy [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:59:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:00.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 03:59:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:00.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.063 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.136 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.247 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.254 247403 DEBUG nova.objects.instance [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid aaf62d2f-5fda-43ee-8bf2-04e4940bf62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.272 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.272 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Ensure instance console log exists: /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.273 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.273 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.273 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.643 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:01 np0005603621 nova_compute[247399]: 2026-01-31 08:59:01.654 247403 DEBUG nova.network.neutron [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Successfully created port: b02a63e1-2ffc-49b9-ab9c-043d1923874b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:59:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:59:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:02.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:59:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 196 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 95 op/s
Jan 31 03:59:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:02.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:04.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 31 03:59:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:04.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.526 247403 DEBUG nova.network.neutron [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Successfully updated port: b02a63e1-2ffc-49b9-ab9c-043d1923874b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.746 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.747 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.747 247403 DEBUG nova.network.neutron [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.802 247403 DEBUG nova.compute.manager [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-changed-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.802 247403 DEBUG nova.compute.manager [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Refreshing instance network info cache due to event network-changed-b02a63e1-2ffc-49b9-ab9c-043d1923874b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:59:05 np0005603621 nova_compute[247399]: 2026-01-31 08:59:05.803 247403 DEBUG oslo_concurrency.lockutils [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:06 np0005603621 nova_compute[247399]: 2026-01-31 08:59:06.140 247403 DEBUG nova.network.neutron [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:59:06 np0005603621 nova_compute[247399]: 2026-01-31 08:59:06.250 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:06 np0005603621 nova_compute[247399]: 2026-01-31 08:59:06.696 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:06.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 31 03:59:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:06.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:08.010 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:08.011 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.237 247403 DEBUG nova.network.neutron [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.282 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.282 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Instance network_info: |[{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.282 247403 DEBUG oslo_concurrency.lockutils [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.282 247403 DEBUG nova.network.neutron [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Refreshing network info cache for port b02a63e1-2ffc-49b9-ab9c-043d1923874b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.285 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Start _get_guest_xml network_info=[{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.289 247403 WARNING nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.299 247403 DEBUG nova.virt.libvirt.host [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.300 247403 DEBUG nova.virt.libvirt.host [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.304 247403 DEBUG nova.virt.libvirt.host [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.305 247403 DEBUG nova.virt.libvirt.host [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.306 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.306 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.307 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.307 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.307 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.307 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.308 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.308 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.308 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.308 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.308 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.309 247403 DEBUG nova.virt.hardware [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.311 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:59:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:59:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546308256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.733 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.757 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:08 np0005603621 nova_compute[247399]: 2026-01-31 08:59:08.760 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 31 03:59:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:08.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:59:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532678256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.177 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.179 247403 DEBUG nova.virt.libvirt.vif [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-203598222',display_name='tempest-TestNetworkBasicOps-server-203598222',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-203598222',id=189,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrMY4MVn4hj/bsMk1gLTd1kGHztQrvatek01LjyFe0ofAcmMZ2Uk6h+k8aGUK75gBAILctU9wQ4d13+Zk3JniP8Xk8rAJMzxEpnGqpGTmMMpOlldx7flmFsisI/eSBL+A==',key_name='tempest-TestNetworkBasicOps-1465877602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-20t0cymf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:58:58Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=aaf62d2f-5fda-43ee-8bf2-04e4940bf62f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.180 247403 DEBUG nova.network.os_vif_util [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.180 247403 DEBUG nova.network.os_vif_util [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.181 247403 DEBUG nova.objects.instance [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid aaf62d2f-5fda-43ee-8bf2-04e4940bf62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.208 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <uuid>aaf62d2f-5fda-43ee-8bf2-04e4940bf62f</uuid>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <name>instance-000000bd</name>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-203598222</nova:name>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:59:08</nova:creationTime>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <nova:port uuid="b02a63e1-2ffc-49b9-ab9c-043d1923874b">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <entry name="serial">aaf62d2f-5fda-43ee-8bf2-04e4940bf62f</entry>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <entry name="uuid">aaf62d2f-5fda-43ee-8bf2-04e4940bf62f</entry>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk.config">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:f7:e7:95"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <target dev="tapb02a63e1-2f"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/console.log" append="off"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:59:09 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:59:09 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:59:09 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:59:09 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.209 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Preparing to wait for external event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.209 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.209 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.209 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.210 247403 DEBUG nova.virt.libvirt.vif [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-203598222',display_name='tempest-TestNetworkBasicOps-server-203598222',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-203598222',id=189,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrMY4MVn4hj/bsMk1gLTd1kGHztQrvatek01LjyFe0ofAcmMZ2Uk6h+k8aGUK75gBAILctU9wQ4d13+Zk3JniP8Xk8rAJMzxEpnGqpGTmMMpOlldx7flmFsisI/eSBL+A==',key_name='tempest-TestNetworkBasicOps-1465877602',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-20t0cymf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:58:58Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=aaf62d2f-5fda-43ee-8bf2-04e4940bf62f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.210 247403 DEBUG nova.network.os_vif_util [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.211 247403 DEBUG nova.network.os_vif_util [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.211 247403 DEBUG os_vif [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.211 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.212 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.212 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.215 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.215 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb02a63e1-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.215 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb02a63e1-2f, col_values=(('external_ids', {'iface-id': 'b02a63e1-2ffc-49b9-ab9c-043d1923874b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:e7:95', 'vm-uuid': 'aaf62d2f-5fda-43ee-8bf2-04e4940bf62f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:09 np0005603621 NetworkManager[49013]: <info>  [1769849949.2182] manager: (tapb02a63e1-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.223 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.224 247403 INFO os_vif [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f')#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.414 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.415 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.415 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:f7:e7:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.416 247403 INFO nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Using config drive#033[00m
Jan 31 03:59:09 np0005603621 nova_compute[247399]: 2026-01-31 08:59:09.441 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:10 np0005603621 nova_compute[247399]: 2026-01-31 08:59:10.230 247403 INFO nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Creating config drive at /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/disk.config#033[00m
Jan 31 03:59:10 np0005603621 nova_compute[247399]: 2026-01-31 08:59:10.233 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp90wl27sg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:10 np0005603621 nova_compute[247399]: 2026-01-31 08:59:10.355 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp90wl27sg" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:10 np0005603621 nova_compute[247399]: 2026-01-31 08:59:10.483 247403 DEBUG nova.storage.rbd_utils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:10 np0005603621 nova_compute[247399]: 2026-01-31 08:59:10.487 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/disk.config aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:10.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 564 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 31 03:59:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:10.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.002 247403 DEBUG oslo_concurrency.processutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/disk.config aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.003 247403 INFO nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Deleting local config drive /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f/disk.config because it was imported into RBD.#033[00m
Jan 31 03:59:11 np0005603621 kernel: tapb02a63e1-2f: entered promiscuous mode
Jan 31 03:59:11 np0005603621 NetworkManager[49013]: <info>  [1769849951.0425] manager: (tapb02a63e1-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Jan 31 03:59:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:11Z|00798|binding|INFO|Claiming lport b02a63e1-2ffc-49b9-ab9c-043d1923874b for this chassis.
Jan 31 03:59:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:11Z|00799|binding|INFO|b02a63e1-2ffc-49b9-ab9c-043d1923874b: Claiming fa:16:3e:f7:e7:95 10.100.0.14
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.043 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.045 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.048 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 systemd-udevd[384439]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.068 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:11Z|00800|binding|INFO|Setting lport b02a63e1-2ffc-49b9-ab9c-043d1923874b ovn-installed in OVS
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 NetworkManager[49013]: <info>  [1769849951.0804] device (tapb02a63e1-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:59:11 np0005603621 NetworkManager[49013]: <info>  [1769849951.0815] device (tapb02a63e1-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:59:11 np0005603621 systemd-machined[212769]: New machine qemu-93-instance-000000bd.
Jan 31 03:59:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:11Z|00801|binding|INFO|Setting lport b02a63e1-2ffc-49b9-ab9c-043d1923874b up in Southbound
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.230 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:e7:95 10.100.0.14'], port_security=['fa:16:3e:f7:e7:95 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aaf62d2f-5fda-43ee-8bf2-04e4940bf62f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '741cc279-854c-4c57-b65b-d0d947eda32c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3acdb359-6f9b-4f22-ab08-f7d3c7064a26, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b02a63e1-2ffc-49b9-ab9c-043d1923874b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.231 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b02a63e1-2ffc-49b9-ab9c-043d1923874b in datapath 506aa866-6ce6-4d84-b5c1-b07676e27a1d bound to our chassis#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.233 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 506aa866-6ce6-4d84-b5c1-b07676e27a1d#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.241 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[699aaa9a-d9cf-4758-a844-f5ce147a37ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.241 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap506aa866-61 in ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 03:59:11 np0005603621 systemd[1]: Started Virtual Machine qemu-93-instance-000000bd.
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.243 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap506aa866-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.244 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0a00e9c4-f0be-4a77-8709-bbe77e2c3ca7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.244 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ee466d-e54c-4a41-83d4-cb6ee3b2c64a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.252 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f8a348d0-0ce1-4810-820f-70bf594c5bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.271 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[143c56a8-3f9a-4d8b-955b-762a2a1eef69]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.276 247403 DEBUG nova.network.neutron [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updated VIF entry in instance network info cache for port b02a63e1-2ffc-49b9-ab9c-043d1923874b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.277 247403 DEBUG nova.network.neutron [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.297 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5df9f50b-fe88-4b0e-aaa1-5834fe5ea0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 NetworkManager[49013]: <info>  [1769849951.3032] manager: (tap506aa866-60): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Jan 31 03:59:11 np0005603621 systemd-udevd[384441]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.303 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fdde2478-73ec-4f66-b416-8c042c1a9b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.324 247403 DEBUG oslo_concurrency.lockutils [req-1189d4e3-6cb7-46e7-943e-f933b9ee24e0 req-cbf900a3-f969-41cf-a710-82eed8d52c7b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.325 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3370b509-42bd-4389-a299-cb4468e11e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.329 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b5551e6c-bec2-48cc-b559-724ef4d4343c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 NetworkManager[49013]: <info>  [1769849951.3429] device (tap506aa866-60): carrier: link connected
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.349 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[590db85b-0e2a-4bf0-9019-c9ac3eb57fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.361 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4da74fa5-4320-4127-83dc-5621b02b7474]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap506aa866-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:79:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 925685, 'reachable_time': 27602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384475, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.372 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d78db2-9606-4d8b-ad82-0108b9addf70]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe23:79dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 925685, 'tstamp': 925685}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384476, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.388 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[569e4927-3d7e-4847-bf95-7fbb00cc5ccb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap506aa866-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:79:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 925685, 'reachable_time': 27602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384477, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.415 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b7df3d42-54d8-4cf7-9fd5-3ec6ee3fc3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.461 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[082b7068-2acb-45be-8424-fe7de29e365d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.463 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap506aa866-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.463 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.463 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap506aa866-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.465 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 kernel: tap506aa866-60: entered promiscuous mode
Jan 31 03:59:11 np0005603621 NetworkManager[49013]: <info>  [1769849951.4656] manager: (tap506aa866-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.469 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap506aa866-60, col_values=(('external_ids', {'iface-id': 'e3a35e20-7eb7-4a52-9164-bd36f73ab684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:11 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:11Z|00802|binding|INFO|Releasing lport e3a35e20-7eb7-4a52-9164-bd36f73ab684 from this chassis (sb_readonly=0)
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.470 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.471 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.472 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/506aa866-6ce6-4d84-b5c1-b07676e27a1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/506aa866-6ce6-4d84-b5c1-b07676e27a1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.472 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc045dd-bc31-481d-8dcc-a01be75406af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.473 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-506aa866-6ce6-4d84-b5c1-b07676e27a1d
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/506aa866-6ce6-4d84-b5c1-b07676e27a1d.pid.haproxy
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 506aa866-6ce6-4d84-b5c1-b07676e27a1d
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 03:59:11 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:11.474 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'env', 'PROCESS_TAG=haproxy-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/506aa866-6ce6-4d84-b5c1-b07676e27a1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.476 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.645 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849951.6445947, aaf62d2f-5fda-43ee-8bf2-04e4940bf62f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.645 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] VM Started (Lifecycle Event)#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.681 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.684 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849951.6447656, aaf62d2f-5fda-43ee-8bf2-04e4940bf62f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.684 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.698 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.704 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.706 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:59:11 np0005603621 nova_compute[247399]: 2026-01-31 08:59:11.734 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:59:11 np0005603621 podman[384549]: 2026-01-31 08:59:11.761252487 +0000 UTC m=+0.019006821 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 03:59:12 np0005603621 podman[384549]: 2026-01-31 08:59:12.071172662 +0000 UTC m=+0.328926976 container create e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 03:59:12 np0005603621 systemd[1]: Started libpod-conmon-e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130.scope.
Jan 31 03:59:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:12 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39045215f28bbb4b61d0f712fe4c79cb65e544dfc8fa7acedc9082cd46b4ee68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:12 np0005603621 podman[384549]: 2026-01-31 08:59:12.215959234 +0000 UTC m=+0.473713588 container init e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 03:59:12 np0005603621 podman[384549]: 2026-01-31 08:59:12.220011351 +0000 UTC m=+0.477765665 container start e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 03:59:12 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [NOTICE]   (384569) : New worker (384571) forked
Jan 31 03:59:12 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [NOTICE]   (384569) : Loading success.
Jan 31 03:59:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.769 247403 DEBUG nova.compute.manager [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.769 247403 DEBUG oslo_concurrency.lockutils [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.769 247403 DEBUG oslo_concurrency.lockutils [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.770 247403 DEBUG oslo_concurrency.lockutils [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.770 247403 DEBUG nova.compute.manager [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Processing event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.770 247403 DEBUG nova.compute.manager [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.770 247403 DEBUG oslo_concurrency.lockutils [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.771 247403 DEBUG oslo_concurrency.lockutils [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.771 247403 DEBUG oslo_concurrency.lockutils [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.771 247403 DEBUG nova.compute.manager [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] No waiting events found dispatching network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.771 247403 WARNING nova.compute.manager [req-559f6dcf-6f5e-4d8c-bf3f-fb44b4cc8852 req-ea2062cf-dd00-4168-b556-f9726aafa13d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received unexpected event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b for instance with vm_state building and task_state spawning.#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.772 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.774 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849952.7745712, aaf62d2f-5fda-43ee-8bf2-04e4940bf62f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.775 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.776 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.779 247403 INFO nova.virt.libvirt.driver [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Instance spawned successfully.#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.779 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.807 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.811 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.811 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.812 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.812 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.813 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.813 247403 DEBUG nova.virt.libvirt.driver [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.818 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.859 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:59:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:12.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.939 247403 INFO nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Took 14.44 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:59:12 np0005603621 nova_compute[247399]: 2026-01-31 08:59:12.939 247403 DEBUG nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:12.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:13 np0005603621 nova_compute[247399]: 2026-01-31 08:59:13.045 247403 INFO nova.compute.manager [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Took 15.55 seconds to build instance.#033[00m
Jan 31 03:59:13 np0005603621 nova_compute[247399]: 2026-01-31 08:59:13.072 247403 DEBUG oslo_concurrency.lockutils [None req-b962c6c9-9a14-4d6b-8894-1e6f2829133e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.185793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849953185827, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2159, "num_deletes": 253, "total_data_size": 3614867, "memory_usage": 3678568, "flush_reason": "Manual Compaction"}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849953260943, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3539115, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71020, "largest_seqno": 73178, "table_properties": {"data_size": 3529531, "index_size": 5949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21069, "raw_average_key_size": 20, "raw_value_size": 3509874, "raw_average_value_size": 3458, "num_data_blocks": 258, "num_entries": 1015, "num_filter_entries": 1015, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849761, "oldest_key_time": 1769849761, "file_creation_time": 1769849953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 75199 microseconds, and 5271 cpu microseconds.
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.260988) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3539115 bytes OK
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.261010) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.295089) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.295139) EVENT_LOG_v1 {"time_micros": 1769849953295131, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.295165) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3605921, prev total WAL file size 3605921, number of live WAL files 2.
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.295901) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3456KB)], [164(10179KB)]
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849953295983, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 13962611, "oldest_snapshot_seqno": -1}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 10000 keys, 12047871 bytes, temperature: kUnknown
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849953450915, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 12047871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11984804, "index_size": 36970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25029, "raw_key_size": 263202, "raw_average_key_size": 26, "raw_value_size": 11811505, "raw_average_value_size": 1181, "num_data_blocks": 1404, "num_entries": 10000, "num_filter_entries": 10000, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769849953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.451527) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12047871 bytes
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.488532) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.9 rd, 77.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 10527, records dropped: 527 output_compression: NoCompression
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.488572) EVENT_LOG_v1 {"time_micros": 1769849953488559, "job": 102, "event": "compaction_finished", "compaction_time_micros": 155275, "compaction_time_cpu_micros": 21262, "output_level": 6, "num_output_files": 1, "total_output_size": 12047871, "num_input_records": 10527, "num_output_records": 10000, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849953489087, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769849953489853, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.295706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.489885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.489889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.489890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.489892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:59:13 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-08:59:13.489893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 03:59:14 np0005603621 nova_compute[247399]: 2026-01-31 08:59:14.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 03:59:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725917846' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 03:59:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 03:59:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2725917846' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 03:59:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:59:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:14.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:59:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 500 KiB/s rd, 421 KiB/s wr, 35 op/s
Jan 31 03:59:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:15.013 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:15 np0005603621 nova_compute[247399]: 2026-01-31 08:59:15.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:16 np0005603621 nova_compute[247399]: 2026-01-31 08:59:16.699 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:16.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 841 KiB/s rd, 12 KiB/s wr, 43 op/s
Jan 31 03:59:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:16.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:18 np0005603621 nova_compute[247399]: 2026-01-31 08:59:18.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:18.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 31 03:59:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:19 np0005603621 nova_compute[247399]: 2026-01-31 08:59:19.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:19 np0005603621 nova_compute[247399]: 2026-01-31 08:59:19.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 03:59:19 np0005603621 nova_compute[247399]: 2026-01-31 08:59:19.223 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:19 np0005603621 nova_compute[247399]: 2026-01-31 08:59:19.359 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:19 np0005603621 NetworkManager[49013]: <info>  [1769849959.3705] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Jan 31 03:59:19 np0005603621 NetworkManager[49013]: <info>  [1769849959.3714] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 31 03:59:19 np0005603621 nova_compute[247399]: 2026-01-31 08:59:19.409 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:19 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:19Z|00803|binding|INFO|Releasing lport e3a35e20-7eb7-4a52-9164-bd36f73ab684 from this chassis (sb_readonly=0)
Jan 31 03:59:19 np0005603621 nova_compute[247399]: 2026-01-31 08:59:19.424 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 31 03:59:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:59:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.129 247403 DEBUG nova.compute.manager [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-changed-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.129 247403 DEBUG nova.compute.manager [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Refreshing instance network info cache due to event network-changed-b02a63e1-2ffc-49b9-ab9c-043d1923874b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.130 247403 DEBUG oslo_concurrency.lockutils [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.130 247403 DEBUG oslo_concurrency.lockutils [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.130 247403 DEBUG nova.network.neutron [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Refreshing network info cache for port b02a63e1-2ffc-49b9-ab9c-043d1923874b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 03:59:21 np0005603621 podman[384635]: 2026-01-31 08:59:21.500861813 +0000 UTC m=+0.047258123 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 03:59:21 np0005603621 podman[384636]: 2026-01-31 08:59:21.520239475 +0000 UTC m=+0.068115322 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:21 np0005603621 nova_compute[247399]: 2026-01-31 08:59:21.701 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:22 np0005603621 nova_compute[247399]: 2026-01-31 08:59:22.124 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:22.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 204 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 114 op/s
Jan 31 03:59:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:59:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:59:23 np0005603621 nova_compute[247399]: 2026-01-31 08:59:23.769 247403 DEBUG nova.network.neutron [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updated VIF entry in instance network info cache for port b02a63e1-2ffc-49b9-ab9c-043d1923874b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:59:23 np0005603621 nova_compute[247399]: 2026-01-31 08:59:23.769 247403 DEBUG nova.network.neutron [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:23 np0005603621 nova_compute[247399]: 2026-01-31 08:59:23.827 247403 DEBUG oslo_concurrency.lockutils [req-2da557b3-2f85-4ec4-bde2-638187da3a18 req-bf2b071e-be20-41a1-86d1-bf1a4cefea02 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:23 np0005603621 nova_compute[247399]: 2026-01-31 08:59:23.828 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:23 np0005603621 nova_compute[247399]: 2026-01-31 08:59:23.828 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 03:59:23 np0005603621 nova_compute[247399]: 2026-01-31 08:59:23.828 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aaf62d2f-5fda-43ee-8bf2-04e4940bf62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:24 np0005603621 nova_compute[247399]: 2026-01-31 08:59:24.225 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:24.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 215 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 31 03:59:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:24.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:26Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:e7:95 10.100.0.14
Jan 31 03:59:26 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:26Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:e7:95 10.100.0.14
Jan 31 03:59:26 np0005603621 nova_compute[247399]: 2026-01-31 08:59:26.704 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:26.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 227 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.4 MiB/s wr, 116 op/s
Jan 31 03:59:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:26.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.260 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.294 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.295 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.295 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.295 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.597 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.599 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.599 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.599 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 03:59:27 np0005603621 nova_compute[247399]: 2026-01-31 08:59:27.600 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 03:59:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026366961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.065 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.327 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.328 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.452 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.453 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3893MB free_disk=20.93923568725586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.454 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.454 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.458 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.843 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance aaf62d2f-5fda-43ee-8bf2-04e4940bf62f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.843 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.843 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 03:59:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:28 np0005603621 nova_compute[247399]: 2026-01-31 08:59:28.895 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5e9f214c-0b34-4638-bb59-83c38eef9a84 does not exist
Jan 31 03:59:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 45cbdf6a-3082-402a-b5a1-62b24b8937a8 does not exist
Jan 31 03:59:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8004dfa5-7284-4750-acf1-db9e06661887 does not exist
Jan 31 03:59:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 242 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 140 op/s
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 03:59:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 03:59:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:59:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:28.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:59:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 03:59:29 np0005603621 nova_compute[247399]: 2026-01-31 08:59:29.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:29 np0005603621 podman[384980]: 2026-01-31 08:59:29.454182178 +0000 UTC m=+0.068648908 container create f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 31 03:59:29 np0005603621 podman[384980]: 2026-01-31 08:59:29.403284411 +0000 UTC m=+0.017751161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:59:29 np0005603621 nova_compute[247399]: 2026-01-31 08:59:29.508 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 03:59:29 np0005603621 nova_compute[247399]: 2026-01-31 08:59:29.509 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 03:59:29 np0005603621 nova_compute[247399]: 2026-01-31 08:59:29.627 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 03:59:29 np0005603621 nova_compute[247399]: 2026-01-31 08:59:29.674 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 03:59:29 np0005603621 systemd[1]: Started libpod-conmon-f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18.scope.
Jan 31 03:59:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:29 np0005603621 nova_compute[247399]: 2026-01-31 08:59:29.768 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:29 np0005603621 podman[384980]: 2026-01-31 08:59:29.988790448 +0000 UTC m=+0.603257188 container init f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 03:59:29 np0005603621 podman[384980]: 2026-01-31 08:59:29.995012575 +0000 UTC m=+0.609479305 container start f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:59:29 np0005603621 sharp_margulis[384996]: 167 167
Jan 31 03:59:29 np0005603621 systemd[1]: libpod-f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18.scope: Deactivated successfully.
Jan 31 03:59:30 np0005603621 podman[384980]: 2026-01-31 08:59:30.036506275 +0000 UTC m=+0.650973035 container attach f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:59:30 np0005603621 podman[384980]: 2026-01-31 08:59:30.037654241 +0000 UTC m=+0.652120971 container died f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:59:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-467f268fecba395f5b80613a546009fe1690edf4a15e4fd4b838d7d7947df588-merged.mount: Deactivated successfully.
Jan 31 03:59:30 np0005603621 podman[384980]: 2026-01-31 08:59:30.131099671 +0000 UTC m=+0.745566401 container remove f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_margulis, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 03:59:30 np0005603621 systemd[1]: libpod-conmon-f6256f07f3346d05225221b219c925d892789106cfb71c98c5819c3ee4ffab18.scope: Deactivated successfully.
Jan 31 03:59:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 03:59:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 03:59:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587660124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 03:59:30 np0005603621 nova_compute[247399]: 2026-01-31 08:59:30.197 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:30 np0005603621 nova_compute[247399]: 2026-01-31 08:59:30.204 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:59:30 np0005603621 nova_compute[247399]: 2026-01-31 08:59:30.298 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:59:30 np0005603621 podman[385045]: 2026-01-31 08:59:30.306989335 +0000 UTC m=+0.092476111 container create d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 03:59:30 np0005603621 podman[385045]: 2026-01-31 08:59:30.235163817 +0000 UTC m=+0.020650613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:59:30 np0005603621 nova_compute[247399]: 2026-01-31 08:59:30.392 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 03:59:30 np0005603621 nova_compute[247399]: 2026-01-31 08:59:30.393 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:30 np0005603621 systemd[1]: Started libpod-conmon-d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a.scope.
Jan 31 03:59:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85f1427a8086d88c0b3886d481076b7f569eebbe768658fdadc208f9870fe26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85f1427a8086d88c0b3886d481076b7f569eebbe768658fdadc208f9870fe26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85f1427a8086d88c0b3886d481076b7f569eebbe768658fdadc208f9870fe26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85f1427a8086d88c0b3886d481076b7f569eebbe768658fdadc208f9870fe26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c85f1427a8086d88c0b3886d481076b7f569eebbe768658fdadc208f9870fe26/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:30.540 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:30.541 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:30.542 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:30 np0005603621 podman[385045]: 2026-01-31 08:59:30.574072998 +0000 UTC m=+0.359559804 container init d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_galois, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:59:30 np0005603621 podman[385045]: 2026-01-31 08:59:30.57983615 +0000 UTC m=+0.365322926 container start d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 03:59:30 np0005603621 podman[385045]: 2026-01-31 08:59:30.606537433 +0000 UTC m=+0.392024229 container attach d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:59:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 460 KiB/s rd, 3.9 MiB/s wr, 125 op/s
Jan 31 03:59:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:30.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:31 np0005603621 nova_compute[247399]: 2026-01-31 08:59:31.295 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:31 np0005603621 nova_compute[247399]: 2026-01-31 08:59:31.296 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:31 np0005603621 distracted_galois[385060]: --> passed data devices: 0 physical, 1 LVM
Jan 31 03:59:31 np0005603621 distracted_galois[385060]: --> relative data size: 1.0
Jan 31 03:59:31 np0005603621 distracted_galois[385060]: --> All data devices are unavailable
Jan 31 03:59:31 np0005603621 systemd[1]: libpod-d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a.scope: Deactivated successfully.
Jan 31 03:59:31 np0005603621 podman[385045]: 2026-01-31 08:59:31.353834918 +0000 UTC m=+1.139321694 container died d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 03:59:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c85f1427a8086d88c0b3886d481076b7f569eebbe768658fdadc208f9870fe26-merged.mount: Deactivated successfully.
Jan 31 03:59:31 np0005603621 podman[385045]: 2026-01-31 08:59:31.394553184 +0000 UTC m=+1.180039960 container remove d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_galois, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:59:31 np0005603621 systemd[1]: libpod-conmon-d1e9a652409e469fcc3b9bf4e1d287b22f931b47b77775c600886b9f4c2f300a.scope: Deactivated successfully.
Jan 31 03:59:31 np0005603621 nova_compute[247399]: 2026-01-31 08:59:31.706 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:31 np0005603621 podman[385223]: 2026-01-31 08:59:31.887094355 +0000 UTC m=+0.040776788 container create 2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:59:31 np0005603621 systemd[1]: Started libpod-conmon-2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007.scope.
Jan 31 03:59:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:31 np0005603621 podman[385223]: 2026-01-31 08:59:31.955714462 +0000 UTC m=+0.109396925 container init 2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 03:59:31 np0005603621 podman[385223]: 2026-01-31 08:59:31.86602079 +0000 UTC m=+0.019703243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:59:31 np0005603621 podman[385223]: 2026-01-31 08:59:31.961698331 +0000 UTC m=+0.115380764 container start 2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 03:59:31 np0005603621 fervent_northcutt[385239]: 167 167
Jan 31 03:59:31 np0005603621 systemd[1]: libpod-2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007.scope: Deactivated successfully.
Jan 31 03:59:31 np0005603621 podman[385223]: 2026-01-31 08:59:31.967544196 +0000 UTC m=+0.121226629 container attach 2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:59:31 np0005603621 podman[385223]: 2026-01-31 08:59:31.968063721 +0000 UTC m=+0.121746154 container died 2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:59:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ec7e5c2de141244bec2551cb36cd59a08a8e29bb2d379959449628a09a33e77b-merged.mount: Deactivated successfully.
Jan 31 03:59:32 np0005603621 podman[385223]: 2026-01-31 08:59:32.003542542 +0000 UTC m=+0.157224975 container remove 2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_northcutt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:59:32 np0005603621 systemd[1]: libpod-conmon-2769ce222acca1c55fd33d0668cf7a3d9e8befaee676c0d4e177b4a0b88a4007.scope: Deactivated successfully.
Jan 31 03:59:32 np0005603621 podman[385263]: 2026-01-31 08:59:32.140130905 +0000 UTC m=+0.042493403 container create 1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:59:32 np0005603621 systemd[1]: Started libpod-conmon-1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344.scope.
Jan 31 03:59:32 np0005603621 nova_compute[247399]: 2026-01-31 08:59:32.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 03:59:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97c66a804dc83a14818293b99912ff6b58ce6b9ca73c8422d9332788eab395a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97c66a804dc83a14818293b99912ff6b58ce6b9ca73c8422d9332788eab395a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97c66a804dc83a14818293b99912ff6b58ce6b9ca73c8422d9332788eab395a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c97c66a804dc83a14818293b99912ff6b58ce6b9ca73c8422d9332788eab395a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:32 np0005603621 podman[385263]: 2026-01-31 08:59:32.115525578 +0000 UTC m=+0.017888126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:59:32 np0005603621 podman[385263]: 2026-01-31 08:59:32.222702701 +0000 UTC m=+0.125065229 container init 1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 03:59:32 np0005603621 podman[385263]: 2026-01-31 08:59:32.228549706 +0000 UTC m=+0.130912204 container start 1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 03:59:32 np0005603621 podman[385263]: 2026-01-31 08:59:32.23216712 +0000 UTC m=+0.134529648 container attach 1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 03:59:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Jan 31 03:59:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Jan 31 03:59:32 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Jan 31 03:59:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 258 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 526 KiB/s rd, 4.7 MiB/s wr, 112 op/s
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]: {
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:    "0": [
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:        {
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "devices": [
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "/dev/loop3"
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            ],
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "lv_name": "ceph_lv0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "lv_size": "7511998464",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "name": "ceph_lv0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "tags": {
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.cephx_lockbox_secret": "",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.cluster_name": "ceph",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.crush_device_class": "",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.encrypted": "0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.osd_id": "0",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.type": "block",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:                "ceph.vdo": "0"
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            },
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "type": "block",
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:            "vg_name": "ceph_vg0"
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:        }
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]:    ]
Jan 31 03:59:32 np0005603621 vibrant_gould[385280]: }
Jan 31 03:59:32 np0005603621 systemd[1]: libpod-1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344.scope: Deactivated successfully.
Jan 31 03:59:32 np0005603621 podman[385263]: 2026-01-31 08:59:32.972130454 +0000 UTC m=+0.874492952 container died 1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 03:59:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:32.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c97c66a804dc83a14818293b99912ff6b58ce6b9ca73c8422d9332788eab395a-merged.mount: Deactivated successfully.
Jan 31 03:59:33 np0005603621 podman[385263]: 2026-01-31 08:59:33.022281588 +0000 UTC m=+0.924644086 container remove 1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 03:59:33 np0005603621 systemd[1]: libpod-conmon-1a94a198d6a64eceede4fc19f50bbc71ee64374c248aaedab8cde0d088b99344.scope: Deactivated successfully.
Jan 31 03:59:33 np0005603621 nova_compute[247399]: 2026-01-31 08:59:33.414 247403 INFO nova.compute.manager [None req-68578913-32e3-4c00-9657-3b171a840197 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Get console output
Jan 31 03:59:33 np0005603621 nova_compute[247399]: 2026-01-31 08:59:33.422 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.619794073 +0000 UTC m=+0.034437138 container create ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_borg, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 03:59:33 np0005603621 systemd[1]: Started libpod-conmon-ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f.scope.
Jan 31 03:59:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.698536949 +0000 UTC m=+0.113180044 container init ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_borg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.605246034 +0000 UTC m=+0.019889139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.705499869 +0000 UTC m=+0.120142944 container start ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.708343469 +0000 UTC m=+0.122986544 container attach ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 03:59:33 np0005603621 blissful_borg[385460]: 167 167
Jan 31 03:59:33 np0005603621 systemd[1]: libpod-ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f.scope: Deactivated successfully.
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.709592008 +0000 UTC m=+0.124235093 container died ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:59:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0c68d4debc353103492bfc22322575e90d313ac0552927141df49d1def5bb940-merged.mount: Deactivated successfully.
Jan 31 03:59:33 np0005603621 podman[385444]: 2026-01-31 08:59:33.745274795 +0000 UTC m=+0.159917870 container remove ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 03:59:33 np0005603621 systemd[1]: libpod-conmon-ed36932b8bc2d1891bb496e0fdc3d92f622a038fc7146274eab56a65891d991f.scope: Deactivated successfully.
Jan 31 03:59:33 np0005603621 podman[385484]: 2026-01-31 08:59:33.879408491 +0000 UTC m=+0.037217577 container create a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 03:59:33 np0005603621 systemd[1]: Started libpod-conmon-a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330.scope.
Jan 31 03:59:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 03:59:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a4e47f03aa3e39680598b4d64fd365d37f3f2e6902ca780fa065b1a2319be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a4e47f03aa3e39680598b4d64fd365d37f3f2e6902ca780fa065b1a2319be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a4e47f03aa3e39680598b4d64fd365d37f3f2e6902ca780fa065b1a2319be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a25a4e47f03aa3e39680598b4d64fd365d37f3f2e6902ca780fa065b1a2319be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 03:59:33 np0005603621 podman[385484]: 2026-01-31 08:59:33.862885259 +0000 UTC m=+0.020694365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 03:59:34 np0005603621 podman[385484]: 2026-01-31 08:59:34.0022952 +0000 UTC m=+0.160104316 container init a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 03:59:34 np0005603621 podman[385484]: 2026-01-31 08:59:34.008859047 +0000 UTC m=+0.166668133 container start a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 03:59:34 np0005603621 podman[385484]: 2026-01-31 08:59:34.025524793 +0000 UTC m=+0.183333899 container attach a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 03:59:34 np0005603621 nova_compute[247399]: 2026-01-31 08:59:34.256 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]: {
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:        "osd_id": 0,
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:        "type": "bluestore"
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]:    }
Jan 31 03:59:34 np0005603621 flamboyant_almeida[385501]: }
Jan 31 03:59:34 np0005603621 systemd[1]: libpod-a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330.scope: Deactivated successfully.
Jan 31 03:59:34 np0005603621 podman[385484]: 2026-01-31 08:59:34.804945323 +0000 UTC m=+0.962754429 container died a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 03:59:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a25a4e47f03aa3e39680598b4d64fd365d37f3f2e6902ca780fa065b1a2319be-merged.mount: Deactivated successfully.
Jan 31 03:59:34 np0005603621 podman[385484]: 2026-01-31 08:59:34.851168672 +0000 UTC m=+1.008977758 container remove a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 03:59:34 np0005603621 systemd[1]: libpod-conmon-a243485b1a6b7cea3c6a944c4c51751d7bf45f890296034eb2f0ddbc3617e330.scope: Deactivated successfully.
Jan 31 03:59:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 03:59:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 03:59:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6fdc1991-365f-496d-af97-36899fa9e1b5 does not exist
Jan 31 03:59:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fc4ff001-eeaa-46fc-8758-27acc288b2f2 does not exist
Jan 31 03:59:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8e9604b8-a43f-47eb-96a5-fcc0bbbb5b95 does not exist
Jan 31 03:59:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 870 KiB/s rd, 4.1 MiB/s wr, 102 op/s
Jan 31 03:59:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:34.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:35 np0005603621 nova_compute[247399]: 2026-01-31 08:59:35.658 247403 DEBUG nova.compute.manager [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-changed-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 03:59:35 np0005603621 nova_compute[247399]: 2026-01-31 08:59:35.658 247403 DEBUG nova.compute.manager [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Refreshing instance network info cache due to event network-changed-b02a63e1-2ffc-49b9-ab9c-043d1923874b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 03:59:35 np0005603621 nova_compute[247399]: 2026-01-31 08:59:35.658 247403 DEBUG oslo_concurrency.lockutils [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 03:59:35 np0005603621 nova_compute[247399]: 2026-01-31 08:59:35.659 247403 DEBUG oslo_concurrency.lockutils [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 03:59:35 np0005603621 nova_compute[247399]: 2026-01-31 08:59:35.659 247403 DEBUG nova.network.neutron [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Refreshing network info cache for port b02a63e1-2ffc-49b9-ab9c-043d1923874b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 03:59:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 03:59:36 np0005603621 nova_compute[247399]: 2026-01-31 08:59:36.709 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:59:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 128 op/s
Jan 31 03:59:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_08:59:38
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', 'volumes', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log']
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 03:59:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 116 op/s
Jan 31 03:59:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:38.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:59:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 03:59:39 np0005603621 nova_compute[247399]: 2026-01-31 08:59:39.197 247403 DEBUG nova.network.neutron [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updated VIF entry in instance network info cache for port b02a63e1-2ffc-49b9-ab9c-043d1923874b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 03:59:39 np0005603621 nova_compute[247399]: 2026-01-31 08:59:39.198 247403 DEBUG nova.network.neutron [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 03:59:39 np0005603621 nova_compute[247399]: 2026-01-31 08:59:39.260 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 03:59:39 np0005603621 nova_compute[247399]: 2026-01-31 08:59:39.361 247403 DEBUG oslo_concurrency.lockutils [req-37a57c63-5626-4236-b71f-f8eaab698d26 req-a287ae1f-b763-46cd-ba36-d7fca4d10c1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 03:59:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 109 op/s
Jan 31 03:59:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:40.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:41 np0005603621 nova_compute[247399]: 2026-01-31 08:59:41.710 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:59:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:42.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:59:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 103 op/s
Jan 31 03:59:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:44 np0005603621 nova_compute[247399]: 2026-01-31 08:59:44.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:44.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 711 KiB/s wr, 93 op/s
Jan 31 03:59:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:45 np0005603621 nova_compute[247399]: 2026-01-31 08:59:45.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:46 np0005603621 nova_compute[247399]: 2026-01-31 08:59:46.713 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:46.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 282 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 849 KiB/s wr, 96 op/s
Jan 31 03:59:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.359 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.360 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.381 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 03:59:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 03:59:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1684 writes, 7248 keys, 1684 commit groups, 1.0 writes per commit group, ingest: 10.93 MB, 0.02 MB/s#012Interval WAL: 1684 writes, 1684 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     27.5      3.63              0.23        51    0.071       0      0       0.0       0.0#012  L6      1/0   11.49 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.1     50.7     43.2     11.81              1.12        50    0.236    368K    27K       0.0       0.0#012 Sum      1/0   11.49 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.1     38.8     39.5     15.44              1.36       101    0.153    368K    27K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     67.1     68.1      1.00              0.13        10    0.100     50K   2607       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     50.7     43.2     11.81              1.12        50    0.236    368K    27K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     27.5      3.63              0.23        50    0.073       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.098, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.60 GB write, 0.10 MB/s write, 0.58 GB read, 0.10 MB/s read, 15.4 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 66.70 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000439 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3843,64.02 MB,21.058%) FilterBlock(102,1022.67 KB,0.328521%) IndexBlock(102,1.69 MB,0.554321%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.556 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.556 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.567 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.568 247403 INFO nova.compute.claims [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 03:59:48 np0005603621 nova_compute[247399]: 2026-01-31 08:59:48.732 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:48.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 305 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.7 MiB/s wr, 81 op/s
Jan 31 03:59:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.157 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.162 247403 DEBUG nova.compute.provider_tree [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.351 247403 DEBUG nova.scheduler.client.report [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.469 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.470 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.536 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.537 247403 DEBUG nova.network.neutron [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.558 247403 INFO nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.618 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004130534035920342 of space, bias 1.0, pg target 1.2391602107761026 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6469151312116136 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 03:59:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.745 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.747 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.747 247403 INFO nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Creating image(s)#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.772 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.798 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.823 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.828 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.852 247403 DEBUG nova.policy [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.884 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.885 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.885 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.886 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.913 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:49 np0005603621 nova_compute[247399]: 2026-01-31 08:59:49.917 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:50.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 334 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 80 op/s
Jan 31 03:59:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:51.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:51 np0005603621 nova_compute[247399]: 2026-01-31 08:59:51.159 247403 DEBUG nova.network.neutron [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Successfully created port: a5835356-9105-4477-b460-52d3c133005b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 03:59:51 np0005603621 nova_compute[247399]: 2026-01-31 08:59:51.715 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.205 247403 DEBUG nova.network.neutron [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Successfully updated port: a5835356-9105-4477-b460-52d3c133005b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.226 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.227 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.228 247403 DEBUG nova.network.neutron [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.325 247403 DEBUG nova.compute.manager [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-changed-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.326 247403 DEBUG nova.compute.manager [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Refreshing instance network info cache due to event network-changed-a5835356-9105-4477-b460-52d3c133005b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.326 247403 DEBUG oslo_concurrency.lockutils [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.380 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:52 np0005603621 nova_compute[247399]: 2026-01-31 08:59:52.420 247403 DEBUG nova.network.neutron [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 03:59:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:52 np0005603621 podman[385761]: 2026-01-31 08:59:52.500555877 +0000 UTC m=+0.053402897 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:59:52 np0005603621 podman[385762]: 2026-01-31 08:59:52.523650697 +0000 UTC m=+0.076487937 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 03:59:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 363 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 108 op/s
Jan 31 03:59:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:53.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.387 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.450 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.559 247403 DEBUG nova.network.neutron [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updating instance_info_cache with network_info: [{"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.617 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.617 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Instance network_info: |[{"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.618 247403 DEBUG oslo_concurrency.lockutils [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.618 247403 DEBUG nova.network.neutron [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Refreshing network info cache for port a5835356-9105-4477-b460-52d3c133005b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.723 247403 DEBUG nova.objects.instance [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid 0caaa0c0-9873-4501-b06c-237d07cac4fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.742 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.743 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Ensure instance console log exists: /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.743 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.744 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.744 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.746 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Start _get_guest_xml network_info=[{"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.750 247403 WARNING nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.756 247403 DEBUG nova.virt.libvirt.host [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.756 247403 DEBUG nova.virt.libvirt.host [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.763 247403 DEBUG nova.virt.libvirt.host [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.763 247403 DEBUG nova.virt.libvirt.host [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.764 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.765 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.765 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.765 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.765 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.766 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.766 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.766 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.766 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.766 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.767 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.767 247403 DEBUG nova.virt.hardware [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 03:59:53 np0005603621 nova_compute[247399]: 2026-01-31 08:59:53.769 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:59:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1255909548' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.186 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.209 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.213 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 03:59:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259059042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.606 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.607 247403 DEBUG nova.virt.libvirt.vif [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-257568249',display_name='tempest-TestNetworkBasicOps-server-257568249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-257568249',id=192,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK64YKCJgeKBmuKWp3ED7tbweB+jmarjthioIohq8NfCC4aDfAAGGt9caWH0MyCTR9i6pOjomc3WHQgXmlltHHaWz/NrCG9nSV10R0pK+IcuVpzQuiJIvGczWF/CGdCjpQ==',key_name='tempest-TestNetworkBasicOps-1620914591',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-0l6qsw2s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:59:49Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0caaa0c0-9873-4501-b06c-237d07cac4fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.608 247403 DEBUG nova.network.os_vif_util [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.608 247403 DEBUG nova.network.os_vif_util [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.609 247403 DEBUG nova.objects.instance [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0caaa0c0-9873-4501-b06c-237d07cac4fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.629 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] End _get_guest_xml xml=<domain type="kvm">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <uuid>0caaa0c0-9873-4501-b06c-237d07cac4fe</uuid>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <name>instance-000000c0</name>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-257568249</nova:name>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 08:59:53</nova:creationTime>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <nova:port uuid="a5835356-9105-4477-b460-52d3c133005b">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <system>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <entry name="serial">0caaa0c0-9873-4501-b06c-237d07cac4fe</entry>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <entry name="uuid">0caaa0c0-9873-4501-b06c-237d07cac4fe</entry>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </system>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <os>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </os>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <features>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </features>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </clock>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  <devices>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0caaa0c0-9873-4501-b06c-237d07cac4fe_disk">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0caaa0c0-9873-4501-b06c-237d07cac4fe_disk.config">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </source>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      </auth>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </disk>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:c6:6a:04"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <target dev="tapa5835356-91"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </interface>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/console.log" append="off"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </serial>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <video>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </video>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </rng>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 03:59:54 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 03:59:54 np0005603621 nova_compute[247399]:  </devices>
Jan 31 03:59:54 np0005603621 nova_compute[247399]: </domain>
Jan 31 03:59:54 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.630 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Preparing to wait for external event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.630 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.631 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.631 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.631 247403 DEBUG nova.virt.libvirt.vif [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T08:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-257568249',display_name='tempest-TestNetworkBasicOps-server-257568249',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-257568249',id=192,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK64YKCJgeKBmuKWp3ED7tbweB+jmarjthioIohq8NfCC4aDfAAGGt9caWH0MyCTR9i6pOjomc3WHQgXmlltHHaWz/NrCG9nSV10R0pK+IcuVpzQuiJIvGczWF/CGdCjpQ==',key_name='tempest-TestNetworkBasicOps-1620914591',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-0l6qsw2s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T08:59:49Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0caaa0c0-9873-4501-b06c-237d07cac4fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.632 247403 DEBUG nova.network.os_vif_util [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.632 247403 DEBUG nova.network.os_vif_util [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.632 247403 DEBUG os_vif [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.633 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.633 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.635 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.636 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5835356-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.636 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5835356-91, col_values=(('external_ids', {'iface-id': 'a5835356-9105-4477-b460-52d3c133005b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:6a:04', 'vm-uuid': '0caaa0c0-9873-4501-b06c-237d07cac4fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.637 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:54 np0005603621 NetworkManager[49013]: <info>  [1769849994.6385] manager: (tapa5835356-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.639 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.642 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.643 247403 INFO os_vif [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91')#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.707 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.707 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.707 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:c6:6a:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.708 247403 INFO nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Using config drive#033[00m
Jan 31 03:59:54 np0005603621 nova_compute[247399]: 2026-01-31 08:59:54.731 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.7 MiB/s wr, 141 op/s
Jan 31 03:59:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:55.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:55 np0005603621 nova_compute[247399]: 2026-01-31 08:59:55.666 247403 INFO nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Creating config drive at /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/disk.config#033[00m
Jan 31 03:59:55 np0005603621 nova_compute[247399]: 2026-01-31 08:59:55.671 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6t8jpilt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:55 np0005603621 nova_compute[247399]: 2026-01-31 08:59:55.796 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6t8jpilt" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:55 np0005603621 nova_compute[247399]: 2026-01-31 08:59:55.828 247403 DEBUG nova.storage.rbd_utils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 03:59:55 np0005603621 nova_compute[247399]: 2026-01-31 08:59:55.832 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/disk.config 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.485 247403 DEBUG nova.network.neutron [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updated VIF entry in instance network info cache for port a5835356-9105-4477-b460-52d3c133005b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.486 247403 DEBUG nova.network.neutron [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updating instance_info_cache with network_info: [{"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.531 247403 DEBUG oslo_concurrency.lockutils [req-07fe1eb4-ae56-4dc0-9c15-a621209cdef1 req-cb0f9d20-6798-435f-8d3b-3389c0aa9fe5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.718 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 03:59:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.926 247403 DEBUG oslo_concurrency.processutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/disk.config 0caaa0c0-9873-4501-b06c-237d07cac4fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.926 247403 INFO nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Deleting local config drive /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe/disk.config because it was imported into RBD.#033[00m
Jan 31 03:59:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.7 MiB/s wr, 143 op/s
Jan 31 03:59:56 np0005603621 kernel: tapa5835356-91: entered promiscuous mode
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:56Z|00804|binding|INFO|Claiming lport a5835356-9105-4477-b460-52d3c133005b for this chassis.
Jan 31 03:59:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:56Z|00805|binding|INFO|a5835356-9105-4477-b460-52d3c133005b: Claiming fa:16:3e:c6:6a:04 10.100.0.5
Jan 31 03:59:56 np0005603621 NetworkManager[49013]: <info>  [1769849996.9699] manager: (tapa5835356-91): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.973 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:56Z|00806|binding|INFO|Setting lport a5835356-9105-4477-b460-52d3c133005b ovn-installed in OVS
Jan 31 03:59:56 np0005603621 nova_compute[247399]: 2026-01-31 08:59:56.975 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:56.980 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:6a:04 10.100.0.5'], port_security=['fa:16:3e:c6:6a:04 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0caaa0c0-9873-4501-b06c-237d07cac4fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c352ef8-91d5-4a40-a1fd-9ca13c293e32', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3acdb359-6f9b-4f22-ab08-f7d3c7064a26, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a5835356-9105-4477-b460-52d3c133005b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 03:59:56 np0005603621 ovn_controller[149152]: 2026-01-31T08:59:56Z|00807|binding|INFO|Setting lport a5835356-9105-4477-b460-52d3c133005b up in Southbound
Jan 31 03:59:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:56.981 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a5835356-9105-4477-b460-52d3c133005b in datapath 506aa866-6ce6-4d84-b5c1-b07676e27a1d bound to our chassis#033[00m
Jan 31 03:59:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:56.983 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 506aa866-6ce6-4d84-b5c1-b07676e27a1d#033[00m
Jan 31 03:59:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:56.997 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2e0f9e-45b8-408a-8163-0801b4696742]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:57 np0005603621 systemd-machined[212769]: New machine qemu-94-instance-000000c0.
Jan 31 03:59:57 np0005603621 systemd-udevd[386016]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 03:59:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:57 np0005603621 NetworkManager[49013]: <info>  [1769849997.0116] device (tapa5835356-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 03:59:57 np0005603621 NetworkManager[49013]: <info>  [1769849997.0125] device (tapa5835356-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.019 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd67155-e5f3-417c-85a6-ea82ff8c6afe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.022 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[318f7cb5-f0c7-4d80-aa01-fc58328b82d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:57 np0005603621 systemd[1]: Started Virtual Machine qemu-94-instance-000000c0.
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.042 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5b245b98-af74-4b5a-aa95-66cfc8d96cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.056 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0099a9dc-e702-458c-8c7a-9c4853c492ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap506aa866-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:79:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 925685, 'reachable_time': 27602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386028, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.067 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ef32c571-456e-4156-b9cc-b562593c6a65]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap506aa866-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 925695, 'tstamp': 925695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386029, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap506aa866-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 925697, 'tstamp': 925697}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386029, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.069 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap506aa866-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.070 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.072 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.072 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap506aa866-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.072 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.073 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap506aa866-60, col_values=(('external_ids', {'iface-id': 'e3a35e20-7eb7-4a52-9164-bd36f73ab684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 03:59:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 08:59:57.073 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 03:59:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.964 247403 DEBUG nova.compute.manager [req-3eb7dcc5-7cf5-4f7b-9c57-04b04d7ac74a req-5457eff1-c642-4788-b448-a5c58a8f1d0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.967 247403 DEBUG oslo_concurrency.lockutils [req-3eb7dcc5-7cf5-4f7b-9c57-04b04d7ac74a req-5457eff1-c642-4788-b448-a5c58a8f1d0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.967 247403 DEBUG oslo_concurrency.lockutils [req-3eb7dcc5-7cf5-4f7b-9c57-04b04d7ac74a req-5457eff1-c642-4788-b448-a5c58a8f1d0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.968 247403 DEBUG oslo_concurrency.lockutils [req-3eb7dcc5-7cf5-4f7b-9c57-04b04d7ac74a req-5457eff1-c642-4788-b448-a5c58a8f1d0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 03:59:57 np0005603621 nova_compute[247399]: 2026-01-31 08:59:57.969 247403 DEBUG nova.compute.manager [req-3eb7dcc5-7cf5-4f7b-9c57-04b04d7ac74a req-5457eff1-c642-4788-b448-a5c58a8f1d0d fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Processing event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 03:59:58 np0005603621 nova_compute[247399]: 2026-01-31 08:59:58.833 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849998.8324947, 0caaa0c0-9873-4501-b06c-237d07cac4fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:58 np0005603621 nova_compute[247399]: 2026-01-31 08:59:58.834 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] VM Started (Lifecycle Event)#033[00m
Jan 31 03:59:58 np0005603621 nova_compute[247399]: 2026-01-31 08:59:58.837 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 03:59:58 np0005603621 nova_compute[247399]: 2026-01-31 08:59:58.840 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 03:59:58 np0005603621 nova_compute[247399]: 2026-01-31 08:59:58.843 247403 INFO nova.virt.libvirt.driver [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Instance spawned successfully.#033[00m
Jan 31 03:59:58 np0005603621 nova_compute[247399]: 2026-01-31 08:59:58.843 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 03:59:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 03:59:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:08:59:58.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 03:59:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.9 MiB/s wr, 153 op/s
Jan 31 03:59:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 03:59:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 03:59:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:08:59:59.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.058 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.061 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.067 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.068 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.068 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.069 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.069 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.070 247403 DEBUG nova.virt.libvirt.driver [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.181 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.182 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849998.8329003, 0caaa0c0-9873-4501-b06c-237d07cac4fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.182 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] VM Paused (Lifecycle Event)#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.318 247403 INFO nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Took 9.57 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.318 247403 DEBUG nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.565 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.567 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769849998.839319, 0caaa0c0-9873-4501-b06c-237d07cac4fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.568 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] VM Resumed (Lifecycle Event)#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.645 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.648 247403 INFO nova.compute.manager [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Took 11.20 seconds to build instance.#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.664 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.667 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 03:59:59 np0005603621 nova_compute[247399]: 2026-01-31 08:59:59.862 247403 DEBUG oslo_concurrency.lockutils [None req-c3f19f6b-2110-4d52-9bea-61d5c1eb5435 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 04:00:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 04:00:00 np0005603621 nova_compute[247399]: 2026-01-31 09:00:00.175 247403 DEBUG nova.compute.manager [req-a075d440-72a6-47e8-afed-2871103f0d33 req-162840fd-7bc4-4950-ac49-194cf2bc40e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:00:00 np0005603621 nova_compute[247399]: 2026-01-31 09:00:00.175 247403 DEBUG oslo_concurrency.lockutils [req-a075d440-72a6-47e8-afed-2871103f0d33 req-162840fd-7bc4-4950-ac49-194cf2bc40e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:00:00 np0005603621 nova_compute[247399]: 2026-01-31 09:00:00.176 247403 DEBUG oslo_concurrency.lockutils [req-a075d440-72a6-47e8-afed-2871103f0d33 req-162840fd-7bc4-4950-ac49-194cf2bc40e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:00:00 np0005603621 nova_compute[247399]: 2026-01-31 09:00:00.176 247403 DEBUG oslo_concurrency.lockutils [req-a075d440-72a6-47e8-afed-2871103f0d33 req-162840fd-7bc4-4950-ac49-194cf2bc40e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:00:00 np0005603621 nova_compute[247399]: 2026-01-31 09:00:00.176 247403 DEBUG nova.compute.manager [req-a075d440-72a6-47e8-afed-2871103f0d33 req-162840fd-7bc4-4950-ac49-194cf2bc40e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] No waiting events found dispatching network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 04:00:00 np0005603621 nova_compute[247399]: 2026-01-31 09:00:00.176 247403 WARNING nova.compute.manager [req-a075d440-72a6-47e8-afed-2871103f0d33 req-162840fd-7bc4-4950-ac49-194cf2bc40e6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received unexpected event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b for instance with vm_state active and task_state None.
Jan 31 04:00:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:00.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 161 op/s
Jan 31 04:00:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:01.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:01 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:01Z|00808|binding|INFO|Releasing lport e3a35e20-7eb7-4a52-9164-bd36f73ab684 from this chassis (sb_readonly=0)
Jan 31 04:00:01 np0005603621 nova_compute[247399]: 2026-01-31 09:00:01.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:01 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:01Z|00809|binding|INFO|Releasing lport e3a35e20-7eb7-4a52-9164-bd36f73ab684 from this chassis (sb_readonly=0)
Jan 31 04:00:01 np0005603621 nova_compute[247399]: 2026-01-31 09:00:01.334 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:01 np0005603621 nova_compute[247399]: 2026-01-31 09:00:01.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.0 MiB/s wr, 172 op/s
Jan 31 04:00:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:00:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624913754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:00:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:03.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:04 np0005603621 nova_compute[247399]: 2026-01-31 09:00:04.649 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:04.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 166 op/s
Jan 31 04:00:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:05.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Jan 31 04:00:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Jan 31 04:00:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Jan 31 04:00:05 np0005603621 NetworkManager[49013]: <info>  [1769850005.6341] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Jan 31 04:00:05 np0005603621 nova_compute[247399]: 2026-01-31 09:00:05.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:05 np0005603621 NetworkManager[49013]: <info>  [1769850005.6350] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 31 04:00:05 np0005603621 nova_compute[247399]: 2026-01-31 09:00:05.651 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:05 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:05Z|00810|binding|INFO|Releasing lport e3a35e20-7eb7-4a52-9164-bd36f73ab684 from this chassis (sb_readonly=0)
Jan 31 04:00:05 np0005603621 nova_compute[247399]: 2026-01-31 09:00:05.673 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Jan 31 04:00:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Jan 31 04:00:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Jan 31 04:00:06 np0005603621 nova_compute[247399]: 2026-01-31 09:00:06.725 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:06.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 429 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.7 MiB/s wr, 216 op/s
Jan 31 04:00:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:07.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.440666) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850007440795, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 805, "num_deletes": 255, "total_data_size": 1065854, "memory_usage": 1091360, "flush_reason": "Manual Compaction"}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850007451016, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1054346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73180, "largest_seqno": 73983, "table_properties": {"data_size": 1050233, "index_size": 1828, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9388, "raw_average_key_size": 19, "raw_value_size": 1041785, "raw_average_value_size": 2161, "num_data_blocks": 79, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769849953, "oldest_key_time": 1769849953, "file_creation_time": 1769850007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 10361 microseconds, and 3183 cpu microseconds.
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.451074) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1054346 bytes OK
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.451098) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.454270) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.454302) EVENT_LOG_v1 {"time_micros": 1769850007454294, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.454328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1061836, prev total WAL file size 1061836, number of live WAL files 2.
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.454841) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323635' seq:0, type:0; will stop at (end)
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1029KB)], [167(11MB)]
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850007454897, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 13102217, "oldest_snapshot_seqno": -1}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 9954 keys, 12971299 bytes, temperature: kUnknown
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850007522566, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 12971299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12907110, "index_size": 38220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24901, "raw_key_size": 263253, "raw_average_key_size": 26, "raw_value_size": 12733233, "raw_average_value_size": 1279, "num_data_blocks": 1454, "num_entries": 9954, "num_filter_entries": 9954, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850007, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.523633) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 12971299 bytes
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.525968) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 190.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.5 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(24.7) write-amplify(12.3) OK, records in: 10482, records dropped: 528 output_compression: NoCompression
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.526016) EVENT_LOG_v1 {"time_micros": 1769850007525999, "job": 104, "event": "compaction_finished", "compaction_time_micros": 68220, "compaction_time_cpu_micros": 22754, "output_level": 6, "num_output_files": 1, "total_output_size": 12971299, "num_input_records": 10482, "num_output_records": 9954, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850007526571, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850007527390, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.454755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.527507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.527514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.527521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.527523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:00:07 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:00:07.527526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:00:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:07.562 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 04:00:07 np0005603621 nova_compute[247399]: 2026-01-31 09:00:07.562 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:07.563 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 31 04:00:07 np0005603621 nova_compute[247399]: 2026-01-31 09:00:07.578 247403 DEBUG nova.compute.manager [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-changed-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:00:07 np0005603621 nova_compute[247399]: 2026-01-31 09:00:07.579 247403 DEBUG nova.compute.manager [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Refreshing instance network info cache due to event network-changed-a5835356-9105-4477-b460-52d3c133005b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 04:00:07 np0005603621 nova_compute[247399]: 2026-01-31 09:00:07.581 247403 DEBUG oslo_concurrency.lockutils [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:00:07 np0005603621 nova_compute[247399]: 2026-01-31 09:00:07.582 247403 DEBUG oslo_concurrency.lockutils [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:00:07 np0005603621 nova_compute[247399]: 2026-01-31 09:00:07.582 247403 DEBUG nova.network.neutron [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Refreshing network info cache for port a5835356-9105-4477-b460-52d3c133005b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:00:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:08.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 462 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.7 MiB/s rd, 7.1 MiB/s wr, 232 op/s
Jan 31 04:00:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Jan 31 04:00:09 np0005603621 nova_compute[247399]: 2026-01-31 09:00:09.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:00:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Jan 31 04:00:09 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Jan 31 04:00:09 np0005603621 nova_compute[247399]: 2026-01-31 09:00:09.653 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:10 np0005603621 nova_compute[247399]: 2026-01-31 09:00:10.906 247403 DEBUG nova.network.neutron [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updated VIF entry in instance network info cache for port a5835356-9105-4477-b460-52d3c133005b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 04:00:10 np0005603621 nova_compute[247399]: 2026-01-31 09:00:10.907 247403 DEBUG nova.network.neutron [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updating instance_info_cache with network_info: [{"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 04:00:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:10.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 12 MiB/s wr, 289 op/s
Jan 31 04:00:10 np0005603621 nova_compute[247399]: 2026-01-31 09:00:10.965 247403 DEBUG oslo_concurrency.lockutils [req-853b4739-4ae6-486c-b1d9-fb09ede4280e req-ee8c6c0a-c38c-4d95-b250-fb8a109d4991 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0caaa0c0-9873-4501-b06c-237d07cac4fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 04:00:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:11.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:11 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:11Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:6a:04 10.100.0.5
Jan 31 04:00:11 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:11Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:6a:04 10.100.0.5
Jan 31 04:00:11 np0005603621 nova_compute[247399]: 2026-01-31 09:00:11.727 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Jan 31 04:00:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Jan 31 04:00:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Jan 31 04:00:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 527 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 11 MiB/s wr, 290 op/s
Jan 31 04:00:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:14 np0005603621 nova_compute[247399]: 2026-01-31 09:00:14.656 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:14.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 537 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 9.8 MiB/s wr, 257 op/s
Jan 31 04:00:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:15 np0005603621 nova_compute[247399]: 2026-01-31 09:00:15.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:16 np0005603621 nova_compute[247399]: 2026-01-31 09:00:16.776 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:16.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 537 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.9 MiB/s wr, 217 op/s
Jan 31 04:00:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:17.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:17 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:17.565 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:18 np0005603621 nova_compute[247399]: 2026-01-31 09:00:18.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:18 np0005603621 nova_compute[247399]: 2026-01-31 09:00:18.376 247403 INFO nova.compute.manager [None req-da419bce-68ec-4946-be95-4cbfa50f6c54 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Get console output#033[00m
Jan 31 04:00:18 np0005603621 nova_compute[247399]: 2026-01-31 09:00:18.380 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 04:00:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:18.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 537 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 473 KiB/s rd, 2.8 MiB/s wr, 104 op/s
Jan 31 04:00:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.061 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.062 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.062 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.062 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.063 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.064 247403 INFO nova.compute.manager [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Terminating instance#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.065 247403 DEBUG nova.compute.manager [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:00:19 np0005603621 kernel: tapa5835356-91 (unregistering): left promiscuous mode
Jan 31 04:00:19 np0005603621 NetworkManager[49013]: <info>  [1769850019.1565] device (tapa5835356-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:00:19 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:19Z|00811|binding|INFO|Releasing lport a5835356-9105-4477-b460-52d3c133005b from this chassis (sb_readonly=0)
Jan 31 04:00:19 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:19Z|00812|binding|INFO|Setting lport a5835356-9105-4477-b460-52d3c133005b down in Southbound
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.163 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:19Z|00813|binding|INFO|Removing iface tapa5835356-91 ovn-installed in OVS
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.165 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.169 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.181 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:6a:04 10.100.0.5'], port_security=['fa:16:3e:c6:6a:04 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0caaa0c0-9873-4501-b06c-237d07cac4fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c352ef8-91d5-4a40-a1fd-9ca13c293e32', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3acdb359-6f9b-4f22-ab08-f7d3c7064a26, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=a5835356-9105-4477-b460-52d3c133005b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.183 159734 INFO neutron.agent.ovn.metadata.agent [-] Port a5835356-9105-4477-b460-52d3c133005b in datapath 506aa866-6ce6-4d84-b5c1-b07676e27a1d unbound from our chassis#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.185 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 506aa866-6ce6-4d84-b5c1-b07676e27a1d#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.197 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74d358c6-27b9-48a8-a77a-9ea1a3a11590]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:19 np0005603621 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c0.scope: Deactivated successfully.
Jan 31 04:00:19 np0005603621 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000c0.scope: Consumed 14.025s CPU time.
Jan 31 04:00:19 np0005603621 systemd-machined[212769]: Machine qemu-94-instance-000000c0 terminated.
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.219 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6be2c2-0ae3-4d31-ab0c-b78b8e03c0e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.221 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc72ab2-f569-46eb-a058-fe682258e53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.242 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[4d315559-6fec-4d0f-9918-7ef3e126a602]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.256 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[72b8bfe5-cabb-4d3c-9541-7688a8f14708]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap506aa866-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:23:79:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 241], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 925685, 'reachable_time': 27602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386150, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.272 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdc1a95-c805-476b-adab-bf92378cf0a8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap506aa866-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 925695, 'tstamp': 925695}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386151, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap506aa866-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 925697, 'tstamp': 925697}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386151, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.274 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap506aa866-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.275 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.280 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap506aa866-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.280 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.281 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap506aa866-60, col_values=(('external_ids', {'iface-id': 'e3a35e20-7eb7-4a52-9164-bd36f73ab684'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:19.281 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.293 247403 INFO nova.virt.libvirt.driver [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Instance destroyed successfully.#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.294 247403 DEBUG nova.objects.instance [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid 0caaa0c0-9873-4501-b06c-237d07cac4fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.344 247403 DEBUG nova.virt.libvirt.vif [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:59:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-257568249',display_name='tempest-TestNetworkBasicOps-server-257568249',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-257568249',id=192,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK64YKCJgeKBmuKWp3ED7tbweB+jmarjthioIohq8NfCC4aDfAAGGt9caWH0MyCTR9i6pOjomc3WHQgXmlltHHaWz/NrCG9nSV10R0pK+IcuVpzQuiJIvGczWF/CGdCjpQ==',key_name='tempest-TestNetworkBasicOps-1620914591',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:59:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-0l6qsw2s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:59:59Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0caaa0c0-9873-4501-b06c-237d07cac4fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.344 247403 DEBUG nova.network.os_vif_util [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "a5835356-9105-4477-b460-52d3c133005b", "address": "fa:16:3e:c6:6a:04", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5835356-91", "ovs_interfaceid": "a5835356-9105-4477-b460-52d3c133005b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.347 247403 DEBUG nova.network.os_vif_util [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.348 247403 DEBUG os_vif [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.349 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.350 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5835356-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.354 247403 INFO os_vif [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:6a:04,bridge_name='br-int',has_traffic_filtering=True,id=a5835356-9105-4477-b460-52d3c133005b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5835356-91')#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.804 247403 INFO nova.virt.libvirt.driver [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Deleting instance files /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe_del#033[00m
Jan 31 04:00:19 np0005603621 nova_compute[247399]: 2026-01-31 09:00:19.805 247403 INFO nova.virt.libvirt.driver [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Deletion of /var/lib/nova/instances/0caaa0c0-9873-4501-b06c-237d07cac4fe_del complete#033[00m
Jan 31 04:00:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:20.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 503 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 462 KiB/s rd, 2.6 MiB/s wr, 121 op/s
Jan 31 04:00:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.436 247403 DEBUG nova.compute.manager [req-32bb0b37-18b0-4146-942a-ad7db14383a3 req-07dc0336-c21c-4ca4-b4f2-401b18af95af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-vif-unplugged-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.437 247403 DEBUG oslo_concurrency.lockutils [req-32bb0b37-18b0-4146-942a-ad7db14383a3 req-07dc0336-c21c-4ca4-b4f2-401b18af95af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.437 247403 DEBUG oslo_concurrency.lockutils [req-32bb0b37-18b0-4146-942a-ad7db14383a3 req-07dc0336-c21c-4ca4-b4f2-401b18af95af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.437 247403 DEBUG oslo_concurrency.lockutils [req-32bb0b37-18b0-4146-942a-ad7db14383a3 req-07dc0336-c21c-4ca4-b4f2-401b18af95af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.437 247403 DEBUG nova.compute.manager [req-32bb0b37-18b0-4146-942a-ad7db14383a3 req-07dc0336-c21c-4ca4-b4f2-401b18af95af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] No waiting events found dispatching network-vif-unplugged-a5835356-9105-4477-b460-52d3c133005b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.437 247403 DEBUG nova.compute.manager [req-32bb0b37-18b0-4146-942a-ad7db14383a3 req-07dc0336-c21c-4ca4-b4f2-401b18af95af fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-vif-unplugged-a5835356-9105-4477-b460-52d3c133005b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.474 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.537 247403 INFO nova.compute.manager [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Took 2.47 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.537 247403 DEBUG oslo.service.loopingcall [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.538 247403 DEBUG nova.compute.manager [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.538 247403 DEBUG nova.network.neutron [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:00:21 np0005603621 nova_compute[247399]: 2026-01-31 09:00:21.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:22 np0005603621 nova_compute[247399]: 2026-01-31 09:00:22.076 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:00:22 np0005603621 nova_compute[247399]: 2026-01-31 09:00:22.076 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:00:22 np0005603621 nova_compute[247399]: 2026-01-31 09:00:22.077 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:00:22 np0005603621 nova_compute[247399]: 2026-01-31 09:00:22.077 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid aaf62d2f-5fda-43ee-8bf2-04e4940bf62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:22.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 416 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 361 KiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 31 04:00:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:23 np0005603621 podman[386234]: 2026-01-31 09:00:23.503585876 +0000 UTC m=+0.058571570 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Jan 31 04:00:23 np0005603621 podman[386235]: 2026-01-31 09:00:23.524854377 +0000 UTC m=+0.078964613 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible)
Jan 31 04:00:23 np0005603621 nova_compute[247399]: 2026-01-31 09:00:23.635 247403 DEBUG nova.compute.manager [req-c54c9b6e-ecdb-4fdf-a3eb-5e8773e95a67 req-e4798655-731a-4ea7-a185-9d8abbe60dcb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:23 np0005603621 nova_compute[247399]: 2026-01-31 09:00:23.635 247403 DEBUG oslo_concurrency.lockutils [req-c54c9b6e-ecdb-4fdf-a3eb-5e8773e95a67 req-e4798655-731a-4ea7-a185-9d8abbe60dcb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:23 np0005603621 nova_compute[247399]: 2026-01-31 09:00:23.636 247403 DEBUG oslo_concurrency.lockutils [req-c54c9b6e-ecdb-4fdf-a3eb-5e8773e95a67 req-e4798655-731a-4ea7-a185-9d8abbe60dcb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:23 np0005603621 nova_compute[247399]: 2026-01-31 09:00:23.636 247403 DEBUG oslo_concurrency.lockutils [req-c54c9b6e-ecdb-4fdf-a3eb-5e8773e95a67 req-e4798655-731a-4ea7-a185-9d8abbe60dcb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:23 np0005603621 nova_compute[247399]: 2026-01-31 09:00:23.636 247403 DEBUG nova.compute.manager [req-c54c9b6e-ecdb-4fdf-a3eb-5e8773e95a67 req-e4798655-731a-4ea7-a185-9d8abbe60dcb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] No waiting events found dispatching network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:00:23 np0005603621 nova_compute[247399]: 2026-01-31 09:00:23.636 247403 WARNING nova.compute.manager [req-c54c9b6e-ecdb-4fdf-a3eb-5e8773e95a67 req-e4798655-731a-4ea7-a185-9d8abbe60dcb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received unexpected event network-vif-plugged-a5835356-9105-4477-b460-52d3c133005b for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:00:24 np0005603621 nova_compute[247399]: 2026-01-31 09:00:24.018 247403 DEBUG nova.network.neutron [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:00:24 np0005603621 nova_compute[247399]: 2026-01-31 09:00:24.230 247403 DEBUG nova.compute.manager [req-20867ed1-f6bb-45df-81ba-a3ad6f813ed4 req-a5fc4134-5df4-4454-96e2-19ecc92cc9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Received event network-vif-deleted-a5835356-9105-4477-b460-52d3c133005b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:24 np0005603621 nova_compute[247399]: 2026-01-31 09:00:24.230 247403 INFO nova.compute.manager [req-20867ed1-f6bb-45df-81ba-a3ad6f813ed4 req-a5fc4134-5df4-4454-96e2-19ecc92cc9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Neutron deleted interface a5835356-9105-4477-b460-52d3c133005b; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 04:00:24 np0005603621 nova_compute[247399]: 2026-01-31 09:00:24.230 247403 DEBUG nova.network.neutron [req-20867ed1-f6bb-45df-81ba-a3ad6f813ed4 req-a5fc4134-5df4-4454-96e2-19ecc92cc9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:00:24 np0005603621 nova_compute[247399]: 2026-01-31 09:00:24.232 247403 INFO nova.compute.manager [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Took 2.69 seconds to deallocate network for instance.#033[00m
Jan 31 04:00:24 np0005603621 nova_compute[247399]: 2026-01-31 09:00:24.352 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:24.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 379 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 180 KiB/s rd, 999 KiB/s wr, 78 op/s
Jan 31 04:00:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:25 np0005603621 nova_compute[247399]: 2026-01-31 09:00:25.568 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:25 np0005603621 nova_compute[247399]: 2026-01-31 09:00:25.631 247403 DEBUG nova.compute.manager [req-20867ed1-f6bb-45df-81ba-a3ad6f813ed4 req-a5fc4134-5df4-4454-96e2-19ecc92cc9a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Detach interface failed, port_id=a5835356-9105-4477-b460-52d3c133005b, reason: Instance 0caaa0c0-9873-4501-b06c-237d07cac4fe could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 04:00:26 np0005603621 nova_compute[247399]: 2026-01-31 09:00:26.137 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:26 np0005603621 nova_compute[247399]: 2026-01-31 09:00:26.138 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:26 np0005603621 nova_compute[247399]: 2026-01-31 09:00:26.731 247403 DEBUG oslo_concurrency.processutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:26 np0005603621 nova_compute[247399]: 2026-01-31 09:00:26.782 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:26.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 379 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 55 op/s
Jan 31 04:00:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153150609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.153 247403 DEBUG oslo_concurrency.processutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.158 247403 DEBUG nova.compute.provider_tree [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.199 247403 DEBUG nova.scheduler.client.report [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.263 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.290 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [{"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.342 247403 INFO nova.scheduler.client.report [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance 0caaa0c0-9873-4501-b06c-237d07cac4fe#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.345 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.345 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.346 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.346 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.347 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.347 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.412 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.413 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.413 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.414 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.414 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.789 247403 DEBUG oslo_concurrency.lockutils [None req-9b7a2fb4-7bc9-4ac1-aeca-8227788a497e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0caaa0c0-9873-4501-b06c-237d07cac4fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438359300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.855 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.955 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:00:27 np0005603621 nova_compute[247399]: 2026-01-31 09:00:27.955 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000bd as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.122 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.124 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3916MB free_disk=20.897174835205078GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.124 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.125 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.230 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance aaf62d2f-5fda-43ee-8bf2-04e4940bf62f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.231 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.231 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.278 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674801092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.781 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.787 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:00:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:28.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:28 np0005603621 nova_compute[247399]: 2026-01-31 09:00:28.950 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:00:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 379 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 55 op/s
Jan 31 04:00:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:29 np0005603621 nova_compute[247399]: 2026-01-31 09:00:29.171 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:00:29 np0005603621 nova_compute[247399]: 2026-01-31 09:00:29.171 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:29 np0005603621 nova_compute[247399]: 2026-01-31 09:00:29.354 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:29 np0005603621 nova_compute[247399]: 2026-01-31 09:00:29.669 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:29 np0005603621 nova_compute[247399]: 2026-01-31 09:00:29.669 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:29 np0005603621 nova_compute[247399]: 2026-01-31 09:00:29.670 247403 INFO nova.compute.manager [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Unshelving#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.023 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.024 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.026 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.027 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.036 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.059 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.085 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.086 247403 INFO nova.compute.claims [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.314 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:30.541 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:30.542 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:30.542 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178482177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.736 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.741 247403 DEBUG nova.compute.provider_tree [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.775 247403 DEBUG nova.scheduler.client.report [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:00:30 np0005603621 nova_compute[247399]: 2026-01-31 09:00:30.891 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:30.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 379 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 3.7 KiB/s wr, 55 op/s
Jan 31 04:00:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.128 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.128 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.129 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.129 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.129 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.130 247403 INFO nova.compute.manager [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Terminating instance#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.131 247403 DEBUG nova.compute.manager [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.181 247403 INFO nova.network.neutron [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updating port 8cd49adc-5281-4272-9c97-e9121d662fff with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 04:00:31 np0005603621 kernel: tapb02a63e1-2f (unregistering): left promiscuous mode
Jan 31 04:00:31 np0005603621 NetworkManager[49013]: <info>  [1769850031.1969] device (tapb02a63e1-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:00:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:31Z|00814|binding|INFO|Releasing lport b02a63e1-2ffc-49b9-ab9c-043d1923874b from this chassis (sb_readonly=0)
Jan 31 04:00:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:31Z|00815|binding|INFO|Setting lport b02a63e1-2ffc-49b9-ab9c-043d1923874b down in Southbound
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.201 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:31Z|00816|binding|INFO|Removing iface tapb02a63e1-2f ovn-installed in OVS
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.209 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000bd.scope: Deactivated successfully.
Jan 31 04:00:31 np0005603621 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000bd.scope: Consumed 14.440s CPU time.
Jan 31 04:00:31 np0005603621 systemd-machined[212769]: Machine qemu-93-instance-000000bd terminated.
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.300 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:e7:95 10.100.0.14'], port_security=['fa:16:3e:f7:e7:95 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'aaf62d2f-5fda-43ee-8bf2-04e4940bf62f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '741cc279-854c-4c57-b65b-d0d947eda32c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3acdb359-6f9b-4f22-ab08-f7d3c7064a26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=b02a63e1-2ffc-49b9-ab9c-043d1923874b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.301 159734 INFO neutron.agent.ovn.metadata.agent [-] Port b02a63e1-2ffc-49b9-ab9c-043d1923874b in datapath 506aa866-6ce6-4d84-b5c1-b07676e27a1d unbound from our chassis#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.303 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 506aa866-6ce6-4d84-b5c1-b07676e27a1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.304 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b41e00c2-ede7-4a3b-862e-33104fc0c15a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.304 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d namespace which is not needed anymore#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.359 247403 INFO nova.virt.libvirt.driver [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Instance destroyed successfully.#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.360 247403 DEBUG nova.objects.instance [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid aaf62d2f-5fda-43ee-8bf2-04e4940bf62f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:31 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [NOTICE]   (384569) : haproxy version is 2.8.14-c23fe91
Jan 31 04:00:31 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [NOTICE]   (384569) : path to executable is /usr/sbin/haproxy
Jan 31 04:00:31 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [WARNING]  (384569) : Exiting Master process...
Jan 31 04:00:31 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [ALERT]    (384569) : Current worker (384571) exited with code 143 (Terminated)
Jan 31 04:00:31 np0005603621 neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d[384564]: [WARNING]  (384569) : All workers exited. Exiting... (0)
Jan 31 04:00:31 np0005603621 systemd[1]: libpod-e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130.scope: Deactivated successfully.
Jan 31 04:00:31 np0005603621 podman[386408]: 2026-01-31 09:00:31.412611953 +0000 UTC m=+0.038317061 container died e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.430 247403 DEBUG nova.virt.libvirt.vif [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T08:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-203598222',display_name='tempest-TestNetworkBasicOps-server-203598222',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-203598222',id=189,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNrMY4MVn4hj/bsMk1gLTd1kGHztQrvatek01LjyFe0ofAcmMZ2Uk6h+k8aGUK75gBAILctU9wQ4d13+Zk3JniP8Xk8rAJMzxEpnGqpGTmMMpOlldx7flmFsisI/eSBL+A==',key_name='tempest-TestNetworkBasicOps-1465877602',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:59:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-20t0cymf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T08:59:13Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=aaf62d2f-5fda-43ee-8bf2-04e4940bf62f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.430 247403 DEBUG nova.network.os_vif_util [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "address": "fa:16:3e:f7:e7:95", "network": {"id": "506aa866-6ce6-4d84-b5c1-b07676e27a1d", "bridge": "br-int", "label": "tempest-network-smoke--401311232", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb02a63e1-2f", "ovs_interfaceid": "b02a63e1-2ffc-49b9-ab9c-043d1923874b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.431 247403 DEBUG nova.network.os_vif_util [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.431 247403 DEBUG os_vif [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.433 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb02a63e1-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130-userdata-shm.mount: Deactivated successfully.
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.436 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-39045215f28bbb4b61d0f712fe4c79cb65e544dfc8fa7acedc9082cd46b4ee68-merged.mount: Deactivated successfully.
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.438 247403 INFO os_vif [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:e7:95,bridge_name='br-int',has_traffic_filtering=True,id=b02a63e1-2ffc-49b9-ab9c-043d1923874b,network=Network(506aa866-6ce6-4d84-b5c1-b07676e27a1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb02a63e1-2f')#033[00m
Jan 31 04:00:31 np0005603621 podman[386408]: 2026-01-31 09:00:31.445873543 +0000 UTC m=+0.071578651 container cleanup e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:00:31 np0005603621 systemd[1]: libpod-conmon-e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130.scope: Deactivated successfully.
Jan 31 04:00:31 np0005603621 podman[386450]: 2026-01-31 09:00:31.495017654 +0000 UTC m=+0.034012894 container remove e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.498 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7fcc5fad-ca22-4e24-add1-2342a769c373]: (4, ('Sat Jan 31 09:00:31 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d (e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130)\ne08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130\nSat Jan 31 09:00:31 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d (e08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130)\ne08bdb02f3cff39a8dea991c89da9acf09bd8c9eb5b941da535781915832a130\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.499 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f246382e-01b0-475a-b150-feef8a263e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.500 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap506aa866-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.538 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 kernel: tap506aa866-60: left promiscuous mode
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.544 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.546 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[88269eba-1d5e-4d64-a600-589f02c81d89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.566 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[67855bf6-5953-4689-981e-3234f601827b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.567 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[949f47f3-f57a-4aaa-bb28-ed00f60a9dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.577 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[876ce941-02d9-4141-b6ca-840805f57ef5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 925680, 'reachable_time': 27192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386469, 'error': None, 'target': 'ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 systemd[1]: run-netns-ovnmeta\x2d506aa866\x2d6ce6\x2d4d84\x2db5c1\x2db07676e27a1d.mount: Deactivated successfully.
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.581 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-506aa866-6ce6-4d84-b5c1-b07676e27a1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:00:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:31.582 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe27fdf-c407-4775-81dc-8d5922260339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.973 247403 INFO nova.virt.libvirt.driver [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Deleting instance files /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_del#033[00m
Jan 31 04:00:31 np0005603621 nova_compute[247399]: 2026-01-31 09:00:31.974 247403 INFO nova.virt.libvirt.driver [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Deletion of /var/lib/nova/instances/aaf62d2f-5fda-43ee-8bf2-04e4940bf62f_del complete#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.119 247403 INFO nova.compute.manager [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.120 247403 DEBUG oslo.service.loopingcall [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.120 247403 DEBUG nova.compute.manager [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.121 247403 DEBUG nova.network.neutron [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.253 247403 DEBUG nova.compute.manager [req-c3bd207b-d4ac-4b3d-9e6d-91196eb7cba9 req-fe401f33-592e-4a34-8f99-bb0d8f7cce1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-vif-unplugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.253 247403 DEBUG oslo_concurrency.lockutils [req-c3bd207b-d4ac-4b3d-9e6d-91196eb7cba9 req-fe401f33-592e-4a34-8f99-bb0d8f7cce1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.254 247403 DEBUG oslo_concurrency.lockutils [req-c3bd207b-d4ac-4b3d-9e6d-91196eb7cba9 req-fe401f33-592e-4a34-8f99-bb0d8f7cce1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.255 247403 DEBUG oslo_concurrency.lockutils [req-c3bd207b-d4ac-4b3d-9e6d-91196eb7cba9 req-fe401f33-592e-4a34-8f99-bb0d8f7cce1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.255 247403 DEBUG nova.compute.manager [req-c3bd207b-d4ac-4b3d-9e6d-91196eb7cba9 req-fe401f33-592e-4a34-8f99-bb0d8f7cce1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] No waiting events found dispatching network-vif-unplugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:00:32 np0005603621 nova_compute[247399]: 2026-01-31 09:00:32.255 247403 DEBUG nova.compute.manager [req-c3bd207b-d4ac-4b3d-9e6d-91196eb7cba9 req-fe401f33-592e-4a34-8f99-bb0d8f7cce1e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-vif-unplugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:00:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:32.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 322 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 KiB/s wr, 53 op/s
Jan 31 04:00:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:33.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.186 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.186 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquired lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.187 247403 DEBUG nova.network.neutron [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.500 247403 DEBUG nova.network.neutron [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.545 247403 INFO nova.compute.manager [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Took 1.42 seconds to deallocate network for instance.#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.644 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.645 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.733 247403 DEBUG nova.compute.manager [req-adf69c30-c625-4e39-945e-4f3926da1049 req-54455085-1c7a-4137-acd3-0b6aa5a01313 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-vif-deleted-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:33 np0005603621 nova_compute[247399]: 2026-01-31 09:00:33.809 247403 DEBUG oslo_concurrency.processutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:00:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731775960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.231 247403 DEBUG oslo_concurrency.processutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.239 247403 DEBUG nova.compute.provider_tree [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.275 247403 DEBUG nova.scheduler.client.report [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.292 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850019.292213, 0caaa0c0-9873-4501-b06c-237d07cac4fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.293 247403 INFO nova.compute.manager [-] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.316 247403 DEBUG nova.compute.manager [None req-3f05987e-c4bd-4e69-ad4a-7aae8d701a0b - - - - - -] [instance: 0caaa0c0-9873-4501-b06c-237d07cac4fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.318 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.353 247403 INFO nova.scheduler.client.report [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance aaf62d2f-5fda-43ee-8bf2-04e4940bf62f#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.472 247403 DEBUG oslo_concurrency.lockutils [None req-e8319821-2185-4317-9e83-c72e660e9923 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.514 247403 DEBUG nova.compute.manager [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.515 247403 DEBUG oslo_concurrency.lockutils [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.516 247403 DEBUG oslo_concurrency.lockutils [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.516 247403 DEBUG oslo_concurrency.lockutils [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "aaf62d2f-5fda-43ee-8bf2-04e4940bf62f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.516 247403 DEBUG nova.compute.manager [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] No waiting events found dispatching network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.517 247403 WARNING nova.compute.manager [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Received unexpected event network-vif-plugged-b02a63e1-2ffc-49b9-ab9c-043d1923874b for instance with vm_state deleted and task_state None.
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.517 247403 DEBUG nova.compute.manager [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-changed-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.517 247403 DEBUG nova.compute.manager [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Refreshing instance network info cache due to event network-changed-8cd49adc-5281-4272-9c97-e9121d662fff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 04:00:34 np0005603621 nova_compute[247399]: 2026-01-31 09:00:34.518 247403 DEBUG oslo_concurrency.lockutils [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:00:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:34.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 299 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Jan 31 04:00:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:35 np0005603621 podman[386668]: 2026-01-31 09:00:35.831368801 +0000 UTC m=+0.057879199 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:00:35 np0005603621 podman[386668]: 2026-01-31 09:00:35.948050554 +0000 UTC m=+0.174560972 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 04:00:36 np0005603621 podman[386822]: 2026-01-31 09:00:36.420861363 +0000 UTC m=+0.044012671 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:00:36 np0005603621 podman[386845]: 2026-01-31 09:00:36.485361299 +0000 UTC m=+0.052365834 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:36 np0005603621 podman[386822]: 2026-01-31 09:00:36.490477471 +0000 UTC m=+0.113628739 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:00:36 np0005603621 podman[386890]: 2026-01-31 09:00:36.659531859 +0000 UTC m=+0.044401793 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., release=1793, distribution-scope=public)
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.674 247403 DEBUG nova.network.neutron [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updating instance_info_cache with network_info: [{"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 04:00:36 np0005603621 podman[386890]: 2026-01-31 09:00:36.700161361 +0000 UTC m=+0.085031275 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.719 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Releasing lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.721 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.721 247403 INFO nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Creating image(s)
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.746 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:00:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.750 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.752 247403 DEBUG oslo_concurrency.lockutils [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.752 247403 DEBUG nova.network.neutron [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Refreshing network info cache for port 8cd49adc-5281-4272-9c97-e9121d662fff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 04:00:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:00:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.788 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.884 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.908 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.912 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "d0b86e9d2243e76dc493fdee44be5bfeb1a962ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:00:36 np0005603621 nova_compute[247399]: 2026-01-31 09:00:36.913 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "d0b86e9d2243e76dc493fdee44be5bfeb1a962ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:00:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:36.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 299 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 4.2 KiB/s wr, 28 op/s
Jan 31 04:00:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 99fab89b-2467-4edc-a5cc-dfb592beb32f does not exist
Jan 31 04:00:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 659e814e-fe6f-4007-98e3-18a01d186b13 does not exist
Jan 31 04:00:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d3a42c11-988a-4e8e-ae11-3e3a4b2806da does not exist
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:37 np0005603621 nova_compute[247399]: 2026-01-31 09:00:37.537 247403 DEBUG nova.virt.libvirt.imagebackend [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Image locations are: [{'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/297f46fc-627f-48b0-8a66-2e6f3dab7554/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/297f46fc-627f-48b0-8a66-2e6f3dab7554/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 31 04:00:37 np0005603621 nova_compute[247399]: 2026-01-31 09:00:37.609 247403 DEBUG nova.virt.libvirt.imagebackend [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Selected location: {'url': 'rbd://2f5ab832-5f2e-5a84-bd93-cf8bab960ee2/images/297f46fc-627f-48b0-8a66-2e6f3dab7554/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 31 04:00:37 np0005603621 nova_compute[247399]: 2026-01-31 09:00:37.609 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] cloning images/297f46fc-627f-48b0-8a66-2e6f3dab7554@snap to None/3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.774845373 +0000 UTC m=+0.038471726 container create 87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:00:37 np0005603621 nova_compute[247399]: 2026-01-31 09:00:37.788 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "d0b86e9d2243e76dc493fdee44be5bfeb1a962ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:00:37 np0005603621 systemd[1]: Started libpod-conmon-87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db.scope.
Jan 31 04:00:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.757205976 +0000 UTC m=+0.020832349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.858099922 +0000 UTC m=+0.121726295 container init 87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.864457592 +0000 UTC m=+0.128083945 container start 87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meninsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:00:37 np0005603621 cranky_meninsky[387349]: 167 167
Jan 31 04:00:37 np0005603621 systemd[1]: libpod-87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db.scope: Deactivated successfully.
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.871395052 +0000 UTC m=+0.135021405 container attach 87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.871791984 +0000 UTC m=+0.135418337 container died 87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:00:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d7e5c7395c36a2fe3c9e68932ae76ea628c95e91ed839a9aa8c1f6acf1e45df4-merged.mount: Deactivated successfully.
Jan 31 04:00:37 np0005603621 podman[387314]: 2026-01-31 09:00:37.927865015 +0000 UTC m=+0.191491368 container remove 87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:00:37 np0005603621 nova_compute[247399]: 2026-01-31 09:00:37.931 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:00:37 np0005603621 systemd[1]: libpod-conmon-87b9a856936eb32fb3856df4291444531a89206733421b1b581e16c86b3595db.scope: Deactivated successfully.
Jan 31 04:00:38 np0005603621 podman[387408]: 2026-01-31 09:00:38.040599734 +0000 UTC m=+0.036019119 container create 510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:00:38 np0005603621 systemd[1]: Started libpod-conmon-510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308.scope.
Jan 31 04:00:38 np0005603621 podman[387408]: 2026-01-31 09:00:38.022638817 +0000 UTC m=+0.018058222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.123 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] flattening vms/3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 31 04:00:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5727205cfb2338ea69acc26c4faf4ad04414810181d58516ee1e718a33e9af1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5727205cfb2338ea69acc26c4faf4ad04414810181d58516ee1e718a33e9af1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5727205cfb2338ea69acc26c4faf4ad04414810181d58516ee1e718a33e9af1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5727205cfb2338ea69acc26c4faf4ad04414810181d58516ee1e718a33e9af1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5727205cfb2338ea69acc26c4faf4ad04414810181d58516ee1e718a33e9af1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:38 np0005603621 podman[387408]: 2026-01-31 09:00:38.156227915 +0000 UTC m=+0.151647320 container init 510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:00:38 np0005603621 podman[387408]: 2026-01-31 09:00:38.161927435 +0000 UTC m=+0.157346810 container start 510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:00:38 np0005603621 podman[387408]: 2026-01-31 09:00:38.170048421 +0000 UTC m=+0.165467836 container attach 510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.498 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Image rbd:vms/3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.499 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.499 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Ensure instance console log exists: /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.499 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.500 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.500 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.502 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Start _get_guest_xml network_info=[{"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:59:59Z,direct_url=<?>,disk_format='raw',id=297f46fc-627f-48b0-8a66-2e6f3dab7554,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-273568541-shelved',owner='1f293713f6854265a89a1a4a002088d5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T09:00:10Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.506 247403 WARNING nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.512 247403 DEBUG nova.virt.libvirt.host [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.512 247403 DEBUG nova.virt.libvirt.host [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.515 247403 DEBUG nova.virt.libvirt.host [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.515 247403 DEBUG nova.virt.libvirt.host [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.516 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.517 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-31T08:59:59Z,direct_url=<?>,disk_format='raw',id=297f46fc-627f-48b0-8a66-2e6f3dab7554,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-273568541-shelved',owner='1f293713f6854265a89a1a4a002088d5',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-31T09:00:10Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.517 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.517 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.517 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.518 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.518 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.518 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.518 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.518 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.519 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.519 247403 DEBUG nova.virt.hardware [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.519 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.554 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:00:38
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.data']
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:00:38 np0005603621 funny_wescoff[387443]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:00:38 np0005603621 funny_wescoff[387443]: --> relative data size: 1.0
Jan 31 04:00:38 np0005603621 funny_wescoff[387443]: --> All data devices are unavailable
Jan 31 04:00:38 np0005603621 systemd[1]: libpod-510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308.scope: Deactivated successfully.
Jan 31 04:00:38 np0005603621 podman[387497]: 2026-01-31 09:00:38.945253287 +0000 UTC m=+0.021705386 container died 510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:00:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:38.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:00:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2394604307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:00:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 299 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.971 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b5727205cfb2338ea69acc26c4faf4ad04414810181d58516ee1e718a33e9af1-merged.mount: Deactivated successfully.
Jan 31 04:00:38 np0005603621 nova_compute[247399]: 2026-01-31 09:00:38.999 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.002 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:39 np0005603621 podman[387497]: 2026-01-31 09:00:39.040941978 +0000 UTC m=+0.117394067 container remove 510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:00:39 np0005603621 systemd[1]: libpod-conmon-510706229c9dde7b8cc0b2c31d657644d31919c2b1cc773b88edfd573652e308.scope: Deactivated successfully.
Jan 31 04:00:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:39.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:00:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.360 247403 DEBUG nova.network.neutron [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updated VIF entry in instance network info cache for port 8cd49adc-5281-4272-9c97-e9121d662fff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.360 247403 DEBUG nova.network.neutron [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updating instance_info_cache with network_info: [{"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.387 247403 DEBUG oslo_concurrency.lockutils [req-bee81c4c-32db-4b68-8993-f42e236ff86d req-d66f5878-b0b5-4302-bda8-1eb267cb4b2e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:00:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:00:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844183836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.429 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.431 247403 DEBUG nova.virt.libvirt.vif [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:59:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-273568541',display_name='tempest-TestShelveInstance-server-273568541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-273568541',id=190,image_ref='297f46fc-627f-48b0-8a66-2e6f3dab7554',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1861761101',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:59:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1f293713f6854265a89a1a4a002088d5',ramdisk_id='',reservation_id='r-179fgz65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image
_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1813478377',owner_user_name='tempest-TestShelveInstance-1813478377-project-member',shelved_at='2026-01-31T09:00:11.214123',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='297f46fc-627f-48b0-8a66-2e6f3dab7554'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:00:29Z,user_data=None,user_id='3859f52c5b70471097d1e4ffa75ecc0e',uuid=3f0d401f-df22-424f-b572-4eb9ab2df0f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.431 247403 DEBUG nova.network.os_vif_util [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converting VIF {"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.432 247403 DEBUG nova.network.os_vif_util [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.433 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.464 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <uuid>3f0d401f-df22-424f-b572-4eb9ab2df0f4</uuid>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <name>instance-000000be</name>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestShelveInstance-server-273568541</nova:name>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:00:38</nova:creationTime>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:user uuid="3859f52c5b70471097d1e4ffa75ecc0e">tempest-TestShelveInstance-1813478377-project-member</nova:user>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:project uuid="1f293713f6854265a89a1a4a002088d5">tempest-TestShelveInstance-1813478377</nova:project>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="297f46fc-627f-48b0-8a66-2e6f3dab7554"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <nova:port uuid="8cd49adc-5281-4272-9c97-e9121d662fff">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <entry name="serial">3f0d401f-df22-424f-b572-4eb9ab2df0f4</entry>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <entry name="uuid">3f0d401f-df22-424f-b572-4eb9ab2df0f4</entry>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk.config">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:ef:a5:77"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <target dev="tap8cd49adc-52"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/console.log" append="off"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:00:39 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:00:39 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:00:39 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:00:39 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.464 247403 DEBUG nova.compute.manager [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Preparing to wait for external event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.464 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.465 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.465 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.465 247403 DEBUG nova.virt.libvirt.vif [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:59:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-273568541',display_name='tempest-TestShelveInstance-server-273568541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-273568541',id=190,image_ref='297f46fc-627f-48b0-8a66-2e6f3dab7554',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1861761101',keypairs=<?>,launch_index=0,launched_at=2026-01-31T08:59:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1f293713f6854265a89a1a4a002088d5',ramdisk_id='',reservation_id='r-179fgz65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1813478377',owner_user_name='tempest-TestShelveInstance-1813478377-project-member',shelved_at='2026-01-31T09:00:11.214123',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='297f46fc-627f-48b0-8a66-2e6f3dab7554'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:00:29Z,user_data=None,user_id='3859f52c5b70471097d1e4ffa75ecc0e',uuid=3f0d401f-df22-424f-b572-4eb9ab2df0f4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.466 247403 DEBUG nova.network.os_vif_util [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converting VIF {"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.466 247403 DEBUG nova.network.os_vif_util [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.466 247403 DEBUG os_vif [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.467 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.467 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.467 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.470 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.470 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8cd49adc-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.470 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8cd49adc-52, col_values=(('external_ids', {'iface-id': '8cd49adc-5281-4272-9c97-e9121d662fff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:a5:77', 'vm-uuid': '3f0d401f-df22-424f-b572-4eb9ab2df0f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:39 np0005603621 NetworkManager[49013]: <info>  [1769850039.4732] manager: (tap8cd49adc-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.473 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.478 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.479 247403 INFO os_vif [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52')#033[00m
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.529399891 +0000 UTC m=+0.035813762 container create 7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.555 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.555 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.555 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] No VIF found with MAC fa:16:3e:ef:a5:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.556 247403 INFO nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Using config drive#033[00m
Jan 31 04:00:39 np0005603621 systemd[1]: Started libpod-conmon-7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d.scope.
Jan 31 04:00:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.581 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.591641806 +0000 UTC m=+0.098055697 container init 7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.597659446 +0000 UTC m=+0.104073317 container start 7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.600226057 +0000 UTC m=+0.106639928 container attach 7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:00:39 np0005603621 busy_haibt[387771]: 167 167
Jan 31 04:00:39 np0005603621 systemd[1]: libpod-7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d.scope: Deactivated successfully.
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.602997894 +0000 UTC m=+0.109411765 container died 7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.604 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.513683294 +0000 UTC m=+0.020097175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:00:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f6966d107b78abfe24c29bb7601c796d8ea4bb3a822a7b638d10ecb61552054b-merged.mount: Deactivated successfully.
Jan 31 04:00:39 np0005603621 podman[387721]: 2026-01-31 09:00:39.652029733 +0000 UTC m=+0.158443604 container remove 7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:00:39 np0005603621 systemd[1]: libpod-conmon-7ce232465c29471a5da3bf4b61ace3d844e59ddc4bd78bd480d006b2551fc43d.scope: Deactivated successfully.
Jan 31 04:00:39 np0005603621 nova_compute[247399]: 2026-01-31 09:00:39.663 247403 DEBUG nova.objects.instance [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'keypairs' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:00:39 np0005603621 podman[387808]: 2026-01-31 09:00:39.762981286 +0000 UTC m=+0.032876559 container create 46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 04:00:39 np0005603621 systemd[1]: Started libpod-conmon-46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60.scope.
Jan 31 04:00:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ccaeeeb5723bbd881d218bb1309aeef64ff156eed78b71cecb4daf4c98ce16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ccaeeeb5723bbd881d218bb1309aeef64ff156eed78b71cecb4daf4c98ce16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ccaeeeb5723bbd881d218bb1309aeef64ff156eed78b71cecb4daf4c98ce16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ccaeeeb5723bbd881d218bb1309aeef64ff156eed78b71cecb4daf4c98ce16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:39 np0005603621 podman[387808]: 2026-01-31 09:00:39.831295283 +0000 UTC m=+0.101190566 container init 46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:00:39 np0005603621 podman[387808]: 2026-01-31 09:00:39.837482959 +0000 UTC m=+0.107378232 container start 46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:00:39 np0005603621 podman[387808]: 2026-01-31 09:00:39.840586606 +0000 UTC m=+0.110481949 container attach 46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:00:39 np0005603621 podman[387808]: 2026-01-31 09:00:39.748902291 +0000 UTC m=+0.018797584 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:00:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Jan 31 04:00:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Jan 31 04:00:40 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.262 247403 INFO nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Creating config drive at /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/disk.config#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.266 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplooqqnxj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.376 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.393 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmplooqqnxj" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.418 247403 DEBUG nova.storage.rbd_utils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.421 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/disk.config 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]: {
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:    "0": [
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:        {
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "devices": [
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "/dev/loop3"
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            ],
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "lv_name": "ceph_lv0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "lv_size": "7511998464",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "name": "ceph_lv0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "tags": {
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.cluster_name": "ceph",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.crush_device_class": "",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.encrypted": "0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.osd_id": "0",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.type": "block",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:                "ceph.vdo": "0"
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            },
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "type": "block",
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:            "vg_name": "ceph_vg0"
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:        }
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]:    ]
Jan 31 04:00:40 np0005603621 stoic_proskuriakova[387824]: }
Jan 31 04:00:40 np0005603621 systemd[1]: libpod-46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60.scope: Deactivated successfully.
Jan 31 04:00:40 np0005603621 podman[387808]: 2026-01-31 09:00:40.583648757 +0000 UTC m=+0.853544030 container died 46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.601 247403 DEBUG oslo_concurrency.processutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/disk.config 3f0d401f-df22-424f-b572-4eb9ab2df0f4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.602 247403 INFO nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Deleting local config drive /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4/disk.config because it was imported into RBD.#033[00m
Jan 31 04:00:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-50ccaeeeb5723bbd881d218bb1309aeef64ff156eed78b71cecb4daf4c98ce16-merged.mount: Deactivated successfully.
Jan 31 04:00:40 np0005603621 kernel: tap8cd49adc-52: entered promiscuous mode
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.6491] manager: (tap8cd49adc-52): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Jan 31 04:00:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:40Z|00817|binding|INFO|Claiming lport 8cd49adc-5281-4272-9c97-e9121d662fff for this chassis.
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.649 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:40Z|00818|binding|INFO|8cd49adc-5281-4272-9c97-e9121d662fff: Claiming fa:16:3e:ef:a5:77 10.100.0.4
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.659 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.664 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.6655] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.6660] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Jan 31 04:00:40 np0005603621 systemd-udevd[387898]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.673 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:a5:77 10.100.0.4'], port_security=['fa:16:3e:ef:a5:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f0d401f-df22-424f-b572-4eb9ab2df0f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f293713f6854265a89a1a4a002088d5', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cc6e8c6f-bb2f-4c15-bc84-a1452e5550a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05ac80f4-66e3-4e8c-b69d-f2f58ada92e8, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=8cd49adc-5281-4272-9c97-e9121d662fff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.674 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 8cd49adc-5281-4272-9c97-e9121d662fff in datapath 1c62fa1c-f7d2-4937-9258-1d3a4456b207 bound to our chassis#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.676 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c62fa1c-f7d2-4937-9258-1d3a4456b207#033[00m
Jan 31 04:00:40 np0005603621 systemd-machined[212769]: New machine qemu-95-instance-000000be.
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.6845] device (tap8cd49adc-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.6854] device (tap8cd49adc-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.685 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[80473408-52d6-4de5-b592-a2a6245b1aa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.685 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c62fa1c-f1 in ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.687 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c62fa1c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.687 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[20816716-4e85-4a60-be5f-250dded0ed6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.687 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0e9e01-074d-42ef-8353-7dd2c568a56b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 systemd[1]: Started Virtual Machine qemu-95-instance-000000be.
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.699 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[0b381042-621c-4aee-828f-276cfc456aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.718 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[21d99df7-2162-4b6b-88d2-74a0e9076a22]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 podman[387808]: 2026-01-31 09:00:40.733663814 +0000 UTC m=+1.003559107 container remove 46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 04:00:40 np0005603621 systemd[1]: libpod-conmon-46369cc8093ce8840aca403c57e7e172d6c1ab9508bf2a5dd280553bb1b33e60.scope: Deactivated successfully.
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.746 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fa88418f-b4d0-4787-beec-07de0886adcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.7536] manager: (tap1c62fa1c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/371)
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.754 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[90ada377-248a-470b-98bb-4ead1f7b99c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 systemd-udevd[387901]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.783 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff5a324-7ec2-4cbe-ab42-246ad8f14c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.787 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[317f1025-2175-4e98-9ecc-47bd36fe327b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.789 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:40Z|00819|binding|INFO|Setting lport 8cd49adc-5281-4272-9c97-e9121d662fff ovn-installed in OVS
Jan 31 04:00:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:40Z|00820|binding|INFO|Setting lport 8cd49adc-5281-4272-9c97-e9121d662fff up in Southbound
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.794 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.8039] device (tap1c62fa1c-f0): carrier: link connected
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.808 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[764396c8-6f88-4fef-b462-ced6f1d9a9a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.821 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3b35aa58-b95f-4a7c-9969-0fa0b7d04910]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c62fa1c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:15:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934631, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387955, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.836 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[21d172cb-b304-46b8-9a1f-9d190fe11263]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:1552'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 934631, 'tstamp': 934631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387960, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.848 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37cfb98a-8c85-4745-8f51-9dd75078108d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c62fa1c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:15:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934631, 'reachable_time': 22870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 387976, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.874 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa4e90a-70db-4fa1-9406-903c22fb8623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.912 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b894ace2-fd84-4195-816b-8609d3d81947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.913 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c62fa1c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.913 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.913 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c62fa1c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:40 np0005603621 kernel: tap1c62fa1c-f0: entered promiscuous mode
Jan 31 04:00:40 np0005603621 NetworkManager[49013]: <info>  [1769850040.9163] manager: (tap1c62fa1c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.917 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.919 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c62fa1c-f0, col_values=(('external_ids', {'iface-id': '46e41546-aa3b-4838-b2c2-ba3b46cf445c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:40Z|00821|binding|INFO|Releasing lport 46e41546-aa3b-4838-b2c2-ba3b46cf445c from this chassis (sb_readonly=0)
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.921 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.922 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c62fa1c-f7d2-4937-9258-1d3a4456b207.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c62fa1c-f7d2-4937-9258-1d3a4456b207.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.923 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[34603c60-84b3-4436-be67-f6d3244cafbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.924 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-1c62fa1c-f7d2-4937-9258-1d3a4456b207
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/1c62fa1c-f7d2-4937-9258-1d3a4456b207.pid.haproxy
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 1c62fa1c-f7d2-4937-9258-1d3a4456b207
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:00:40 np0005603621 nova_compute[247399]: 2026-01-31 09:00:40.926 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:40.926 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'env', 'PROCESS_TAG=haproxy-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c62fa1c-f7d2-4937-9258-1d3a4456b207.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:00:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 324 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 68 op/s
Jan 31 04:00:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:41.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.098 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850041.0978084, 3f0d401f-df22-424f-b572-4eb9ab2df0f4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.098 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] VM Started (Lifecycle Event)#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.149 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.155 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850041.0978837, 3f0d401f-df22-424f-b572-4eb9ab2df0f4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.156 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.192 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.198 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.269 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.277427452 +0000 UTC m=+0.067017517 container create 665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.22921946 +0000 UTC m=+0.018809545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:00:41 np0005603621 podman[388152]: 2026-01-31 09:00:41.334608378 +0000 UTC m=+0.113981760 container create 1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:00:41 np0005603621 podman[388152]: 2026-01-31 09:00:41.238107221 +0000 UTC m=+0.017480623 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:00:41 np0005603621 systemd[1]: Started libpod-conmon-665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647.scope.
Jan 31 04:00:41 np0005603621 systemd[1]: Started libpod-conmon-1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245.scope.
Jan 31 04:00:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c60254afef75514983b7f979a1eacb15ddb6b0b3b5a4b6c3902337854453a34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:41 np0005603621 podman[388152]: 2026-01-31 09:00:41.383097418 +0000 UTC m=+0.162470820 container init 1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:00:41 np0005603621 podman[388152]: 2026-01-31 09:00:41.387800678 +0000 UTC m=+0.167174060 container start 1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.389184931 +0000 UTC m=+0.178775016 container init 665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.395347336 +0000 UTC m=+0.184937401 container start 665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.397902926 +0000 UTC m=+0.187492991 container attach 665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:00:41 np0005603621 festive_hoover[388176]: 167 167
Jan 31 04:00:41 np0005603621 systemd[1]: libpod-665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647.scope: Deactivated successfully.
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.401640635 +0000 UTC m=+0.191230700 container died 665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 31 04:00:41 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [NOTICE]   (388183) : New worker (388188) forked
Jan 31 04:00:41 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [NOTICE]   (388183) : Loading success.
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.430 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:41 np0005603621 podman[388136]: 2026-01-31 09:00:41.438306772 +0000 UTC m=+0.227896857 container remove 665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:00:41 np0005603621 systemd[1]: libpod-conmon-665a8211f5c918b1acd6cf75e03f1b66ca9d95bfaebc229d3ccf168d0d19b647.scope: Deactivated successfully.
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.523 247403 DEBUG nova.compute.manager [req-bbdd780d-2e40-4c38-8e65-8cf0b9ed9b86 req-293c372e-e5bf-420a-adb9-3e426ee61c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.524 247403 DEBUG oslo_concurrency.lockutils [req-bbdd780d-2e40-4c38-8e65-8cf0b9ed9b86 req-293c372e-e5bf-420a-adb9-3e426ee61c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.524 247403 DEBUG oslo_concurrency.lockutils [req-bbdd780d-2e40-4c38-8e65-8cf0b9ed9b86 req-293c372e-e5bf-420a-adb9-3e426ee61c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.524 247403 DEBUG oslo_concurrency.lockutils [req-bbdd780d-2e40-4c38-8e65-8cf0b9ed9b86 req-293c372e-e5bf-420a-adb9-3e426ee61c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.525 247403 DEBUG nova.compute.manager [req-bbdd780d-2e40-4c38-8e65-8cf0b9ed9b86 req-293c372e-e5bf-420a-adb9-3e426ee61c0b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Processing event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.525 247403 DEBUG nova.compute.manager [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.529 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850041.5293329, 3f0d401f-df22-424f-b572-4eb9ab2df0f4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.530 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.531 247403 DEBUG nova.virt.libvirt.driver [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.536 247403 INFO nova.virt.libvirt.driver [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Instance spawned successfully.#033[00m
Jan 31 04:00:41 np0005603621 podman[388214]: 2026-01-31 09:00:41.559827219 +0000 UTC m=+0.034668115 container create 547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.565 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.568 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:00:41 np0005603621 systemd[1]: Started libpod-conmon-547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d.scope.
Jan 31 04:00:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:00:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4070da2f52ef70c6890d745c94ed53a6a12654334ca42fcf52f8da1cf9e765aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4070da2f52ef70c6890d745c94ed53a6a12654334ca42fcf52f8da1cf9e765aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4070da2f52ef70c6890d745c94ed53a6a12654334ca42fcf52f8da1cf9e765aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4070da2f52ef70c6890d745c94ed53a6a12654334ca42fcf52f8da1cf9e765aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.627 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:00:41 np0005603621 podman[388214]: 2026-01-31 09:00:41.637034017 +0000 UTC m=+0.111874933 container init 547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mestorf, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:00:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6ee7792ffc906d683bea977fb0a641faa64d716d7f08240906be60984249dfe1-merged.mount: Deactivated successfully.
Jan 31 04:00:41 np0005603621 podman[388214]: 2026-01-31 09:00:41.544473444 +0000 UTC m=+0.019314360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:00:41 np0005603621 podman[388214]: 2026-01-31 09:00:41.642352355 +0000 UTC m=+0.117193251 container start 547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mestorf, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:00:41 np0005603621 podman[388214]: 2026-01-31 09:00:41.645225345 +0000 UTC m=+0.120066261 container attach 547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:00:41 np0005603621 nova_compute[247399]: 2026-01-31 09:00:41.789 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]: {
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:        "osd_id": 0,
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:        "type": "bluestore"
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]:    }
Jan 31 04:00:42 np0005603621 confident_mestorf[388230]: }
Jan 31 04:00:42 np0005603621 systemd[1]: libpod-547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d.scope: Deactivated successfully.
Jan 31 04:00:42 np0005603621 podman[388214]: 2026-01-31 09:00:42.406503652 +0000 UTC m=+0.881344548 container died 547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 04:00:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4070da2f52ef70c6890d745c94ed53a6a12654334ca42fcf52f8da1cf9e765aa-merged.mount: Deactivated successfully.
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:42 np0005603621 podman[388214]: 2026-01-31 09:00:42.46029281 +0000 UTC m=+0.935133706 container remove 547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mestorf, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:00:42 np0005603621 systemd[1]: libpod-conmon-547b5317981b9dd924dc4e7be2b22e72bcef4c6a3d11e63aa137b2cb875cf44d.scope: Deactivated successfully.
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:00:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d89e7f07-d414-4d6f-8f64-ca230bf30cf5 does not exist
Jan 31 04:00:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1f776220-f5c9-4c25-be2f-8a4069f11017 does not exist
Jan 31 04:00:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8413d715-7f58-4265-8510-b1da02a4f815 does not exist
Jan 31 04:00:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:42.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 378 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 5.9 MiB/s wr, 175 op/s
Jan 31 04:00:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:43.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.158 247403 DEBUG nova.compute.manager [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.252 247403 DEBUG oslo_concurrency.lockutils [None req-1815850c-3090-4d42-869a-b3bec4cd2c04 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:43 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.680 247403 DEBUG nova.compute.manager [req-f3073c63-31c8-4ae5-a13c-80e39cbcd757 req-96938cb9-4058-4776-8952-d8e489574b53 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.680 247403 DEBUG oslo_concurrency.lockutils [req-f3073c63-31c8-4ae5-a13c-80e39cbcd757 req-96938cb9-4058-4776-8952-d8e489574b53 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.680 247403 DEBUG oslo_concurrency.lockutils [req-f3073c63-31c8-4ae5-a13c-80e39cbcd757 req-96938cb9-4058-4776-8952-d8e489574b53 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.681 247403 DEBUG oslo_concurrency.lockutils [req-f3073c63-31c8-4ae5-a13c-80e39cbcd757 req-96938cb9-4058-4776-8952-d8e489574b53 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.681 247403 DEBUG nova.compute.manager [req-f3073c63-31c8-4ae5-a13c-80e39cbcd757 req-96938cb9-4058-4776-8952-d8e489574b53 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] No waiting events found dispatching network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:00:43 np0005603621 nova_compute[247399]: 2026-01-31 09:00:43.681 247403 WARNING nova.compute.manager [req-f3073c63-31c8-4ae5-a13c-80e39cbcd757 req-96938cb9-4058-4776-8952-d8e489574b53 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received unexpected event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff for instance with vm_state active and task_state None.#033[00m
Jan 31 04:00:44 np0005603621 nova_compute[247399]: 2026-01-31 09:00:44.475 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:00:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:44.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:00:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.9 MiB/s wr, 225 op/s
Jan 31 04:00:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:46 np0005603621 nova_compute[247399]: 2026-01-31 09:00:46.357 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850031.3566368, aaf62d2f-5fda-43ee-8bf2-04e4940bf62f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:00:46 np0005603621 nova_compute[247399]: 2026-01-31 09:00:46.358 247403 INFO nova.compute.manager [-] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:00:46 np0005603621 nova_compute[247399]: 2026-01-31 09:00:46.443 247403 DEBUG nova.compute.manager [None req-160a34a1-876a-4634-971b-7374e85428f6 - - - - - -] [instance: aaf62d2f-5fda-43ee-8bf2-04e4940bf62f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:00:46 np0005603621 nova_compute[247399]: 2026-01-31 09:00:46.792 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:46.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 321 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.9 MiB/s wr, 259 op/s
Jan 31 04:00:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:47.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Jan 31 04:00:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Jan 31 04:00:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Jan 31 04:00:48 np0005603621 nova_compute[247399]: 2026-01-31 09:00:48.534 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:48.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.1 MiB/s wr, 245 op/s
Jan 31 04:00:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:49.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:49 np0005603621 nova_compute[247399]: 2026-01-31 09:00:49.477 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043493654615670944 of space, bias 1.0, pg target 1.3048096384701284 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.6470238199097339 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:00:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:00:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:50.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.3 MiB/s wr, 232 op/s
Jan 31 04:00:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:51.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:51.594 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:00:51 np0005603621 nova_compute[247399]: 2026-01-31 09:00:51.595 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:51.597 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:00:51 np0005603621 nova_compute[247399]: 2026-01-31 09:00:51.792 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 32 KiB/s wr, 188 op/s
Jan 31 04:00:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:54 np0005603621 nova_compute[247399]: 2026-01-31 09:00:54.083 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:00:54Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:a5:77 10.100.0.4
Jan 31 04:00:54 np0005603621 nova_compute[247399]: 2026-01-31 09:00:54.479 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:54 np0005603621 podman[388321]: 2026-01-31 09:00:54.511206702 +0000 UTC m=+0.053854152 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 31 04:00:54 np0005603621 podman[388322]: 2026-01-31 09:00:54.534825298 +0000 UTC m=+0.077487818 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:00:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:54.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 18 KiB/s wr, 161 op/s
Jan 31 04:00:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:55 np0005603621 nova_compute[247399]: 2026-01-31 09:00:55.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:56 np0005603621 nova_compute[247399]: 2026-01-31 09:00:56.794 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:00:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:56.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 16 KiB/s wr, 144 op/s
Jan 31 04:00:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:00:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:57.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:00:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:00:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:00:57.600 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:00:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:00:58.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 14 KiB/s wr, 134 op/s
Jan 31 04:00:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:00:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:00:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:00:59.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:00:59 np0005603621 nova_compute[247399]: 2026-01-31 09:00:59.483 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:00.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 300 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 22 KiB/s wr, 111 op/s
Jan 31 04:01:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:01:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:01.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:01:01 np0005603621 nova_compute[247399]: 2026-01-31 09:01:01.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 301 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 25 KiB/s wr, 109 op/s
Jan 31 04:01:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:03.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:03 np0005603621 nova_compute[247399]: 2026-01-31 09:01:03.336 247403 DEBUG nova.compute.manager [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-changed-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:03 np0005603621 nova_compute[247399]: 2026-01-31 09:01:03.336 247403 DEBUG nova.compute.manager [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Refreshing instance network info cache due to event network-changed-8cd49adc-5281-4272-9c97-e9121d662fff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:01:03 np0005603621 nova_compute[247399]: 2026-01-31 09:01:03.336 247403 DEBUG oslo_concurrency.lockutils [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:03 np0005603621 nova_compute[247399]: 2026-01-31 09:01:03.336 247403 DEBUG oslo_concurrency.lockutils [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:03 np0005603621 nova_compute[247399]: 2026-01-31 09:01:03.336 247403 DEBUG nova.network.neutron [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Refreshing network info cache for port 8cd49adc-5281-4272-9c97-e9121d662fff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.487 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.724 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.725 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.725 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.725 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.726 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.727 247403 INFO nova.compute.manager [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Terminating instance#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.728 247403 DEBUG nova.compute.manager [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:01:04 np0005603621 kernel: tap8cd49adc-52 (unregistering): left promiscuous mode
Jan 31 04:01:04 np0005603621 NetworkManager[49013]: <info>  [1769850064.8410] device (tap8cd49adc-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:01:04 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:04Z|00822|binding|INFO|Releasing lport 8cd49adc-5281-4272-9c97-e9121d662fff from this chassis (sb_readonly=0)
Jan 31 04:01:04 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:04Z|00823|binding|INFO|Setting lport 8cd49adc-5281-4272-9c97-e9121d662fff down in Southbound
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.845 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:04Z|00824|binding|INFO|Removing iface tap8cd49adc-52 ovn-installed in OVS
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.847 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.857 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:04.860 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:a5:77 10.100.0.4'], port_security=['fa:16:3e:ef:a5:77 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f0d401f-df22-424f-b572-4eb9ab2df0f4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f293713f6854265a89a1a4a002088d5', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'cc6e8c6f-bb2f-4c15-bc84-a1452e5550a6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05ac80f4-66e3-4e8c-b69d-f2f58ada92e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=8cd49adc-5281-4272-9c97-e9121d662fff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:01:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:04.862 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 8cd49adc-5281-4272-9c97-e9121d662fff in datapath 1c62fa1c-f7d2-4937-9258-1d3a4456b207 unbound from our chassis#033[00m
Jan 31 04:01:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:04.863 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c62fa1c-f7d2-4937-9258-1d3a4456b207, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:01:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:04.865 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[22316154-20a4-412e-a7d8-a5d5b0afa5e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:04 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:04.865 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 namespace which is not needed anymore#033[00m
Jan 31 04:01:04 np0005603621 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000be.scope: Deactivated successfully.
Jan 31 04:01:04 np0005603621 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000be.scope: Consumed 12.593s CPU time.
Jan 31 04:01:04 np0005603621 systemd-machined[212769]: Machine qemu-95-instance-000000be terminated.
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.943 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.954 247403 INFO nova.virt.libvirt.driver [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Instance destroyed successfully.#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.955 247403 DEBUG nova.objects.instance [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'resources' on Instance uuid 3f0d401f-df22-424f-b572-4eb9ab2df0f4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.979 247403 DEBUG nova.virt.libvirt.vif [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T08:59:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-273568541',display_name='tempest-TestShelveInstance-server-273568541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-273568541',id=190,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMSJMgmfnDpAAhB+HaJLz9QSwipQmTsA86IiQRxWFaiZVFUeEfcIK6d3P3mBAHHd/rEKxm6Cw/JZh8tqOgCxKABbrDqL+FM2acHOfaAtltHep9oak+RawMJvZFvKOagynQ==',key_name='tempest-TestShelveInstance-1861761101',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:00:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f293713f6854265a89a1a4a002088d5',ramdisk_id='',reservation_id='r-179fgz65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1813478377',owner_user_name='tempest-TestShelveInstance-1813478377-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:00:43Z,user_data=None,user_id='3859f52c5b70471097d1e4ffa75ecc0e',uuid=3f0d401f-df22-424f-b572-4eb9ab2df0f4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.980 247403 DEBUG nova.network.os_vif_util [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converting VIF {"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:01:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.981 247403 DEBUG nova.network.os_vif_util [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.982 247403 DEBUG os_vif [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.983 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.983 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8cd49adc-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 301 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 25 KiB/s wr, 82 op/s
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.986 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:04 np0005603621 nova_compute[247399]: 2026-01-31 09:01:04.989 247403 INFO os_vif [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:a5:77,bridge_name='br-int',has_traffic_filtering=True,id=8cd49adc-5281-4272-9c97-e9121d662fff,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8cd49adc-52')#033[00m
Jan 31 04:01:05 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [NOTICE]   (388183) : haproxy version is 2.8.14-c23fe91
Jan 31 04:01:05 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [NOTICE]   (388183) : path to executable is /usr/sbin/haproxy
Jan 31 04:01:05 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [WARNING]  (388183) : Exiting Master process...
Jan 31 04:01:05 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [WARNING]  (388183) : Exiting Master process...
Jan 31 04:01:05 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [ALERT]    (388183) : Current worker (388188) exited with code 143 (Terminated)
Jan 31 04:01:05 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[388177]: [WARNING]  (388183) : All workers exited. Exiting... (0)
Jan 31 04:01:05 np0005603621 systemd[1]: libpod-1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245.scope: Deactivated successfully.
Jan 31 04:01:05 np0005603621 podman[388454]: 2026-01-31 09:01:05.026552603 +0000 UTC m=+0.095644481 container died 1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:01:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:05.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245-userdata-shm.mount: Deactivated successfully.
Jan 31 04:01:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7c60254afef75514983b7f979a1eacb15ddb6b0b3b5a4b6c3902337854453a34-merged.mount: Deactivated successfully.
Jan 31 04:01:05 np0005603621 podman[388454]: 2026-01-31 09:01:05.456951902 +0000 UTC m=+0.526043780 container cleanup 1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:01:05 np0005603621 systemd[1]: libpod-conmon-1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245.scope: Deactivated successfully.
Jan 31 04:01:05 np0005603621 podman[388514]: 2026-01-31 09:01:05.699279243 +0000 UTC m=+0.226273265 container remove 1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.704 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe4bacb-e0d0-4068-b029-e657557e7725]: (4, ('Sat Jan 31 09:01:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 (1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245)\n1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245\nSat Jan 31 09:01:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 (1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245)\n1b44fb622e935899bef317204e22dac9a3e22346aabf632128e7a482b277c245\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.707 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[10c5dc3f-63e5-442d-a674-a8a2dfd5848a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.708 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c62fa1c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.710 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:05 np0005603621 kernel: tap1c62fa1c-f0: left promiscuous mode
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.716 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.719 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[79deed1c-11cc-4745-b82f-b7cce6d48dc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.739 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[116870a8-420c-433c-b842-f224ed2083e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.740 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[596ad97f-fe1e-48ad-9323-39d07f609254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.753 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b1405cdd-c9ba-47f8-aabe-920d99c16066]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 934625, 'reachable_time': 18520, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388528, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.755 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:01:05 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:05.755 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e96ceb-b6aa-42d3-a823-47bd5272d214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:05 np0005603621 systemd[1]: run-netns-ovnmeta\x2d1c62fa1c\x2df7d2\x2d4937\x2d9258\x2d1d3a4456b207.mount: Deactivated successfully.
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.768 247403 DEBUG nova.compute.manager [req-bb001a1d-a8ee-44e8-bdbb-02f37e185efb req-46f79e4d-e78e-432c-b53b-24ddab31c753 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-vif-unplugged-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.769 247403 DEBUG oslo_concurrency.lockutils [req-bb001a1d-a8ee-44e8-bdbb-02f37e185efb req-46f79e4d-e78e-432c-b53b-24ddab31c753 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.770 247403 DEBUG oslo_concurrency.lockutils [req-bb001a1d-a8ee-44e8-bdbb-02f37e185efb req-46f79e4d-e78e-432c-b53b-24ddab31c753 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.770 247403 DEBUG oslo_concurrency.lockutils [req-bb001a1d-a8ee-44e8-bdbb-02f37e185efb req-46f79e4d-e78e-432c-b53b-24ddab31c753 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.770 247403 DEBUG nova.compute.manager [req-bb001a1d-a8ee-44e8-bdbb-02f37e185efb req-46f79e4d-e78e-432c-b53b-24ddab31c753 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] No waiting events found dispatching network-vif-unplugged-8cd49adc-5281-4272-9c97-e9121d662fff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:01:05 np0005603621 nova_compute[247399]: 2026-01-31 09:01:05.771 247403 DEBUG nova.compute.manager [req-bb001a1d-a8ee-44e8-bdbb-02f37e185efb req-46f79e4d-e78e-432c-b53b-24ddab31c753 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-vif-unplugged-8cd49adc-5281-4272-9c97-e9121d662fff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:01:06 np0005603621 nova_compute[247399]: 2026-01-31 09:01:06.778 247403 INFO nova.virt.libvirt.driver [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Deleting instance files /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4_del#033[00m
Jan 31 04:01:06 np0005603621 nova_compute[247399]: 2026-01-31 09:01:06.779 247403 INFO nova.virt.libvirt.driver [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Deletion of /var/lib/nova/instances/3f0d401f-df22-424f-b572-4eb9ab2df0f4_del complete#033[00m
Jan 31 04:01:06 np0005603621 nova_compute[247399]: 2026-01-31 09:01:06.798 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:06.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 282 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 992 KiB/s rd, 24 KiB/s wr, 75 op/s
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.018 247403 INFO nova.compute.manager [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Took 2.29 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.019 247403 DEBUG oslo.service.loopingcall [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.019 247403 DEBUG nova.compute.manager [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.019 247403 DEBUG nova.network.neutron [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:01:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:07.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.234 247403 DEBUG nova.network.neutron [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updated VIF entry in instance network info cache for port 8cd49adc-5281-4272-9c97-e9121d662fff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.234 247403 DEBUG nova.network.neutron [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updating instance_info_cache with network_info: [{"id": "8cd49adc-5281-4272-9c97-e9121d662fff", "address": "fa:16:3e:ef:a5:77", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8cd49adc-52", "ovs_interfaceid": "8cd49adc-5281-4272-9c97-e9121d662fff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:07 np0005603621 nova_compute[247399]: 2026-01-31 09:01:07.348 247403 DEBUG oslo_concurrency.lockutils [req-6dfcc80c-3b6a-4867-8b50-c2fecf93bc08 req-2b9295ad-b36d-4de6-82d6-fdf9c2c1abea fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-3f0d401f-df22-424f-b572-4eb9ab2df0f4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.037 247403 DEBUG nova.compute.manager [req-2ec7cd74-28eb-4e8a-b6db-d79eedeff4c6 req-f8b7f224-b039-414b-997f-2ff93c52a487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.037 247403 DEBUG oslo_concurrency.lockutils [req-2ec7cd74-28eb-4e8a-b6db-d79eedeff4c6 req-f8b7f224-b039-414b-997f-2ff93c52a487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.038 247403 DEBUG oslo_concurrency.lockutils [req-2ec7cd74-28eb-4e8a-b6db-d79eedeff4c6 req-f8b7f224-b039-414b-997f-2ff93c52a487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.038 247403 DEBUG oslo_concurrency.lockutils [req-2ec7cd74-28eb-4e8a-b6db-d79eedeff4c6 req-f8b7f224-b039-414b-997f-2ff93c52a487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.038 247403 DEBUG nova.compute.manager [req-2ec7cd74-28eb-4e8a-b6db-d79eedeff4c6 req-f8b7f224-b039-414b-997f-2ff93c52a487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] No waiting events found dispatching network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.038 247403 WARNING nova.compute.manager [req-2ec7cd74-28eb-4e8a-b6db-d79eedeff4c6 req-f8b7f224-b039-414b-997f-2ff93c52a487 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received unexpected event network-vif-plugged-8cd49adc-5281-4272-9c97-e9121d662fff for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.299 247403 DEBUG nova.network.neutron [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.305 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.306 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.308 247403 DEBUG nova.compute.manager [req-a0664050-a8ae-4ceb-8974-ec2dbe8d4813 req-2fed7d79-91e9-400f-90ef-2f6297bba731 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Received event network-vif-deleted-8cd49adc-5281-4272-9c97-e9121d662fff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.308 247403 INFO nova.compute.manager [req-a0664050-a8ae-4ceb-8974-ec2dbe8d4813 req-2fed7d79-91e9-400f-90ef-2f6297bba731 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Neutron deleted interface 8cd49adc-5281-4272-9c97-e9121d662fff; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.308 247403 DEBUG nova.network.neutron [req-a0664050-a8ae-4ceb-8974-ec2dbe8d4813 req-2fed7d79-91e9-400f-90ef-2f6297bba731 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.365 247403 INFO nova.compute.manager [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Took 1.35 seconds to deallocate network for instance.#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.370 247403 DEBUG nova.compute.manager [req-a0664050-a8ae-4ceb-8974-ec2dbe8d4813 req-2fed7d79-91e9-400f-90ef-2f6297bba731 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Detach interface failed, port_id=8cd49adc-5281-4272-9c97-e9121d662fff, reason: Instance 3f0d401f-df22-424f-b572-4eb9ab2df0f4 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.372 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.483 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.483 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.569 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.572 247403 DEBUG oslo_concurrency.processutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:08 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400089174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:08.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 868 KiB/s rd, 47 KiB/s wr, 78 op/s
Jan 31 04:01:08 np0005603621 nova_compute[247399]: 2026-01-31 09:01:08.994 247403 DEBUG oslo_concurrency.processutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.000 247403 DEBUG nova.compute.provider_tree [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:01:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:09.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.489 247403 DEBUG nova.scheduler.client.report [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.559 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.562 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.568 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.569 247403 INFO nova.compute.claims [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.654 247403 INFO nova.scheduler.client.report [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Deleted allocations for instance 3f0d401f-df22-424f-b572-4eb9ab2df0f4#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.936 247403 DEBUG oslo_concurrency.lockutils [None req-378d21bb-6b06-4791-af1b-4f4fc18921ed 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "3f0d401f-df22-424f-b572-4eb9ab2df0f4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.942 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:09 np0005603621 nova_compute[247399]: 2026-01-31 09:01:09.986 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421044661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.357 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.361 247403 DEBUG nova.compute.provider_tree [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.435 247403 DEBUG nova.scheduler.client.report [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.594 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.595 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.733 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.733 247403 DEBUG nova.network.neutron [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.761 247403 INFO nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:01:10 np0005603621 nova_compute[247399]: 2026-01-31 09:01:10.787 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:01:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 222 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 762 KiB/s rd, 56 KiB/s wr, 85 op/s
Jan 31 04:01:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:10.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.048 247403 DEBUG nova.policy [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:01:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:11.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.162 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.163 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.163 247403 INFO nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Creating image(s)#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.186 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.208 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.229 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.233 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.289 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.290 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.291 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.291 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.314 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.316 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.555 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.239s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.637 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.760 247403 DEBUG nova.objects.instance [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.781 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.782 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Ensure instance console log exists: /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.782 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.783 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.783 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:11 np0005603621 nova_compute[247399]: 2026-01-31 09:01:11.855 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 247 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 754 KiB/s rd, 977 KiB/s wr, 108 op/s
Jan 31 04:01:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:12.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:13.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:13 np0005603621 nova_compute[247399]: 2026-01-31 09:01:13.140 247403 DEBUG nova.network.neutron [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Successfully created port: 4f52d762-814b-4a00-a616-c2e6586a29dd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:01:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:01:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4149880252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:01:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 31 04:01:14 np0005603621 nova_compute[247399]: 2026-01-31 09:01:14.989 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:14.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:15.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 04:01:15 np0005603621 nova_compute[247399]: 2026-01-31 09:01:15.635 247403 DEBUG nova.network.neutron [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Successfully updated port: 4f52d762-814b-4a00-a616-c2e6586a29dd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:01:15 np0005603621 nova_compute[247399]: 2026-01-31 09:01:15.912 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:15 np0005603621 nova_compute[247399]: 2026-01-31 09:01:15.913 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:15 np0005603621 nova_compute[247399]: 2026-01-31 09:01:15.913 247403 DEBUG nova.network.neutron [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:01:16 np0005603621 nova_compute[247399]: 2026-01-31 09:01:16.273 247403 DEBUG nova.compute.manager [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-changed-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:16 np0005603621 nova_compute[247399]: 2026-01-31 09:01:16.274 247403 DEBUG nova.compute.manager [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing instance network info cache due to event network-changed-4f52d762-814b-4a00-a616-c2e6586a29dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:01:16 np0005603621 nova_compute[247399]: 2026-01-31 09:01:16.274 247403 DEBUG oslo_concurrency.lockutils [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:16 np0005603621 nova_compute[247399]: 2026-01-31 09:01:16.858 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 31 04:01:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:16.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:01:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:17.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:01:17 np0005603621 nova_compute[247399]: 2026-01-31 09:01:17.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:17 np0005603621 nova_compute[247399]: 2026-01-31 09:01:17.274 247403 DEBUG nova.network.neutron [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:01:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:18 np0005603621 nova_compute[247399]: 2026-01-31 09:01:18.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:18 np0005603621 nova_compute[247399]: 2026-01-31 09:01:18.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:18 np0005603621 nova_compute[247399]: 2026-01-31 09:01:18.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:01:18 np0005603621 nova_compute[247399]: 2026-01-31 09:01:18.218 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:01:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 31 04:01:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:18.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:19.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.339 247403 DEBUG nova.network.neutron [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.377 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.378 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Instance network_info: |[{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.378 247403 DEBUG oslo_concurrency.lockutils [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.378 247403 DEBUG nova.network.neutron [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing network info cache for port 4f52d762-814b-4a00-a616-c2e6586a29dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.381 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Start _get_guest_xml network_info=[{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.385 247403 WARNING nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.389 247403 DEBUG nova.virt.libvirt.host [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.389 247403 DEBUG nova.virt.libvirt.host [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.392 247403 DEBUG nova.virt.libvirt.host [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.393 247403 DEBUG nova.virt.libvirt.host [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.394 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.394 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.394 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.394 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.395 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.395 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.395 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.395 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.396 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.396 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.396 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.396 247403 DEBUG nova.virt.hardware [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.399 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:01:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459502020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.857 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.886 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.890 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.954 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850064.9527936, 3f0d401f-df22-424f-b572-4eb9ab2df0f4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.954 247403 INFO nova.compute.manager [-] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.982 247403 DEBUG nova.compute.manager [None req-a3022e51-667a-4784-8aff-dac54b492054 - - - - - -] [instance: 3f0d401f-df22-424f-b572-4eb9ab2df0f4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:19 np0005603621 nova_compute[247399]: 2026-01-31 09:01:19.991 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:01:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082247887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.303 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.304 247403 DEBUG nova.virt.libvirt.vif [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:01:10Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.305 247403 DEBUG nova.network.os_vif_util [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.305 247403 DEBUG nova.network.os_vif_util [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.306 247403 DEBUG nova.objects.instance [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.356 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <uuid>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</uuid>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <name>instance-000000c1</name>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:01:19</nova:creationTime>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <entry name="serial">0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <entry name="uuid">0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:d1:e9:05"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <target dev="tap4f52d762-81"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log" append="off"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:01:20 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:01:20 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:01:20 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:01:20 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.358 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Preparing to wait for external event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.358 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.358 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.358 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.359 247403 DEBUG nova.virt.libvirt.vif [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:01:10Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.359 247403 DEBUG nova.network.os_vif_util [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.360 247403 DEBUG nova.network.os_vif_util [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.360 247403 DEBUG os_vif [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.361 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.362 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.365 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f52d762-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.365 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f52d762-81, col_values=(('external_ids', {'iface-id': '4f52d762-814b-4a00-a616-c2e6586a29dd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:e9:05', 'vm-uuid': '0469d90c-1c5c-40d4-ac77-94e6496bc9ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:20 np0005603621 NetworkManager[49013]: <info>  [1769850080.3687] manager: (tap4f52d762-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/373)
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.370 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.371 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.372 247403 INFO os_vif [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81')#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.455 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.455 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.456 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:d1:e9:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.456 247403 INFO nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Using config drive#033[00m
Jan 31 04:01:20 np0005603621 nova_compute[247399]: 2026-01-31 09:01:20.481 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 31 04:01:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:20.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:21.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.220 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.246 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.247 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.247 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.248 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.248 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1258732135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.668 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.804 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.804 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.860 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.920 247403 INFO nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Creating config drive at /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/disk.config#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.924 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpno90c0_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.944 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.945 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4073MB free_disk=20.921878814697266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.946 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:21 np0005603621 nova_compute[247399]: 2026-01-31 09:01:21.946 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.048 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.048 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.048 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.051 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpno90c0_o" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.077 247403 DEBUG nova.storage.rbd_utils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.080 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/disk.config 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.126 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.224 247403 DEBUG oslo_concurrency.processutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/disk.config 0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.225 247403 INFO nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Deleting local config drive /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/disk.config because it was imported into RBD.#033[00m
Jan 31 04:01:22 np0005603621 kernel: tap4f52d762-81: entered promiscuous mode
Jan 31 04:01:22 np0005603621 NetworkManager[49013]: <info>  [1769850082.2668] manager: (tap4f52d762-81): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Jan 31 04:01:22 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:22Z|00825|binding|INFO|Claiming lport 4f52d762-814b-4a00-a616-c2e6586a29dd for this chassis.
Jan 31 04:01:22 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:22Z|00826|binding|INFO|4f52d762-814b-4a00-a616-c2e6586a29dd: Claiming fa:16:3e:d1:e9:05 10.100.0.8
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.296 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:22Z|00827|binding|INFO|Setting lport 4f52d762-814b-4a00-a616-c2e6586a29dd ovn-installed in OVS
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.307 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:e9:05 10.100.0.8'], port_security=['fa:16:3e:d1:e9:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0469d90c-1c5c-40d4-ac77-94e6496bc9ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5d6a0755-a98a-4d42-b80b-5f2e1c4a586c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8b4f65c-63f8-45f3-bcd8-2eb92c6b57c1, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=4f52d762-814b-4a00-a616-c2e6586a29dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.305 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.310 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 4f52d762-814b-4a00-a616-c2e6586a29dd in datapath a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 bound to our chassis#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.312 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a1b24494-72ae-4ffa-a7cb-1e8e7578dd60#033[00m
Jan 31 04:01:22 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:22Z|00828|binding|INFO|Setting lport 4f52d762-814b-4a00-a616-c2e6586a29dd up in Southbound
Jan 31 04:01:22 np0005603621 systemd-machined[212769]: New machine qemu-96-instance-000000c1.
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.326 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[dabc8fa4-a670-4ac9-a0a4-f65386da2481]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.327 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa1b24494-71 in ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.329 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa1b24494-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.329 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fe45c949-ddda-4405-8abc-3cdbea0c72cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.329 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[55ae31f5-f165-4728-add3-c0ddaa98bfdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 systemd[1]: Started Virtual Machine qemu-96-instance-000000c1.
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.337 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a9e642-190b-4f77-b43a-372dd59474b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.348 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[148afbf9-35c4-427e-b6e5-e3b205b883fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 systemd-udevd[388981]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:01:22 np0005603621 NetworkManager[49013]: <info>  [1769850082.3613] device (tap4f52d762-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:01:22 np0005603621 NetworkManager[49013]: <info>  [1769850082.3620] device (tap4f52d762-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.380 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0d28ed-5ed9-4af7-9576-41923b9fc3d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.384 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[08575820-3fcd-4bec-8aba-1bfd542f88c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 NetworkManager[49013]: <info>  [1769850082.3858] manager: (tapa1b24494-70): new Veth device (/org/freedesktop/NetworkManager/Devices/375)
Jan 31 04:01:22 np0005603621 systemd-udevd[388989]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.407 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[bf88f076-502a-40e0-a24b-8164247274a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.411 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[592b90a4-c63c-4a15-b42a-53b972744865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 NetworkManager[49013]: <info>  [1769850082.4271] device (tapa1b24494-70): carrier: link connected
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.429 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[51948af8-b544-4fd7-8423-2f46417c82cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.442 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1055afec-b8e9-42c5-9ba0-df1c046dde37]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1b24494-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:45:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938794, 'reachable_time': 29960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389010, 'error': None, 'target': 'ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.457 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3381e464-ffe8-4bcc-bfde-032711df22bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:45d2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938794, 'tstamp': 938794}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 389011, 'error': None, 'target': 'ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.468 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1355f6b8-c131-49c5-9c79-251546be7c26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa1b24494-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:45:d2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938794, 'reachable_time': 29960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 389012, 'error': None, 'target': 'ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.486 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[aa87da29-1659-4a11-a54d-c11b64eae804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.515 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f86f36e6-dda7-4694-a0b7-1824d7772908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.516 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1b24494-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.517 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.517 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1b24494-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:22 np0005603621 kernel: tapa1b24494-70: entered promiscuous mode
Jan 31 04:01:22 np0005603621 NetworkManager[49013]: <info>  [1769850082.5196] manager: (tapa1b24494-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.518 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.520 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.522 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa1b24494-70, col_values=(('external_ids', {'iface-id': 'c4b9dc18-1a7a-4655-80a1-1689bd3ce11a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.523 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:22Z|00829|binding|INFO|Releasing lport c4b9dc18-1a7a-4655-80a1-1689bd3ce11a from this chassis (sb_readonly=0)
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.523 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.526 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a1b24494-72ae-4ffa-a7cb-1e8e7578dd60.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a1b24494-72ae-4ffa-a7cb-1e8e7578dd60.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.527 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d424163e-2399-4236-8046-f48ff30661d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.528 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.528 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/a1b24494-72ae-4ffa-a7cb-1e8e7578dd60.pid.haproxy
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID a1b24494-72ae-4ffa-a7cb-1e8e7578dd60
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:01:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:22.529 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'env', 'PROCESS_TAG=haproxy-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a1b24494-72ae-4ffa-a7cb-1e8e7578dd60.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:01:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:01:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709492273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.603 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.610 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.647 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.679 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.680 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.721 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850082.7208416, 0469d90c-1c5c-40d4-ac77-94e6496bc9ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.721 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] VM Started (Lifecycle Event)#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.725 247403 DEBUG nova.network.neutron [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated VIF entry in instance network info cache for port 4f52d762-814b-4a00-a616-c2e6586a29dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.726 247403 DEBUG nova.network.neutron [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.751 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.754 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850082.7209144, 0469d90c-1c5c-40d4-ac77-94e6496bc9ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.754 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:01:22 np0005603621 podman[389088]: 2026-01-31 09:01:22.831566895 +0000 UTC m=+0.043788834 container create 5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:01:22 np0005603621 systemd[1]: Started libpod-conmon-5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4.scope.
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.872 247403 DEBUG oslo_concurrency.lockutils [req-af48af2f-2f1b-433f-ab72-e3cb303327d8 req-2be3cff2-c626-4996-b7ae-5df52386baf5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.877 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.881 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:01:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:22 np0005603621 podman[389088]: 2026-01-31 09:01:22.806546725 +0000 UTC m=+0.018768694 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:01:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec4cc4740c56596af3f98b176e6d1171c55fb59c37cd27709670b9037e7ac6c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:22 np0005603621 podman[389088]: 2026-01-31 09:01:22.912694096 +0000 UTC m=+0.124916035 container init 5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 31 04:01:22 np0005603621 nova_compute[247399]: 2026-01-31 09:01:22.914 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:01:22 np0005603621 podman[389088]: 2026-01-31 09:01:22.916425164 +0000 UTC m=+0.128647103 container start 5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 04:01:22 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [NOTICE]   (389107) : New worker (389109) forked
Jan 31 04:01:22 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [NOTICE]   (389107) : Loading success.
Jan 31 04:01:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 164 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 04:01:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:23.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:23.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.565 247403 DEBUG nova.compute.manager [req-a35aa702-49df-46a6-8bdb-82fca683ea0c req-25fbd4bb-0832-4b0d-a808-3a3ca0703379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.565 247403 DEBUG oslo_concurrency.lockutils [req-a35aa702-49df-46a6-8bdb-82fca683ea0c req-25fbd4bb-0832-4b0d-a808-3a3ca0703379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.565 247403 DEBUG oslo_concurrency.lockutils [req-a35aa702-49df-46a6-8bdb-82fca683ea0c req-25fbd4bb-0832-4b0d-a808-3a3ca0703379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.566 247403 DEBUG oslo_concurrency.lockutils [req-a35aa702-49df-46a6-8bdb-82fca683ea0c req-25fbd4bb-0832-4b0d-a808-3a3ca0703379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.566 247403 DEBUG nova.compute.manager [req-a35aa702-49df-46a6-8bdb-82fca683ea0c req-25fbd4bb-0832-4b0d-a808-3a3ca0703379 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Processing event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.566 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.570 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850083.570226, 0469d90c-1c5c-40d4-ac77-94e6496bc9ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.570 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.572 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.575 247403 INFO nova.virt.libvirt.driver [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Instance spawned successfully.#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.576 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.635 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.641 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.644 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.645 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.645 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.646 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.646 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.647 247403 DEBUG nova.virt.libvirt.driver [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.659 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.659 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.659 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.686 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.693 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.693 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.694 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.694 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.787 247403 INFO nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Took 12.62 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.787 247403 DEBUG nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.953 247403 INFO nova.compute.manager [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Took 15.44 seconds to build instance.#033[00m
Jan 31 04:01:23 np0005603621 nova_compute[247399]: 2026-01-31 09:01:23.994 247403 DEBUG oslo_concurrency.lockutils [None req-86dfe1f7-46af-418c-beeb-130943b54d78 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 215 KiB/s rd, 901 KiB/s wr, 11 op/s
Jan 31 04:01:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:25.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:25.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.368 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:25 np0005603621 podman[389119]: 2026-01-31 09:01:25.485461908 +0000 UTC m=+0.043206885 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:01:25 np0005603621 podman[389120]: 2026-01-31 09:01:25.508378812 +0000 UTC m=+0.063677641 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.973 247403 DEBUG nova.compute.manager [req-cb8984f7-81dd-4bed-b04d-a295c72c8fd9 req-46abfbb6-87dd-4466-9872-163124c3a8cb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.974 247403 DEBUG oslo_concurrency.lockutils [req-cb8984f7-81dd-4bed-b04d-a295c72c8fd9 req-46abfbb6-87dd-4466-9872-163124c3a8cb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.974 247403 DEBUG oslo_concurrency.lockutils [req-cb8984f7-81dd-4bed-b04d-a295c72c8fd9 req-46abfbb6-87dd-4466-9872-163124c3a8cb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.974 247403 DEBUG oslo_concurrency.lockutils [req-cb8984f7-81dd-4bed-b04d-a295c72c8fd9 req-46abfbb6-87dd-4466-9872-163124c3a8cb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.974 247403 DEBUG nova.compute.manager [req-cb8984f7-81dd-4bed-b04d-a295c72c8fd9 req-46abfbb6-87dd-4466-9872-163124c3a8cb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:01:25 np0005603621 nova_compute[247399]: 2026-01-31 09:01:25.974 247403 WARNING nova.compute.manager [req-cb8984f7-81dd-4bed-b04d-a295c72c8fd9 req-46abfbb6-87dd-4466-9872-163124c3a8cb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received unexpected event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd for instance with vm_state active and task_state None.#033[00m
Jan 31 04:01:26 np0005603621 nova_compute[247399]: 2026-01-31 09:01:26.863 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 32 op/s
Jan 31 04:01:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:01:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:27.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:01:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:27.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:28 np0005603621 nova_compute[247399]: 2026-01-31 09:01:28.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:28 np0005603621 nova_compute[247399]: 2026-01-31 09:01:28.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 26 KiB/s wr, 66 op/s
Jan 31 04:01:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:29.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:30 np0005603621 nova_compute[247399]: 2026-01-31 09:01:30.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:30 np0005603621 nova_compute[247399]: 2026-01-31 09:01:30.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:30.542 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:30.543 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:30.544 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 96 op/s
Jan 31 04:01:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:31.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:31.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:31 np0005603621 nova_compute[247399]: 2026-01-31 09:01:31.865 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:32 np0005603621 nova_compute[247399]: 2026-01-31 09:01:32.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 97 op/s
Jan 31 04:01:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:33.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:33.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:34 np0005603621 nova_compute[247399]: 2026-01-31 09:01:34.450 247403 DEBUG nova.compute.manager [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-changed-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:34 np0005603621 nova_compute[247399]: 2026-01-31 09:01:34.451 247403 DEBUG nova.compute.manager [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing instance network info cache due to event network-changed-4f52d762-814b-4a00-a616-c2e6586a29dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:01:34 np0005603621 nova_compute[247399]: 2026-01-31 09:01:34.451 247403 DEBUG oslo_concurrency.lockutils [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:34 np0005603621 nova_compute[247399]: 2026-01-31 09:01:34.451 247403 DEBUG oslo_concurrency.lockutils [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:34 np0005603621 nova_compute[247399]: 2026-01-31 09:01:34.451 247403 DEBUG nova.network.neutron [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing network info cache for port 4f52d762-814b-4a00-a616-c2e6586a29dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:01:34 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 04:01:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 269 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 23 KiB/s wr, 94 op/s
Jan 31 04:01:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:35.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:35.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:35 np0005603621 nova_compute[247399]: 2026-01-31 09:01:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:35 np0005603621 nova_compute[247399]: 2026-01-31 09:01:35.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:01:35 np0005603621 nova_compute[247399]: 2026-01-31 09:01:35.416 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:35 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:35Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:e9:05 10.100.0.8
Jan 31 04:01:35 np0005603621 ovn_controller[149152]: 2026-01-31T09:01:35Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:e9:05 10.100.0.8
Jan 31 04:01:36 np0005603621 nova_compute[247399]: 2026-01-31 09:01:36.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 280 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 921 KiB/s wr, 105 op/s
Jan 31 04:01:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:37.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:37.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:01:38
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'images', '.mgr', 'default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root']
Jan 31 04:01:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:01:38 np0005603621 nova_compute[247399]: 2026-01-31 09:01:38.675 247403 DEBUG nova.network.neutron [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated VIF entry in instance network info cache for port 4f52d762-814b-4a00-a616-c2e6586a29dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:01:38 np0005603621 nova_compute[247399]: 2026-01-31 09:01:38.676 247403 DEBUG nova.network.neutron [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.4 MiB/s wr, 118 op/s
Jan 31 04:01:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:39.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:39.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:01:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.387 247403 DEBUG oslo_concurrency.lockutils [req-56e0888f-0c8a-49a6-a607-092105192cc2 req-522e66f2-704a-4634-a97d-e1f8bf554e58 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.986 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.987 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.987 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.988 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.988 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.988 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:39 np0005603621 nova_compute[247399]: 2026-01-31 09:01:39.988 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.181 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.181 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Image id 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 yields fingerprint b1c202daae0a5d5b639e0239462ea0d46fe633d6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.181 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] image 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 at (/var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6): checking#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.181 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] image 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16 at (/var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.182 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.183 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] 0469d90c-1c5c-40d4-ac77-94e6496bc9ae is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.183 247403 WARNING nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.183 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Active base files: /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.183 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Removable base files: /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.183 247403 INFO nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/365f9823d2619ef09948bdeed685488da63755b5#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.183 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.184 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.184 247403 DEBUG nova.virt.libvirt.imagecache [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 31 04:01:40 np0005603621 nova_compute[247399]: 2026-01-31 09:01:40.418 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 310 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 125 op/s
Jan 31 04:01:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:41.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:41 np0005603621 nova_compute[247399]: 2026-01-31 09:01:41.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 348 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 374 KiB/s rd, 3.9 MiB/s wr, 107 op/s
Jan 31 04:01:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:43.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:43.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:01:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6183c563-2ff3-4e5a-a7ce-d66a5c5a0250 does not exist
Jan 31 04:01:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8794a6b2-597f-4cda-acf0-1ca62134aa49 does not exist
Jan 31 04:01:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9835398f-f48a-4514-8cf3-81fecf62b88b does not exist
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:01:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:01:43 np0005603621 nova_compute[247399]: 2026-01-31 09:01:43.881 247403 INFO nova.compute.manager [None req-83fe6c0c-d6b1-4fb3-9571-fa12945ec193 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Get console output#033[00m
Jan 31 04:01:43 np0005603621 nova_compute[247399]: 2026-01-31 09:01:43.886 307490 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:44.004102079 +0000 UTC m=+0.036233615 container create c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:01:44 np0005603621 systemd[1]: Started libpod-conmon-c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7.scope.
Jan 31 04:01:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:43.987737833 +0000 UTC m=+0.019869409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:44.08960997 +0000 UTC m=+0.121741546 container init c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:44.097017993 +0000 UTC m=+0.129149539 container start c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:44.100431701 +0000 UTC m=+0.132563267 container attach c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hellman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:01:44 np0005603621 nervous_hellman[389509]: 167 167
Jan 31 04:01:44 np0005603621 systemd[1]: libpod-c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7.scope: Deactivated successfully.
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:44.10354464 +0000 UTC m=+0.135676216 container died c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:01:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-710713b8fa343c84cde3019e9192afd049cdee5b3c1cee7514d63db92800a3bd-merged.mount: Deactivated successfully.
Jan 31 04:01:44 np0005603621 podman[389493]: 2026-01-31 09:01:44.146331501 +0000 UTC m=+0.178463037 container remove c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:01:44 np0005603621 systemd[1]: libpod-conmon-c27d27735606da8be08415bb6cb82897a10686dfa35408964253871fd1ccc9d7.scope: Deactivated successfully.
Jan 31 04:01:44 np0005603621 podman[389534]: 2026-01-31 09:01:44.256139817 +0000 UTC m=+0.033221910 container create 9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_babbage, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:01:44 np0005603621 systemd[1]: Started libpod-conmon-9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c.scope.
Jan 31 04:01:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b430c64fdee92bd6ca14622e3d717ee11658160d54362882f03ee966107cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b430c64fdee92bd6ca14622e3d717ee11658160d54362882f03ee966107cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b430c64fdee92bd6ca14622e3d717ee11658160d54362882f03ee966107cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b430c64fdee92bd6ca14622e3d717ee11658160d54362882f03ee966107cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b430c64fdee92bd6ca14622e3d717ee11658160d54362882f03ee966107cbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:44 np0005603621 podman[389534]: 2026-01-31 09:01:44.331349181 +0000 UTC m=+0.108431304 container init 9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 04:01:44 np0005603621 podman[389534]: 2026-01-31 09:01:44.241480455 +0000 UTC m=+0.018562568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:01:44 np0005603621 podman[389534]: 2026-01-31 09:01:44.338205408 +0000 UTC m=+0.115287531 container start 9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:01:44 np0005603621 podman[389534]: 2026-01-31 09:01:44.341927936 +0000 UTC m=+0.119010039 container attach 9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 04:01:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:01:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:01:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:01:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 348 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Jan 31 04:01:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:01:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:45.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:01:45 np0005603621 nostalgic_babbage[389550]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:01:45 np0005603621 nostalgic_babbage[389550]: --> relative data size: 1.0
Jan 31 04:01:45 np0005603621 nostalgic_babbage[389550]: --> All data devices are unavailable
Jan 31 04:01:45 np0005603621 systemd[1]: libpod-9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c.scope: Deactivated successfully.
Jan 31 04:01:45 np0005603621 podman[389534]: 2026-01-31 09:01:45.123341819 +0000 UTC m=+0.900423912 container died 9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_babbage, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:01:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:45.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-47b430c64fdee92bd6ca14622e3d717ee11658160d54362882f03ee966107cbb-merged.mount: Deactivated successfully.
Jan 31 04:01:45 np0005603621 podman[389534]: 2026-01-31 09:01:45.187493984 +0000 UTC m=+0.964576097 container remove 9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_babbage, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:01:45 np0005603621 systemd[1]: libpod-conmon-9f97e75836572f7f63324bd55302b116a96199019bd924d023ea468339cbd57c.scope: Deactivated successfully.
Jan 31 04:01:45 np0005603621 nova_compute[247399]: 2026-01-31 09:01:45.419 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.719204862 +0000 UTC m=+0.036881726 container create 255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 31 04:01:45 np0005603621 systemd[1]: Started libpod-conmon-255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc.scope.
Jan 31 04:01:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.79327346 +0000 UTC m=+0.110950344 container init 255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.702536496 +0000 UTC m=+0.020213380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.799795706 +0000 UTC m=+0.117472560 container start 255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.80306499 +0000 UTC m=+0.120741874 container attach 255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 04:01:45 np0005603621 gallant_bhabha[389733]: 167 167
Jan 31 04:01:45 np0005603621 systemd[1]: libpod-255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc.scope: Deactivated successfully.
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.80621992 +0000 UTC m=+0.123896784 container died 255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhabha, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:01:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-102cb88061424d10f1ddcbe92690f9f9d401dd0edbc8a15cd4f8f11534e054c8-merged.mount: Deactivated successfully.
Jan 31 04:01:45 np0005603621 podman[389717]: 2026-01-31 09:01:45.844061634 +0000 UTC m=+0.161738498 container remove 255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:01:45 np0005603621 systemd[1]: libpod-conmon-255d6a8ce0ea7e5e84f0958471acc1520d543c2561f1a6212f392cee1f2a2cfc.scope: Deactivated successfully.
Jan 31 04:01:45 np0005603621 podman[389756]: 2026-01-31 09:01:45.966115768 +0000 UTC m=+0.034034795 container create 1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:01:46 np0005603621 systemd[1]: Started libpod-conmon-1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a.scope.
Jan 31 04:01:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151cb3acf37429b7afa9d61521c14be846707ea5dcb8f83bb473de11cfcf5f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151cb3acf37429b7afa9d61521c14be846707ea5dcb8f83bb473de11cfcf5f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151cb3acf37429b7afa9d61521c14be846707ea5dcb8f83bb473de11cfcf5f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7151cb3acf37429b7afa9d61521c14be846707ea5dcb8f83bb473de11cfcf5f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:46 np0005603621 podman[389756]: 2026-01-31 09:01:46.026949669 +0000 UTC m=+0.094868716 container init 1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:01:46 np0005603621 podman[389756]: 2026-01-31 09:01:46.031578835 +0000 UTC m=+0.099497862 container start 1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:01:46 np0005603621 podman[389756]: 2026-01-31 09:01:46.034893509 +0000 UTC m=+0.102812556 container attach 1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:01:46 np0005603621 podman[389756]: 2026-01-31 09:01:45.951410974 +0000 UTC m=+0.019330021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2983216672' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2983216672' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3500142180' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:01:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3500142180' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]: {
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:    "0": [
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:        {
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "devices": [
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "/dev/loop3"
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            ],
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "lv_name": "ceph_lv0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "lv_size": "7511998464",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "name": "ceph_lv0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "tags": {
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.cluster_name": "ceph",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.crush_device_class": "",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.encrypted": "0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.osd_id": "0",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.type": "block",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:                "ceph.vdo": "0"
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            },
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "type": "block",
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:            "vg_name": "ceph_vg0"
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:        }
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]:    ]
Jan 31 04:01:46 np0005603621 blissful_jennings[389772]: }
Jan 31 04:01:46 np0005603621 systemd[1]: libpod-1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a.scope: Deactivated successfully.
Jan 31 04:01:46 np0005603621 podman[389756]: 2026-01-31 09:01:46.79029493 +0000 UTC m=+0.858214017 container died 1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:01:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7151cb3acf37429b7afa9d61521c14be846707ea5dcb8f83bb473de11cfcf5f1-merged.mount: Deactivated successfully.
Jan 31 04:01:46 np0005603621 podman[389756]: 2026-01-31 09:01:46.833460853 +0000 UTC m=+0.901379880 container remove 1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:01:46 np0005603621 systemd[1]: libpod-conmon-1a1113b2b1b1e392d066a80217fd6c775789c3259c2d0bffd5b37444aeac6d1a.scope: Deactivated successfully.
Jan 31 04:01:46 np0005603621 nova_compute[247399]: 2026-01-31 09:01:46.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 348 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 355 KiB/s rd, 3.9 MiB/s wr, 112 op/s
Jan 31 04:01:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:01:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:47.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:01:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:47.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.286194338 +0000 UTC m=+0.031939300 container create bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:01:47 np0005603621 systemd[1]: Started libpod-conmon-bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036.scope.
Jan 31 04:01:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.351101987 +0000 UTC m=+0.096846969 container init bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.357826259 +0000 UTC m=+0.103571221 container start bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.360908656 +0000 UTC m=+0.106653618 container attach bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 04:01:47 np0005603621 flamboyant_driscoll[389951]: 167 167
Jan 31 04:01:47 np0005603621 systemd[1]: libpod-bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036.scope: Deactivated successfully.
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.363288611 +0000 UTC m=+0.109033573 container died bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.272579078 +0000 UTC m=+0.018324060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:01:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b50543b1edc36a32406c1da6b3c6acba717413d621ff08e477bf53b15e1e16ce-merged.mount: Deactivated successfully.
Jan 31 04:01:47 np0005603621 podman[389934]: 2026-01-31 09:01:47.4056805 +0000 UTC m=+0.151425502 container remove bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 31 04:01:47 np0005603621 systemd[1]: libpod-conmon-bfd51393166752d17304071928c0e95f602e6df0bbb8199170e75895a2a79036.scope: Deactivated successfully.
Jan 31 04:01:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:47 np0005603621 podman[389975]: 2026-01-31 09:01:47.59726394 +0000 UTC m=+0.105809702 container create d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:01:47 np0005603621 podman[389975]: 2026-01-31 09:01:47.543714459 +0000 UTC m=+0.052260271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:01:47 np0005603621 systemd[1]: Started libpod-conmon-d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc.scope.
Jan 31 04:01:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:01:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5757648599a494e6e84148d74a4614f637edb7eea4f29327473e0d53ee34170e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5757648599a494e6e84148d74a4614f637edb7eea4f29327473e0d53ee34170e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5757648599a494e6e84148d74a4614f637edb7eea4f29327473e0d53ee34170e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:47 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5757648599a494e6e84148d74a4614f637edb7eea4f29327473e0d53ee34170e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:01:47 np0005603621 podman[389975]: 2026-01-31 09:01:47.721317827 +0000 UTC m=+0.229863609 container init d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 04:01:47 np0005603621 podman[389975]: 2026-01-31 09:01:47.725939052 +0000 UTC m=+0.234484814 container start d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:01:47 np0005603621 podman[389975]: 2026-01-31 09:01:47.756016952 +0000 UTC m=+0.264562714 container attach d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]: {
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:        "osd_id": 0,
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:        "type": "bluestore"
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]:    }
Jan 31 04:01:48 np0005603621 admiring_nobel[389992]: }
Jan 31 04:01:48 np0005603621 systemd[1]: libpod-d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc.scope: Deactivated successfully.
Jan 31 04:01:48 np0005603621 podman[389975]: 2026-01-31 09:01:48.568364501 +0000 UTC m=+1.076910263 container died d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:01:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5757648599a494e6e84148d74a4614f637edb7eea4f29327473e0d53ee34170e-merged.mount: Deactivated successfully.
Jan 31 04:01:48 np0005603621 podman[389975]: 2026-01-31 09:01:48.728358482 +0000 UTC m=+1.236904234 container remove d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nobel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:01:48 np0005603621 systemd[1]: libpod-conmon-d65f00013a76195618845d6e3f5aa05a4bb102f8b027392e0410cfd19e7fffbc.scope: Deactivated successfully.
Jan 31 04:01:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:01:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:01:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:01:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:01:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4641c323-0b01-4f72-b4dc-3da921c4e9da does not exist
Jan 31 04:01:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c2672acc-6ab1-4a15-adc5-8a1aad410f2c does not exist
Jan 31 04:01:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 765123d1-0d92-4a06-9c58-6e43a0339046 does not exist
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 312 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 229 KiB/s rd, 3.0 MiB/s wr, 117 op/s
Jan 31 04:01:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:49.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:01:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:49.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033947951563372417 of space, bias 1.0, pg target 1.0184385469011725 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031523357528382653 of space, bias 1.0, pg target 0.9425483900986413 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:01:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:01:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:01:49 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:01:50 np0005603621 nova_compute[247399]: 2026-01-31 09:01:50.422 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 148 KiB/s rd, 2.5 MiB/s wr, 105 op/s
Jan 31 04:01:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:51.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:51.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:51 np0005603621 nova_compute[247399]: 2026-01-31 09:01:51.505 247403 DEBUG oslo_concurrency.lockutils [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "interface-0469d90c-1c5c-40d4-ac77-94e6496bc9ae-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:01:51 np0005603621 nova_compute[247399]: 2026-01-31 09:01:51.505 247403 DEBUG oslo_concurrency.lockutils [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-0469d90c-1c5c-40d4-ac77-94e6496bc9ae-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:01:51 np0005603621 nova_compute[247399]: 2026-01-31 09:01:51.506 247403 DEBUG nova.objects.instance [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'flavor' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:01:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/945282495' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:01:51 np0005603621 nova_compute[247399]: 2026-01-31 09:01:51.799 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:51.799 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:01:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:51.800 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:01:51 np0005603621 nova_compute[247399]: 2026-01-31 09:01:51.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:52 np0005603621 nova_compute[247399]: 2026-01-31 09:01:52.512 247403 DEBUG nova.objects.instance [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:01:52 np0005603621 nova_compute[247399]: 2026-01-31 09:01:52.584 247403 DEBUG nova.network.neutron [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:01:52 np0005603621 nova_compute[247399]: 2026-01-31 09:01:52.966 247403 DEBUG nova.policy [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:01:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 1.4 MiB/s wr, 73 op/s
Jan 31 04:01:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:53.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:53.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:53 np0005603621 nova_compute[247399]: 2026-01-31 09:01:53.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:01:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Jan 31 04:01:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Jan 31 04:01:54 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Jan 31 04:01:54 np0005603621 nova_compute[247399]: 2026-01-31 09:01:54.415 247403 DEBUG nova.network.neutron [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Successfully created port: 4c3d4391-c276-4043-93a4-6eacb291ef17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:01:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 869 KiB/s rd, 20 KiB/s wr, 69 op/s
Jan 31 04:01:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:55.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:55.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:55 np0005603621 nova_compute[247399]: 2026-01-31 09:01:55.425 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.177 247403 DEBUG nova.network.neutron [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Successfully updated port: 4c3d4391-c276-4043-93a4-6eacb291ef17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.434 247403 DEBUG oslo_concurrency.lockutils [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.434 247403 DEBUG oslo_concurrency.lockutils [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.434 247403 DEBUG nova.network.neutron [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:01:56 np0005603621 podman[390082]: 2026-01-31 09:01:56.491599587 +0000 UTC m=+0.049454042 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 31 04:01:56 np0005603621 podman[390083]: 2026-01-31 09:01:56.543499135 +0000 UTC m=+0.101641190 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.570 247403 DEBUG nova.compute.manager [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-changed-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.570 247403 DEBUG nova.compute.manager [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing instance network info cache due to event network-changed-4c3d4391-c276-4043-93a4-6eacb291ef17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.570 247403 DEBUG oslo_concurrency.lockutils [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:01:56 np0005603621 nova_compute[247399]: 2026-01-31 09:01:56.916 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:01:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 258 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 7.6 KiB/s wr, 96 op/s
Jan 31 04:01:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:57.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:57.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Jan 31 04:01:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Jan 31 04:01:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Jan 31 04:01:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:01:58 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:01:58.802 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:01:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 270 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 56 op/s
Jan 31 04:01:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:01:59.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:01:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:01:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:01:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:01:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:00 np0005603621 nova_compute[247399]: 2026-01-31 09:02:00.427 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 67 op/s
Jan 31 04:02:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:01.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:01.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:02:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.3 total, 600.0 interval#012Cumulative writes: 54K writes, 207K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 54K writes, 19K syncs, 2.79 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6949 writes, 23K keys, 6949 commit groups, 1.0 writes per commit group, ingest: 23.77 MB, 0.04 MB/s#012Interval WAL: 6949 writes, 2806 syncs, 2.48 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000266 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 0.000266 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.3 total, 4800.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s 
read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.168 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Jan 31 04:02:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Jan 31 04:02:02 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.637 247403 DEBUG nova.network.neutron [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.900 247403 DEBUG oslo_concurrency.lockutils [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.901 247403 DEBUG oslo_concurrency.lockutils [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.901 247403 DEBUG nova.network.neutron [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing network info cache for port 4c3d4391-c276-4043-93a4-6eacb291ef17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.904 247403 DEBUG nova.virt.libvirt.vif [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.904 247403 DEBUG nova.network.os_vif_util [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.905 247403 DEBUG nova.network.os_vif_util [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.905 247403 DEBUG os_vif [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.906 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.906 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.907 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.911 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c3d4391-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.911 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c3d4391-c2, col_values=(('external_ids', {'iface-id': '4c3d4391-c276-4043-93a4-6eacb291ef17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:81:16', 'vm-uuid': '0469d90c-1c5c-40d4-ac77-94e6496bc9ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.912 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 NetworkManager[49013]: <info>  [1769850122.9150] manager: (tap4c3d4391-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.915 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.920 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.921 247403 INFO os_vif [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2')#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.922 247403 DEBUG nova.virt.libvirt.vif [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.922 247403 DEBUG nova.network.os_vif_util [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.923 247403 DEBUG nova.network.os_vif_util [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.926 247403 DEBUG nova.virt.libvirt.guest [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] attach device xml: <interface type="ethernet">
Jan 31 04:02:02 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:4d:81:16"/>
Jan 31 04:02:02 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 04:02:02 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:02:02 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 04:02:02 np0005603621 nova_compute[247399]:  <target dev="tap4c3d4391-c2"/>
Jan 31 04:02:02 np0005603621 nova_compute[247399]: </interface>
Jan 31 04:02:02 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 04:02:02 np0005603621 NetworkManager[49013]: <info>  [1769850122.9412] manager: (tap4c3d4391-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/378)
Jan 31 04:02:02 np0005603621 kernel: tap4c3d4391-c2: entered promiscuous mode
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.945 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:02Z|00830|binding|INFO|Claiming lport 4c3d4391-c276-4043-93a4-6eacb291ef17 for this chassis.
Jan 31 04:02:02 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:02Z|00831|binding|INFO|4c3d4391-c276-4043-93a4-6eacb291ef17: Claiming fa:16:3e:4d:81:16 10.100.0.27
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.959 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.962 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:81:16 10.100.0.27'], port_security=['fa:16:3e:4d:81:16 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '0469d90c-1c5c-40d4-ac77-94e6496bc9ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d12028-6cf1-48b0-8622-9e4a18613610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69ee6f93-cadf-4e2f-a073-fd82c56c8449, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=4c3d4391-c276-4043-93a4-6eacb291ef17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.964 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 4c3d4391-c276-4043-93a4-6eacb291ef17 in datapath 58d12028-6cf1-48b0-8622-9e4a18613610 bound to our chassis#033[00m
Jan 31 04:02:02 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:02Z|00832|binding|INFO|Setting lport 4c3d4391-c276-4043-93a4-6eacb291ef17 ovn-installed in OVS
Jan 31 04:02:02 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:02Z|00833|binding|INFO|Setting lport 4c3d4391-c276-4043-93a4-6eacb291ef17 up in Southbound
Jan 31 04:02:02 np0005603621 nova_compute[247399]: 2026-01-31 09:02:02.965 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.967 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58d12028-6cf1-48b0-8622-9e4a18613610#033[00m
Jan 31 04:02:02 np0005603621 systemd-udevd[390185]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.977 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b592a11d-838d-4e64-90b3-9e44a5b06be3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.978 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58d12028-61 in ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.980 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58d12028-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.980 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0718d5d1-f685-4a05-8c69-9ec9cf0a8c92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.981 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e57490-57a4-4339-b125-97589f812918]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:02 np0005603621 NetworkManager[49013]: <info>  [1769850122.9834] device (tap4c3d4391-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:02:02 np0005603621 NetworkManager[49013]: <info>  [1769850122.9841] device (tap4c3d4391-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:02:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:02.991 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b529d077-117b-407d-91fd-0d830bd97e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.004 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[325d9936-daba-496a-98eb-641da972a7e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.6 MiB/s wr, 161 op/s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.025 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[122428d2-4239-4167-a1f0-6fcce2b2da90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.030 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8f8211e3-109c-4c02-b17a-09690a571502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 NetworkManager[49013]: <info>  [1769850123.0310] manager: (tap58d12028-60): new Veth device (/org/freedesktop/NetworkManager/Devices/379)
Jan 31 04:02:03 np0005603621 systemd-udevd[390188]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:02:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.053 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9ebfd74a-9d32-4988-aa2b-779f15d2a003]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.057 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[40b2694c-14b7-4ca7-bdbe-ac8133f4bfba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 NetworkManager[49013]: <info>  [1769850123.0720] device (tap58d12028-60): carrier: link connected
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.076 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[08843407-9a15-4c1b-a16f-8b9bf9cc63a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.083 247403 DEBUG nova.virt.libvirt.driver [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.083 247403 DEBUG nova.virt.libvirt.driver [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.083 247403 DEBUG nova.virt.libvirt.driver [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:d1:e9:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.084 247403 DEBUG nova.virt.libvirt.driver [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:4d:81:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.089 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ce466aac-ed0d-448d-be0f-5806317a1c71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d12028-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:54:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 942858, 'reachable_time': 29070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390212, 'error': None, 'target': 'ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.101 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d92714b9-f583-4ced-9b49-6ffbdfaa12ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:543f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 942858, 'tstamp': 942858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390213, 'error': None, 'target': 'ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.113 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4cfc0b-d7ca-4683-bd6e-028a86a86067]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58d12028-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:54:3f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 251], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 942858, 'reachable_time': 29070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390214, 'error': None, 'target': 'ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.134 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e1da4a76-7bbc-49b6-8e8e-3db95c1cce1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:03.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.176 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ae6eb95-3eae-45e6-8917-7c7ba8785fe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.178 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d12028-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.178 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.178 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58d12028-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:03 np0005603621 NetworkManager[49013]: <info>  [1769850123.1804] manager: (tap58d12028-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 31 04:02:03 np0005603621 kernel: tap58d12028-60: entered promiscuous mode
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.182 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58d12028-60, col_values=(('external_ids', {'iface-id': 'e67a29eb-6020-405c-9d90-f517b6b8d40e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:03 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:03Z|00834|binding|INFO|Releasing lport e67a29eb-6020-405c-9d90-f517b6b8d40e from this chassis (sb_readonly=0)
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.188 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.189 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58d12028-6cf1-48b0-8622-9e4a18613610.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58d12028-6cf1-48b0-8622-9e4a18613610.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.190 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b27e4bf0-93f6-48aa-bf85-352b419642e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.190 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-58d12028-6cf1-48b0-8622-9e4a18613610
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/58d12028-6cf1-48b0-8622-9e4a18613610.pid.haproxy
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 58d12028-6cf1-48b0-8622-9e4a18613610
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:02:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:03.191 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610', 'env', 'PROCESS_TAG=haproxy-58d12028-6cf1-48b0-8622-9e4a18613610', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58d12028-6cf1-48b0-8622-9e4a18613610.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.328 247403 DEBUG nova.compute.manager [req-2b380b93-2f4a-4318-8800-94cf19c93454 req-32e8883b-7516-46ea-bbb5-051098e8e775 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.328 247403 DEBUG oslo_concurrency.lockutils [req-2b380b93-2f4a-4318-8800-94cf19c93454 req-32e8883b-7516-46ea-bbb5-051098e8e775 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.329 247403 DEBUG oslo_concurrency.lockutils [req-2b380b93-2f4a-4318-8800-94cf19c93454 req-32e8883b-7516-46ea-bbb5-051098e8e775 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.329 247403 DEBUG oslo_concurrency.lockutils [req-2b380b93-2f4a-4318-8800-94cf19c93454 req-32e8883b-7516-46ea-bbb5-051098e8e775 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.329 247403 DEBUG nova.compute.manager [req-2b380b93-2f4a-4318-8800-94cf19c93454 req-32e8883b-7516-46ea-bbb5-051098e8e775 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.330 247403 WARNING nova.compute.manager [req-2b380b93-2f4a-4318-8800-94cf19c93454 req-32e8883b-7516-46ea-bbb5-051098e8e775 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received unexpected event network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.332 247403 DEBUG nova.virt.libvirt.guest [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:02:03</nova:creationTime>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:02:03 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    <nova:port uuid="4c3d4391-c276-4043-93a4-6eacb291ef17">
Jan 31 04:02:03 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:02:03 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:02:03 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:02:03 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 04:02:03 np0005603621 nova_compute[247399]: 2026-01-31 09:02:03.383 247403 DEBUG oslo_concurrency.lockutils [None req-adc11c1c-e534-4a8f-a4bc-fc345455fe86 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-0469d90c-1c5c-40d4-ac77-94e6496bc9ae-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:03 np0005603621 podman[390246]: 2026-01-31 09:02:03.530531352 +0000 UTC m=+0.073432600 container create a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:02:03 np0005603621 systemd[1]: Started libpod-conmon-a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7.scope.
Jan 31 04:02:03 np0005603621 podman[390246]: 2026-01-31 09:02:03.47855741 +0000 UTC m=+0.021458668 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:02:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:03 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d9b87f461b7a5907a4534ffe03e4d4990bf16049bd5d8805928de16ca573808/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:03 np0005603621 podman[390246]: 2026-01-31 09:02:03.602629958 +0000 UTC m=+0.145531236 container init a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:02:03 np0005603621 podman[390246]: 2026-01-31 09:02:03.606966174 +0000 UTC m=+0.149867432 container start a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 04:02:03 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [NOTICE]   (390266) : New worker (390268) forked
Jan 31 04:02:03 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [NOTICE]   (390266) : Loading success.
Jan 31 04:02:04 np0005603621 nova_compute[247399]: 2026-01-31 09:02:04.131 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:04 np0005603621 nova_compute[247399]: 2026-01-31 09:02:04.170 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Triggering sync for uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 31 04:02:04 np0005603621 nova_compute[247399]: 2026-01-31 09:02:04.171 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:04 np0005603621 nova_compute[247399]: 2026-01-31 09:02:04.171 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:04 np0005603621 nova_compute[247399]: 2026-01-31 09:02:04.209 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:04 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:04Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:81:16 10.100.0.27
Jan 31 04:02:04 np0005603621 ovn_controller[149152]: 2026-01-31T09:02:04Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:81:16 10.100.0.27
Jan 31 04:02:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 159 op/s
Jan 31 04:02:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:05.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:05.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.838 247403 DEBUG nova.compute.manager [req-e0d18adf-ac04-47f6-bef8-53f079ad8ba8 req-70901cda-5aef-4062-b28f-4faa2a97ed39 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.838 247403 DEBUG oslo_concurrency.lockutils [req-e0d18adf-ac04-47f6-bef8-53f079ad8ba8 req-70901cda-5aef-4062-b28f-4faa2a97ed39 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.838 247403 DEBUG oslo_concurrency.lockutils [req-e0d18adf-ac04-47f6-bef8-53f079ad8ba8 req-70901cda-5aef-4062-b28f-4faa2a97ed39 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.838 247403 DEBUG oslo_concurrency.lockutils [req-e0d18adf-ac04-47f6-bef8-53f079ad8ba8 req-70901cda-5aef-4062-b28f-4faa2a97ed39 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.838 247403 DEBUG nova.compute.manager [req-e0d18adf-ac04-47f6-bef8-53f079ad8ba8 req-70901cda-5aef-4062-b28f-4faa2a97ed39 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.839 247403 WARNING nova.compute.manager [req-e0d18adf-ac04-47f6-bef8-53f079ad8ba8 req-70901cda-5aef-4062-b28f-4faa2a97ed39 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received unexpected event network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.932 247403 DEBUG nova.network.neutron [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated VIF entry in instance network info cache for port 4c3d4391-c276-4043-93a4-6eacb291ef17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:02:05 np0005603621 nova_compute[247399]: 2026-01-31 09:02:05.933 247403 DEBUG nova.network.neutron [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:06 np0005603621 nova_compute[247399]: 2026-01-31 09:02:06.221 247403 DEBUG oslo_concurrency.lockutils [req-ec215792-ae38-464e-a88e-936bdbc8d23e req-2f20a1f0-64af-4677-9b51-1152581b3459 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:06 np0005603621 nova_compute[247399]: 2026-01-31 09:02:06.918 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 301 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 154 op/s
Jan 31 04:02:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:07.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:07.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:07 np0005603621 nova_compute[247399]: 2026-01-31 09:02:07.913 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:02:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:02:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Jan 31 04:02:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Jan 31 04:02:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Jan 31 04:02:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 309 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 160 op/s
Jan 31 04:02:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:09.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:09.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 120 op/s
Jan 31 04:02:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:11.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:11.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:11 np0005603621 nova_compute[247399]: 2026-01-31 09:02:11.238 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 04:02:11 np0005603621 nova_compute[247399]: 2026-01-31 09:02:11.921 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:12 np0005603621 nova_compute[247399]: 2026-01-31 09:02:12.956 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 335 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 88 op/s
Jan 31 04:02:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:13.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:02:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4191772923' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:02:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:02:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4191772923' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:02:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 360 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 107 op/s
Jan 31 04:02:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:15.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:15.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:16 np0005603621 nova_compute[247399]: 2026-01-31 09:02:16.923 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 387 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.1 MiB/s wr, 111 op/s
Jan 31 04:02:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:17.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:17.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:17 np0005603621 nova_compute[247399]: 2026-01-31 09:02:17.958 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:18 np0005603621 nova_compute[247399]: 2026-01-31 09:02:18.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 390 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 111 op/s
Jan 31 04:02:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:19.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:19.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:19 np0005603621 nova_compute[247399]: 2026-01-31 09:02:19.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 104 op/s
Jan 31 04:02:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:21.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:21.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:21 np0005603621 nova_compute[247399]: 2026-01-31 09:02:21.926 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.243 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.244 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.244 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.244 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580614076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.857 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:22 np0005603621 nova_compute[247399]: 2026-01-31 09:02:22.960 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 31 04:02:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:23.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:23.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.444 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.445 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.622 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.624 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3909MB free_disk=20.921737670898438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.624 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.624 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.768 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.770 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.770 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:02:24 np0005603621 nova_compute[247399]: 2026-01-31 09:02:24.877 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 247 KiB/s rd, 2.3 MiB/s wr, 76 op/s
Jan 31 04:02:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:25.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:25.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1489481860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:25 np0005603621 nova_compute[247399]: 2026-01-31 09:02:25.322 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:25 np0005603621 nova_compute[247399]: 2026-01-31 09:02:25.329 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:02:25 np0005603621 nova_compute[247399]: 2026-01-31 09:02:25.427 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.155301) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850146155430, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1628, "num_deletes": 255, "total_data_size": 2608391, "memory_usage": 2652416, "flush_reason": "Manual Compaction"}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Jan 31 04:02:26 np0005603621 nova_compute[247399]: 2026-01-31 09:02:26.243 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:02:26 np0005603621 nova_compute[247399]: 2026-01-31 09:02:26.244 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850146256587, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2554761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73985, "largest_seqno": 75611, "table_properties": {"data_size": 2547338, "index_size": 4301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16335, "raw_average_key_size": 20, "raw_value_size": 2532193, "raw_average_value_size": 3185, "num_data_blocks": 189, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850007, "oldest_key_time": 1769850007, "file_creation_time": 1769850146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 101469 microseconds, and 6708 cpu microseconds.
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.256792) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2554761 bytes OK
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.256847) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.388026) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.388082) EVENT_LOG_v1 {"time_micros": 1769850146388072, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.388121) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2601396, prev total WAL file size 2601396, number of live WAL files 2.
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.389513) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2494KB)], [170(12MB)]
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850146389593, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 15526060, "oldest_snapshot_seqno": -1}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 10222 keys, 13698114 bytes, temperature: kUnknown
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850146647384, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 13698114, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13631303, "index_size": 40138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 269642, "raw_average_key_size": 26, "raw_value_size": 13451939, "raw_average_value_size": 1315, "num_data_blocks": 1532, "num_entries": 10222, "num_filter_entries": 10222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.647633) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13698114 bytes
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.669976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 60.2 rd, 53.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.4 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(11.4) write-amplify(5.4) OK, records in: 10749, records dropped: 527 output_compression: NoCompression
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.670028) EVENT_LOG_v1 {"time_micros": 1769850146670010, "job": 106, "event": "compaction_finished", "compaction_time_micros": 257848, "compaction_time_cpu_micros": 23661, "output_level": 6, "num_output_files": 1, "total_output_size": 13698114, "num_input_records": 10749, "num_output_records": 10222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850146670465, "job": 106, "event": "table_file_deletion", "file_number": 172}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850146671651, "job": 106, "event": "table_file_deletion", "file_number": 170}
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.389090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.671678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.671681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.671683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.671685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:02:26 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:02:26.671686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:02:26 np0005603621 nova_compute[247399]: 2026-01-31 09:02:26.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 121 KiB/s rd, 1017 KiB/s wr, 36 op/s
Jan 31 04:02:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:27.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:27 np0005603621 nova_compute[247399]: 2026-01-31 09:02:27.245 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:27 np0005603621 nova_compute[247399]: 2026-01-31 09:02:27.247 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:02:27 np0005603621 nova_compute[247399]: 2026-01-31 09:02:27.247 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:02:27 np0005603621 podman[390384]: 2026-01-31 09:02:27.490574779 +0000 UTC m=+0.045574961 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 04:02:27 np0005603621 podman[390385]: 2026-01-31 09:02:27.512341236 +0000 UTC m=+0.065104566 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 31 04:02:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:27 np0005603621 nova_compute[247399]: 2026-01-31 09:02:27.962 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:28 np0005603621 nova_compute[247399]: 2026-01-31 09:02:28.975 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:02:28 np0005603621 nova_compute[247399]: 2026-01-31 09:02:28.975 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:02:28 np0005603621 nova_compute[247399]: 2026-01-31 09:02:28.975 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:02:28 np0005603621 nova_compute[247399]: 2026-01-31 09:02:28.975 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 134 KiB/s wr, 21 op/s
Jan 31 04:02:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:29.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:29.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:30.543 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:30.544 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:30.545 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 392 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 80 KiB/s wr, 21 op/s
Jan 31 04:02:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:31.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:31.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:31 np0005603621 nova_compute[247399]: 2026-01-31 09:02:31.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:32 np0005603621 nova_compute[247399]: 2026-01-31 09:02:32.964 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 866 KiB/s rd, 456 KiB/s wr, 64 op/s
Jan 31 04:02:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:33.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:33.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:34.281 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:02:34 np0005603621 nova_compute[247399]: 2026-01-31 09:02:34.282 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:34.282 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:02:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 423 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 113 op/s
Jan 31 04:02:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:35.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:35.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:36 np0005603621 nova_compute[247399]: 2026-01-31 09:02:36.686 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:02:36 np0005603621 nova_compute[247399]: 2026-01-31 09:02:36.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 425 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 113 op/s
Jan 31 04:02:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:37.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:37.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:37 np0005603621 nova_compute[247399]: 2026-01-31 09:02:37.536 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:02:37 np0005603621 nova_compute[247399]: 2026-01-31 09:02:37.537 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:02:37 np0005603621 nova_compute[247399]: 2026-01-31 09:02:37.538 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:37 np0005603621 nova_compute[247399]: 2026-01-31 09:02:37.538 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:37 np0005603621 nova_compute[247399]: 2026-01-31 09:02:37.539 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:02:37 np0005603621 nova_compute[247399]: 2026-01-31 09:02:37.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:38 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:02:38.284 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:02:38
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', 'images', '.mgr', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Jan 31 04:02:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 439 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 31 04:02:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:39.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:02:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:02:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:39.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 438 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Jan 31 04:02:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:41.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:41.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:41 np0005603621 nova_compute[247399]: 2026-01-31 09:02:41.934 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:42 np0005603621 nova_compute[247399]: 2026-01-31 09:02:42.967 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 452 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 160 op/s
Jan 31 04:02:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:43.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 466 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.5 MiB/s wr, 176 op/s
Jan 31 04:02:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:45.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:45.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:46 np0005603621 nova_compute[247399]: 2026-01-31 09:02:46.936 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 467 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.7 MiB/s wr, 141 op/s
Jan 31 04:02:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:47.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:47 np0005603621 nova_compute[247399]: 2026-01-31 09:02:47.969 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.157 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.157 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.158 247403 INFO nova.compute.manager [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Unshelving#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.408 247403 INFO nova.virt.block_device [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Booting with volume 766e40b4-9f67-4558-bd4a-c5d2d46d1ef1 at /dev/vda#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.865 247403 DEBUG os_brick.utils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.866 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.875 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.875 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[fef14c02-28c1-46fd-90bc-3a63a52d32ee]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.876 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.882 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.882 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc2c35f-9b57-4476-8489-5e371694a432]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.884 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.889 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.890 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[fcba917b-94be-4775-91ee-272bcfa4ff7e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.891 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[e24b5ba7-54d9-4d4a-9125-dc5a9c4b146b]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.892 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.912 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.914 247403 DEBUG os_brick.initiator.connectors.lightos [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.915 247403 DEBUG os_brick.initiator.connectors.lightos [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.915 247403 DEBUG os_brick.initiator.connectors.lightos [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.915 247403 DEBUG os_brick.utils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] <== get_connector_properties: return (49ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 04:02:48 np0005603621 nova_compute[247399]: 2026-01-31 09:02:48.915 247403 DEBUG nova.virt.block_device [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updating existing volume attachment record: b03c76af-04e1-4d0c-8319-5cca406eaf79 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 472 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 158 op/s
Jan 31 04:02:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:49.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005340286501954216 of space, bias 1.0, pg target 1.6020859505862648 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005319203075563 of space, bias 1.0, pg target 1.5904417195933371 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:02:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 04:02:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:02:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:02:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:50 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 472 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Jan 31 04:02:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:51.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:02:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:51.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 708930e9-6cdc-4854-9d7b-5074baf87339 does not exist
Jan 31 04:02:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 07742b0e-5022-45e4-aadb-05b6287c7c80 does not exist
Jan 31 04:02:51 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 796ef34b-edaa-42ad-940b-c77004617ce2 does not exist
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.701658031 +0000 UTC m=+0.035447210 container create 7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 04:02:51 np0005603621 systemd[1]: Started libpod-conmon-7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903.scope.
Jan 31 04:02:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.76590967 +0000 UTC m=+0.099698869 container init 7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hermann, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.77288956 +0000 UTC m=+0.106678739 container start 7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.775662298 +0000 UTC m=+0.109451477 container attach 7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:02:51 np0005603621 amazing_hermann[390791]: 167 167
Jan 31 04:02:51 np0005603621 systemd[1]: libpod-7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903.scope: Deactivated successfully.
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:51 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.778707723 +0000 UTC m=+0.112496902 container died 7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.684401026 +0000 UTC m=+0.018190225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:02:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9eb74e20b0cbe1a0af7c2b413063ff7ec9e55881382af477de19d8d5f17ed44b-merged.mount: Deactivated successfully.
Jan 31 04:02:51 np0005603621 podman[390774]: 2026-01-31 09:02:51.812036187 +0000 UTC m=+0.145825366 container remove 7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_hermann, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:02:51 np0005603621 systemd[1]: libpod-conmon-7418d0a5a551119f7a6f12ce4ee0ed520d8c181562d51baa75acb200c898f903.scope: Deactivated successfully.
Jan 31 04:02:51 np0005603621 podman[390814]: 2026-01-31 09:02:51.930966661 +0000 UTC m=+0.034651585 container create 67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:02:51 np0005603621 nova_compute[247399]: 2026-01-31 09:02:51.938 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:51 np0005603621 systemd[1]: Started libpod-conmon-67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9.scope.
Jan 31 04:02:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301f173fd4b99efd2ea6861297edb150336773531d652c4b7fa70081c442949c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301f173fd4b99efd2ea6861297edb150336773531d652c4b7fa70081c442949c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301f173fd4b99efd2ea6861297edb150336773531d652c4b7fa70081c442949c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301f173fd4b99efd2ea6861297edb150336773531d652c4b7fa70081c442949c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/301f173fd4b99efd2ea6861297edb150336773531d652c4b7fa70081c442949c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:51 np0005603621 podman[390814]: 2026-01-31 09:02:51.998575496 +0000 UTC m=+0.102260440 container init 67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:02:52 np0005603621 podman[390814]: 2026-01-31 09:02:52.003710338 +0000 UTC m=+0.107395262 container start 67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:02:52 np0005603621 podman[390814]: 2026-01-31 09:02:52.006404013 +0000 UTC m=+0.110088937 container attach 67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:02:52 np0005603621 podman[390814]: 2026-01-31 09:02:51.916785183 +0000 UTC m=+0.020470117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.279 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.280 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.301 247403 DEBUG nova.objects.instance [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.416 247403 DEBUG nova.objects.instance [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.514 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.514 247403 INFO nova.compute.claims [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:02:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:52 np0005603621 objective_tharp[390830]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:02:52 np0005603621 objective_tharp[390830]: --> relative data size: 1.0
Jan 31 04:02:52 np0005603621 objective_tharp[390830]: --> All data devices are unavailable
Jan 31 04:02:52 np0005603621 systemd[1]: libpod-67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9.scope: Deactivated successfully.
Jan 31 04:02:52 np0005603621 conmon[390830]: conmon 67873184e1e5be4f5a8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9.scope/container/memory.events
Jan 31 04:02:52 np0005603621 podman[390814]: 2026-01-31 09:02:52.774927979 +0000 UTC m=+0.878612903 container died 67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:02:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-301f173fd4b99efd2ea6861297edb150336773531d652c4b7fa70081c442949c-merged.mount: Deactivated successfully.
Jan 31 04:02:52 np0005603621 podman[390814]: 2026-01-31 09:02:52.826024061 +0000 UTC m=+0.929708985 container remove 67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:02:52 np0005603621 systemd[1]: libpod-conmon-67873184e1e5be4f5a8c1f9b5872761d1f4ffc8e7aca24cb72d5826f433135c9.scope: Deactivated successfully.
Jan 31 04:02:52 np0005603621 nova_compute[247399]: 2026-01-31 09:02:52.971 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 472 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 233 op/s
Jan 31 04:02:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:53.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:53 np0005603621 nova_compute[247399]: 2026-01-31 09:02:53.226 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.318995016 +0000 UTC m=+0.029500602 container create fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:02:53 np0005603621 systemd[1]: Started libpod-conmon-fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b.scope.
Jan 31 04:02:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.374152628 +0000 UTC m=+0.084658234 container init fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.378996761 +0000 UTC m=+0.089502347 container start fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:02:53 np0005603621 festive_spence[391033]: 167 167
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.3821376 +0000 UTC m=+0.092643216 container attach fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:02:53 np0005603621 systemd[1]: libpod-fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b.scope: Deactivated successfully.
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.383387009 +0000 UTC m=+0.093892585 container died fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 04:02:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2256753c5d1f961c0e5ca4167743843b59a33f7832131b5a96852f478276e825-merged.mount: Deactivated successfully.
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.306527643 +0000 UTC m=+0.017033239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:02:53 np0005603621 podman[390998]: 2026-01-31 09:02:53.41537619 +0000 UTC m=+0.125881796 container remove fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:02:53 np0005603621 systemd[1]: libpod-conmon-fcaa04164a815e91a3f336d34b3c24cf90cbe9e69bd4c72777591c8daa25f70b.scope: Deactivated successfully.
Jan 31 04:02:53 np0005603621 podman[391057]: 2026-01-31 09:02:53.580546225 +0000 UTC m=+0.064514108 container create 5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:02:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:02:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2754456779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:02:53 np0005603621 systemd[1]: Started libpod-conmon-5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e.scope.
Jan 31 04:02:53 np0005603621 nova_compute[247399]: 2026-01-31 09:02:53.643 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:02:53 np0005603621 nova_compute[247399]: 2026-01-31 09:02:53.649 247403 DEBUG nova.compute.provider_tree [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:02:53 np0005603621 podman[391057]: 2026-01-31 09:02:53.556569937 +0000 UTC m=+0.040537800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:02:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2eaaf63256ef471cfa9671c665ba95215213d65db1371fe9caaba5a39fae96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2eaaf63256ef471cfa9671c665ba95215213d65db1371fe9caaba5a39fae96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2eaaf63256ef471cfa9671c665ba95215213d65db1371fe9caaba5a39fae96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af2eaaf63256ef471cfa9671c665ba95215213d65db1371fe9caaba5a39fae96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:53 np0005603621 nova_compute[247399]: 2026-01-31 09:02:53.690 247403 DEBUG nova.scheduler.client.report [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:02:53 np0005603621 podman[391057]: 2026-01-31 09:02:53.691055523 +0000 UTC m=+0.175023386 container init 5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 04:02:53 np0005603621 podman[391057]: 2026-01-31 09:02:53.699315024 +0000 UTC m=+0.183282887 container start 5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:02:53 np0005603621 podman[391057]: 2026-01-31 09:02:53.702689531 +0000 UTC m=+0.186657404 container attach 5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:02:53 np0005603621 nova_compute[247399]: 2026-01-31 09:02:53.732 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:02:54 np0005603621 nova_compute[247399]: 2026-01-31 09:02:54.133 247403 INFO nova.network.neutron [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updating port acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]: {
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:    "0": [
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:        {
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "devices": [
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "/dev/loop3"
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            ],
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "lv_name": "ceph_lv0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "lv_size": "7511998464",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "name": "ceph_lv0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "tags": {
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.cluster_name": "ceph",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.crush_device_class": "",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.encrypted": "0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.osd_id": "0",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.type": "block",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:                "ceph.vdo": "0"
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            },
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "type": "block",
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:            "vg_name": "ceph_vg0"
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:        }
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]:    ]
Jan 31 04:02:54 np0005603621 zealous_brattain[391075]: }
Jan 31 04:02:54 np0005603621 systemd[1]: libpod-5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e.scope: Deactivated successfully.
Jan 31 04:02:54 np0005603621 podman[391057]: 2026-01-31 09:02:54.40769578 +0000 UTC m=+0.891663653 container died 5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:02:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-af2eaaf63256ef471cfa9671c665ba95215213d65db1371fe9caaba5a39fae96-merged.mount: Deactivated successfully.
Jan 31 04:02:54 np0005603621 podman[391057]: 2026-01-31 09:02:54.44948315 +0000 UTC m=+0.933450993 container remove 5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_brattain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:02:54 np0005603621 systemd[1]: libpod-conmon-5900ed32c887c7c2f5cecfda4ff81636c97c36669ebc598aee478000f685948e.scope: Deactivated successfully.
Jan 31 04:02:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:02:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3264442629' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:02:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:02:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3264442629' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:02:54 np0005603621 podman[391236]: 2026-01-31 09:02:54.913003935 +0000 UTC m=+0.032718303 container create 5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:02:54 np0005603621 systemd[1]: Started libpod-conmon-5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4.scope.
Jan 31 04:02:54 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:54 np0005603621 podman[391236]: 2026-01-31 09:02:54.976349195 +0000 UTC m=+0.096063593 container init 5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:02:54 np0005603621 podman[391236]: 2026-01-31 09:02:54.982118787 +0000 UTC m=+0.101833195 container start 5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:02:54 np0005603621 mystifying_borg[391252]: 167 167
Jan 31 04:02:54 np0005603621 systemd[1]: libpod-5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4.scope: Deactivated successfully.
Jan 31 04:02:54 np0005603621 podman[391236]: 2026-01-31 09:02:54.985552076 +0000 UTC m=+0.105266454 container attach 5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:02:54 np0005603621 podman[391236]: 2026-01-31 09:02:54.985817554 +0000 UTC m=+0.105531942 container died 5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 04:02:54 np0005603621 podman[391236]: 2026-01-31 09:02:54.897485575 +0000 UTC m=+0.017199963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:02:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b6b814dd1d34b42c8c815c5922cf7f5f84ba1717a08fcd3aa4dd934919e5cefc-merged.mount: Deactivated successfully.
Jan 31 04:02:55 np0005603621 podman[391236]: 2026-01-31 09:02:55.016961518 +0000 UTC m=+0.136675886 container remove 5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:02:55 np0005603621 systemd[1]: libpod-conmon-5bd248ea6bc04f441d3d0a872b4d622d7ea5315ea181a7e40637e8ed8bff6ff4.scope: Deactivated successfully.
Jan 31 04:02:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 471 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 781 KiB/s wr, 191 op/s
Jan 31 04:02:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:55.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:55 np0005603621 podman[391274]: 2026-01-31 09:02:55.144533965 +0000 UTC m=+0.034422437 container create 93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:02:55 np0005603621 systemd[1]: Started libpod-conmon-93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9.scope.
Jan 31 04:02:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:02:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14470a73f23094970ec63d036375429c68a615c3932836a749995b8e385f5bdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14470a73f23094970ec63d036375429c68a615c3932836a749995b8e385f5bdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14470a73f23094970ec63d036375429c68a615c3932836a749995b8e385f5bdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14470a73f23094970ec63d036375429c68a615c3932836a749995b8e385f5bdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:02:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:55.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:55 np0005603621 podman[391274]: 2026-01-31 09:02:55.212485141 +0000 UTC m=+0.102373633 container init 93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:02:55 np0005603621 podman[391274]: 2026-01-31 09:02:55.217431007 +0000 UTC m=+0.107319469 container start 93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:02:55 np0005603621 podman[391274]: 2026-01-31 09:02:55.221217487 +0000 UTC m=+0.111105969 container attach 93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:02:55 np0005603621 podman[391274]: 2026-01-31 09:02:55.128983015 +0000 UTC m=+0.018871497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]: {
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:        "osd_id": 0,
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:        "type": "bluestore"
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]:    }
Jan 31 04:02:55 np0005603621 sleepy_blackburn[391290]: }
Jan 31 04:02:55 np0005603621 systemd[1]: libpod-93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9.scope: Deactivated successfully.
Jan 31 04:02:55 np0005603621 podman[391274]: 2026-01-31 09:02:55.999319365 +0000 UTC m=+0.889207877 container died 93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:02:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-14470a73f23094970ec63d036375429c68a615c3932836a749995b8e385f5bdd-merged.mount: Deactivated successfully.
Jan 31 04:02:56 np0005603621 podman[391274]: 2026-01-31 09:02:56.039816883 +0000 UTC m=+0.929705355 container remove 93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_blackburn, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:02:56 np0005603621 systemd[1]: libpod-conmon-93bb7d260baf6de27a0903888cf8c1666502cfca979d1e8fa5738634e3754ec9.scope: Deactivated successfully.
Jan 31 04:02:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:02:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:56 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:02:56 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e391923d-e003-4e2b-ba88-8290efd1edfd does not exist
Jan 31 04:02:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 91aba4e9-1cf1-4400-86ce-02e8b2f4f607 does not exist
Jan 31 04:02:56 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b1a4424f-e2e4-40ae-976d-8e382c15a470 does not exist
Jan 31 04:02:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:56 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:02:56 np0005603621 nova_compute[247399]: 2026-01-31 09:02:56.941 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 471 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 32 KiB/s wr, 135 op/s
Jan 31 04:02:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:57.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:02:57 np0005603621 nova_compute[247399]: 2026-01-31 09:02:57.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:02:58 np0005603621 podman[391376]: 2026-01-31 09:02:58.491386899 +0000 UTC m=+0.048467302 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:02:58 np0005603621 podman[391377]: 2026-01-31 09:02:58.520856719 +0000 UTC m=+0.077962993 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Jan 31 04:02:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 471 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 32 KiB/s wr, 123 op/s
Jan 31 04:02:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:02:59.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:02:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:02:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:02:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:02:59.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:00 np0005603621 nova_compute[247399]: 2026-01-31 09:03:00.736 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:00 np0005603621 nova_compute[247399]: 2026-01-31 09:03:00.737 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquired lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:00 np0005603621 nova_compute[247399]: 2026-01-31 09:03:00.737 247403 DEBUG nova.network.neutron [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:03:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 473 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 213 KiB/s wr, 111 op/s
Jan 31 04:03:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:01.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:01 np0005603621 nova_compute[247399]: 2026-01-31 09:03:01.425 247403 DEBUG nova.compute.manager [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-changed-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:01 np0005603621 nova_compute[247399]: 2026-01-31 09:03:01.425 247403 DEBUG nova.compute.manager [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Refreshing instance network info cache due to event network-changed-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:03:01 np0005603621 nova_compute[247399]: 2026-01-31 09:03:01.425 247403 DEBUG oslo_concurrency.lockutils [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Jan 31 04:03:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Jan 31 04:03:01 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Jan 31 04:03:01 np0005603621 nova_compute[247399]: 2026-01-31 09:03:01.943 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:02 np0005603621 nova_compute[247399]: 2026-01-31 09:03:02.976 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 502 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 2.3 MiB/s wr, 91 op/s
Jan 31 04:03:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:03.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:03.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.075 247403 DEBUG nova.network.neutron [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updating instance_info_cache with network_info: [{"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.113 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Releasing lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.115 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.116 247403 INFO nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Creating image(s)#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.116 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.116 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Ensure instance console log exists: /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.117 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.117 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.118 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.120 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Start _get_guest_xml network_info=[{"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-766e40b4-9f67-4558-bd4a-c5d2d46d1ef1', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '766e40b4-9f67-4558-bd4a-c5d2d46d1ef1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '2fbbeeee-ff60-4a39-9bea-e3d59301b0ad', 'attached_at': '', 'detached_at': '', 'volume_id': '766e40b4-9f67-4558-bd4a-c5d2d46d1ef1', 'serial': '766e40b4-9f67-4558-bd4a-c5d2d46d1ef1'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': True, 'attachment_id': 'b03c76af-04e1-4d0c-8319-5cca406eaf79', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.121 247403 DEBUG oslo_concurrency.lockutils [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.121 247403 DEBUG nova.network.neutron [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Refreshing network info cache for port acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.126 247403 WARNING nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.141 247403 DEBUG nova.virt.libvirt.host [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.142 247403 DEBUG nova.virt.libvirt.host [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.153 247403 DEBUG nova.virt.libvirt.host [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.154 247403 DEBUG nova.virt.libvirt.host [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.155 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.155 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.155 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.155 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.156 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.156 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.156 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.156 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.157 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.157 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.157 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.158 247403 DEBUG nova.virt.hardware [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.158 247403 DEBUG nova.objects.instance [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.208 247403 DEBUG nova.storage.rbd_utils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.212 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:03:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783262794' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.635 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.697 247403 DEBUG nova.virt.libvirt.vif [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-700374568',display_name='tempest-TestShelveInstance-server-700374568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-700374568',id=195,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1586472022',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:02:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1f293713f6854265a89a1a4a002088d5',ramdisk_id='',reservation_id='r-kzm36xl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1813478377',owner_user_name='tempest-TestShelveInstance-1813478377-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:02:48Z,user_data=None,user_id='3859f52c5b70471097d1e4ffa75ecc0e',uuid=2fbbeeee-ff60-4a39-9bea-e3d59301b0ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.698 247403 DEBUG nova.network.os_vif_util [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converting VIF {"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.700 247403 DEBUG nova.network.os_vif_util [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.702 247403 DEBUG nova.objects.instance [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.725 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <uuid>2fbbeeee-ff60-4a39-9bea-e3d59301b0ad</uuid>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <name>instance-000000c3</name>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestShelveInstance-server-700374568</nova:name>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:03:04</nova:creationTime>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:user uuid="3859f52c5b70471097d1e4ffa75ecc0e">tempest-TestShelveInstance-1813478377-project-member</nova:user>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:project uuid="1f293713f6854265a89a1a4a002088d5">tempest-TestShelveInstance-1813478377</nova:project>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <nova:port uuid="acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <entry name="serial">2fbbeeee-ff60-4a39-9bea-e3d59301b0ad</entry>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <entry name="uuid">2fbbeeee-ff60-4a39-9bea-e3d59301b0ad</entry>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_disk.config">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-766e40b4-9f67-4558-bd4a-c5d2d46d1ef1">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <serial>766e40b4-9f67-4558-bd4a-c5d2d46d1ef1</serial>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:ad:a6:e8"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <target dev="tapacfc2f5c-0e"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/console.log" append="off"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <input type="keyboard" bus="usb"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:03:04 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:03:04 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:03:04 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:03:04 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.726 247403 DEBUG nova.compute.manager [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Preparing to wait for external event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.727 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.727 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.728 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.729 247403 DEBUG nova.virt.libvirt.vif [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-700374568',display_name='tempest-TestShelveInstance-server-700374568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-700374568',id=195,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1586472022',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:02:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1f293713f6854265a89a1a4a002088d5',ramdisk_id='',reservation_id='r-kzm36xl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1813478377',owner_user_name='tempest-TestShelveInstance-1813478377-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:02:48Z,user_data=None,user_id='3859f52c5b70471097d1e4ffa75ecc0e',uuid=2fbbeeee-ff60-4a39-9bea-e3d59301b0ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.729 247403 DEBUG nova.network.os_vif_util [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converting VIF {"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.730 247403 DEBUG nova.network.os_vif_util [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.731 247403 DEBUG os_vif [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.733 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.734 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.734 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.744 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.745 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacfc2f5c-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.746 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacfc2f5c-0e, col_values=(('external_ids', {'iface-id': 'acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ad:a6:e8', 'vm-uuid': '2fbbeeee-ff60-4a39-9bea-e3d59301b0ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:03:04 np0005603621 NetworkManager[49013]: <info>  [1769850184.7499] manager: (tapacfc2f5c-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.754 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.755 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.756 247403 INFO os_vif [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e')
Jan 31 04:03:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:03:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200130729' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.849 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.849 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.849 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] No VIF found with MAC fa:16:3e:ad:a6:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.850 247403 INFO nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Using config drive
Jan 31 04:03:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:03:04 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200130729' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.876 247403 DEBUG nova.storage.rbd_utils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.940 247403 DEBUG nova.objects.instance [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:03:04 np0005603621 nova_compute[247399]: 2026-01-31 09:03:04.998 247403 DEBUG nova.objects.instance [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'keypairs' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:03:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 504 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 338 KiB/s rd, 2.6 MiB/s wr, 104 op/s
Jan 31 04:03:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:05.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:06 np0005603621 nova_compute[247399]: 2026-01-31 09:03:06.820 247403 INFO nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Creating config drive at /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/disk.config#033[00m
Jan 31 04:03:06 np0005603621 nova_compute[247399]: 2026-01-31 09:03:06.824 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6jfefz7m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:06 np0005603621 nova_compute[247399]: 2026-01-31 09:03:06.946 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:06 np0005603621 nova_compute[247399]: 2026-01-31 09:03:06.954 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp6jfefz7m" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:06 np0005603621 nova_compute[247399]: 2026-01-31 09:03:06.989 247403 DEBUG nova.storage.rbd_utils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] rbd image 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:03:06 np0005603621 nova_compute[247399]: 2026-01-31 09:03:06.993 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/disk.config 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 495 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 2.6 MiB/s wr, 103 op/s
Jan 31 04:03:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.256 247403 DEBUG oslo_concurrency.processutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/disk.config 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.257 247403 INFO nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Deleting local config drive /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad/disk.config because it was imported into RBD.#033[00m
Jan 31 04:03:07 np0005603621 kernel: tapacfc2f5c-0e: entered promiscuous mode
Jan 31 04:03:07 np0005603621 NetworkManager[49013]: <info>  [1769850187.2983] manager: (tapacfc2f5c-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Jan 31 04:03:07 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:07Z|00835|binding|INFO|Claiming lport acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 for this chassis.
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:07Z|00836|binding|INFO|acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1: Claiming fa:16:3e:ad:a6:e8 10.100.0.3
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.305 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:07Z|00837|binding|INFO|Setting lport acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 ovn-installed in OVS
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.308 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.315 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:a6:e8 10.100.0.3'], port_security=['fa:16:3e:ad:a6:e8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2fbbeeee-ff60-4a39-9bea-e3d59301b0ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f293713f6854265a89a1a4a002088d5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '400baeb3-ed1b-4018-bb0b-3e4e58b8921d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05ac80f4-66e3-4e8c-b69d-f2f58ada92e8, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:07 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:07Z|00838|binding|INFO|Setting lport acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 up in Southbound
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.317 159734 INFO neutron.agent.ovn.metadata.agent [-] Port acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 in datapath 1c62fa1c-f7d2-4937-9258-1d3a4456b207 bound to our chassis#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.321 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1c62fa1c-f7d2-4937-9258-1d3a4456b207#033[00m
Jan 31 04:03:07 np0005603621 systemd-machined[212769]: New machine qemu-97-instance-000000c3.
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.329 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ad0a92-bc7e-485a-9d32-6cee4188486f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.330 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1c62fa1c-f1 in ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.332 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1c62fa1c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.332 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bc959f57-9b11-4c17-b904-e8b603e8bba4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.333 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9edf97fe-ecb3-4d8b-89f7-654a9ebbc177]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.341 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfe7794-5f49-492f-ae7c-c81da456d532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.351 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7c10671a-383a-4fc1-85d7-14acaa189eed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 systemd[1]: Started Virtual Machine qemu-97-instance-000000c3.
Jan 31 04:03:07 np0005603621 systemd-udevd[391591]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:03:07 np0005603621 NetworkManager[49013]: <info>  [1769850187.3700] device (tapacfc2f5c-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:03:07 np0005603621 NetworkManager[49013]: <info>  [1769850187.3706] device (tapacfc2f5c-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.376 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7afdd659-abeb-4022-b161-56f2a0e0aced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.381 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8bb150-fd97-4819-854c-6c191c0d3214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 systemd-udevd[391594]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:03:07 np0005603621 NetworkManager[49013]: <info>  [1769850187.3828] manager: (tap1c62fa1c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.407 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ea40e8bb-d202-4ea6-adea-554e68f8594d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.410 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[e74d7b4e-3b7d-4dd4-83a0-6f13b63ed4df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.420 247403 DEBUG nova.network.neutron [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updated VIF entry in instance network info cache for port acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.421 247403 DEBUG nova.network.neutron [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updating instance_info_cache with network_info: [{"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:07 np0005603621 NetworkManager[49013]: <info>  [1769850187.4315] device (tap1c62fa1c-f0): carrier: link connected
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.435 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1d6d78-6f71-48da-b1cd-0bb5da67ccdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.447 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a454f695-4c44-4f30-a1fc-89f2bd1e7868]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c62fa1c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:15:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949294, 'reachable_time': 17619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391620, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.455 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[da0b89ce-434c-473f-9125-2d07c6792680]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe52:1552'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 949294, 'tstamp': 949294}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391621, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.468 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a360711a-5bcc-4988-87a8-0f7b7c7f86a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1c62fa1c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:52:15:52'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 253], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949294, 'reachable_time': 17619, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391622, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.488 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[65128d0b-7c83-438c-8cc7-8370ce94a142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.489 247403 DEBUG oslo_concurrency.lockutils [req-a7d2e867-8958-4e1d-bfa4-975cea3eb3a0 req-c3f41c63-cf49-4d57-b786-4c6330bcbb98 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.526 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6eac636d-8896-4102-ae2f-22e153dd1bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.527 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c62fa1c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.527 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.528 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c62fa1c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 NetworkManager[49013]: <info>  [1769850187.5303] manager: (tap1c62fa1c-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 31 04:03:07 np0005603621 kernel: tap1c62fa1c-f0: entered promiscuous mode
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.531 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.536 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1c62fa1c-f0, col_values=(('external_ids', {'iface-id': '46e41546-aa3b-4838-b2c2-ba3b46cf445c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.538 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:07Z|00839|binding|INFO|Releasing lport 46e41546-aa3b-4838-b2c2-ba3b46cf445c from this chassis (sb_readonly=0)
Jan 31 04:03:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.543 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.545 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1c62fa1c-f7d2-4937-9258-1d3a4456b207.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1c62fa1c-f7d2-4937-9258-1d3a4456b207.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.546 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e2639c-436d-4f0a-961c-19936cbe8f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.547 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-1c62fa1c-f7d2-4937-9258-1d3a4456b207
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/1c62fa1c-f7d2-4937-9258-1d3a4456b207.pid.haproxy
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 1c62fa1c-f7d2-4937-9258-1d3a4456b207
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 31 04:03:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Jan 31 04:03:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:07.551 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'env', 'PROCESS_TAG=haproxy-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1c62fa1c-f7d2-4937-9258-1d3a4456b207.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.749 247403 DEBUG nova.compute.manager [req-5bc76a5c-0a39-4a39-8c7f-cb425921ea15 req-d59e4737-bc0e-478e-bada-c8711417c635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.750 247403 DEBUG oslo_concurrency.lockutils [req-5bc76a5c-0a39-4a39-8c7f-cb425921ea15 req-d59e4737-bc0e-478e-bada-c8711417c635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.750 247403 DEBUG oslo_concurrency.lockutils [req-5bc76a5c-0a39-4a39-8c7f-cb425921ea15 req-d59e4737-bc0e-478e-bada-c8711417c635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.750 247403 DEBUG oslo_concurrency.lockutils [req-5bc76a5c-0a39-4a39-8c7f-cb425921ea15 req-d59e4737-bc0e-478e-bada-c8711417c635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:03:07 np0005603621 nova_compute[247399]: 2026-01-31 09:03:07.751 247403 DEBUG nova.compute.manager [req-5bc76a5c-0a39-4a39-8c7f-cb425921ea15 req-d59e4737-bc0e-478e-bada-c8711417c635 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Processing event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 31 04:03:07 np0005603621 podman[391654]: 2026-01-31 09:03:07.856010056 +0000 UTC m=+0.042650908 container create 2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Jan 31 04:03:07 np0005603621 systemd[1]: Started libpod-conmon-2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7.scope.
Jan 31 04:03:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:03:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf3c813b51a41045a8d90b70d78d747733a8cf3f6631f78e1027fb9d0873afe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:07 np0005603621 podman[391654]: 2026-01-31 09:03:07.909707071 +0000 UTC m=+0.096347933 container init 2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:03:07 np0005603621 podman[391654]: 2026-01-31 09:03:07.91505063 +0000 UTC m=+0.101691482 container start 2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:03:07 np0005603621 podman[391654]: 2026-01-31 09:03:07.835603471 +0000 UTC m=+0.022244343 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:03:07 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [NOTICE]   (391674) : New worker (391676) forked
Jan 31 04:03:07 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [NOTICE]   (391674) : Loading success.
Jan 31 04:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:03:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:03:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 487 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.9 MiB/s wr, 120 op/s
Jan 31 04:03:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:09.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.332 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850189.3321328, 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.333 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] VM Started (Lifecycle Event)
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.335 247403 DEBUG nova.compute.manager [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.338 247403 DEBUG nova.virt.libvirt.driver [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.341 247403 INFO nova.virt.libvirt.driver [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Instance spawned successfully.
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.342 247403 DEBUG nova.compute.manager [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.419 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.424 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.479 247403 DEBUG oslo_concurrency.lockutils [None req-b2f47ee4-efbf-4cc5-ab50-3e4b9c38f136 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 21.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.483 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850189.332353, 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.484 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] VM Paused (Lifecycle Event)
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.529 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.534 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850189.3382316, 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.534 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] VM Resumed (Lifecycle Event)
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.578 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.582 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 31 04:03:09 np0005603621 nova_compute[247399]: 2026-01-31 09:03:09.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:10 np0005603621 nova_compute[247399]: 2026-01-31 09:03:10.202 247403 DEBUG nova.compute.manager [req-1210ba2b-7dea-4d13-a9f2-b2415a7e86ed req-8e9c4c82-08b8-4ba5-8c7d-30e2cae6f57b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:03:10 np0005603621 nova_compute[247399]: 2026-01-31 09:03:10.203 247403 DEBUG oslo_concurrency.lockutils [req-1210ba2b-7dea-4d13-a9f2-b2415a7e86ed req-8e9c4c82-08b8-4ba5-8c7d-30e2cae6f57b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:03:10 np0005603621 nova_compute[247399]: 2026-01-31 09:03:10.204 247403 DEBUG oslo_concurrency.lockutils [req-1210ba2b-7dea-4d13-a9f2-b2415a7e86ed req-8e9c4c82-08b8-4ba5-8c7d-30e2cae6f57b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:03:10 np0005603621 nova_compute[247399]: 2026-01-31 09:03:10.205 247403 DEBUG oslo_concurrency.lockutils [req-1210ba2b-7dea-4d13-a9f2-b2415a7e86ed req-8e9c4c82-08b8-4ba5-8c7d-30e2cae6f57b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:03:10 np0005603621 nova_compute[247399]: 2026-01-31 09:03:10.205 247403 DEBUG nova.compute.manager [req-1210ba2b-7dea-4d13-a9f2-b2415a7e86ed req-8e9c4c82-08b8-4ba5-8c7d-30e2cae6f57b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] No waiting events found dispatching network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 31 04:03:10 np0005603621 nova_compute[247399]: 2026-01-31 09:03:10.206 247403 WARNING nova.compute.manager [req-1210ba2b-7dea-4d13-a9f2-b2415a7e86ed req-8e9c4c82-08b8-4ba5-8c7d-30e2cae6f57b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received unexpected event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 for instance with vm_state active and task_state None.
Jan 31 04:03:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 336 KiB/s rd, 2.5 MiB/s wr, 106 op/s
Jan 31 04:03:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:11.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:11 np0005603621 nova_compute[247399]: 2026-01-31 09:03:11.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:03:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:11.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:11 np0005603621 nova_compute[247399]: 2026-01-31 09:03:11.978 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:12 np0005603621 nova_compute[247399]: 2026-01-31 09:03:12.028 247403 DEBUG nova.compute.manager [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-changed-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:03:12 np0005603621 nova_compute[247399]: 2026-01-31 09:03:12.029 247403 DEBUG nova.compute.manager [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing instance network info cache due to event network-changed-4c3d4391-c276-4043-93a4-6eacb291ef17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 04:03:12 np0005603621 nova_compute[247399]: 2026-01-31 09:03:12.029 247403 DEBUG oslo_concurrency.lockutils [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:03:12 np0005603621 nova_compute[247399]: 2026-01-31 09:03:12.029 247403 DEBUG oslo_concurrency.lockutils [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:03:12 np0005603621 nova_compute[247399]: 2026-01-31 09:03:12.029 247403 DEBUG nova.network.neutron [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing network info cache for port 4c3d4391-c276-4043-93a4-6eacb291ef17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.588703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850192588822, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 705, "num_deletes": 253, "total_data_size": 913749, "memory_usage": 926016, "flush_reason": "Manual Compaction"}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850192597720, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 643817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75612, "largest_seqno": 76316, "table_properties": {"data_size": 640521, "index_size": 1139, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9075, "raw_average_key_size": 21, "raw_value_size": 633443, "raw_average_value_size": 1480, "num_data_blocks": 49, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850147, "oldest_key_time": 1769850147, "file_creation_time": 1769850192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 9049 microseconds, and 2360 cpu microseconds.
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.597803) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 643817 bytes OK
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.597827) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.603760) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.603779) EVENT_LOG_v1 {"time_micros": 1769850192603773, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.603800) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 910085, prev total WAL file size 910800, number of live WAL files 2.
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.604155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373536' seq:72057594037927935, type:22 .. '6D6772737461740033303039' seq:0, type:0; will stop at (end)
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(628KB)], [173(13MB)]
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850192604179, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14341931, "oldest_snapshot_seqno": -1}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 10142 keys, 10722398 bytes, temperature: kUnknown
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850192732369, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10722398, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10660305, "index_size": 35612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 268239, "raw_average_key_size": 26, "raw_value_size": 10486646, "raw_average_value_size": 1033, "num_data_blocks": 1343, "num_entries": 10142, "num_filter_entries": 10142, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850192, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.732638) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10722398 bytes
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.735112) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 83.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 13.1 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(38.9) write-amplify(16.7) OK, records in: 10650, records dropped: 508 output_compression: NoCompression
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.735130) EVENT_LOG_v1 {"time_micros": 1769850192735121, "job": 108, "event": "compaction_finished", "compaction_time_micros": 128268, "compaction_time_cpu_micros": 21360, "output_level": 6, "num_output_files": 1, "total_output_size": 10722398, "num_input_records": 10650, "num_output_records": 10142, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850192735307, "job": 108, "event": "table_file_deletion", "file_number": 175}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850192736594, "job": 108, "event": "table_file_deletion", "file_number": 173}
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.604120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.736637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.736641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.736643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.736644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:03:12 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:03:12.736646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:03:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 255 KiB/s wr, 103 op/s
Jan 31 04:03:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:13.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:13.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:14 np0005603621 nova_compute[247399]: 2026-01-31 09:03:14.799 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:03:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2837958438' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:03:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:03:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2837958438' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:03:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 37 KiB/s wr, 117 op/s
Jan 31 04:03:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:15.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:15.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:16 np0005603621 nova_compute[247399]: 2026-01-31 09:03:16.980 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 36 KiB/s wr, 103 op/s
Jan 31 04:03:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:17.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:17.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:18 np0005603621 nova_compute[247399]: 2026-01-31 09:03:18.191 247403 DEBUG nova.network.neutron [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated VIF entry in instance network info cache for port 4c3d4391-c276-4043-93a4-6eacb291ef17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:03:18 np0005603621 nova_compute[247399]: 2026-01-31 09:03:18.192 247403 DEBUG nova.network.neutron [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:18 np0005603621 nova_compute[247399]: 2026-01-31 09:03:18.233 247403 DEBUG oslo_concurrency.lockutils [req-a102c908-0d26-4ae8-9c6f-3cd3a70a0961 req-fc9d3a55-0546-4e00-9b80-48c72497aa5e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 415 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 34 KiB/s wr, 93 op/s
Jan 31 04:03:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:19.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:19 np0005603621 nova_compute[247399]: 2026-01-31 09:03:19.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:19.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:19 np0005603621 nova_compute[247399]: 2026-01-31 09:03:19.853 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:20 np0005603621 nova_compute[247399]: 2026-01-31 09:03:20.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 392 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 32 KiB/s wr, 107 op/s
Jan 31 04:03:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:21.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:21.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:21 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:21Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ad:a6:e8 10.100.0.3
Jan 31 04:03:21 np0005603621 nova_compute[247399]: 2026-01-31 09:03:21.982 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.233 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.233 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:03:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595222812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:03:22 np0005603621 nova_compute[247399]: 2026-01-31 09:03:22.638 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.025 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.026 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.029 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.029 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000c3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:03:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 379 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 13 KiB/s wr, 138 op/s
Jan 31 04:03:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:23.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.180 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.181 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3726MB free_disk=20.890331268310547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.181 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.181 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:23.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.303 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.303 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.303 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.303 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.413 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:03:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352233840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.807 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.812 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.874 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.922 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:03:23 np0005603621 nova_compute[247399]: 2026-01-31 09:03:23.922 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:24 np0005603621 nova_compute[247399]: 2026-01-31 09:03:24.856 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:24 np0005603621 nova_compute[247399]: 2026-01-31 09:03:24.922 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:24 np0005603621 nova_compute[247399]: 2026-01-31 09:03:24.923 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:03:24 np0005603621 nova_compute[247399]: 2026-01-31 09:03:24.923 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.005 247403 DEBUG oslo_concurrency.lockutils [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "interface-0469d90c-1c5c-40d4-ac77-94e6496bc9ae-4c3d4391-c276-4043-93a4-6eacb291ef17" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.006 247403 DEBUG oslo_concurrency.lockutils [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-0469d90c-1c5c-40d4-ac77-94e6496bc9ae-4c3d4391-c276-4043-93a4-6eacb291ef17" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.038 247403 DEBUG nova.objects.instance [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'flavor' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 379 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 24 KiB/s wr, 109 op/s
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.095 247403 DEBUG nova.virt.libvirt.vif [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.096 247403 DEBUG nova.network.os_vif_util [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.096 247403 DEBUG nova.network.os_vif_util [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.098 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.101 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.103 247403 DEBUG nova.virt.libvirt.driver [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Attempting to detach device tap4c3d4391-c2 from instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.104 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] detach device xml: <interface type="ethernet">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:4d:81:16"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <target dev="tap4c3d4391-c2"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </interface>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 04:03:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:25.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.165 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.168 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface>not found in domain: <domain type='kvm' id='96'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <name>instance-000000c1</name>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <uuid>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</uuid>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:02:03</nova:creationTime>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:port uuid="4c3d4391-c276-4043-93a4-6eacb291ef17">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <resource>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </resource>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='serial'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='uuid'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk' index='2'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config' index='1'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:d1:e9:05'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='tap4f52d762-81'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:4d:81:16'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='tap4c3d4391-c2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='net1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </target>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </console>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c69,c160</label>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c69,c160</imagelabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.169 247403 INFO nova.virt.libvirt.driver [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully detached device tap4c3d4391-c2 from instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae from the persistent domain config.
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.169 247403 DEBUG nova.virt.libvirt.driver [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] (1/8): Attempting to detach device tap4c3d4391-c2 with device alias net1 from instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.170 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] detach device xml: <interface type="ethernet">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <mac address="fa:16:3e:4d:81:16"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <model type="virtio"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <mtu size="1442"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <target dev="tap4c3d4391-c2"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </interface>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 31 04:03:25 np0005603621 kernel: tap4c3d4391-c2 (unregistering): left promiscuous mode
Jan 31 04:03:25 np0005603621 NetworkManager[49013]: <info>  [1769850205.2192] device (tap4c3d4391-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:03:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:25.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.258 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.259 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.259 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.266 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:25 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:25Z|00840|binding|INFO|Releasing lport 4c3d4391-c276-4043-93a4-6eacb291ef17 from this chassis (sb_readonly=0)
Jan 31 04:03:25 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:25Z|00841|binding|INFO|Setting lport 4c3d4391-c276-4043-93a4-6eacb291ef17 down in Southbound
Jan 31 04:03:25 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:25Z|00842|binding|INFO|Removing iface tap4c3d4391-c2 ovn-installed in OVS
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.267 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.268 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769850205.2678077, 0469d90c-1c5c-40d4-ac77-94e6496bc9ae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.269 247403 DEBUG nova.virt.libvirt.driver [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Start waiting for the detach event from libvirt for device tap4c3d4391-c2 with device alias net1 for instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.270 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.272 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.273 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface>not found in domain: <domain type='kvm' id='96'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <name>instance-000000c1</name>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <uuid>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</uuid>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:02:03</nova:creationTime>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:port uuid="4c3d4391-c276-4043-93a4-6eacb291ef17">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <resource>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </resource>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='serial'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='uuid'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk' index='2'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config' index='1'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:d1:e9:05'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target dev='tap4f52d762-81'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      </target>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </console>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c69,c160</label>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c69,c160</imagelabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.273 247403 INFO nova.virt.libvirt.driver [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully detached device tap4c3d4391-c2 from instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae from the live domain config.#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.274 247403 DEBUG nova.virt.libvirt.vif [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.274 247403 DEBUG nova.network.os_vif_util [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.275 247403 DEBUG nova.network.os_vif_util [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.275 247403 DEBUG os_vif [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.277 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3d4391-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.278 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.282 247403 INFO os_vif [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2')#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.282 247403 DEBUG nova.virt.libvirt.guest [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:03:25</nova:creationTime>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:03:25 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:25 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:03:25 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.293 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:81:16 10.100.0.27', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': '0469d90c-1c5c-40d4-ac77-94e6496bc9ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58d12028-6cf1-48b0-8622-9e4a18613610', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=69ee6f93-cadf-4e2f-a073-fd82c56c8449, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=4c3d4391-c276-4043-93a4-6eacb291ef17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.294 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 4c3d4391-c276-4043-93a4-6eacb291ef17 in datapath 58d12028-6cf1-48b0-8622-9e4a18613610 unbound from our chassis#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.296 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58d12028-6cf1-48b0-8622-9e4a18613610, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.297 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d9121d2c-b805-44da-b5e0-ffa2f60693b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.298 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610 namespace which is not needed anymore#033[00m
Jan 31 04:03:25 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [NOTICE]   (390266) : haproxy version is 2.8.14-c23fe91
Jan 31 04:03:25 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [NOTICE]   (390266) : path to executable is /usr/sbin/haproxy
Jan 31 04:03:25 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [WARNING]  (390266) : Exiting Master process...
Jan 31 04:03:25 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [ALERT]    (390266) : Current worker (390268) exited with code 143 (Terminated)
Jan 31 04:03:25 np0005603621 neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610[390262]: [WARNING]  (390266) : All workers exited. Exiting... (0)
Jan 31 04:03:25 np0005603621 systemd[1]: libpod-a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7.scope: Deactivated successfully.
Jan 31 04:03:25 np0005603621 podman[391855]: 2026-01-31 09:03:25.435433104 +0000 UTC m=+0.068242605 container died a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:03:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7-userdata-shm.mount: Deactivated successfully.
Jan 31 04:03:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8d9b87f461b7a5907a4534ffe03e4d4990bf16049bd5d8805928de16ca573808-merged.mount: Deactivated successfully.
Jan 31 04:03:25 np0005603621 podman[391855]: 2026-01-31 09:03:25.669299238 +0000 UTC m=+0.302108739 container cleanup a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:03:25 np0005603621 systemd[1]: libpod-conmon-a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7.scope: Deactivated successfully.
Jan 31 04:03:25 np0005603621 podman[391886]: 2026-01-31 09:03:25.847169534 +0000 UTC m=+0.163743011 container remove a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.851 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[44ec7b74-f590-4a5f-8e4b-8a6b7af463a0]: (4, ('Sat Jan 31 09:03:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610 (a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7)\na7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7\nSat Jan 31 09:03:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610 (a7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7)\na7b0d699ed7e5468922e54271dc4aa702a692c2abef9a0eb9614182546a0b0d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.853 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[424f7cdd-f4d5-4793-a04b-8d2761859e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.854 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58d12028-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.857 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603621 kernel: tap58d12028-60: left promiscuous mode
Jan 31 04:03:25 np0005603621 nova_compute[247399]: 2026-01-31 09:03:25.863 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.864 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1e11f4c7-1e93-4e79-b1cd-61adb51aebd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.879 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eb16b30e-c8c0-4323-9a43-d7b88a3432fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.880 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[85e16b69-1f25-40c9-ad2e-ea17d238892a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.891 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eb168b54-e33b-4e0a-98b3-37b715a9bdd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 942853, 'reachable_time': 24382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391901, 'error': None, 'target': 'ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.893 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58d12028-6cf1-48b0-8622-9e4a18613610 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:03:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:25.894 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[dec3953b-dc94-4404-8f28-f9ea7fcb579d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:25 np0005603621 systemd[1]: run-netns-ovnmeta\x2d58d12028\x2d6cf1\x2d48b0\x2d8622\x2d9e4a18613610.mount: Deactivated successfully.
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.134 247403 DEBUG nova.compute.manager [req-622387b2-f013-4eba-84ae-397277cd9dec req-361b6ea1-ab11-451d-8733-231b9f8fcab9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-unplugged-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.135 247403 DEBUG oslo_concurrency.lockutils [req-622387b2-f013-4eba-84ae-397277cd9dec req-361b6ea1-ab11-451d-8733-231b9f8fcab9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.135 247403 DEBUG oslo_concurrency.lockutils [req-622387b2-f013-4eba-84ae-397277cd9dec req-361b6ea1-ab11-451d-8733-231b9f8fcab9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.135 247403 DEBUG oslo_concurrency.lockutils [req-622387b2-f013-4eba-84ae-397277cd9dec req-361b6ea1-ab11-451d-8733-231b9f8fcab9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.135 247403 DEBUG nova.compute.manager [req-622387b2-f013-4eba-84ae-397277cd9dec req-361b6ea1-ab11-451d-8733-231b9f8fcab9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-unplugged-4c3d4391-c276-4043-93a4-6eacb291ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.135 247403 WARNING nova.compute.manager [req-622387b2-f013-4eba-84ae-397277cd9dec req-361b6ea1-ab11-451d-8733-231b9f8fcab9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received unexpected event network-vif-unplugged-4c3d4391-c276-4043-93a4-6eacb291ef17 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.660 247403 DEBUG oslo_concurrency.lockutils [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.791 247403 DEBUG nova.compute.manager [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-deleted-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.791 247403 INFO nova.compute.manager [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Neutron deleted interface 4c3d4391-c276-4043-93a4-6eacb291ef17; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.791 247403 DEBUG nova.network.neutron [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.848 247403 DEBUG nova.objects.instance [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lazy-loading 'system_metadata' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.883 247403 DEBUG nova.objects.instance [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lazy-loading 'flavor' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.920 247403 DEBUG nova.virt.libvirt.vif [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.921 247403 DEBUG nova.network.os_vif_util [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.922 247403 DEBUG nova.network.os_vif_util [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.924 247403 DEBUG nova.virt.libvirt.guest [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.928 247403 DEBUG nova.virt.libvirt.guest [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface>not found in domain: <domain type='kvm' id='96'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <name>instance-000000c1</name>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <uuid>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</uuid>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:03:25</nova:creationTime>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <resource>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </resource>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='serial'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='uuid'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk' index='2'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config' index='1'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:d1:e9:05'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target dev='tap4f52d762-81'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </target>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </console>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c69,c160</label>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c69,c160</imagelabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.928 247403 DEBUG nova.virt.libvirt.guest [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.931 247403 DEBUG nova.virt.libvirt.guest [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:4d:81:16"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4c3d4391-c2"/></interface>not found in domain: <domain type='kvm' id='96'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <name>instance-000000c1</name>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <uuid>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</uuid>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:03:25</nova:creationTime>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <memory unit='KiB'>131072</memory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <vcpu placement='static'>1</vcpu>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <resource>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <partition>/machine</partition>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </resource>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <sysinfo type='smbios'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='manufacturer'>RDO</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='product'>OpenStack Compute</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='version'>27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='serial'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='uuid'>0469d90c-1c5c-40d4-ac77-94e6496bc9ae</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <entry name='family'>Virtual Machine</entry>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <boot dev='hd'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <smbios mode='sysinfo'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <vmcoreinfo state='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <cpu mode='custom' match='exact' check='full'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <model fallback='forbid'>Nehalem</model>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <feature policy='require' name='x2apic'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <feature policy='require' name='hypervisor'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <feature policy='require' name='vme'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <clock offset='utc'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <timer name='pit' tickpolicy='delay'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <timer name='hpet' present='no'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <on_poweroff>destroy</on_poweroff>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <on_reboot>restart</on_reboot>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <on_crash>destroy</on_crash>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <disk type='network' device='disk'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk' index='2'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target dev='vda' bus='virtio'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='virtio-disk0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <disk type='network' device='cdrom'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <driver name='qemu' type='raw' cache='none'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <auth username='openstack'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <secret type='ceph' uuid='2f5ab832-5f2e-5a84-bd93-cf8bab960ee2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source protocol='rbd' name='vms/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_disk.config' index='1'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.100' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.102' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <host name='192.168.122.101' port='6789'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target dev='sda' bus='sata'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <readonly/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='sata0-0-0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='0' model='pcie-root'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pcie.0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='1' port='0x10'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='2' port='0x11'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='3' port='0x12'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='4' port='0x13'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='5' port='0x14'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='6' port='0x15'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='7' port='0x16'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='8' port='0x17'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.8'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='9' port='0x18'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.9'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='10' port='0x19'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.10'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='11' port='0x1a'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.11'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='12' port='0x1b'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.12'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='13' port='0x1c'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.13'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='14' port='0x1d'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.14'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='15' port='0x1e'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.15'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='16' port='0x1f'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.16'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='17' port='0x20'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.17'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='18' port='0x21'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.18'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='19' port='0x22'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.19'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='20' port='0x23'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.20'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='21' port='0x24'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.21'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='22' port='0x25'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.22'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='23' port='0x26'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.23'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='24' port='0x27'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.24'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-root-port'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target chassis='25' port='0x28'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.25'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model name='pcie-pci-bridge'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='pci.26'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='usb'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <controller type='sata' index='0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='ide'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </controller>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <interface type='ethernet'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <mac address='fa:16:3e:d1:e9:05'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target dev='tap4f52d762-81'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model type='virtio'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <driver name='vhost' rx_queue_size='512'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <mtu size='1442'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='net0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <serial type='pty'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target type='isa-serial' port='0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:        <model name='isa-serial'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      </target>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <console type='pty' tty='/dev/pts/0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <source path='/dev/pts/0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <log file='/var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae/console.log' append='off'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <target type='serial' port='0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='serial0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </console>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <input type='tablet' bus='usb'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='input0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='usb' bus='0' port='1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <input type='mouse' bus='ps2'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='input1'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <input type='keyboard' bus='ps2'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='input2'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </input>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <listen type='address' address='::0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </graphics>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <audio id='1' type='none'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <model type='virtio' heads='1' primary='yes'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='video0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <watchdog model='itco' action='reset'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='watchdog0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </watchdog>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <memballoon model='virtio'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <stats period='10'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='balloon0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <rng model='virtio'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <backend model='random'>/dev/urandom</backend>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <alias name='rng0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <label>system_u:system_r:svirt_t:s0:c69,c160</label>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c69,c160</imagelabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <label>+107:+107</label>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <imagelabel>+107:+107</imagelabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </seclabel>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.931 247403 WARNING nova.virt.libvirt.driver [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Detaching interface fa:16:3e:4d:81:16 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4c3d4391-c2' not found.#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.932 247403 DEBUG nova.virt.libvirt.vif [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.932 247403 DEBUG nova.network.os_vif_util [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.932 247403 DEBUG nova.network.os_vif_util [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.933 247403 DEBUG os_vif [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.934 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.935 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3d4391-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.935 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.936 247403 INFO os_vif [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2')#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.937 247403 DEBUG nova.virt.libvirt.guest [req-d6a7e1bd-5de4-4ee9-bfd7-8c58cc816c53 req-8cc1de4d-f85d-4247-8c21-36985eddb9f2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:name>tempest-TestNetworkBasicOps-server-829154381</nova:name>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:creationTime>2026-01-31 09:03:26</nova:creationTime>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:flavor name="m1.nano">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:memory>128</nova:memory>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:disk>1</nova:disk>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:swap>0</nova:swap>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:vcpus>1</nova:vcpus>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:flavor>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:owner>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:owner>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  <nova:ports>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    <nova:port uuid="4f52d762-814b-4a00-a616-c2e6586a29dd">
Jan 31 04:03:26 np0005603621 nova_compute[247399]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:    </nova:port>
Jan 31 04:03:26 np0005603621 nova_compute[247399]:  </nova:ports>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: </nova:instance>
Jan 31 04:03:26 np0005603621 nova_compute[247399]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 31 04:03:26 np0005603621 nova_compute[247399]: 2026-01-31 09:03:26.984 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 395 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 566 KiB/s rd, 438 KiB/s wr, 100 op/s
Jan 31 04:03:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:27.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:27.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:28 np0005603621 nova_compute[247399]: 2026-01-31 09:03:28.267 247403 DEBUG nova.compute.manager [req-f39d4a88-c569-4036-92b6-b25cb8d3699e req-c625b48f-b4f3-4575-bdb7-b0451ad9a80f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:28 np0005603621 nova_compute[247399]: 2026-01-31 09:03:28.267 247403 DEBUG oslo_concurrency.lockutils [req-f39d4a88-c569-4036-92b6-b25cb8d3699e req-c625b48f-b4f3-4575-bdb7-b0451ad9a80f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:28 np0005603621 nova_compute[247399]: 2026-01-31 09:03:28.267 247403 DEBUG oslo_concurrency.lockutils [req-f39d4a88-c569-4036-92b6-b25cb8d3699e req-c625b48f-b4f3-4575-bdb7-b0451ad9a80f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:28 np0005603621 nova_compute[247399]: 2026-01-31 09:03:28.268 247403 DEBUG oslo_concurrency.lockutils [req-f39d4a88-c569-4036-92b6-b25cb8d3699e req-c625b48f-b4f3-4575-bdb7-b0451ad9a80f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:28 np0005603621 nova_compute[247399]: 2026-01-31 09:03:28.268 247403 DEBUG nova.compute.manager [req-f39d4a88-c569-4036-92b6-b25cb8d3699e req-c625b48f-b4f3-4575-bdb7-b0451ad9a80f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:28 np0005603621 nova_compute[247399]: 2026-01-31 09:03:28.268 247403 WARNING nova.compute.manager [req-f39d4a88-c569-4036-92b6-b25cb8d3699e req-c625b48f-b4f3-4575-bdb7-b0451ad9a80f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received unexpected event network-vif-plugged-4c3d4391-c276-4043-93a4-6eacb291ef17 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:03:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 566 KiB/s rd, 954 KiB/s wr, 100 op/s
Jan 31 04:03:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:29.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:29.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.301 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:29 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:29Z|00843|binding|INFO|Releasing lport c4b9dc18-1a7a-4655-80a1-1689bd3ce11a from this chassis (sb_readonly=0)
Jan 31 04:03:29 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:29Z|00844|binding|INFO|Releasing lport 46e41546-aa3b-4838-b2c2-ba3b46cf445c from this chassis (sb_readonly=0)
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.332 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.358 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.359 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.359 247403 DEBUG oslo_concurrency.lockutils [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.360 247403 DEBUG nova.network.neutron [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.361 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:29 np0005603621 nova_compute[247399]: 2026-01-31 09:03:29.362 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:29 np0005603621 podman[391904]: 2026-01-31 09:03:29.482293739 +0000 UTC m=+0.038841608 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 04:03:29 np0005603621 podman[391905]: 2026-01-31 09:03:29.507438123 +0000 UTC m=+0.060849212 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.545 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.545 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.547 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.624 247403 DEBUG nova.compute.manager [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-changed-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.625 247403 DEBUG nova.compute.manager [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing instance network info cache due to event network-changed-4f52d762-814b-4a00-a616-c2e6586a29dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.625 247403 DEBUG oslo_concurrency.lockutils [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.666 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.667 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.667 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.668 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.668 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.669 247403 INFO nova.compute.manager [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Terminating instance#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.670 247403 DEBUG nova.compute.manager [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:03:30 np0005603621 kernel: tap4f52d762-81 (unregistering): left promiscuous mode
Jan 31 04:03:30 np0005603621 NetworkManager[49013]: <info>  [1769850210.8109] device (tap4f52d762-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:03:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:30Z|00845|binding|INFO|Releasing lport 4f52d762-814b-4a00-a616-c2e6586a29dd from this chassis (sb_readonly=0)
Jan 31 04:03:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:30Z|00846|binding|INFO|Setting lport 4f52d762-814b-4a00-a616-c2e6586a29dd down in Southbound
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.856 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:30Z|00847|binding|INFO|Removing iface tap4f52d762-81 ovn-installed in OVS
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.859 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:30 np0005603621 nova_compute[247399]: 2026-01-31 09:03:30.864 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.886 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:e9:05 10.100.0.8'], port_security=['fa:16:3e:d1:e9:05 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0469d90c-1c5c-40d4-ac77-94e6496bc9ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d6a0755-a98a-4d42-b80b-5f2e1c4a586c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d8b4f65c-63f8-45f3-bcd8-2eb92c6b57c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=4f52d762-814b-4a00-a616-c2e6586a29dd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.887 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 4f52d762-814b-4a00-a616-c2e6586a29dd in datapath a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 unbound from our chassis#033[00m
Jan 31 04:03:30 np0005603621 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000c1.scope: Deactivated successfully.
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.889 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:03:30 np0005603621 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000c1.scope: Consumed 17.345s CPU time.
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.890 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ea3f8d6a-2adc-4720-a311-508c15f54d21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:30.890 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 namespace which is not needed anymore#033[00m
Jan 31 04:03:30 np0005603621 systemd-machined[212769]: Machine qemu-96-instance-000000c1 terminated.
Jan 31 04:03:31 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [NOTICE]   (389107) : haproxy version is 2.8.14-c23fe91
Jan 31 04:03:31 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [NOTICE]   (389107) : path to executable is /usr/sbin/haproxy
Jan 31 04:03:31 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [WARNING]  (389107) : Exiting Master process...
Jan 31 04:03:31 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [ALERT]    (389107) : Current worker (389109) exited with code 143 (Terminated)
Jan 31 04:03:31 np0005603621 neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60[389103]: [WARNING]  (389107) : All workers exited. Exiting... (0)
Jan 31 04:03:31 np0005603621 systemd[1]: libpod-5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4.scope: Deactivated successfully.
Jan 31 04:03:31 np0005603621 podman[391969]: 2026-01-31 09:03:31.009950022 +0000 UTC m=+0.056319789 container died 5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:03:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 427 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 569 KiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.101 247403 INFO nova.virt.libvirt.driver [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Instance destroyed successfully.#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.101 247403 DEBUG nova.objects.instance [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid 0469d90c-1c5c-40d4-ac77-94e6496bc9ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4-userdata-shm.mount: Deactivated successfully.
Jan 31 04:03:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eec4cc4740c56596af3f98b176e6d1171c55fb59c37cd27709670b9037e7ac6c-merged.mount: Deactivated successfully.
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.126 247403 DEBUG nova.virt.libvirt.vif [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.127 247403 DEBUG nova.network.os_vif_util [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.129 247403 DEBUG nova.network.os_vif_util [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.129 247403 DEBUG os_vif [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.130 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.130 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f52d762-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.134 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.136 247403 INFO os_vif [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:e9:05,bridge_name='br-int',has_traffic_filtering=True,id=4f52d762-814b-4a00-a616-c2e6586a29dd,network=Network(a1b24494-72ae-4ffa-a7cb-1e8e7578dd60),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f52d762-81')#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.136 247403 DEBUG nova.virt.libvirt.vif [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-829154381',display_name='tempest-TestNetworkBasicOps-server-829154381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-829154381',id=193,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPFA3woozFMq8S8jI5TSrswwXRmJE/SYyFrwSAeRD0zE4Suov4+pQjy6umJZg/HS7gZGiehGBKPIxcQDvWXGD+yEriQTIJwnA9crrJWLZ1an/EEic3nNDWYZiRiAAPXj2A==',key_name='tempest-TestNetworkBasicOps-1416627649',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:01:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-niptovlt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:01:23Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=0469d90c-1c5c-40d4-ac77-94e6496bc9ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.137 247403 DEBUG nova.network.os_vif_util [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "4c3d4391-c276-4043-93a4-6eacb291ef17", "address": "fa:16:3e:4d:81:16", "network": {"id": "58d12028-6cf1-48b0-8622-9e4a18613610", "bridge": "br-int", "label": "tempest-network-smoke--1228215158", "subnets": [], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c3d4391-c2", "ovs_interfaceid": "4c3d4391-c276-4043-93a4-6eacb291ef17", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.137 247403 DEBUG nova.network.os_vif_util [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.137 247403 DEBUG os_vif [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.138 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.139 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c3d4391-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.139 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.140 247403 INFO os_vif [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:81:16,bridge_name='br-int',has_traffic_filtering=True,id=4c3d4391-c276-4043-93a4-6eacb291ef17,network=Network(58d12028-6cf1-48b0-8622-9e4a18613610),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c3d4391-c2')#033[00m
Jan 31 04:03:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:31.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:31 np0005603621 podman[391969]: 2026-01-31 09:03:31.446964731 +0000 UTC m=+0.493334508 container cleanup 5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:03:31 np0005603621 systemd[1]: libpod-conmon-5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4.scope: Deactivated successfully.
Jan 31 04:03:31 np0005603621 podman[392029]: 2026-01-31 09:03:31.740994544 +0000 UTC m=+0.277720729 container remove 5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.747 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a3de06fe-6088-45fd-81d5-f2b13c20aad3]: (4, ('Sat Jan 31 09:03:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 (5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4)\n5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4\nSat Jan 31 09:03:31 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 (5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4)\n5bc20fb6df80e6054b5eae38fd42868898b843e8c4289e1548f66ce7f7951ef4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.749 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[22c2cec0-12ce-4a5a-a67c-ca68c0904f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.751 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1b24494-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.753 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:31 np0005603621 kernel: tapa1b24494-70: left promiscuous mode
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.759 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.761 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[30fee4a5-9c70-4665-a7b4-a328ae78f341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.773 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[57416370-8223-4711-b990-cf897bd11f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.774 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee589c8-6fd5-4407-9bc2-b2751d8f056c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.786 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcf0873-127d-4153-8c7c-9554b8835c90]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938788, 'reachable_time': 40023, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392043, 'error': None, 'target': 'ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.789 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a1b24494-72ae-4ffa-a7cb-1e8e7578dd60 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:03:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:31.790 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[718c4054-535d-49ec-95ce-9d5f0f866751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:31 np0005603621 systemd[1]: run-netns-ovnmeta\x2da1b24494\x2d72ae\x2d4ffa\x2da7cb\x2d1e8e7578dd60.mount: Deactivated successfully.
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.963 247403 INFO nova.network.neutron [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Port 4c3d4391-c276-4043-93a4-6eacb291ef17 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.963 247403 DEBUG nova.network.neutron [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:31 np0005603621 nova_compute[247399]: 2026-01-31 09:03:31.986 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.016 247403 DEBUG oslo_concurrency.lockutils [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.020 247403 DEBUG oslo_concurrency.lockutils [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.021 247403 DEBUG nova.network.neutron [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Refreshing network info cache for port 4f52d762-814b-4a00-a616-c2e6586a29dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.070 247403 DEBUG oslo_concurrency.lockutils [None req-02daba2b-6635-499b-952e-aca9b8a5d6b5 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "interface-0469d90c-1c5c-40d4-ac77-94e6496bc9ae-4c3d4391-c276-4043-93a4-6eacb291ef17" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 7.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.813 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.814 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.814 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.814 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.815 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.816 247403 INFO nova.compute.manager [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Terminating instance#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.818 247403 DEBUG nova.compute.manager [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:03:32 np0005603621 kernel: tapacfc2f5c-0e (unregistering): left promiscuous mode
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.885 247403 INFO nova.virt.libvirt.driver [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Deleting instance files /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_del#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.886 247403 INFO nova.virt.libvirt.driver [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Deletion of /var/lib/nova/instances/0469d90c-1c5c-40d4-ac77-94e6496bc9ae_del complete#033[00m
Jan 31 04:03:32 np0005603621 NetworkManager[49013]: <info>  [1769850212.8877] device (tapacfc2f5c-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.890 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.892 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:32 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:32Z|00848|binding|INFO|Releasing lport acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 from this chassis (sb_readonly=0)
Jan 31 04:03:32 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:32Z|00849|binding|INFO|Setting lport acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 down in Southbound
Jan 31 04:03:32 np0005603621 ovn_controller[149152]: 2026-01-31T09:03:32Z|00850|binding|INFO|Removing iface tapacfc2f5c-0e ovn-installed in OVS
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.894 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.898 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:32.901 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ad:a6:e8 10.100.0.3'], port_security=['fa:16:3e:ad:a6:e8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2fbbeeee-ff60-4a39-9bea-e3d59301b0ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f293713f6854265a89a1a4a002088d5', 'neutron:revision_number': '9', 'neutron:security_group_ids': '400baeb3-ed1b-4018-bb0b-3e4e58b8921d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05ac80f4-66e3-4e8c-b69d-f2f58ada92e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:32.902 159734 INFO neutron.agent.ovn.metadata.agent [-] Port acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 in datapath 1c62fa1c-f7d2-4937-9258-1d3a4456b207 unbound from our chassis#033[00m
Jan 31 04:03:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:32.904 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1c62fa1c-f7d2-4937-9258-1d3a4456b207, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:03:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:32.905 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[467a43fe-d47b-4a1b-ac09-ed6b7adf3837]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:32.905 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 namespace which is not needed anymore#033[00m
Jan 31 04:03:32 np0005603621 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c3.scope: Deactivated successfully.
Jan 31 04:03:32 np0005603621 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000c3.scope: Consumed 14.233s CPU time.
Jan 31 04:03:32 np0005603621 systemd-machined[212769]: Machine qemu-97-instance-000000c3 terminated.
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.945 247403 DEBUG nova.compute.manager [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-unplugged-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.946 247403 DEBUG oslo_concurrency.lockutils [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.947 247403 DEBUG oslo_concurrency.lockutils [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.947 247403 DEBUG oslo_concurrency.lockutils [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.947 247403 DEBUG nova.compute.manager [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-unplugged-4f52d762-814b-4a00-a616-c2e6586a29dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.947 247403 DEBUG nova.compute.manager [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-unplugged-4f52d762-814b-4a00-a616-c2e6586a29dd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.948 247403 DEBUG nova.compute.manager [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.948 247403 DEBUG oslo_concurrency.lockutils [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.948 247403 DEBUG oslo_concurrency.lockutils [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.949 247403 DEBUG oslo_concurrency.lockutils [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.949 247403 DEBUG nova.compute.manager [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] No waiting events found dispatching network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:32 np0005603621 nova_compute[247399]: 2026-01-31 09:03:32.949 247403 WARNING nova.compute.manager [req-08c2e051-2d89-4fc3-bdf5-b592e7daa25f req-bdbdf049-7ee3-4e30-905d-63f7816e5f92 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received unexpected event network-vif-plugged-4f52d762-814b-4a00-a616-c2e6586a29dd for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.011 247403 INFO nova.compute.manager [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Took 2.34 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.012 247403 DEBUG oslo.service.loopingcall [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.012 247403 DEBUG nova.compute.manager [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.012 247403 DEBUG nova.network.neutron [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:03:33 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [NOTICE]   (391674) : haproxy version is 2.8.14-c23fe91
Jan 31 04:03:33 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [NOTICE]   (391674) : path to executable is /usr/sbin/haproxy
Jan 31 04:03:33 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [WARNING]  (391674) : Exiting Master process...
Jan 31 04:03:33 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [ALERT]    (391674) : Current worker (391676) exited with code 143 (Terminated)
Jan 31 04:03:33 np0005603621 neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207[391670]: [WARNING]  (391674) : All workers exited. Exiting... (0)
Jan 31 04:03:33 np0005603621 systemd[1]: libpod-2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7.scope: Deactivated successfully.
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.037 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.040 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 podman[392067]: 2026-01-31 09:03:33.042306221 +0000 UTC m=+0.057886878 container died 2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.051 247403 INFO nova.virt.libvirt.driver [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Instance destroyed successfully.#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.052 247403 DEBUG nova.objects.instance [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lazy-loading 'resources' on Instance uuid 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:03:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 398 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 755 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Jan 31 04:03:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:33.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.213 247403 DEBUG nova.virt.libvirt.vif [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-31T09:01:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-700374568',display_name='tempest-TestShelveInstance-server-700374568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-700374568',id=195,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGtA3anYK1F9iXzxi+WbZdX9+H2hCPBQsf9eZ9YvPVN48lWV0Tj2M8EqzHWhivNuSFaYD9k1TbjDSy9xGFH7/SEr14KZUm/LE8cO61iZWeNARWc/E4iBetyQV/0Aqvvflw==',key_name='tempest-TestShelveInstance-1586472022',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:03:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f293713f6854265a89a1a4a002088d5',ramdisk_id='',reservation_id='r-kzm36xl7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-1813478377',owner_user_name='tempest-TestShelveInstance-1813478377-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:03:09Z,user_data=None,user_id='3859f52c5b70471097d1e4ffa75ecc0e',uuid=2fbbeeee-ff60-4a39-9bea-e3d59301b0ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.213 247403 DEBUG nova.network.os_vif_util [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converting VIF {"id": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "address": "fa:16:3e:ad:a6:e8", "network": {"id": "1c62fa1c-f7d2-4937-9258-1d3a4456b207", "bridge": "br-int", "label": "tempest-TestShelveInstance-1686478311-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f293713f6854265a89a1a4a002088d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacfc2f5c-0e", "ovs_interfaceid": "acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.214 247403 DEBUG nova.network.os_vif_util [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.214 247403 DEBUG os_vif [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.216 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.216 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacfc2f5c-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.221 247403 INFO os_vif [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ad:a6:e8,bridge_name='br-int',has_traffic_filtering=True,id=acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1,network=Network(1c62fa1c-f7d2-4937-9258-1d3a4456b207),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacfc2f5c-0e')#033[00m
Jan 31 04:03:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7-userdata-shm.mount: Deactivated successfully.
Jan 31 04:03:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dcf3c813b51a41045a8d90b70d78d747733a8cf3f6631f78e1027fb9d0873afe-merged.mount: Deactivated successfully.
Jan 31 04:03:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:33.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:33 np0005603621 podman[392067]: 2026-01-31 09:03:33.264983643 +0000 UTC m=+0.280564300 container cleanup 2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:03:33 np0005603621 systemd[1]: libpod-conmon-2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7.scope: Deactivated successfully.
Jan 31 04:03:33 np0005603621 podman[392126]: 2026-01-31 09:03:33.349633415 +0000 UTC m=+0.067973237 container remove 2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.353 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ab3405-9f9c-412e-8f71-456af6028dbb]: (4, ('Sat Jan 31 09:03:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 (2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7)\n2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7\nSat Jan 31 09:03:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 (2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7)\n2455a707779d04a839bab80a3c5779fad2a837aa41946b7d21d8aa878f093df7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.355 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[27c2439c-aa3c-4826-ac85-164c58a0aeeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.356 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c62fa1c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.358 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 kernel: tap1c62fa1c-f0: left promiscuous mode
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.364 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.367 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[edfac65d-4ebd-4836-8f4c-9e60007c654f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.381 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a742f7-93f5-4bee-961c-885ee75f92b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.382 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[41b69f61-a210-403c-99e8-2c9855f4204d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.395 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b7c65a68-6a1a-4755-894f-959bac6a0f34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949288, 'reachable_time': 27101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392141, 'error': None, 'target': 'ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 systemd[1]: run-netns-ovnmeta\x2d1c62fa1c\x2df7d2\x2d4937\x2d9258\x2d1d3a4456b207.mount: Deactivated successfully.
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.397 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1c62fa1c-f7d2-4937-9258-1d3a4456b207 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:03:33 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:33.398 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb85026-eb7b-4899-bd98-61e6ff52a7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.616 247403 INFO nova.virt.libvirt.driver [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Deleting instance files /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_del#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.617 247403 INFO nova.virt.libvirt.driver [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Deletion of /var/lib/nova/instances/2fbbeeee-ff60-4a39-9bea-e3d59301b0ad_del complete#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.749 247403 INFO nova.compute.manager [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.749 247403 DEBUG oslo.service.loopingcall [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.750 247403 DEBUG nova.compute.manager [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:03:33 np0005603621 nova_compute[247399]: 2026-01-31 09:03:33.750 247403 DEBUG nova.network.neutron [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:03:34 np0005603621 nova_compute[247399]: 2026-01-31 09:03:34.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:03:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 369 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 545 KiB/s rd, 1.8 MiB/s wr, 116 op/s
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.162 247403 DEBUG nova.network.neutron [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.180 247403 DEBUG nova.network.neutron [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:35.184 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.184 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:35 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:35.185 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:03:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:35.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.226 247403 INFO nova.compute.manager [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Took 1.48 seconds to deallocate network for instance.#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.230 247403 INFO nova.compute.manager [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Took 2.22 seconds to deallocate network for instance.#033[00m
Jan 31 04:03:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:35.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.306 247403 DEBUG nova.network.neutron [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updated VIF entry in instance network info cache for port 4f52d762-814b-4a00-a616-c2e6586a29dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.307 247403 DEBUG nova.network.neutron [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Updating instance_info_cache with network_info: [{"id": "4f52d762-814b-4a00-a616-c2e6586a29dd", "address": "fa:16:3e:d1:e9:05", "network": {"id": "a1b24494-72ae-4ffa-a7cb-1e8e7578dd60", "bridge": "br-int", "label": "tempest-network-smoke--1350756430", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f52d762-81", "ovs_interfaceid": "4f52d762-814b-4a00-a616-c2e6586a29dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.345 247403 DEBUG nova.compute.manager [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-changed-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.345 247403 DEBUG nova.compute.manager [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Refreshing instance network info cache due to event network-changed-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.346 247403 DEBUG oslo_concurrency.lockutils [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.346 247403 DEBUG oslo_concurrency.lockutils [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.346 247403 DEBUG nova.network.neutron [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Refreshing network info cache for port acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.378 247403 DEBUG oslo_concurrency.lockutils [req-ea19f35b-7dee-4aa6-8679-e586ac048be3 req-35b3a20e-7e59-4736-9b1c-a4f5af5ac8a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-0469d90c-1c5c-40d4-ac77-94e6496bc9ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.411 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.412 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.561 247403 DEBUG nova.compute.manager [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-vif-unplugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.561 247403 DEBUG oslo_concurrency.lockutils [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.561 247403 DEBUG oslo_concurrency.lockutils [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.562 247403 DEBUG oslo_concurrency.lockutils [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.562 247403 DEBUG nova.compute.manager [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] No waiting events found dispatching network-vif-unplugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.562 247403 DEBUG nova.compute.manager [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-vif-unplugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.562 247403 DEBUG nova.compute.manager [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.562 247403 DEBUG oslo_concurrency.lockutils [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.563 247403 DEBUG oslo_concurrency.lockutils [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.563 247403 DEBUG oslo_concurrency.lockutils [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.563 247403 DEBUG nova.compute.manager [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] No waiting events found dispatching network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.563 247403 WARNING nova.compute.manager [req-d6c60b02-a243-4ae8-b6d5-bce52e2a2379 req-8d054ae0-88b5-468d-b399-e02cb9175e47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received unexpected event network-vif-plugged-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.629 247403 DEBUG oslo_concurrency.processutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.747 247403 DEBUG nova.compute.manager [req-97f27ed0-0004-420c-8d9b-34b68574763a req-0439d630-7857-4fbf-93f3-42fb553f54df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Received event network-vif-deleted-acfc2f5c-0e80-48f1-ba84-7ec66c3d64b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.749 247403 DEBUG nova.network.neutron [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.950 247403 INFO nova.compute.manager [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Took 0.72 seconds to detach 1 volumes for instance.#033[00m
Jan 31 04:03:35 np0005603621 nova_compute[247399]: 2026-01-31 09:03:35.951 247403 DEBUG nova.compute.manager [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Deleting volume: 766e40b4-9f67-4558-bd4a-c5d2d46d1ef1 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 31 04:03:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:03:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3420050456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.020 247403 DEBUG oslo_concurrency.processutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.025 247403 DEBUG nova.compute.provider_tree [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.054 247403 DEBUG nova.scheduler.client.report [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.098 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.253 247403 INFO nova.scheduler.client.report [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.452 247403 DEBUG oslo_concurrency.lockutils [None req-fe377a83-bd73-4ef8-bd0d-faf13c7e959e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "0469d90c-1c5c-40d4-ac77-94e6496bc9ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.630 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.631 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.635 247403 DEBUG nova.network.neutron [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.687 247403 DEBUG oslo_concurrency.lockutils [req-a2780688-6b66-4af4-8911-4ade6cd582f1 req-73e0dcc5-b5f6-4c8e-beb4-89ce36ea878a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.731 247403 DEBUG oslo_concurrency.processutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:03:36 np0005603621 nova_compute[247399]: 2026-01-31 09:03:36.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 348 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Jan 31 04:03:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:03:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538180429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:03:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:37.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.226 247403 DEBUG oslo_concurrency.processutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.232 247403 DEBUG nova.compute.provider_tree [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:03:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.255 247403 DEBUG nova.scheduler.client.report [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.283 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.321 247403 INFO nova.scheduler.client.report [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Deleted allocations for instance 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.460 247403 DEBUG oslo_concurrency.lockutils [None req-1a13295d-43cf-42ce-a808-fd7aef87352f 3859f52c5b70471097d1e4ffa75ecc0e 1f293713f6854265a89a1a4a002088d5 - - default default] Lock "2fbbeeee-ff60-4a39-9bea-e3d59301b0ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:03:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.591 247403 DEBUG nova.compute.manager [req-9c2b8eb5-7822-47eb-a43c-02a6428759a2 req-b263e8e3-d13f-43b5-9cf1-644362478e7e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Received event network-vif-deleted-4f52d762-814b-4a00-a616-c2e6586a29dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.591 247403 INFO nova.compute.manager [req-9c2b8eb5-7822-47eb-a43c-02a6428759a2 req-b263e8e3-d13f-43b5-9cf1-644362478e7e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Neutron deleted interface 4f52d762-814b-4a00-a616-c2e6586a29dd; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.591 247403 DEBUG nova.network.neutron [req-9c2b8eb5-7822-47eb-a43c-02a6428759a2 req-b263e8e3-d13f-43b5-9cf1-644362478e7e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 31 04:03:37 np0005603621 nova_compute[247399]: 2026-01-31 09:03:37.593 247403 DEBUG nova.compute.manager [req-9c2b8eb5-7822-47eb-a43c-02a6428759a2 req-b263e8e3-d13f-43b5-9cf1-644362478e7e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Detach interface failed, port_id=4f52d762-814b-4a00-a616-c2e6586a29dd, reason: Instance 0469d90c-1c5c-40d4-ac77-94e6496bc9ae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 04:03:38 np0005603621 nova_compute[247399]: 2026-01-31 09:03:38.259 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:03:38
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', 'images']
Jan 31 04:03:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 313 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 228 op/s
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:03:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:03:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:39.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:39.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 911 KiB/s wr, 306 op/s
Jan 31 04:03:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:41.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:41 np0005603621 nova_compute[247399]: 2026-01-31 09:03:41.990 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:03:42.187 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:03:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.1 KiB/s wr, 298 op/s
Jan 31 04:03:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:43.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:43.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:43 np0005603621 nova_compute[247399]: 2026-01-31 09:03:43.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:44 np0005603621 nova_compute[247399]: 2026-01-31 09:03:44.072 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:44 np0005603621 nova_compute[247399]: 2026-01-31 09:03:44.179 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.8 KiB/s wr, 267 op/s
Jan 31 04:03:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:45.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:03:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:03:46 np0005603621 nova_compute[247399]: 2026-01-31 09:03:46.100 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850211.0984428, 0469d90c-1c5c-40d4-ac77-94e6496bc9ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:03:46 np0005603621 nova_compute[247399]: 2026-01-31 09:03:46.100 247403 INFO nova.compute.manager [-] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:03:46 np0005603621 nova_compute[247399]: 2026-01-31 09:03:46.124 247403 DEBUG nova.compute.manager [None req-6fa8b43b-6f4a-4d00-9d09-718857305e1d - - - - - -] [instance: 0469d90c-1c5c-40d4-ac77-94e6496bc9ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:03:46 np0005603621 nova_compute[247399]: 2026-01-31 09:03:46.993 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 KiB/s wr, 236 op/s
Jan 31 04:03:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:47.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:48 np0005603621 nova_compute[247399]: 2026-01-31 09:03:48.050 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850213.0487993, 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:03:48 np0005603621 nova_compute[247399]: 2026-01-31 09:03:48.050 247403 INFO nova.compute.manager [-] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:03:48 np0005603621 nova_compute[247399]: 2026-01-31 09:03:48.079 247403 DEBUG nova.compute.manager [None req-691fa1d2-94ce-4a8f-b658-3771dd8e5cdc - - - - - -] [instance: 2fbbeeee-ff60-4a39-9bea-e3d59301b0ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:03:48 np0005603621 nova_compute[247399]: 2026-01-31 09:03:48.264 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 KiB/s wr, 155 op/s
Jan 31 04:03:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:49.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:49.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002173592208728829 of space, bias 1.0, pg target 0.6520776626186487 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003152699260189838 of space, bias 1.0, pg target 0.9458097780569514 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:03:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:03:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 123 op/s
Jan 31 04:03:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:51 np0005603621 nova_compute[247399]: 2026-01-31 09:03:51.994 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 28 KiB/s wr, 83 op/s
Jan 31 04:03:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:53 np0005603621 nova_compute[247399]: 2026-01-31 09:03:53.266 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 37 KiB/s wr, 118 op/s
Jan 31 04:03:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:55.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:56 np0005603621 nova_compute[247399]: 2026-01-31 09:03:56.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 37 KiB/s wr, 117 op/s
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:03:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:57.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:57.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:03:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c111fe3c-164c-4c35-a217-a4d25456495d does not exist
Jan 31 04:03:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ee34c067-98ca-49fb-91d5-b922e483b808 does not exist
Jan 31 04:03:57 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8367a0a0-f70a-493a-894f-d9f9ae927d39 does not exist
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:03:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:03:57 np0005603621 podman[392525]: 2026-01-31 09:03:57.842273122 +0000 UTC m=+0.065193400 container create 40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_stonebraker, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 04:03:57 np0005603621 podman[392525]: 2026-01-31 09:03:57.796490456 +0000 UTC m=+0.019410784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:03:57 np0005603621 systemd[1]: Started libpod-conmon-40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426.scope.
Jan 31 04:03:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:03:57 np0005603621 podman[392525]: 2026-01-31 09:03:57.936833338 +0000 UTC m=+0.159753626 container init 40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_stonebraker, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:03:57 np0005603621 podman[392525]: 2026-01-31 09:03:57.942564949 +0000 UTC m=+0.165485237 container start 40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_stonebraker, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:03:57 np0005603621 systemd[1]: libpod-40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426.scope: Deactivated successfully.
Jan 31 04:03:57 np0005603621 epic_stonebraker[392541]: 167 167
Jan 31 04:03:57 np0005603621 conmon[392541]: conmon 40edf922402ad446e222 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426.scope/container/memory.events
Jan 31 04:03:57 np0005603621 podman[392525]: 2026-01-31 09:03:57.988510279 +0000 UTC m=+0.211430577 container attach 40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_stonebraker, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:03:57 np0005603621 podman[392525]: 2026-01-31 09:03:57.989658055 +0000 UTC m=+0.212578353 container died 40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 04:03:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-73f70b24a533747e6ab4d84720dda122eb851374c9efd0c289f0302cf30bcce2-merged.mount: Deactivated successfully.
Jan 31 04:03:58 np0005603621 nova_compute[247399]: 2026-01-31 09:03:58.268 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:03:58 np0005603621 podman[392525]: 2026-01-31 09:03:58.296592676 +0000 UTC m=+0.519512954 container remove 40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:03:58 np0005603621 systemd[1]: libpod-conmon-40edf922402ad446e222d40bc97dabcc53d14585147a9d84b4e6a60e8bd8b426.scope: Deactivated successfully.
Jan 31 04:03:58 np0005603621 podman[392568]: 2026-01-31 09:03:58.43669737 +0000 UTC m=+0.031682551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:03:58 np0005603621 podman[392568]: 2026-01-31 09:03:58.646974269 +0000 UTC m=+0.241959410 container create 4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 04:03:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:03:58 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:03:58 np0005603621 systemd[1]: Started libpod-conmon-4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63.scope.
Jan 31 04:03:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:03:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91ee38d530dccdd2184874cee03068932e36ad822f87313acf44c33fa3ec706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91ee38d530dccdd2184874cee03068932e36ad822f87313acf44c33fa3ec706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91ee38d530dccdd2184874cee03068932e36ad822f87313acf44c33fa3ec706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91ee38d530dccdd2184874cee03068932e36ad822f87313acf44c33fa3ec706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:58 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91ee38d530dccdd2184874cee03068932e36ad822f87313acf44c33fa3ec706/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:03:58 np0005603621 podman[392568]: 2026-01-31 09:03:58.841865093 +0000 UTC m=+0.436850254 container init 4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:03:58 np0005603621 podman[392568]: 2026-01-31 09:03:58.8468502 +0000 UTC m=+0.441835341 container start 4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:03:58 np0005603621 podman[392568]: 2026-01-31 09:03:58.932010538 +0000 UTC m=+0.526995679 container attach 4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:03:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 37 KiB/s wr, 113 op/s
Jan 31 04:03:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:03:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:03:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:03:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:03:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:03:59 np0005603621 determined_brown[392584]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:03:59 np0005603621 determined_brown[392584]: --> relative data size: 1.0
Jan 31 04:03:59 np0005603621 determined_brown[392584]: --> All data devices are unavailable
Jan 31 04:03:59 np0005603621 systemd[1]: libpod-4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63.scope: Deactivated successfully.
Jan 31 04:03:59 np0005603621 conmon[392584]: conmon 4591b958e4b8bcd1c2ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63.scope/container/memory.events
Jan 31 04:03:59 np0005603621 podman[392568]: 2026-01-31 09:03:59.588930101 +0000 UTC m=+1.183915242 container died 4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:03:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e91ee38d530dccdd2184874cee03068932e36ad822f87313acf44c33fa3ec706-merged.mount: Deactivated successfully.
Jan 31 04:04:00 np0005603621 podman[392568]: 2026-01-31 09:04:00.039445355 +0000 UTC m=+1.634430496 container remove 4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_brown, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:04:00 np0005603621 systemd[1]: libpod-conmon-4591b958e4b8bcd1c2ce7590302d67462527d441ee22b0bf3415bc76de038d63.scope: Deactivated successfully.
Jan 31 04:04:00 np0005603621 podman[392599]: 2026-01-31 09:04:00.122066943 +0000 UTC m=+0.504719636 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 04:04:00 np0005603621 podman[392607]: 2026-01-31 09:04:00.145464232 +0000 UTC m=+0.528552029 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:04:00 np0005603621 podman[392799]: 2026-01-31 09:04:00.492361235 +0000 UTC m=+0.019895429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:04:00 np0005603621 podman[392799]: 2026-01-31 09:04:00.667549157 +0000 UTC m=+0.195083331 container create bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 04:04:00 np0005603621 systemd[1]: Started libpod-conmon-bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af.scope.
Jan 31 04:04:00 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:04:00 np0005603621 podman[392799]: 2026-01-31 09:04:00.920543554 +0000 UTC m=+0.448077758 container init bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:04:00 np0005603621 podman[392799]: 2026-01-31 09:04:00.925556442 +0000 UTC m=+0.453090616 container start bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 31 04:04:00 np0005603621 inspiring_kirch[392865]: 167 167
Jan 31 04:04:00 np0005603621 systemd[1]: libpod-bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af.scope: Deactivated successfully.
Jan 31 04:04:01 np0005603621 podman[392799]: 2026-01-31 09:04:01.01890784 +0000 UTC m=+0.546442014 container attach bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:04:01 np0005603621 podman[392799]: 2026-01-31 09:04:01.019615872 +0000 UTC m=+0.547150046 container died bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:04:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 269 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 38 KiB/s wr, 93 op/s
Jan 31 04:04:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:01.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:01.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c2719bc9d6c73d83766c460aed6521a321052e70ae1d0a4a2a5a1f7abea188f3-merged.mount: Deactivated successfully.
Jan 31 04:04:01 np0005603621 podman[392799]: 2026-01-31 09:04:01.856252438 +0000 UTC m=+1.383786612 container remove bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kirch, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:04:01 np0005603621 systemd[1]: libpod-conmon-bd9e9e17fac558c4139238667db7d209cd2c92757460015f4254005fa26e88af.scope: Deactivated successfully.
Jan 31 04:04:02 np0005603621 nova_compute[247399]: 2026-01-31 09:04:01.997 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:02 np0005603621 podman[392890]: 2026-01-31 09:04:01.950802083 +0000 UTC m=+0.019321341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:04:02 np0005603621 podman[392890]: 2026-01-31 09:04:02.420640827 +0000 UTC m=+0.489160105 container create bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:04:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:02 np0005603621 systemd[1]: Started libpod-conmon-bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa.scope.
Jan 31 04:04:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:04:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431043097a6251d84fadb6a4eb4a8194a9b128d4575f7377b838752094e7d894/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431043097a6251d84fadb6a4eb4a8194a9b128d4575f7377b838752094e7d894/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431043097a6251d84fadb6a4eb4a8194a9b128d4575f7377b838752094e7d894/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431043097a6251d84fadb6a4eb4a8194a9b128d4575f7377b838752094e7d894/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:02 np0005603621 podman[392890]: 2026-01-31 09:04:02.973041119 +0000 UTC m=+1.041560397 container init bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 31 04:04:02 np0005603621 podman[392890]: 2026-01-31 09:04:02.979028888 +0000 UTC m=+1.047548126 container start bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:04:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 271 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 320 KiB/s wr, 82 op/s
Jan 31 04:04:03 np0005603621 podman[392890]: 2026-01-31 09:04:03.065602772 +0000 UTC m=+1.134122040 container attach bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:04:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:03.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:03 np0005603621 nova_compute[247399]: 2026-01-31 09:04:03.270 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:03.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]: {
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:    "0": [
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:        {
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "devices": [
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "/dev/loop3"
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            ],
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "lv_name": "ceph_lv0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "lv_size": "7511998464",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "name": "ceph_lv0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "tags": {
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.cluster_name": "ceph",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.crush_device_class": "",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.encrypted": "0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.osd_id": "0",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.type": "block",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:                "ceph.vdo": "0"
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            },
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "type": "block",
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:            "vg_name": "ceph_vg0"
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:        }
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]:    ]
Jan 31 04:04:03 np0005603621 agitated_chaplygin[392908]: }
Jan 31 04:04:03 np0005603621 systemd[1]: libpod-bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa.scope: Deactivated successfully.
Jan 31 04:04:03 np0005603621 podman[392890]: 2026-01-31 09:04:03.705186106 +0000 UTC m=+1.773705354 container died bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaplygin, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:04:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-431043097a6251d84fadb6a4eb4a8194a9b128d4575f7377b838752094e7d894-merged.mount: Deactivated successfully.
Jan 31 04:04:04 np0005603621 podman[392890]: 2026-01-31 09:04:04.360448604 +0000 UTC m=+2.428967842 container remove bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_chaplygin, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:04:04 np0005603621 systemd[1]: libpod-conmon-bee42b141c891711951d1a14f88f8cb9aaea9ce395b5b86601b6805a6df5a9aa.scope: Deactivated successfully.
Jan 31 04:04:04 np0005603621 podman[393072]: 2026-01-31 09:04:04.810367481 +0000 UTC m=+0.018967320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:04:04 np0005603621 podman[393072]: 2026-01-31 09:04:04.964474917 +0000 UTC m=+0.173074746 container create 31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:04:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.4 MiB/s wr, 53 op/s
Jan 31 04:04:05 np0005603621 systemd[1]: Started libpod-conmon-31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025.scope.
Jan 31 04:04:05 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:04:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:05.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:05.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:05 np0005603621 podman[393072]: 2026-01-31 09:04:05.321199669 +0000 UTC m=+0.529799528 container init 31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:04:05 np0005603621 podman[393072]: 2026-01-31 09:04:05.330830303 +0000 UTC m=+0.539430132 container start 31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:04:05 np0005603621 optimistic_yonath[393088]: 167 167
Jan 31 04:04:05 np0005603621 systemd[1]: libpod-31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025.scope: Deactivated successfully.
Jan 31 04:04:05 np0005603621 podman[393072]: 2026-01-31 09:04:05.404382896 +0000 UTC m=+0.612982725 container attach 31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:04:05 np0005603621 podman[393072]: 2026-01-31 09:04:05.405095148 +0000 UTC m=+0.613694977 container died 31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yonath, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:04:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8567f904f5dd0806fde7511c0eb4dd9f946b3f6bc9ca4223378feaebfebfc029-merged.mount: Deactivated successfully.
Jan 31 04:04:06 np0005603621 podman[393072]: 2026-01-31 09:04:06.271962879 +0000 UTC m=+1.480562708 container remove 31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:04:06 np0005603621 systemd[1]: libpod-conmon-31c6c17b54bd25d16edc37d1b2356797bf97352a74b69901e654cdf6f90f1025.scope: Deactivated successfully.
Jan 31 04:04:06 np0005603621 podman[393115]: 2026-01-31 09:04:06.440103607 +0000 UTC m=+0.094483684 container create d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:04:06 np0005603621 podman[393115]: 2026-01-31 09:04:06.365097729 +0000 UTC m=+0.019477826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:04:06 np0005603621 systemd[1]: Started libpod-conmon-d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2.scope.
Jan 31 04:04:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:04:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad49e887f56579d32b38410b8629597aa1856b040e5e81a67d882e1a48a3940f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad49e887f56579d32b38410b8629597aa1856b040e5e81a67d882e1a48a3940f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad49e887f56579d32b38410b8629597aa1856b040e5e81a67d882e1a48a3940f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad49e887f56579d32b38410b8629597aa1856b040e5e81a67d882e1a48a3940f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:06 np0005603621 podman[393115]: 2026-01-31 09:04:06.978421633 +0000 UTC m=+0.632801730 container init d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:04:06 np0005603621 podman[393115]: 2026-01-31 09:04:06.983394491 +0000 UTC m=+0.637774568 container start d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:04:07 np0005603621 nova_compute[247399]: 2026-01-31 09:04:07.000 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 16 op/s
Jan 31 04:04:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:07 np0005603621 podman[393115]: 2026-01-31 09:04:07.261381127 +0000 UTC m=+0.915761204 container attach d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:04:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:07.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]: {
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:        "osd_id": 0,
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:        "type": "bluestore"
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]:    }
Jan 31 04:04:07 np0005603621 flamboyant_heyrovsky[393132]: }
Jan 31 04:04:07 np0005603621 systemd[1]: libpod-d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2.scope: Deactivated successfully.
Jan 31 04:04:07 np0005603621 podman[393115]: 2026-01-31 09:04:07.774726926 +0000 UTC m=+1.429107013 container died d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:04:08 np0005603621 nova_compute[247399]: 2026-01-31 09:04:08.271 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ad49e887f56579d32b38410b8629597aa1856b040e5e81a67d882e1a48a3940f-merged.mount: Deactivated successfully.
Jan 31 04:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:04:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:04:08 np0005603621 podman[393115]: 2026-01-31 09:04:08.993024952 +0000 UTC m=+2.647405039 container remove d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 04:04:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:04:09 np0005603621 systemd[1]: libpod-conmon-d101f14fc9f579ffe5d392d4ae6c830d5bf2041266fdc6d280a29d4423c556a2.scope: Deactivated successfully.
Jan 31 04:04:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 294 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 132 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 31 04:04:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:09.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:04:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:04:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:04:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 9f3ccd8f-e6b3-4d09-934f-f170aedcaa04 does not exist
Jan 31 04:04:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 82e5b82d-b63b-4948-a213-0bfa7674028d does not exist
Jan 31 04:04:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a3eb8c0f-1ca2-44c2-9b79-160cb7b25324 does not exist
Jan 31 04:04:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:04:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:04:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 294 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 31 04:04:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:11.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:12 np0005603621 nova_compute[247399]: 2026-01-31 09:04:12.000 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 296 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 246 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 31 04:04:13 np0005603621 nova_compute[247399]: 2026-01-31 09:04:13.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:13 np0005603621 nova_compute[247399]: 2026-01-31 09:04:13.273 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:13.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 301 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 259 KiB/s rd, 1.9 MiB/s wr, 69 op/s
Jan 31 04:04:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:15.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.304 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "2d7d585f-a87a-4520-b942-19de63e4e43a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.304 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.373 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.584 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.585 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.596 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.597 247403 INFO nova.compute.claims [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:04:15 np0005603621 nova_compute[247399]: 2026-01-31 09:04:15.884 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:04:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089215855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.300 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.305 247403 DEBUG nova.compute.provider_tree [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.326 247403 DEBUG nova.scheduler.client.report [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.359 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.360 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.500 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.501 247403 DEBUG nova.network.neutron [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.595 247403 INFO nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.638 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.919 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.920 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.921 247403 INFO nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Creating image(s)#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.944 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.970 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.992 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:16 np0005603621 nova_compute[247399]: 2026-01-31 09:04:16.996 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.016 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.049 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.050 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.051 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.051 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 301 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 745 KiB/s wr, 59 op/s
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.074 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:17 np0005603621 nova_compute[247399]: 2026-01-31 09:04:17.078 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 2d7d585f-a87a-4520-b942-19de63e4e43a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:17.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:17.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Jan 31 04:04:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Jan 31 04:04:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.275 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.302 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 2d7d585f-a87a-4520-b942-19de63e4e43a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.361 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.502 247403 DEBUG nova.objects.instance [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid 2d7d585f-a87a-4520-b942-19de63e4e43a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.684 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.685 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Ensure instance console log exists: /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.685 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.686 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.686 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:18 np0005603621 nova_compute[247399]: 2026-01-31 09:04:18.715 247403 DEBUG nova.policy [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:04:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 128 KiB/s wr, 66 op/s
Jan 31 04:04:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:19.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:19.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:20 np0005603621 nova_compute[247399]: 2026-01-31 09:04:20.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Jan 31 04:04:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 241 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 1.3 MiB/s wr, 79 op/s
Jan 31 04:04:21 np0005603621 nova_compute[247399]: 2026-01-31 09:04:21.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:21.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:21.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Jan 31 04:04:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.003 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.273 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.274 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.274 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.275 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.275 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.464 247403 DEBUG nova.network.neutron [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Successfully updated port: 9f804c26-f01b-41b8-b1d4-03168c468d85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.541 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.542 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.542 247403 DEBUG nova.network.neutron [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:04:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:04:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686451102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:04:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.712 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.887 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.890 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4130MB free_disk=20.97597885131836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.891 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:22 np0005603621 nova_compute[247399]: 2026-01-31 09:04:22.891 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 2.7 MiB/s wr, 93 op/s
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.103 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 2d7d585f-a87a-4520-b942-19de63e4e43a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.104 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.104 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.132 247403 DEBUG nova.compute.manager [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Received event network-changed-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.133 247403 DEBUG nova.compute.manager [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Refreshing instance network info cache due to event network-changed-9f804c26-f01b-41b8-b1d4-03168c468d85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.133 247403 DEBUG oslo_concurrency.lockutils [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.222 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.277 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.526 247403 DEBUG nova.network.neutron [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:04:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:04:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1088873127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.661 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.667 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.690 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.777 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:04:23 np0005603621 nova_compute[247399]: 2026-01-31 09:04:23.778 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 70 KiB/s rd, 2.7 MiB/s wr, 104 op/s
Jan 31 04:04:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:25.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:25 np0005603621 nova_compute[247399]: 2026-01-31 09:04:25.779 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:25 np0005603621 nova_compute[247399]: 2026-01-31 09:04:25.780 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:04:25 np0005603621 nova_compute[247399]: 2026-01-31 09:04:25.813 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:04:25 np0005603621 nova_compute[247399]: 2026-01-31 09:04:25.814 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:25 np0005603621 nova_compute[247399]: 2026-01-31 09:04:25.814 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:04:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Jan 31 04:04:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Jan 31 04:04:26 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Jan 31 04:04:27 np0005603621 nova_compute[247399]: 2026-01-31 09:04:27.003 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 267 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 2.7 MiB/s wr, 73 op/s
Jan 31 04:04:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:27.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:27 np0005603621 nova_compute[247399]: 2026-01-31 09:04:27.672 247403 DEBUG nova.network.neutron [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Updating instance_info_cache with network_info: [{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.280 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.550 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.550 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Instance network_info: |[{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.551 247403 DEBUG oslo_concurrency.lockutils [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.551 247403 DEBUG nova.network.neutron [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Refreshing network info cache for port 9f804c26-f01b-41b8-b1d4-03168c468d85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.553 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Start _get_guest_xml network_info=[{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.557 247403 WARNING nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.569 247403 DEBUG nova.virt.libvirt.host [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.569 247403 DEBUG nova.virt.libvirt.host [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.577 247403 DEBUG nova.virt.libvirt.host [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.577 247403 DEBUG nova.virt.libvirt.host [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.578 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.578 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.579 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.579 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.579 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.580 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.580 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.580 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.580 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.581 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.581 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.581 247403 DEBUG nova.virt.hardware [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.584 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:04:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381884490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:04:28 np0005603621 nova_compute[247399]: 2026-01-31 09:04:28.989 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.017 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.021 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 251 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.1 MiB/s wr, 37 op/s
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.201 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:29.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:04:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527994943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.469 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.470 247403 DEBUG nova.virt.libvirt.vif [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-611558236',display_name='tempest-TestNetworkBasicOps-server-611558236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-611558236',id=200,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZrEzWeR6tHRl79sRWzQLceDvUL5plhjDV3xc5FsSIADZxecZH8ALfivOSatf3iNVCHuYSDQ4eejoigIH1tb5fS8Gig4EzLGyPJlx46h/5/bhiv8KJGQp+sCFqrWpnQSA==',key_name='tempest-TestNetworkBasicOps-1095979119',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-a3nzyhw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:04:16Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=2d7d585f-a87a-4520-b942-19de63e4e43a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.471 247403 DEBUG nova.network.os_vif_util [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.472 247403 DEBUG nova.network.os_vif_util [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.473 247403 DEBUG nova.objects.instance [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d7d585f-a87a-4520-b942-19de63e4e43a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.501 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <uuid>2d7d585f-a87a-4520-b942-19de63e4e43a</uuid>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <name>instance-000000c8</name>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-611558236</nova:name>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:04:28</nova:creationTime>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <nova:port uuid="9f804c26-f01b-41b8-b1d4-03168c468d85">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <entry name="serial">2d7d585f-a87a-4520-b942-19de63e4e43a</entry>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <entry name="uuid">2d7d585f-a87a-4520-b942-19de63e4e43a</entry>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/2d7d585f-a87a-4520-b942-19de63e4e43a_disk">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/2d7d585f-a87a-4520-b942-19de63e4e43a_disk.config">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:dc:d9:2e"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <target dev="tap9f804c26-f0"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/console.log" append="off"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:04:29 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:04:29 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:04:29 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:04:29 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.502 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Preparing to wait for external event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.502 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.502 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.502 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.503 247403 DEBUG nova.virt.libvirt.vif [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-611558236',display_name='tempest-TestNetworkBasicOps-server-611558236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-611558236',id=200,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZrEzWeR6tHRl79sRWzQLceDvUL5plhjDV3xc5FsSIADZxecZH8ALfivOSatf3iNVCHuYSDQ4eejoigIH1tb5fS8Gig4EzLGyPJlx46h/5/bhiv8KJGQp+sCFqrWpnQSA==',key_name='tempest-TestNetworkBasicOps-1095979119',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-a3nzyhw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:04:16Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=2d7d585f-a87a-4520-b942-19de63e4e43a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.503 247403 DEBUG nova.network.os_vif_util [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.504 247403 DEBUG nova.network.os_vif_util [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.504 247403 DEBUG os_vif [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.505 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.505 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.505 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.508 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.508 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f804c26-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.509 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f804c26-f0, col_values=(('external_ids', {'iface-id': '9f804c26-f01b-41b8-b1d4-03168c468d85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:d9:2e', 'vm-uuid': '2d7d585f-a87a-4520-b942-19de63e4e43a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.510 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:29 np0005603621 NetworkManager[49013]: <info>  [1769850269.5121] manager: (tap9f804c26-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.514 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.517 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.517 247403 INFO os_vif [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0')#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.574 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.575 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.575 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:dc:d9:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.575 247403 INFO nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Using config drive#033[00m
Jan 31 04:04:29 np0005603621 nova_compute[247399]: 2026-01-31 09:04:29.599 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.374 247403 INFO nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Creating config drive at /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/disk.config#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.377 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8zpxn412 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:30 np0005603621 podman[393595]: 2026-01-31 09:04:30.489516816 +0000 UTC m=+0.048604715 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.503 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp8zpxn412" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:30 np0005603621 podman[393596]: 2026-01-31 09:04:30.51052893 +0000 UTC m=+0.069541767 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.530 247403 DEBUG nova.storage.rbd_utils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image 2d7d585f-a87a-4520-b942-19de63e4e43a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.534 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/disk.config 2d7d585f-a87a-4520-b942-19de63e4e43a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.545 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.545 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.545 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.691 247403 DEBUG oslo_concurrency.processutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/disk.config 2d7d585f-a87a-4520-b942-19de63e4e43a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.692 247403 INFO nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Deleting local config drive /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a/disk.config because it was imported into RBD.#033[00m
Jan 31 04:04:30 np0005603621 kernel: tap9f804c26-f0: entered promiscuous mode
Jan 31 04:04:30 np0005603621 NetworkManager[49013]: <info>  [1769850270.7332] manager: (tap9f804c26-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.733 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:30Z|00851|binding|INFO|Claiming lport 9f804c26-f01b-41b8-b1d4-03168c468d85 for this chassis.
Jan 31 04:04:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:30Z|00852|binding|INFO|9f804c26-f01b-41b8-b1d4-03168c468d85: Claiming fa:16:3e:dc:d9:2e 10.100.0.7
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.739 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.743 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 systemd-machined[212769]: New machine qemu-98-instance-000000c8.
Jan 31 04:04:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:30Z|00853|binding|INFO|Setting lport 9f804c26-f01b-41b8-b1d4-03168c468d85 ovn-installed in OVS
Jan 31 04:04:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:30Z|00854|binding|INFO|Setting lport 9f804c26-f01b-41b8-b1d4-03168c468d85 up in Southbound
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.765 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.769 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:d9:2e 10.100.0.7'], port_security=['fa:16:3e:dc:d9:2e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2d7d585f-a87a-4520-b942-19de63e4e43a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89356e16-4ed0-4fd6-bb34-95ca088d447a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=9f804c26-f01b-41b8-b1d4-03168c468d85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.770 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 9f804c26-f01b-41b8-b1d4-03168c468d85 in datapath 91760e6c-de0b-4340-85b5-40ec6c5aa9f4 bound to our chassis#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.772 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91760e6c-de0b-4340-85b5-40ec6c5aa9f4#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.777 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[96638e7b-bc0e-4f38-a552-fa24e8ad9cb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.778 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap91760e6c-d1 in ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:04:30 np0005603621 systemd[1]: Started Virtual Machine qemu-98-instance-000000c8.
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.780 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap91760e6c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.780 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ddafcd-22f9-42e2-b8ad-6e7d1fdb4204]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.781 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[86be47c7-bebc-42c8-8aff-7bebfcedb717]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 systemd-udevd[393693]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.790 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[4a91a380-0e26-4bc1-82ea-c35de90bead8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 NetworkManager[49013]: <info>  [1769850270.7980] device (tap9f804c26-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:04:30 np0005603621 NetworkManager[49013]: <info>  [1769850270.7988] device (tap9f804c26-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.801 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ea69fdb6-574d-4c3b-9519-4c7d0f81038c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.824 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[8db4fb60-384a-4bf7-bc46-46cfefa4296a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.828 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[22b1c345-af7f-4e0c-bdb1-cf8f3601fd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 NetworkManager[49013]: <info>  [1769850270.8290] manager: (tap91760e6c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/387)
Jan 31 04:04:30 np0005603621 systemd-udevd[393698]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.854 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[6b261bda-cf6a-4171-a4c0-715930cf9f5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.857 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[972080ee-026b-49aa-b561-5b84b5dd10ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 NetworkManager[49013]: <info>  [1769850270.8716] device (tap91760e6c-d0): carrier: link connected
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.877 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[378c6ca1-549f-421d-9d9e-7a44d98ec5b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.889 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8861303e-045a-4e07-be38-20b7c43fb448]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91760e6c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:36:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957638, 'reachable_time': 15687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393724, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.898 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd131a8-c30a-4c49-b83d-8a465b949ab8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:36eb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 957638, 'tstamp': 957638}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393725, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.913 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3b849e23-0b02-4caa-af58-a88a86bcaa39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91760e6c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:36:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957638, 'reachable_time': 15687, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393726, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.941 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9e96e860-a43a-46b5-bc6e-2a460541228b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.982 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c821f82b-cf26-4585-ae77-2f0c813c4625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.983 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91760e6c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.984 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.984 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91760e6c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:30 np0005603621 NetworkManager[49013]: <info>  [1769850270.9863] manager: (tap91760e6c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Jan 31 04:04:30 np0005603621 kernel: tap91760e6c-d0: entered promiscuous mode
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.990 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91760e6c-d0, col_values=(('external_ids', {'iface-id': '14bba26c-8965-43f0-99e9-0b6d5ee40016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:30Z|00855|binding|INFO|Releasing lport 14bba26c-8965-43f0-99e9-0b6d5ee40016 from this chassis (sb_readonly=0)
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.988 247403 DEBUG nova.network.neutron [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Updated VIF entry in instance network info cache for port 9f804c26-f01b-41b8-b1d4-03168c468d85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.988 247403 DEBUG nova.network.neutron [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Updating instance_info_cache with network_info: [{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.990 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.992 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.997 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[76467571-94b2-4c0e-9dba-fd3e71718b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:30 np0005603621 nova_compute[247399]: 2026-01-31 09:04:30.997 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.998 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-91760e6c-de0b-4340-85b5-40ec6c5aa9f4
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.pid.haproxy
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 91760e6c-de0b-4340-85b5-40ec6c5aa9f4
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:04:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:30.998 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'env', 'PROCESS_TAG=haproxy-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.024 247403 DEBUG oslo_concurrency.lockutils [req-f112c46e-c567-4ead-9854-0a815973827a req-a37c1724-16f4-47e0-981f-e44bd384be1a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:04:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 960 KiB/s wr, 54 op/s
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.243 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850271.2429025, 2d7d585f-a87a-4520-b942-19de63e4e43a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.245 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] VM Started (Lifecycle Event)#033[00m
Jan 31 04:04:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:31.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.270 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.274 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850271.2440803, 2d7d585f-a87a-4520-b942-19de63e4e43a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.274 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.300 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.303 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:04:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:31.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:31 np0005603621 podman[393800]: 2026-01-31 09:04:31.337208231 +0000 UTC m=+0.048330837 container create eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:04:31 np0005603621 systemd[1]: Started libpod-conmon-eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5.scope.
Jan 31 04:04:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:04:31 np0005603621 podman[393800]: 2026-01-31 09:04:31.307120501 +0000 UTC m=+0.018243107 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:04:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20e98c45c817533a2534ba8d2be27f3ff45692832d9f5c5c404d089ea98571f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:04:31 np0005603621 podman[393800]: 2026-01-31 09:04:31.415186044 +0000 UTC m=+0.126308770 container init eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:04:31 np0005603621 podman[393800]: 2026-01-31 09:04:31.421895335 +0000 UTC m=+0.133017951 container start eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:04:31 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [NOTICE]   (393819) : New worker (393821) forked
Jan 31 04:04:31 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [NOTICE]   (393819) : Loading success.
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.776 247403 DEBUG nova.compute.manager [req-098228be-74bf-459c-a643-dfe28bd16ef3 req-d63506ee-fd69-43c9-ac89-bf152c6e8d27 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Received event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.776 247403 DEBUG oslo_concurrency.lockutils [req-098228be-74bf-459c-a643-dfe28bd16ef3 req-d63506ee-fd69-43c9-ac89-bf152c6e8d27 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.777 247403 DEBUG oslo_concurrency.lockutils [req-098228be-74bf-459c-a643-dfe28bd16ef3 req-d63506ee-fd69-43c9-ac89-bf152c6e8d27 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.777 247403 DEBUG oslo_concurrency.lockutils [req-098228be-74bf-459c-a643-dfe28bd16ef3 req-d63506ee-fd69-43c9-ac89-bf152c6e8d27 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.777 247403 DEBUG nova.compute.manager [req-098228be-74bf-459c-a643-dfe28bd16ef3 req-d63506ee-fd69-43c9-ac89-bf152c6e8d27 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Processing event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.778 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.782 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.785 247403 INFO nova.virt.libvirt.driver [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Instance spawned successfully.#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.785 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.970 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.971 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850271.7816567, 2d7d585f-a87a-4520-b942-19de63e4e43a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.971 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.979 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.979 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.980 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.980 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.981 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:04:31 np0005603621 nova_compute[247399]: 2026-01-31 09:04:31.981 247403 DEBUG nova.virt.libvirt.driver [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:04:32 np0005603621 nova_compute[247399]: 2026-01-31 09:04:32.005 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:32 np0005603621 nova_compute[247399]: 2026-01-31 09:04:32.035 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:04:32 np0005603621 nova_compute[247399]: 2026-01-31 09:04:32.040 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:04:32 np0005603621 nova_compute[247399]: 2026-01-31 09:04:32.165 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:04:32 np0005603621 nova_compute[247399]: 2026-01-31 09:04:32.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:04:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Jan 31 04:04:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Jan 31 04:04:32 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Jan 31 04:04:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 752 KiB/s rd, 22 KiB/s wr, 68 op/s
Jan 31 04:04:33 np0005603621 nova_compute[247399]: 2026-01-31 09:04:33.110 247403 INFO nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Took 16.19 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:04:33 np0005603621 nova_compute[247399]: 2026-01-31 09:04:33.111 247403 DEBUG nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:04:33 np0005603621 nova_compute[247399]: 2026-01-31 09:04:33.216 247403 INFO nova.compute.manager [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Took 17.73 seconds to build instance.#033[00m
Jan 31 04:04:33 np0005603621 nova_compute[247399]: 2026-01-31 09:04:33.255 247403 DEBUG oslo_concurrency.lockutils [None req-458ade6a-5a33-4dec-8bc0-48052bdb9b93 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:33.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:33.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.511 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.629 247403 DEBUG nova.compute.manager [req-16680d07-8072-4138-b631-2eb9a3139841 req-03c752ff-d318-4b2a-8a22-f0c29a9787bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Received event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.630 247403 DEBUG oslo_concurrency.lockutils [req-16680d07-8072-4138-b631-2eb9a3139841 req-03c752ff-d318-4b2a-8a22-f0c29a9787bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.630 247403 DEBUG oslo_concurrency.lockutils [req-16680d07-8072-4138-b631-2eb9a3139841 req-03c752ff-d318-4b2a-8a22-f0c29a9787bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.630 247403 DEBUG oslo_concurrency.lockutils [req-16680d07-8072-4138-b631-2eb9a3139841 req-03c752ff-d318-4b2a-8a22-f0c29a9787bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.630 247403 DEBUG nova.compute.manager [req-16680d07-8072-4138-b631-2eb9a3139841 req-03c752ff-d318-4b2a-8a22-f0c29a9787bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] No waiting events found dispatching network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:04:34 np0005603621 nova_compute[247399]: 2026-01-31 09:04:34.630 247403 WARNING nova.compute.manager [req-16680d07-8072-4138-b631-2eb9a3139841 req-03c752ff-d318-4b2a-8a22-f0c29a9787bb fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Received unexpected event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:04:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 21 KiB/s wr, 115 op/s
Jan 31 04:04:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:35.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:37 np0005603621 nova_compute[247399]: 2026-01-31 09:04:37.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 18 KiB/s wr, 96 op/s
Jan 31 04:04:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:37.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:04:38
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'vms']
Jan 31 04:04:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 111 op/s
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:04:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:04:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:39.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:39 np0005603621 nova_compute[247399]: 2026-01-31 09:04:39.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:39 np0005603621 nova_compute[247399]: 2026-01-31 09:04:39.794 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:39 np0005603621 NetworkManager[49013]: <info>  [1769850279.7963] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Jan 31 04:04:39 np0005603621 NetworkManager[49013]: <info>  [1769850279.7969] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 31 04:04:39 np0005603621 nova_compute[247399]: 2026-01-31 09:04:39.854 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:39 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:39Z|00856|binding|INFO|Releasing lport 14bba26c-8965-43f0-99e9-0b6d5ee40016 from this chassis (sb_readonly=0)
Jan 31 04:04:39 np0005603621 nova_compute[247399]: 2026-01-31 09:04:39.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.042 247403 DEBUG nova.compute.manager [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Received event network-changed-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.042 247403 DEBUG nova.compute.manager [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Refreshing instance network info cache due to event network-changed-9f804c26-f01b-41b8-b1d4-03168c468d85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.043 247403 DEBUG oslo_concurrency.lockutils [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.043 247403 DEBUG oslo_concurrency.lockutils [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.043 247403 DEBUG nova.network.neutron [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Refreshing network info cache for port 9f804c26-f01b-41b8-b1d4-03168c468d85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:04:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 31 04:04:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:41.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.385 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "2d7d585f-a87a-4520-b942-19de63e4e43a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.386 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.387 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.387 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.387 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.389 247403 INFO nova.compute.manager [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Terminating instance#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.390 247403 DEBUG nova.compute.manager [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:04:41 np0005603621 kernel: tap9f804c26-f0 (unregistering): left promiscuous mode
Jan 31 04:04:41 np0005603621 NetworkManager[49013]: <info>  [1769850281.7524] device (tap9f804c26-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:04:41 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:41Z|00857|binding|INFO|Releasing lport 9f804c26-f01b-41b8-b1d4-03168c468d85 from this chassis (sb_readonly=0)
Jan 31 04:04:41 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:41Z|00858|binding|INFO|Setting lport 9f804c26-f01b-41b8-b1d4-03168c468d85 down in Southbound
Jan 31 04:04:41 np0005603621 ovn_controller[149152]: 2026-01-31T09:04:41Z|00859|binding|INFO|Removing iface tap9f804c26-f0 ovn-installed in OVS
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.759 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.762 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.767 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.769 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:d9:2e 10.100.0.7'], port_security=['fa:16:3e:dc:d9:2e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2d7d585f-a87a-4520-b942-19de63e4e43a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89356e16-4ed0-4fd6-bb34-95ca088d447a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=9f804c26-f01b-41b8-b1d4-03168c468d85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.771 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 9f804c26-f01b-41b8-b1d4-03168c468d85 in datapath 91760e6c-de0b-4340-85b5-40ec6c5aa9f4 unbound from our chassis#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.772 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 91760e6c-de0b-4340-85b5-40ec6c5aa9f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.773 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c41c9290-1327-4982-a6b0-e7a6c972ff43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.774 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 namespace which is not needed anymore#033[00m
Jan 31 04:04:41 np0005603621 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c8.scope: Deactivated successfully.
Jan 31 04:04:41 np0005603621 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000c8.scope: Consumed 10.190s CPU time.
Jan 31 04:04:41 np0005603621 systemd-machined[212769]: Machine qemu-98-instance-000000c8 terminated.
Jan 31 04:04:41 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [NOTICE]   (393819) : haproxy version is 2.8.14-c23fe91
Jan 31 04:04:41 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [NOTICE]   (393819) : path to executable is /usr/sbin/haproxy
Jan 31 04:04:41 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [WARNING]  (393819) : Exiting Master process...
Jan 31 04:04:41 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [ALERT]    (393819) : Current worker (393821) exited with code 143 (Terminated)
Jan 31 04:04:41 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[393815]: [WARNING]  (393819) : All workers exited. Exiting... (0)
Jan 31 04:04:41 np0005603621 systemd[1]: libpod-eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5.scope: Deactivated successfully.
Jan 31 04:04:41 np0005603621 podman[393911]: 2026-01-31 09:04:41.882834816 +0000 UTC m=+0.039209568 container died eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 04:04:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5-userdata-shm.mount: Deactivated successfully.
Jan 31 04:04:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a20e98c45c817533a2534ba8d2be27f3ff45692832d9f5c5c404d089ea98571f-merged.mount: Deactivated successfully.
Jan 31 04:04:41 np0005603621 podman[393911]: 2026-01-31 09:04:41.9235133 +0000 UTC m=+0.079888032 container cleanup eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 04:04:41 np0005603621 systemd[1]: libpod-conmon-eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5.scope: Deactivated successfully.
Jan 31 04:04:41 np0005603621 podman[393941]: 2026-01-31 09:04:41.981045176 +0000 UTC m=+0.045255419 container remove eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.984 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1902a61c-0654-4565-904b-c87c4d72d408]: (4, ('Sat Jan 31 09:04:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 (eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5)\neb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5\nSat Jan 31 09:04:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 (eb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5)\neb5dc73062e3993553a174ec79896691472ecf519f837058349368571ac09ce5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.985 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb8e649-542a-4246-b612-05cd1e8612c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.986 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91760e6c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 kernel: tap91760e6c-d0: left promiscuous mode
Jan 31 04:04:41 np0005603621 nova_compute[247399]: 2026-01-31 09:04:41.995 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:41 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:41.997 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3c095bf6-cc4d-48d3-bae1-01e91ed41582]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.007 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:42.009 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[22da68a1-41d5-4705-9d2e-aa17f5009c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:42.010 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[377dccd7-7613-48d0-b983-480a65ff4daf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:42.022 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[71d1db16-3781-40a1-9eb9-6a7cda5b3077]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 957633, 'reachable_time': 38276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393965, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:42 np0005603621 systemd[1]: run-netns-ovnmeta\x2d91760e6c\x2dde0b\x2d4340\x2d85b5\x2d40ec6c5aa9f4.mount: Deactivated successfully.
Jan 31 04:04:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:42.025 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:04:42 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:42.026 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[b79dc880-f34d-4df6-b689-faf9b8427d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.027 247403 INFO nova.virt.libvirt.driver [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Instance destroyed successfully.#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.027 247403 DEBUG nova.objects.instance [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid 2d7d585f-a87a-4520-b942-19de63e4e43a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.066 247403 DEBUG nova.virt.libvirt.vif [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:04:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-611558236',display_name='tempest-TestNetworkBasicOps-server-611558236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-611558236',id=200,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZrEzWeR6tHRl79sRWzQLceDvUL5plhjDV3xc5FsSIADZxecZH8ALfivOSatf3iNVCHuYSDQ4eejoigIH1tb5fS8Gig4EzLGyPJlx46h/5/bhiv8KJGQp+sCFqrWpnQSA==',key_name='tempest-TestNetworkBasicOps-1095979119',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:04:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-a3nzyhw5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:04:33Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=2d7d585f-a87a-4520-b942-19de63e4e43a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.066 247403 DEBUG nova.network.os_vif_util [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.067 247403 DEBUG nova.network.os_vif_util [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.067 247403 DEBUG os_vif [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.071 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.071 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f804c26-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.075 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:42 np0005603621 nova_compute[247399]: 2026-01-31 09:04:42.077 247403 INFO os_vif [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0')#033[00m
Jan 31 04:04:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 225 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.4 KiB/s wr, 88 op/s
Jan 31 04:04:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:43.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:43 np0005603621 nova_compute[247399]: 2026-01-31 09:04:43.681 247403 INFO nova.virt.libvirt.driver [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Deleting instance files /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a_del#033[00m
Jan 31 04:04:43 np0005603621 nova_compute[247399]: 2026-01-31 09:04:43.681 247403 INFO nova.virt.libvirt.driver [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Deletion of /var/lib/nova/instances/2d7d585f-a87a-4520-b942-19de63e4e43a_del complete#033[00m
Jan 31 04:04:43 np0005603621 nova_compute[247399]: 2026-01-31 09:04:43.843 247403 INFO nova.compute.manager [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Took 2.45 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:04:43 np0005603621 nova_compute[247399]: 2026-01-31 09:04:43.844 247403 DEBUG oslo.service.loopingcall [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:04:43 np0005603621 nova_compute[247399]: 2026-01-31 09:04:43.844 247403 DEBUG nova.compute.manager [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:04:43 np0005603621 nova_compute[247399]: 2026-01-31 09:04:43.844 247403 DEBUG nova.network.neutron [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:04:44 np0005603621 nova_compute[247399]: 2026-01-31 09:04:44.726 247403 DEBUG nova.network.neutron [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Updated VIF entry in instance network info cache for port 9f804c26-f01b-41b8-b1d4-03168c468d85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:04:44 np0005603621 nova_compute[247399]: 2026-01-31 09:04:44.727 247403 DEBUG nova.network.neutron [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Updating instance_info_cache with network_info: [{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:44 np0005603621 nova_compute[247399]: 2026-01-31 09:04:44.763 247403 DEBUG oslo_concurrency.lockutils [req-b4f50882-e145-4b5c-8ac6-33887457a644 req-b85156fa-9ae5-4c45-8252-500cce3065d5 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-2d7d585f-a87a-4520-b942-19de63e4e43a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:04:45 np0005603621 nova_compute[247399]: 2026-01-31 09:04:45.011 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 204 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.7 KiB/s wr, 79 op/s
Jan 31 04:04:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:45.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:46 np0005603621 nova_compute[247399]: 2026-01-31 09:04:46.700 247403 DEBUG nova.network.neutron [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:04:46 np0005603621 nova_compute[247399]: 2026-01-31 09:04:46.738 247403 INFO nova.compute.manager [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Took 2.89 seconds to deallocate network for instance.#033[00m
Jan 31 04:04:46 np0005603621 nova_compute[247399]: 2026-01-31 09:04:46.829 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:04:46 np0005603621 nova_compute[247399]: 2026-01-31 09:04:46.829 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.015 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.078 247403 DEBUG nova.scheduler.client.report [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:04:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 204 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 4.3 KiB/s wr, 44 op/s
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.232 247403 DEBUG nova.scheduler.client.report [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.232 247403 DEBUG nova.compute.provider_tree [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:04:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:47.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.347 247403 DEBUG nova.scheduler.client.report [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.374 247403 DEBUG nova.scheduler.client.report [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.437 247403 DEBUG oslo_concurrency.processutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:04:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:04:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222490686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.839 247403 DEBUG oslo_concurrency.processutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.844 247403 DEBUG nova.compute.provider_tree [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.867 247403 DEBUG nova.scheduler.client.report [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:04:47 np0005603621 nova_compute[247399]: 2026-01-31 09:04:47.895 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:48 np0005603621 nova_compute[247399]: 2026-01-31 09:04:48.152 247403 INFO nova.scheduler.client.report [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance 2d7d585f-a87a-4520-b942-19de63e4e43a#033[00m
Jan 31 04:04:48 np0005603621 nova_compute[247399]: 2026-01-31 09:04:48.258 247403 DEBUG oslo_concurrency.lockutils [None req-cd2bb757-fb54-4d22-9e1a-0e1667a3d2de d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "2d7d585f-a87a-4520-b942-19de63e4e43a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:04:48 np0005603621 nova_compute[247399]: 2026-01-31 09:04:48.719 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:48 np0005603621 nova_compute[247399]: 2026-01-31 09:04:48.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 480 KiB/s rd, 4.7 KiB/s wr, 54 op/s
Jan 31 04:04:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:49.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004328645542527452 of space, bias 1.0, pg target 1.2985936627582355 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:04:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:04:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 5.0 KiB/s wr, 39 op/s
Jan 31 04:04:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:51.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:51.295 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:04:51 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:51.296 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:04:51 np0005603621 nova_compute[247399]: 2026-01-31 09:04:51.296 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:51.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:52 np0005603621 nova_compute[247399]: 2026-01-31 09:04:52.016 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:52 np0005603621 nova_compute[247399]: 2026-01-31 09:04:52.075 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 20 KiB/s wr, 61 op/s
Jan 31 04:04:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:53.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 18 KiB/s wr, 89 op/s
Jan 31 04:04:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:55.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:55.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:56 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:04:56.298 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:04:57 np0005603621 nova_compute[247399]: 2026-01-31 09:04:57.018 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:57 np0005603621 nova_compute[247399]: 2026-01-31 09:04:57.025 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850282.0234575, 2d7d585f-a87a-4520-b942-19de63e4e43a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:04:57 np0005603621 nova_compute[247399]: 2026-01-31 09:04:57.026 247403 INFO nova.compute.manager [-] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:04:57 np0005603621 nova_compute[247399]: 2026-01-31 09:04:57.068 247403 DEBUG nova.compute.manager [None req-300b7dd0-1331-4b5e-a20f-5a9288f29e6c - - - - - -] [instance: 2d7d585f-a87a-4520-b942-19de63e4e43a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:04:57 np0005603621 nova_compute[247399]: 2026-01-31 09:04:57.076 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:04:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 15 KiB/s wr, 71 op/s
Jan 31 04:04:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:04:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:04:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:04:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 16 KiB/s wr, 97 op/s
Jan 31 04:04:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:04:59.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:04:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:04:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:04:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:04:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:00 np0005603621 nova_compute[247399]: 2026-01-31 09:05:00.898 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:00 np0005603621 nova_compute[247399]: 2026-01-31 09:05:00.898 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:00 np0005603621 nova_compute[247399]: 2026-01-31 09:05:00.923 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.042 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.043 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.052 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.052 247403 INFO nova.compute.claims [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:05:01 np0005603621 podman[394047]: 2026-01-31 09:05:01.079615032 +0000 UTC m=+0.078555441 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:05:01 np0005603621 podman[394048]: 2026-01-31 09:05:01.079725025 +0000 UTC m=+0.078405437 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 04:05:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 208 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 290 KiB/s wr, 99 op/s
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.220 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2869524196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.650 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.654 247403 DEBUG nova.compute.provider_tree [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.681 247403 DEBUG nova.scheduler.client.report [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.729 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.731 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.800 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.800 247403 DEBUG nova.network.neutron [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.835 247403 INFO nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:05:01 np0005603621 nova_compute[247399]: 2026-01-31 09:05:01.900 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.020 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.056 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.058 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.059 247403 INFO nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Creating image(s)#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.087 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.113 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.139 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.142 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.163 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.201 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.202 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.203 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.203 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.229 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.233 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:05:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1856883529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.642 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.724 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.852 247403 DEBUG nova.policy [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.858 247403 DEBUG nova.objects.instance [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid b2aba414-44a8-4432-92fc-c23c3abcf4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.875 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.876 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Ensure instance console log exists: /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.877 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.877 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:02 np0005603621 nova_compute[247399]: 2026-01-31 09:05:02.877 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.2 MiB/s wr, 138 op/s
Jan 31 04:05:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:03.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:03.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Jan 31 04:05:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Jan 31 04:05:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.202 247403 DEBUG nova.network.neutron [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Successfully updated port: 9f804c26-f01b-41b8-b1d4-03168c468d85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.228 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-b2aba414-44a8-4432-92fc-c23c3abcf4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.228 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-b2aba414-44a8-4432-92fc-c23c3abcf4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.228 247403 DEBUG nova.network.neutron [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.344 247403 DEBUG nova.compute.manager [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received event network-changed-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.345 247403 DEBUG nova.compute.manager [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Refreshing instance network info cache due to event network-changed-9f804c26-f01b-41b8-b1d4-03168c468d85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.345 247403 DEBUG oslo_concurrency.lockutils [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-b2aba414-44a8-4432-92fc-c23c3abcf4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:05:04 np0005603621 nova_compute[247399]: 2026-01-31 09:05:04.523 247403 DEBUG nova.network.neutron [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:05:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Jan 31 04:05:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Jan 31 04:05:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Jan 31 04:05:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.5 MiB/s wr, 137 op/s
Jan 31 04:05:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:05.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.791 247403 DEBUG nova.network.neutron [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Updating instance_info_cache with network_info: [{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:05:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.816 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-b2aba414-44a8-4432-92fc-c23c3abcf4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.816 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Instance network_info: |[{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.817 247403 DEBUG oslo_concurrency.lockutils [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-b2aba414-44a8-4432-92fc-c23c3abcf4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.817 247403 DEBUG nova.network.neutron [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Refreshing network info cache for port 9f804c26-f01b-41b8-b1d4-03168c468d85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.820 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Start _get_guest_xml network_info=[{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.826 247403 WARNING nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.832 247403 DEBUG nova.virt.libvirt.host [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.832 247403 DEBUG nova.virt.libvirt.host [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.840 247403 DEBUG nova.virt.libvirt.host [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.840 247403 DEBUG nova.virt.libvirt.host [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.841 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.842 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.842 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.842 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.843 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.843 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.843 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.843 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.843 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.843 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.844 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.844 247403 DEBUG nova.virt.hardware [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:05:05 np0005603621 nova_compute[247399]: 2026-01-31 09:05:05.847 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Jan 31 04:05:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085218544' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.279 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.308 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.312 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733482016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.724 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.725 247403 DEBUG nova.virt.libvirt.vif [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-90775299',display_name='tempest-TestNetworkBasicOps-server-90775299',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-90775299',id=202,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBgOKq7YCkpBGeBUgUhAHWNvJY7nlw+fNPD3n7gd6evkiM0+aOdaHIKYACIgc7snVclbAWXy87grwytzPllPR8S/Q+HBTk7WHlIw+T0F1jBq+4obRoDTJwSKlpIhzGt/mA==',key_name='tempest-TestNetworkBasicOps-327699935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-d5gr31fc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:05:01Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=b2aba414-44a8-4432-92fc-c23c3abcf4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.726 247403 DEBUG nova.network.os_vif_util [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.726 247403 DEBUG nova.network.os_vif_util [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.727 247403 DEBUG nova.objects.instance [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid b2aba414-44a8-4432-92fc-c23c3abcf4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.861 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <uuid>b2aba414-44a8-4432-92fc-c23c3abcf4e2</uuid>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <name>instance-000000ca</name>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-90775299</nova:name>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:05:05</nova:creationTime>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <nova:port uuid="9f804c26-f01b-41b8-b1d4-03168c468d85">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <entry name="serial">b2aba414-44a8-4432-92fc-c23c3abcf4e2</entry>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <entry name="uuid">b2aba414-44a8-4432-92fc-c23c3abcf4e2</entry>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk.config">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:dc:d9:2e"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <target dev="tap9f804c26-f0"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/console.log" append="off"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:05:06 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:05:06 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:05:06 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:05:06 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.862 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Preparing to wait for external event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.862 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.863 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.863 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.864 247403 DEBUG nova.virt.libvirt.vif [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-90775299',display_name='tempest-TestNetworkBasicOps-server-90775299',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-90775299',id=202,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBgOKq7YCkpBGeBUgUhAHWNvJY7nlw+fNPD3n7gd6evkiM0+aOdaHIKYACIgc7snVclbAWXy87grwytzPllPR8S/Q+HBTk7WHlIw+T0F1jBq+4obRoDTJwSKlpIhzGt/mA==',key_name='tempest-TestNetworkBasicOps-327699935',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-d5gr31fc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:05:01Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=b2aba414-44a8-4432-92fc-c23c3abcf4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.864 247403 DEBUG nova.network.os_vif_util [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.864 247403 DEBUG nova.network.os_vif_util [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.865 247403 DEBUG os_vif [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.865 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.866 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.866 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.869 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f804c26-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.869 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f804c26-f0, col_values=(('external_ids', {'iface-id': '9f804c26-f01b-41b8-b1d4-03168c468d85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:d9:2e', 'vm-uuid': 'b2aba414-44a8-4432-92fc-c23c3abcf4e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:06 np0005603621 NetworkManager[49013]: <info>  [1769850306.8718] manager: (tap9f804c26-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:06 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.877 247403 INFO os_vif [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0')#033[00m
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Jan 31 04:05:06 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.943 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.944 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.944 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:dc:d9:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.945 247403 INFO nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Using config drive
Jan 31 04:05:06 np0005603621 nova_compute[247399]: 2026-01-31 09:05:06.967 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:05:07 np0005603621 nova_compute[247399]: 2026-01-31 09:05:07.021 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 887 KiB/s rd, 1.5 MiB/s wr, 40 op/s
Jan 31 04:05:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:07.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:05:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.814 247403 INFO nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Creating config drive at /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/disk.config
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.818 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvpgvdlfz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.844 247403 DEBUG nova.network.neutron [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Updated VIF entry in instance network info cache for port 9f804c26-f01b-41b8-b1d4-03168c468d85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.845 247403 DEBUG nova.network.neutron [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Updating instance_info_cache with network_info: [{"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.928 247403 DEBUG oslo_concurrency.lockutils [req-63dc45a6-77eb-4546-9c3f-b9a2a83633a8 req-0c4a2e9e-aa30-41cd-b1f2-b632136d21a8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-b2aba414-44a8-4432-92fc-c23c3abcf4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.948 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpvpgvdlfz" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.980 247403 DEBUG nova.storage.rbd_utils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:05:08 np0005603621 nova_compute[247399]: 2026-01-31 09:05:08.983 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/disk.config b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:05:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 340 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 4.6 MiB/s wr, 180 op/s
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.130 247403 DEBUG oslo_concurrency.processutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/disk.config b2aba414-44a8-4432-92fc-c23c3abcf4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.131 247403 INFO nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Deleting local config drive /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2/disk.config because it was imported into RBD.
Jan 31 04:05:09 np0005603621 kernel: tap9f804c26-f0: entered promiscuous mode
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.1673] manager: (tap9f804c26-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.168 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:09Z|00860|binding|INFO|Claiming lport 9f804c26-f01b-41b8-b1d4-03168c468d85 for this chassis.
Jan 31 04:05:09 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:09Z|00861|binding|INFO|9f804c26-f01b-41b8-b1d4-03168c468d85: Claiming fa:16:3e:dc:d9:2e 10.100.0.7
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.172 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.174 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 systemd-udevd[394445]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:05:09 np0005603621 systemd-machined[212769]: New machine qemu-99-instance-000000ca.
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.1925] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.192 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.1933] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.1994] device (tap9f804c26-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.2000] device (tap9f804c26-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:05:09 np0005603621 systemd[1]: Started Virtual Machine qemu-99-instance-000000ca.
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.206 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:d9:2e 10.100.0.7'], port_security=['fa:16:3e:dc:d9:2e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b2aba414-44a8-4432-92fc-c23c3abcf4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '6', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89356e16-4ed0-4fd6-bb34-95ca088d447a, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=9f804c26-f01b-41b8-b1d4-03168c468d85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.207 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 9f804c26-f01b-41b8-b1d4-03168c468d85 in datapath 91760e6c-de0b-4340-85b5-40ec6c5aa9f4 bound to our chassis
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.208 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91760e6c-de0b-4340-85b5-40ec6c5aa9f4
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.215 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cef2e86f-b224-4a23-a9ef-b19d8e9a6741]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.215 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap91760e6c-d1 in ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.217 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap91760e6c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.217 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6446e4e4-02b4-4da4-9cad-13267ba1f631]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.217 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[057127f6-3614-4043-bbc6-0f790ed81684]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.226 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[2e2ae9ba-c25e-4e4f-af20-59371a9a7ab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.245 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5ff58b-a4ea-4f07-9253-8f98fea796d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.261 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[bd26ad66-3240-4164-8d07-be82f7dd1e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.264 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 systemd-udevd[394448]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.2672] manager: (tap91760e6c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/395)
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.265 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ddcf04a1-32ea-48bd-94b1-8e3af68a3916]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.291 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[87fc6212-db2f-431f-b789-0e1a8134ace4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.295 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[70b5a440-cfbf-489f-9f5b-bcb3e2868e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.296 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:09Z|00862|binding|INFO|Setting lport 9f804c26-f01b-41b8-b1d4-03168c468d85 ovn-installed in OVS
Jan 31 04:05:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:09 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:09Z|00863|binding|INFO|Setting lport 9f804c26-f01b-41b8-b1d4-03168c468d85 up in Southbound
Jan 31 04:05:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:09.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.3106] device (tap91760e6c-d0): carrier: link connected
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.315 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[29b67516-0dd6-4ad7-99f7-fd5d34618b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.327 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[39afefbc-32f2-457b-98ad-39ef10a6f8e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91760e6c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:36:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 961482, 'reachable_time': 42823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394479, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.336 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0255f772-7e41-4ca2-b3f1-30c2165bf94c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe28:36eb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 961482, 'tstamp': 961482}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394480, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.349 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[70046a8c-c56b-441e-8d00-564dd2d38b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91760e6c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:28:36:eb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 961482, 'reachable_time': 42823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394481, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.370 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[50939254-c0fa-4cea-89cc-4b074d86cfa7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.407 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d17d76a2-1928-464e-90a4-6a2c7ed13fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.408 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91760e6c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.408 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.409 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91760e6c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:09 np0005603621 NetworkManager[49013]: <info>  [1769850309.4110] manager: (tap91760e6c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 31 04:05:09 np0005603621 kernel: tap91760e6c-d0: entered promiscuous mode
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.412 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.414 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91760e6c-d0, col_values=(('external_ids', {'iface-id': '14bba26c-8965-43f0-99e9-0b6d5ee40016'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.415 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:09 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:09Z|00864|binding|INFO|Releasing lport 14bba26c-8965-43f0-99e9-0b6d5ee40016 from this chassis (sb_readonly=0)
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.415 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.417 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.418 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d685ab12-319e-4208-a980-b9dc68c5327c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.419 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-91760e6c-de0b-4340-85b5-40ec6c5aa9f4
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.pid.haproxy
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 91760e6c-de0b-4340-85b5-40ec6c5aa9f4
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:05:09 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:09.420 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'env', 'PROCESS_TAG=haproxy-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/91760e6c-de0b-4340-85b5-40ec6c5aa9f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.421 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.602 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850309.6016576, b2aba414-44a8-4432-92fc-c23c3abcf4e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.602 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] VM Started (Lifecycle Event)#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.631 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.635 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850309.6017911, b2aba414-44a8-4432-92fc-c23c3abcf4e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.635 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.672 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.675 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.693 247403 DEBUG nova.compute.manager [req-6185adec-891f-448a-a96f-f01d1ccdef76 req-37e3e060-37f8-4c9d-889e-339de8a15ad8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.693 247403 DEBUG oslo_concurrency.lockutils [req-6185adec-891f-448a-a96f-f01d1ccdef76 req-37e3e060-37f8-4c9d-889e-339de8a15ad8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.693 247403 DEBUG oslo_concurrency.lockutils [req-6185adec-891f-448a-a96f-f01d1ccdef76 req-37e3e060-37f8-4c9d-889e-339de8a15ad8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.693 247403 DEBUG oslo_concurrency.lockutils [req-6185adec-891f-448a-a96f-f01d1ccdef76 req-37e3e060-37f8-4c9d-889e-339de8a15ad8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.693 247403 DEBUG nova.compute.manager [req-6185adec-891f-448a-a96f-f01d1ccdef76 req-37e3e060-37f8-4c9d-889e-339de8a15ad8 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Processing event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.694 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.699 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.701 247403 INFO nova.virt.libvirt.driver [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Instance spawned successfully.#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.702 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.710 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.710 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850309.6964726, b2aba414-44a8-4432-92fc-c23c3abcf4e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.710 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.741 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.741 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.741 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.742 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.742 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.743 247403 DEBUG nova.virt.libvirt.driver [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:09 np0005603621 podman[394555]: 2026-01-31 09:05:09.743048058 +0000 UTC m=+0.047853922 container create 9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.746 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.752 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:05:09 np0005603621 systemd[1]: Started libpod-conmon-9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7.scope.
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.791 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:05:09 np0005603621 podman[394555]: 2026-01-31 09:05:09.714680942 +0000 UTC m=+0.019486826 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:05:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bc46e0f3f36ca70794cad6f0982adca18424fe27d19a23094b5875bc4aee648/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:09 np0005603621 podman[394555]: 2026-01-31 09:05:09.826326837 +0000 UTC m=+0.131132721 container init 9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:05:09 np0005603621 podman[394555]: 2026-01-31 09:05:09.829858439 +0000 UTC m=+0.134664303 container start 9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.840 247403 INFO nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Took 7.78 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.840 247403 DEBUG nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:09 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [NOTICE]   (394574) : New worker (394576) forked
Jan 31 04:05:09 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [NOTICE]   (394574) : Loading success.
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.960 247403 INFO nova.compute.manager [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Took 8.96 seconds to build instance.#033[00m
Jan 31 04:05:09 np0005603621 nova_compute[247399]: 2026-01-31 09:05:09.988 247403 DEBUG oslo_concurrency.lockutils [None req-983810e3-05a0-48df-a472-7c4ee515318f d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:05:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 46079a68-c21e-4b71-aae3-db81403f7c7f does not exist
Jan 31 04:05:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3937345c-477e-4058-962c-e1853f129df7 does not exist
Jan 31 04:05:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 573b3056-e87e-4d31-8acf-14f3947fd6d4 does not exist
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:05:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:05:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.1 MiB/s wr, 158 op/s
Jan 31 04:05:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:05:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:05:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:05:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:11.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.304378592 +0000 UTC m=+0.091244291 container create 96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ardinghelli, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.235214579 +0000 UTC m=+0.022080298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:05:11 np0005603621 systemd[1]: Started libpod-conmon-96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0.scope.
Jan 31 04:05:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.375668612 +0000 UTC m=+0.162534311 container init 96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.380872956 +0000 UTC m=+0.167738655 container start 96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.383895602 +0000 UTC m=+0.170761301 container attach 96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:05:11 np0005603621 hungry_ardinghelli[394872]: 167 167
Jan 31 04:05:11 np0005603621 systemd[1]: libpod-96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0.scope: Deactivated successfully.
Jan 31 04:05:11 np0005603621 conmon[394872]: conmon 96a30c57610f7cc53216 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0.scope/container/memory.events
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.385605525 +0000 UTC m=+0.172471254 container died 96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:05:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-88d8b00871b982bf8b856064f844b1662d4462e08ebc30e7d6a02235dcef9635-merged.mount: Deactivated successfully.
Jan 31 04:05:11 np0005603621 podman[394856]: 2026-01-31 09:05:11.414975063 +0000 UTC m=+0.201840762 container remove 96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:05:11 np0005603621 systemd[1]: libpod-conmon-96a30c57610f7cc53216d239aac7b3b9c163760ed7cffe340759751e474715f0.scope: Deactivated successfully.
Jan 31 04:05:11 np0005603621 podman[394896]: 2026-01-31 09:05:11.582993126 +0000 UTC m=+0.068592997 container create 3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:05:11 np0005603621 systemd[1]: Started libpod-conmon-3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad.scope.
Jan 31 04:05:11 np0005603621 podman[394896]: 2026-01-31 09:05:11.537876352 +0000 UTC m=+0.023476243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:05:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861d24c8fdac9c13434afd2ccde66b138f4f90fcfd1eeff01a5fae152e74752c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861d24c8fdac9c13434afd2ccde66b138f4f90fcfd1eeff01a5fae152e74752c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861d24c8fdac9c13434afd2ccde66b138f4f90fcfd1eeff01a5fae152e74752c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861d24c8fdac9c13434afd2ccde66b138f4f90fcfd1eeff01a5fae152e74752c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861d24c8fdac9c13434afd2ccde66b138f4f90fcfd1eeff01a5fae152e74752c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:11 np0005603621 podman[394896]: 2026-01-31 09:05:11.656607998 +0000 UTC m=+0.142207899 container init 3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:05:11 np0005603621 podman[394896]: 2026-01-31 09:05:11.665676375 +0000 UTC m=+0.151276236 container start 3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:05:11 np0005603621 podman[394896]: 2026-01-31 09:05:11.668661419 +0000 UTC m=+0.154261290 container attach 3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:05:11 np0005603621 nova_compute[247399]: 2026-01-31 09:05:11.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.024 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.029 247403 DEBUG nova.compute.manager [req-39d39331-2272-4a4c-819e-786895a4ec91 req-129ccf87-9775-4a83-9270-dfbc1a586f63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.030 247403 DEBUG oslo_concurrency.lockutils [req-39d39331-2272-4a4c-819e-786895a4ec91 req-129ccf87-9775-4a83-9270-dfbc1a586f63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.030 247403 DEBUG oslo_concurrency.lockutils [req-39d39331-2272-4a4c-819e-786895a4ec91 req-129ccf87-9775-4a83-9270-dfbc1a586f63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.030 247403 DEBUG oslo_concurrency.lockutils [req-39d39331-2272-4a4c-819e-786895a4ec91 req-129ccf87-9775-4a83-9270-dfbc1a586f63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.031 247403 DEBUG nova.compute.manager [req-39d39331-2272-4a4c-819e-786895a4ec91 req-129ccf87-9775-4a83-9270-dfbc1a586f63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] No waiting events found dispatching network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:05:12 np0005603621 nova_compute[247399]: 2026-01-31 09:05:12.031 247403 WARNING nova.compute.manager [req-39d39331-2272-4a4c-819e-786895a4ec91 req-129ccf87-9775-4a83-9270-dfbc1a586f63 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received unexpected event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:05:12 np0005603621 exciting_mestorf[394913]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:05:12 np0005603621 exciting_mestorf[394913]: --> relative data size: 1.0
Jan 31 04:05:12 np0005603621 exciting_mestorf[394913]: --> All data devices are unavailable
Jan 31 04:05:12 np0005603621 systemd[1]: libpod-3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad.scope: Deactivated successfully.
Jan 31 04:05:12 np0005603621 podman[394896]: 2026-01-31 09:05:12.484425626 +0000 UTC m=+0.970025507 container died 3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:05:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Jan 31 04:05:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-861d24c8fdac9c13434afd2ccde66b138f4f90fcfd1eeff01a5fae152e74752c-merged.mount: Deactivated successfully.
Jan 31 04:05:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Jan 31 04:05:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Jan 31 04:05:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 398 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.9 MiB/s rd, 6.2 MiB/s wr, 266 op/s
Jan 31 04:05:13 np0005603621 podman[394896]: 2026-01-31 09:05:13.17337816 +0000 UTC m=+1.658978021 container remove 3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:05:13 np0005603621 systemd[1]: libpod-conmon-3f055f9785dd1d6383f38a7d047576be8f2cc171b397aef9fb33bc11ba21e1ad.scope: Deactivated successfully.
Jan 31 04:05:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:13.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.676620583 +0000 UTC m=+0.034938564 container create 8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:05:13 np0005603621 systemd[1]: Started libpod-conmon-8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628.scope.
Jan 31 04:05:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.662376743 +0000 UTC m=+0.020694724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.761089599 +0000 UTC m=+0.119407590 container init 8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.767011966 +0000 UTC m=+0.125329947 container start 8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:05:13 np0005603621 suspicious_payne[395097]: 167 167
Jan 31 04:05:13 np0005603621 systemd[1]: libpod-8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628.scope: Deactivated successfully.
Jan 31 04:05:13 np0005603621 conmon[395097]: conmon 8b059aad8624ad431377 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628.scope/container/memory.events
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.770678841 +0000 UTC m=+0.128996852 container attach 8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.771162566 +0000 UTC m=+0.129480547 container died 8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 04:05:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0c74f8704877eccaa646028457531dcce15dbb3aba9d793cd94d34b11455a858-merged.mount: Deactivated successfully.
Jan 31 04:05:13 np0005603621 podman[395081]: 2026-01-31 09:05:13.806407209 +0000 UTC m=+0.164725200 container remove 8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:05:13 np0005603621 systemd[1]: libpod-conmon-8b059aad8624ad43137773c64253e602db2468d612be60ca3bded346c272f628.scope: Deactivated successfully.
Jan 31 04:05:13 np0005603621 podman[395120]: 2026-01-31 09:05:13.910902237 +0000 UTC m=+0.031245798 container create c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:05:13 np0005603621 systemd[1]: Started libpod-conmon-c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22.scope.
Jan 31 04:05:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e450eecccc071147b3f736d2c18a812149fbcd0c4ed1c6513e1b46d9b9f2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e450eecccc071147b3f736d2c18a812149fbcd0c4ed1c6513e1b46d9b9f2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e450eecccc071147b3f736d2c18a812149fbcd0c4ed1c6513e1b46d9b9f2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f9e450eecccc071147b3f736d2c18a812149fbcd0c4ed1c6513e1b46d9b9f2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:13 np0005603621 podman[395120]: 2026-01-31 09:05:13.97689031 +0000 UTC m=+0.097233901 container init c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:05:13 np0005603621 podman[395120]: 2026-01-31 09:05:13.980986279 +0000 UTC m=+0.101329840 container start c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:05:13 np0005603621 podman[395120]: 2026-01-31 09:05:13.983684174 +0000 UTC m=+0.104027735 container attach c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:05:13 np0005603621 podman[395120]: 2026-01-31 09:05:13.8976873 +0000 UTC m=+0.018030861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.445 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.671 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.672 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.672 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.673 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.673 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.674 247403 INFO nova.compute.manager [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Terminating instance#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.676 247403 DEBUG nova.compute.manager [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:05:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:05:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596759210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:05:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:05:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3596759210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:05:14 np0005603621 kernel: tap9f804c26-f0 (unregistering): left promiscuous mode
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]: {
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:    "0": [
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:        {
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "devices": [
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "/dev/loop3"
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            ],
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "lv_name": "ceph_lv0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "lv_size": "7511998464",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "name": "ceph_lv0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "tags": {
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.cluster_name": "ceph",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.crush_device_class": "",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.encrypted": "0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.osd_id": "0",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.type": "block",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:                "ceph.vdo": "0"
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            },
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "type": "block",
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:            "vg_name": "ceph_vg0"
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:        }
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]:    ]
Jan 31 04:05:14 np0005603621 sweet_almeida[395137]: }
Jan 31 04:05:14 np0005603621 NetworkManager[49013]: <info>  [1769850314.7159] device (tap9f804c26-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:05:14 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:14Z|00865|binding|INFO|Releasing lport 9f804c26-f01b-41b8-b1d4-03168c468d85 from this chassis (sb_readonly=0)
Jan 31 04:05:14 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:14Z|00866|binding|INFO|Setting lport 9f804c26-f01b-41b8-b1d4-03168c468d85 down in Southbound
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:14Z|00867|binding|INFO|Removing iface tap9f804c26-f0 ovn-installed in OVS
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.723 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.731 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:14.733 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:d9:2e 10.100.0.7'], port_security=['fa:16:3e:dc:d9:2e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b2aba414-44a8-4432-92fc-c23c3abcf4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-903337233', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '8', 'neutron:security_group_ids': '00bd35f9-3831-4580-a3f7-f87667bbbdd7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.224', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89356e16-4ed0-4fd6-bb34-95ca088d447a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=9f804c26-f01b-41b8-b1d4-03168c468d85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:05:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:14.734 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 9f804c26-f01b-41b8-b1d4-03168c468d85 in datapath 91760e6c-de0b-4340-85b5-40ec6c5aa9f4 unbound from our chassis#033[00m
Jan 31 04:05:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:14.736 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 91760e6c-de0b-4340-85b5-40ec6c5aa9f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:05:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:14.737 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[10ccd3f6-08c1-4259-9f30-0dcd1743acf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:14 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:14.738 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 namespace which is not needed anymore#033[00m
Jan 31 04:05:14 np0005603621 systemd[1]: libpod-c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22.scope: Deactivated successfully.
Jan 31 04:05:14 np0005603621 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Jan 31 04:05:14 np0005603621 podman[395120]: 2026-01-31 09:05:14.754007046 +0000 UTC m=+0.874350607 container died c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:05:14 np0005603621 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000ca.scope: Consumed 5.456s CPU time.
Jan 31 04:05:14 np0005603621 systemd-machined[212769]: Machine qemu-99-instance-000000ca terminated.
Jan 31 04:05:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9f9e450eecccc071147b3f736d2c18a812149fbcd0c4ed1c6513e1b46d9b9f2b-merged.mount: Deactivated successfully.
Jan 31 04:05:14 np0005603621 podman[395120]: 2026-01-31 09:05:14.863301476 +0000 UTC m=+0.983645057 container remove c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:05:14 np0005603621 systemd[1]: libpod-conmon-c39d1e0052a7d0750d48ad5683b015f040e94bd0653eaabee56739d53701ac22.scope: Deactivated successfully.
Jan 31 04:05:14 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [NOTICE]   (394574) : haproxy version is 2.8.14-c23fe91
Jan 31 04:05:14 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [NOTICE]   (394574) : path to executable is /usr/sbin/haproxy
Jan 31 04:05:14 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [WARNING]  (394574) : Exiting Master process...
Jan 31 04:05:14 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [WARNING]  (394574) : Exiting Master process...
Jan 31 04:05:14 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [ALERT]    (394574) : Current worker (394576) exited with code 143 (Terminated)
Jan 31 04:05:14 np0005603621 neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4[394570]: [WARNING]  (394574) : All workers exited. Exiting... (0)
Jan 31 04:05:14 np0005603621 systemd[1]: libpod-9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7.scope: Deactivated successfully.
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.897 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 podman[395181]: 2026-01-31 09:05:14.898046613 +0000 UTC m=+0.079699537 container died 9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.905 247403 INFO nova.virt.libvirt.driver [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Instance destroyed successfully.#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.906 247403 DEBUG nova.objects.instance [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid b2aba414-44a8-4432-92fc-c23c3abcf4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.927 247403 DEBUG nova.virt.libvirt.vif [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-90775299',display_name='tempest-TestNetworkBasicOps-server-90775299',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-90775299',id=202,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBgOKq7YCkpBGeBUgUhAHWNvJY7nlw+fNPD3n7gd6evkiM0+aOdaHIKYACIgc7snVclbAWXy87grwytzPllPR8S/Q+HBTk7WHlIw+T0F1jBq+4obRoDTJwSKlpIhzGt/mA==',key_name='tempest-TestNetworkBasicOps-327699935',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:05:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-d5gr31fc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:05:09Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=b2aba414-44a8-4432-92fc-c23c3abcf4e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.927 247403 DEBUG nova.network.os_vif_util [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "9f804c26-f01b-41b8-b1d4-03168c468d85", "address": "fa:16:3e:dc:d9:2e", "network": {"id": "91760e6c-de0b-4340-85b5-40ec6c5aa9f4", "bridge": "br-int", "label": "tempest-network-smoke--1655108605", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f804c26-f0", "ovs_interfaceid": "9f804c26-f01b-41b8-b1d4-03168c468d85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.928 247403 DEBUG nova.network.os_vif_util [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.928 247403 DEBUG os_vif [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.930 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f804c26-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.931 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.932 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:14 np0005603621 nova_compute[247399]: 2026-01-31 09:05:14.934 247403 INFO os_vif [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:d9:2e,bridge_name='br-int',has_traffic_filtering=True,id=9f804c26-f01b-41b8-b1d4-03168c468d85,network=Network(91760e6c-de0b-4340-85b5-40ec6c5aa9f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap9f804c26-f0')#033[00m
Jan 31 04:05:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7-userdata-shm.mount: Deactivated successfully.
Jan 31 04:05:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0bc46e0f3f36ca70794cad6f0982adca18424fe27d19a23094b5875bc4aee648-merged.mount: Deactivated successfully.
Jan 31 04:05:14 np0005603621 podman[395181]: 2026-01-31 09:05:14.962378533 +0000 UTC m=+0.144031467 container cleanup 9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 04:05:14 np0005603621 systemd[1]: libpod-conmon-9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7.scope: Deactivated successfully.
Jan 31 04:05:15 np0005603621 podman[395269]: 2026-01-31 09:05:15.021801818 +0000 UTC m=+0.039164897 container remove 9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.025 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[546ca04a-0f44-4c22-b8fc-4907281fad9f]: (4, ('Sat Jan 31 09:05:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 (9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7)\n9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7\nSat Jan 31 09:05:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 (9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7)\n9b1dbc3ba2fbcf1a2f4dc83d0edc14838a13a9b849e2fab6f05e898742a817f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.027 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4406bca6-7f6d-4b70-aa0c-cfb1596777e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.028 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91760e6c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.029 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:15 np0005603621 kernel: tap91760e6c-d0: left promiscuous mode
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.035 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.038 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4aab1a16-d89a-4d0b-9031-2fb2b8b8b995]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.053 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a27d5e25-7071-47d7-92bd-98a4ebc86d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.054 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa7e307-be73-4fcf-898d-683a7d8f69e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.064 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1563426d-ec07-44fc-b2a0-9cd39ca055a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 961477, 'reachable_time': 15051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395340, 'error': None, 'target': 'ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.067 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-91760e6c-de0b-4340-85b5-40ec6c5aa9f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:05:15 np0005603621 systemd[1]: run-netns-ovnmeta\x2d91760e6c\x2dde0b\x2d4340\x2d85b5\x2d40ec6c5aa9f4.mount: Deactivated successfully.
Jan 31 04:05:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:15.067 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[2b371fcb-1ad8-4e28-b0be-9af267d1fb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.092 247403 DEBUG nova.compute.manager [req-9dfbb1bc-faa5-44cc-a1a8-780f2ef1e13a req-c8e3f16f-2c79-4885-80f6-4c6294ee5831 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received event network-vif-unplugged-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.092 247403 DEBUG oslo_concurrency.lockutils [req-9dfbb1bc-faa5-44cc-a1a8-780f2ef1e13a req-c8e3f16f-2c79-4885-80f6-4c6294ee5831 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.093 247403 DEBUG oslo_concurrency.lockutils [req-9dfbb1bc-faa5-44cc-a1a8-780f2ef1e13a req-c8e3f16f-2c79-4885-80f6-4c6294ee5831 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.093 247403 DEBUG oslo_concurrency.lockutils [req-9dfbb1bc-faa5-44cc-a1a8-780f2ef1e13a req-c8e3f16f-2c79-4885-80f6-4c6294ee5831 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.093 247403 DEBUG nova.compute.manager [req-9dfbb1bc-faa5-44cc-a1a8-780f2ef1e13a req-c8e3f16f-2c79-4885-80f6-4c6294ee5831 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] No waiting events found dispatching network-vif-unplugged-9f804c26-f01b-41b8-b1d4-03168c468d85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.093 247403 DEBUG nova.compute.manager [req-9dfbb1bc-faa5-44cc-a1a8-780f2ef1e13a req-c8e3f16f-2c79-4885-80f6-4c6294ee5831 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received event network-vif-unplugged-9f804c26-f01b-41b8-b1d4-03168c468d85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:05:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.2 MiB/s rd, 5.8 MiB/s wr, 272 op/s
Jan 31 04:05:15 np0005603621 nova_compute[247399]: 2026-01-31 09:05:15.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:15.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:15.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:15 np0005603621 podman[395388]: 2026-01-31 09:05:15.375355387 +0000 UTC m=+0.068627457 container create 37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:05:15 np0005603621 podman[395388]: 2026-01-31 09:05:15.328368764 +0000 UTC m=+0.021640854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:05:15 np0005603621 systemd[1]: Started libpod-conmon-37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156.scope.
Jan 31 04:05:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:15 np0005603621 podman[395388]: 2026-01-31 09:05:15.529358657 +0000 UTC m=+0.222630757 container init 37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elgamal, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:05:15 np0005603621 podman[395388]: 2026-01-31 09:05:15.535381407 +0000 UTC m=+0.228653477 container start 37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 04:05:15 np0005603621 systemd[1]: libpod-37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156.scope: Deactivated successfully.
Jan 31 04:05:15 np0005603621 focused_elgamal[395404]: 167 167
Jan 31 04:05:15 np0005603621 conmon[395404]: conmon 37181fb2f2a0729d8204 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156.scope/container/memory.events
Jan 31 04:05:15 np0005603621 podman[395388]: 2026-01-31 09:05:15.578270121 +0000 UTC m=+0.271542211 container attach 37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:05:15 np0005603621 podman[395388]: 2026-01-31 09:05:15.579850561 +0000 UTC m=+0.273122641 container died 37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:05:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b6517532cf7c4dab44c891ad87bc80aed19343060c673b3263763a6577c3cee0-merged.mount: Deactivated successfully.
Jan 31 04:05:16 np0005603621 podman[395388]: 2026-01-31 09:05:16.050413053 +0000 UTC m=+0.743685123 container remove 37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:05:16 np0005603621 systemd[1]: libpod-conmon-37181fb2f2a0729d820461b3815bbea894fa5266a8836e5d0f17fd1859070156.scope: Deactivated successfully.
Jan 31 04:05:16 np0005603621 nova_compute[247399]: 2026-01-31 09:05:16.160 247403 INFO nova.virt.libvirt.driver [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Deleting instance files /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2_del#033[00m
Jan 31 04:05:16 np0005603621 nova_compute[247399]: 2026-01-31 09:05:16.161 247403 INFO nova.virt.libvirt.driver [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Deletion of /var/lib/nova/instances/b2aba414-44a8-4432-92fc-c23c3abcf4e2_del complete#033[00m
Jan 31 04:05:16 np0005603621 nova_compute[247399]: 2026-01-31 09:05:16.220 247403 INFO nova.compute.manager [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Took 1.54 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:05:16 np0005603621 nova_compute[247399]: 2026-01-31 09:05:16.220 247403 DEBUG oslo.service.loopingcall [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:05:16 np0005603621 nova_compute[247399]: 2026-01-31 09:05:16.221 247403 DEBUG nova.compute.manager [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:05:16 np0005603621 nova_compute[247399]: 2026-01-31 09:05:16.221 247403 DEBUG nova.network.neutron [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:05:16 np0005603621 podman[395428]: 2026-01-31 09:05:16.142781478 +0000 UTC m=+0.018487564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:05:16 np0005603621 podman[395428]: 2026-01-31 09:05:16.256217248 +0000 UTC m=+0.131923304 container create 6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:05:16 np0005603621 systemd[1]: Started libpod-conmon-6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6.scope.
Jan 31 04:05:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d330ad1944f83b65c7a065fcfa18faa9b6e97e4e15399446f6d94b50cb1fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d330ad1944f83b65c7a065fcfa18faa9b6e97e4e15399446f6d94b50cb1fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d330ad1944f83b65c7a065fcfa18faa9b6e97e4e15399446f6d94b50cb1fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18d330ad1944f83b65c7a065fcfa18faa9b6e97e4e15399446f6d94b50cb1fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:16 np0005603621 podman[395428]: 2026-01-31 09:05:16.493836159 +0000 UTC m=+0.369542235 container init 6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:05:16 np0005603621 podman[395428]: 2026-01-31 09:05:16.499709953 +0000 UTC m=+0.375416009 container start 6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:05:16 np0005603621 podman[395428]: 2026-01-31 09:05:16.584776458 +0000 UTC m=+0.460482514 container attach 6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.026 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 4.7 MiB/s wr, 222 op/s
Jan 31 04:05:17 np0005603621 kind_goodall[395446]: {
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:        "osd_id": 0,
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:        "type": "bluestore"
Jan 31 04:05:17 np0005603621 kind_goodall[395446]:    }
Jan 31 04:05:17 np0005603621 kind_goodall[395446]: }
Jan 31 04:05:17 np0005603621 systemd[1]: libpod-6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6.scope: Deactivated successfully.
Jan 31 04:05:17 np0005603621 podman[395428]: 2026-01-31 09:05:17.261463485 +0000 UTC m=+1.137169581 container died 6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 04:05:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d18d330ad1944f83b65c7a065fcfa18faa9b6e97e4e15399446f6d94b50cb1fe-merged.mount: Deactivated successfully.
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.301 247403 DEBUG nova.compute.manager [req-da09f475-c314-4ce5-b169-faa06d92748d req-2213a81e-01f0-4204-9727-5dba551bd683 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.302 247403 DEBUG oslo_concurrency.lockutils [req-da09f475-c314-4ce5-b169-faa06d92748d req-2213a81e-01f0-4204-9727-5dba551bd683 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.302 247403 DEBUG oslo_concurrency.lockutils [req-da09f475-c314-4ce5-b169-faa06d92748d req-2213a81e-01f0-4204-9727-5dba551bd683 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.302 247403 DEBUG oslo_concurrency.lockutils [req-da09f475-c314-4ce5-b169-faa06d92748d req-2213a81e-01f0-4204-9727-5dba551bd683 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.302 247403 DEBUG nova.compute.manager [req-da09f475-c314-4ce5-b169-faa06d92748d req-2213a81e-01f0-4204-9727-5dba551bd683 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] No waiting events found dispatching network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:05:17 np0005603621 nova_compute[247399]: 2026-01-31 09:05:17.302 247403 WARNING nova.compute.manager [req-da09f475-c314-4ce5-b169-faa06d92748d req-2213a81e-01f0-4204-9727-5dba551bd683 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Received unexpected event network-vif-plugged-9f804c26-f01b-41b8-b1d4-03168c468d85 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:05:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:17.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:17 np0005603621 podman[395428]: 2026-01-31 09:05:17.314602232 +0000 UTC m=+1.190308288 container remove 6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:05:17 np0005603621 systemd[1]: libpod-conmon-6282946d46b001470df9e05ecc8eab929b8e85be04f13bc31390d3af964a47b6.scope: Deactivated successfully.
Jan 31 04:05:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:05:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:05:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:05:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:05:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c32d6b71-eda0-463e-8e33-c5dbc81c8e35 does not exist
Jan 31 04:05:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 48966c40-636c-48e9-b3d5-ea8d141f168f does not exist
Jan 31 04:05:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4b7e4821-aa8c-4149-982d-034cae62a429 does not exist
Jan 31 04:05:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:05:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:05:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 368 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.8 MiB/s wr, 179 op/s
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.214 247403 DEBUG nova.network.neutron [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.245 247403 INFO nova.compute.manager [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Took 3.02 seconds to deallocate network for instance.#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.297 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.297 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.349 247403 DEBUG oslo_concurrency.processutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:19.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399772262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.767 247403 DEBUG oslo_concurrency.processutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.772 247403 DEBUG nova.compute.provider_tree [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.823 247403 DEBUG nova.scheduler.client.report [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.856 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.921 247403 INFO nova.scheduler.client.report [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance b2aba414-44a8-4432-92fc-c23c3abcf4e2#033[00m
Jan 31 04:05:19 np0005603621 nova_compute[247399]: 2026-01-31 09:05:19.934 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:20 np0005603621 nova_compute[247399]: 2026-01-31 09:05:20.045 247403 DEBUG oslo_concurrency.lockutils [None req-d72c121a-21be-44f6-817f-379cf9baad84 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b2aba414-44a8-4432-92fc-c23c3abcf4e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.270 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.271 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.296 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:05:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:21.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.409 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.410 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.415 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.415 247403 INFO nova.compute.claims [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:05:21 np0005603621 nova_compute[247399]: 2026-01-31 09:05:21.605 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668573933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.001 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.006 247403 DEBUG nova.compute.provider_tree [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.028 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.034 247403 DEBUG nova.scheduler.client.report [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.064 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.065 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.129 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.129 247403 DEBUG nova.network.neutron [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.160 247403 INFO nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.187 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.330 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.331 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.332 247403 INFO nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Creating image(s)#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.360 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.392 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.423 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.428 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.519 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.520 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.520 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.521 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.544 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.547 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 043a7f0d-cf1d-4486-9f27-6191777451e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.870 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 043a7f0d-cf1d-4486-9f27-6191777451e5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:22 np0005603621 nova_compute[247399]: 2026-01-31 09:05:22.947 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] resizing rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.069 247403 DEBUG nova.objects.instance [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'migration_context' on Instance uuid 043a7f0d-cf1d-4486-9f27-6191777451e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.088 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.088 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Ensure instance console log exists: /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.089 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.089 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.089 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 357 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 254 KiB/s wr, 73 op/s
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.174 247403 DEBUG nova.policy [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ebd43008d7a64b8bbf97a2304b1f78b6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.226 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.226 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.226 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.227 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.227 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:23.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3473808379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.649 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.781 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.782 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4000MB free_disk=20.986831665039062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.783 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.783 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.880 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 043a7f0d-cf1d-4486-9f27-6191777451e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.880 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.880 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:05:23 np0005603621 nova_compute[247399]: 2026-01-31 09:05:23.956 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:05:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214288493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.380 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.385 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.412 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.489 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.489 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.864 247403 DEBUG nova.network.neutron [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Successfully created port: df02fc02-d5d2-434a-bbaa-ec067a59a165 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:05:24 np0005603621 nova_compute[247399]: 2026-01-31 09:05:24.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 371 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 982 KiB/s rd, 736 KiB/s wr, 85 op/s
Jan 31 04:05:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:25.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:25.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.490 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.491 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.491 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.513 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.513 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.514 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:25 np0005603621 nova_compute[247399]: 2026-01-31 09:05:25.514 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.674 247403 DEBUG nova.network.neutron [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Successfully updated port: df02fc02-d5d2-434a-bbaa-ec067a59a165 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.694 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.694 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquired lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.694 247403 DEBUG nova.network.neutron [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.953 247403 DEBUG nova.compute.manager [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-changed-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.953 247403 DEBUG nova.compute.manager [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Refreshing instance network info cache due to event network-changed-df02fc02-d5d2-434a-bbaa-ec067a59a165. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:05:26 np0005603621 nova_compute[247399]: 2026-01-31 09:05:26.953 247403 DEBUG oslo_concurrency.lockutils [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:05:27 np0005603621 nova_compute[247399]: 2026-01-31 09:05:27.030 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 371 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 531 KiB/s wr, 59 op/s
Jan 31 04:05:27 np0005603621 nova_compute[247399]: 2026-01-31 09:05:27.203 247403 DEBUG nova.network.neutron [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:05:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:27.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:27.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:28 np0005603621 nova_compute[247399]: 2026-01-31 09:05:28.573 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:28 np0005603621 nova_compute[247399]: 2026-01-31 09:05:28.703 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.250 247403 DEBUG nova.network.neutron [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updating instance_info_cache with network_info: [{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.278 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Releasing lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.279 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Instance network_info: |[{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.281 247403 DEBUG oslo_concurrency.lockutils [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.281 247403 DEBUG nova.network.neutron [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Refreshing network info cache for port df02fc02-d5d2-434a-bbaa-ec067a59a165 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.287 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Start _get_guest_xml network_info=[{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.296 247403 WARNING nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.301 247403 DEBUG nova.virt.libvirt.host [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.302 247403 DEBUG nova.virt.libvirt.host [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.309 247403 DEBUG nova.virt.libvirt.host [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.309 247403 DEBUG nova.virt.libvirt.host [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.310 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.310 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.311 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.311 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.311 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.311 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.312 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.312 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.312 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.312 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.312 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.313 247403 DEBUG nova.virt.hardware [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.315 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:29.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:29.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:05:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343089799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.804 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.828 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.832 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.906 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850314.9048553, b2aba414-44a8-4432-92fc-c23c3abcf4e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.907 247403 INFO nova.compute.manager [-] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.943 247403 DEBUG nova.compute.manager [None req-3d1589b1-ea8b-46f2-9d2e-a6661544e9f9 - - - - - -] [instance: b2aba414-44a8-4432-92fc-c23c3abcf4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:29 np0005603621 nova_compute[247399]: 2026-01-31 09:05:29.991 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:05:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654666404' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.258 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.259 247403 DEBUG nova.virt.libvirt.vif [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:05:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ac',id=203,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsM8ucbcLk7gHnPNXCFsfbSILu282eZqKZSJV5U/2sSlklHOZS+gPhRNh+sslA5BGbKks4QdgEA5arD5QttOnofBo05fYkaik2+ZtO+xPryR+haHMGCwxK1z5EvdE26yQ==',key_name='tempest-TestSecurityGroupsBasicOps-1996952146',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-z5mz1fvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:05:22Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=043a7f0d-cf1d-4486-9f27-6191777451e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.259 247403 DEBUG nova.network.os_vif_util [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.260 247403 DEBUG nova.network.os_vif_util [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.261 247403 DEBUG nova.objects.instance [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'pci_devices' on Instance uuid 043a7f0d-cf1d-4486-9f27-6191777451e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.290 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <uuid>043a7f0d-cf1d-4486-9f27-6191777451e5</uuid>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <name>instance-000000cb</name>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536</nova:name>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:05:29</nova:creationTime>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:user uuid="ebd43008d7a64b8bbf97a2304b1f78b6">tempest-TestSecurityGroupsBasicOps-1802479850-project-member</nova:user>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:project uuid="0c7930b92fc3471f87d9fe78ee56e71e">tempest-TestSecurityGroupsBasicOps-1802479850</nova:project>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <nova:port uuid="df02fc02-d5d2-434a-bbaa-ec067a59a165">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <entry name="serial">043a7f0d-cf1d-4486-9f27-6191777451e5</entry>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <entry name="uuid">043a7f0d-cf1d-4486-9f27-6191777451e5</entry>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/043a7f0d-cf1d-4486-9f27-6191777451e5_disk">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/043a7f0d-cf1d-4486-9f27-6191777451e5_disk.config">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:58:69:03"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <target dev="tapdf02fc02-d5"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/console.log" append="off"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:05:30 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:05:30 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:05:30 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:05:30 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.290 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Preparing to wait for external event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.291 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.291 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.291 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.292 247403 DEBUG nova.virt.libvirt.vif [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:05:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ac',id=203,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsM8ucbcLk7gHnPNXCFsfbSILu282eZqKZSJV5U/2sSlklHOZS+gPhRNh+sslA5BGbKks4QdgEA5arD5QttOnofBo05fYkaik2+ZtO+xPryR+haHMGCwxK1z5EvdE26yQ==',key_name='tempest-TestSecurityGroupsBasicOps-1996952146',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-z5mz1fvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:05:22Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=043a7f0d-cf1d-4486-9f27-6191777451e5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.292 247403 DEBUG nova.network.os_vif_util [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.293 247403 DEBUG nova.network.os_vif_util [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.293 247403 DEBUG os_vif [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.294 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.294 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.294 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.297 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.297 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf02fc02-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.297 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf02fc02-d5, col_values=(('external_ids', {'iface-id': 'df02fc02-d5d2-434a-bbaa-ec067a59a165', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:69:03', 'vm-uuid': '043a7f0d-cf1d-4486-9f27-6191777451e5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.299 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:30 np0005603621 NetworkManager[49013]: <info>  [1769850330.3005] manager: (tapdf02fc02-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.308 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.311 247403 INFO os_vif [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5')#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.373 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.373 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.374 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No VIF found with MAC fa:16:3e:58:69:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.374 247403 INFO nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Using config drive#033[00m
Jan 31 04:05:30 np0005603621 nova_compute[247399]: 2026-01-31 09:05:30.396 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:30.546 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:30.546 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:30.546 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:31.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.377 247403 INFO nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Creating config drive at /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/disk.config#033[00m
Jan 31 04:05:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:31.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.382 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps7c8edkt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.508 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmps7c8edkt" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:31 np0005603621 podman[395930]: 2026-01-31 09:05:31.52025955 +0000 UTC m=+0.069903017 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:05:31 np0005603621 podman[395929]: 2026-01-31 09:05:31.521128078 +0000 UTC m=+0.071037614 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.534 247403 DEBUG nova.storage.rbd_utils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 043a7f0d-cf1d-4486-9f27-6191777451e5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.538 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/disk.config 043a7f0d-cf1d-4486-9f27-6191777451e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.713 247403 DEBUG oslo_concurrency.processutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/disk.config 043a7f0d-cf1d-4486-9f27-6191777451e5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.714 247403 INFO nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Deleting local config drive /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5/disk.config because it was imported into RBD.#033[00m
Jan 31 04:05:31 np0005603621 kernel: tapdf02fc02-d5: entered promiscuous mode
Jan 31 04:05:31 np0005603621 NetworkManager[49013]: <info>  [1769850331.7635] manager: (tapdf02fc02-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Jan 31 04:05:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:31Z|00868|binding|INFO|Claiming lport df02fc02-d5d2-434a-bbaa-ec067a59a165 for this chassis.
Jan 31 04:05:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:31Z|00869|binding|INFO|df02fc02-d5d2-434a-bbaa-ec067a59a165: Claiming fa:16:3e:58:69:03 10.100.0.12
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.769 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:31 np0005603621 systemd-udevd[396022]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:05:31 np0005603621 NetworkManager[49013]: <info>  [1769850331.7917] device (tapdf02fc02-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:05:31 np0005603621 NetworkManager[49013]: <info>  [1769850331.7925] device (tapdf02fc02-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.794 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:31Z|00870|binding|INFO|Setting lport df02fc02-d5d2-434a-bbaa-ec067a59a165 ovn-installed in OVS
Jan 31 04:05:31 np0005603621 nova_compute[247399]: 2026-01-31 09:05:31.799 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.800 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:69:03 10.100.0.12'], port_security=['fa:16:3e:58:69:03 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '043a7f0d-cf1d-4486-9f27-6191777451e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e443d0c-908b-4b97-93d0-cb59d6beeccd c2f34d7d-4ab1-44e0-94fb-fd650b894cb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d1940d0-cb6c-4d17-bfb8-17698f086013, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=df02fc02-d5d2-434a-bbaa-ec067a59a165) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:05:31 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:31Z|00871|binding|INFO|Setting lport df02fc02-d5d2-434a-bbaa-ec067a59a165 up in Southbound
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.801 159734 INFO neutron.agent.ovn.metadata.agent [-] Port df02fc02-d5d2-434a-bbaa-ec067a59a165 in datapath 16fc744d-a1ee-45f8-ba60-eca1d72dccd9 bound to our chassis#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.803 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16fc744d-a1ee-45f8-ba60-eca1d72dccd9#033[00m
Jan 31 04:05:31 np0005603621 systemd-machined[212769]: New machine qemu-100-instance-000000cb.
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.810 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7f658b35-98e7-48e0-980d-c6f06b77ea27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.811 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16fc744d-a1 in ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.813 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16fc744d-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.813 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e15891bf-bc56-4a10-8ba1-baae6a946885]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.814 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9d07e85e-05e6-4dab-9254-375aeda88322]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 systemd[1]: Started Virtual Machine qemu-100-instance-000000cb.
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.822 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[14cfad9e-a503-4915-a372-16f1ba1bb229]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.830 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[66358b71-25ca-454d-bb02-952799b3e71d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.854 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[126d8e73-a0ba-48c4-bb9b-75b145067628]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.859 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[952bec95-0b94-422f-ae8d-e580aec5a1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 NetworkManager[49013]: <info>  [1769850331.8615] manager: (tap16fc744d-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.884 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf951b8-3eef-4f9a-8e2a-f0c32b940b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.888 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad85273-1bf0-424e-85b0-f50225057a69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 NetworkManager[49013]: <info>  [1769850331.9097] device (tap16fc744d-a0): carrier: link connected
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.914 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b4eeecd2-5535-4a2a-894e-8dcbd82ee1db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.931 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3d95d485-2d61-45ea-b2d2-00c5a06fbdb8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fc744d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:92:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 963742, 'reachable_time': 38613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396058, 'error': None, 'target': 'ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.946 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a86a8171-c86a-4005-8c1a-16d8ba20389a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:9262'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 963742, 'tstamp': 963742}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 396059, 'error': None, 'target': 'ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.964 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7ad86d-9b5c-413e-9c8d-bc1cf7b96678]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16fc744d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:92:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 963742, 'reachable_time': 38613, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 396060, 'error': None, 'target': 'ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:31.990 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[639846b9-8178-4adf-9459-8db429514f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.031 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.033 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[09d114f1-b219-4821-9a49-a496e29daf66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.034 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fc744d-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.034 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.035 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16fc744d-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:32 np0005603621 NetworkManager[49013]: <info>  [1769850332.0370] manager: (tap16fc744d-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 31 04:05:32 np0005603621 kernel: tap16fc744d-a0: entered promiscuous mode
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.037 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.040 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16fc744d-a0, col_values=(('external_ids', {'iface-id': 'bd8594ef-0ca2-4242-9821-52605e8c82a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:32 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:32Z|00872|binding|INFO|Releasing lport bd8594ef-0ca2-4242-9821-52605e8c82a6 from this chassis (sb_readonly=0)
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.042 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.043 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16fc744d-a1ee-45f8-ba60-eca1d72dccd9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16fc744d-a1ee-45f8-ba60-eca1d72dccd9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.044 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a0773e08-0086-41a3-ac57-5d2a5a146109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.045 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-16fc744d-a1ee-45f8-ba60-eca1d72dccd9
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/16fc744d-a1ee-45f8-ba60-eca1d72dccd9.pid.haproxy
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 16fc744d-a1ee-45f8-ba60-eca1d72dccd9
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:05:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:32.045 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'env', 'PROCESS_TAG=haproxy-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16fc744d-a1ee-45f8-ba60-eca1d72dccd9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.137 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850332.1362455, 043a7f0d-cf1d-4486-9f27-6191777451e5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.137 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] VM Started (Lifecycle Event)#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.178 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.183 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850332.1368449, 043a7f0d-cf1d-4486-9f27-6191777451e5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.184 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.235 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.238 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.293 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:05:32 np0005603621 podman[396135]: 2026-01-31 09:05:32.362912875 +0000 UTC m=+0.046066474 container create a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 31 04:05:32 np0005603621 systemd[1]: Started libpod-conmon-a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854.scope.
Jan 31 04:05:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:05:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ffab1346322ed705e861418ef45e48ae42c65a0251727aa1aaa321d0a860321/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:05:32 np0005603621 podman[396135]: 2026-01-31 09:05:32.334482438 +0000 UTC m=+0.017636067 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:05:32 np0005603621 podman[396135]: 2026-01-31 09:05:32.438911024 +0000 UTC m=+0.122064663 container init a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:05:32 np0005603621 podman[396135]: 2026-01-31 09:05:32.446577666 +0000 UTC m=+0.129731295 container start a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:05:32 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [NOTICE]   (396155) : New worker (396157) forked
Jan 31 04:05:32 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [NOTICE]   (396155) : Loading success.
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.724 247403 DEBUG nova.compute.manager [req-f0f5d332-e4a4-465e-ba65-74c7b4da36f5 req-a002ea7f-fd13-4747-8a93-803b318f6bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.724 247403 DEBUG oslo_concurrency.lockutils [req-f0f5d332-e4a4-465e-ba65-74c7b4da36f5 req-a002ea7f-fd13-4747-8a93-803b318f6bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.724 247403 DEBUG oslo_concurrency.lockutils [req-f0f5d332-e4a4-465e-ba65-74c7b4da36f5 req-a002ea7f-fd13-4747-8a93-803b318f6bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.725 247403 DEBUG oslo_concurrency.lockutils [req-f0f5d332-e4a4-465e-ba65-74c7b4da36f5 req-a002ea7f-fd13-4747-8a93-803b318f6bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.725 247403 DEBUG nova.compute.manager [req-f0f5d332-e4a4-465e-ba65-74c7b4da36f5 req-a002ea7f-fd13-4747-8a93-803b318f6bbc fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Processing event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.725 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.728 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850332.728039, 043a7f0d-cf1d-4486-9f27-6191777451e5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.728 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.731 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.733 247403 INFO nova.virt.libvirt.driver [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Instance spawned successfully.#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.733 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.781 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.784 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.793 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.794 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.794 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.794 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.795 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.795 247403 DEBUG nova.virt.libvirt.driver [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.836 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:05:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.961 247403 INFO nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Took 10.63 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:05:32 np0005603621 nova_compute[247399]: 2026-01-31 09:05:32.962 247403 DEBUG nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:05:33 np0005603621 nova_compute[247399]: 2026-01-31 09:05:33.076 247403 INFO nova.compute.manager [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Took 11.70 seconds to build instance.#033[00m
Jan 31 04:05:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 04:05:33 np0005603621 nova_compute[247399]: 2026-01-31 09:05:33.110 247403 DEBUG oslo_concurrency.lockutils [None req-9a157eb1-0fce-45df-8e00-6a3c2c2a630b ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:33 np0005603621 nova_compute[247399]: 2026-01-31 09:05:33.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:33.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:33.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:33 np0005603621 nova_compute[247399]: 2026-01-31 09:05:33.724 247403 DEBUG nova.network.neutron [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updated VIF entry in instance network info cache for port df02fc02-d5d2-434a-bbaa-ec067a59a165. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:05:33 np0005603621 nova_compute[247399]: 2026-01-31 09:05:33.725 247403 DEBUG nova.network.neutron [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updating instance_info_cache with network_info: [{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:05:33 np0005603621 nova_compute[247399]: 2026-01-31 09:05:33.779 247403 DEBUG oslo_concurrency.lockutils [req-5d6ed1b2-c2d3-4ede-a7a8-35ddbf7c8217 req-bc7a100b-980b-48e4-a817-a565b7e15305 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.956 247403 DEBUG nova.compute.manager [req-0f18e10c-a322-4f68-87b9-0468994c4095 req-28fdd51a-69b1-47c2-828f-80969d23bd76 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.956 247403 DEBUG oslo_concurrency.lockutils [req-0f18e10c-a322-4f68-87b9-0468994c4095 req-28fdd51a-69b1-47c2-828f-80969d23bd76 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.956 247403 DEBUG oslo_concurrency.lockutils [req-0f18e10c-a322-4f68-87b9-0468994c4095 req-28fdd51a-69b1-47c2-828f-80969d23bd76 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.957 247403 DEBUG oslo_concurrency.lockutils [req-0f18e10c-a322-4f68-87b9-0468994c4095 req-28fdd51a-69b1-47c2-828f-80969d23bd76 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.957 247403 DEBUG nova.compute.manager [req-0f18e10c-a322-4f68-87b9-0468994c4095 req-28fdd51a-69b1-47c2-828f-80969d23bd76 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] No waiting events found dispatching network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:05:34 np0005603621 nova_compute[247399]: 2026-01-31 09:05:34.958 247403 WARNING nova.compute.manager [req-0f18e10c-a322-4f68-87b9-0468994c4095 req-28fdd51a-69b1-47c2-828f-80969d23bd76 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received unexpected event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:05:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 488 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 31 04:05:35 np0005603621 nova_compute[247399]: 2026-01-31 09:05:35.300 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:35.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:37 np0005603621 nova_compute[247399]: 2026-01-31 09:05:37.034 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 403 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 1.3 MiB/s wr, 29 op/s
Jan 31 04:05:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:37.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:38 np0005603621 NetworkManager[49013]: <info>  [1769850338.5024] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Jan 31 04:05:38 np0005603621 NetworkManager[49013]: <info>  [1769850338.5031] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Jan 31 04:05:38 np0005603621 nova_compute[247399]: 2026-01-31 09:05:38.501 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:05:38 np0005603621 nova_compute[247399]: 2026-01-31 09:05:38.568 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:38 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:38Z|00873|binding|INFO|Releasing lport bd8594ef-0ca2-4242-9821-52605e8c82a6 from this chassis (sb_readonly=0)
Jan 31 04:05:38 np0005603621 nova_compute[247399]: 2026-01-31 09:05:38.592 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:05:38
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'volumes']
Jan 31 04:05:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 393 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.3 MiB/s wr, 160 op/s
Jan 31 04:05:39 np0005603621 nova_compute[247399]: 2026-01-31 09:05:39.127 247403 DEBUG nova.compute.manager [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-changed-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:05:39 np0005603621 nova_compute[247399]: 2026-01-31 09:05:39.127 247403 DEBUG nova.compute.manager [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Refreshing instance network info cache due to event network-changed-df02fc02-d5d2-434a-bbaa-ec067a59a165. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:05:39 np0005603621 nova_compute[247399]: 2026-01-31 09:05:39.127 247403 DEBUG oslo_concurrency.lockutils [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:05:39 np0005603621 nova_compute[247399]: 2026-01-31 09:05:39.127 247403 DEBUG oslo_concurrency.lockutils [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:05:39 np0005603621 nova_compute[247399]: 2026-01-31 09:05:39.128 247403 DEBUG nova.network.neutron [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Refreshing network info cache for port df02fc02-d5d2-434a-bbaa-ec067a59a165 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:05:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:39.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:39.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3835187053
Jan 31 04:05:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Jan 31 04:05:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Jan 31 04:05:39 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Jan 31 04:05:40 np0005603621 nova_compute[247399]: 2026-01-31 09:05:40.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 226 op/s
Jan 31 04:05:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:41.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:41 np0005603621 nova_compute[247399]: 2026-01-31 09:05:41.332 247403 DEBUG nova.network.neutron [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updated VIF entry in instance network info cache for port df02fc02-d5d2-434a-bbaa-ec067a59a165. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:05:41 np0005603621 nova_compute[247399]: 2026-01-31 09:05:41.332 247403 DEBUG nova.network.neutron [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updating instance_info_cache with network_info: [{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:05:41 np0005603621 nova_compute[247399]: 2026-01-31 09:05:41.366 247403 DEBUG oslo_concurrency.lockutils [req-7f6711a9-530f-421a-9678-43806bdb7102 req-9d4d3bbd-9e27-45de-99db-ed56d8c31494 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:05:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:41.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:42 np0005603621 nova_compute[247399]: 2026-01-31 09:05:42.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 385 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 240 op/s
Jan 31 04:05:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:43.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:43.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:44 np0005603621 nova_compute[247399]: 2026-01-31 09:05:44.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 389 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 486 KiB/s wr, 234 op/s
Jan 31 04:05:45 np0005603621 nova_compute[247399]: 2026-01-31 09:05:45.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:45.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:45.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:45 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:45Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:69:03 10.100.0.12
Jan 31 04:05:45 np0005603621 ovn_controller[149152]: 2026-01-31T09:05:45Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:69:03 10.100.0.12
Jan 31 04:05:47 np0005603621 nova_compute[247399]: 2026-01-31 09:05:47.038 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 389 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 486 KiB/s wr, 234 op/s
Jan 31 04:05:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:47.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:47.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Jan 31 04:05:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Jan 31 04:05:47 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Jan 31 04:05:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Jan 31 04:05:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.014648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850349014678, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1744, "num_deletes": 254, "total_data_size": 2909784, "memory_usage": 2955600, "flush_reason": "Manual Compaction"}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850349054183, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 2864427, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76317, "largest_seqno": 78060, "table_properties": {"data_size": 2856398, "index_size": 4842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17462, "raw_average_key_size": 20, "raw_value_size": 2840038, "raw_average_value_size": 3368, "num_data_blocks": 212, "num_entries": 843, "num_filter_entries": 843, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850192, "oldest_key_time": 1769850192, "file_creation_time": 1769850349, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 39589 microseconds, and 4698 cpu microseconds.
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.054235) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 2864427 bytes OK
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.054251) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.062666) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.062682) EVENT_LOG_v1 {"time_micros": 1769850349062677, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.062698) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2902362, prev total WAL file size 2902362, number of live WAL files 2.
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.063208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(2797KB)], [176(10MB)]
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850349063238, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 13586825, "oldest_snapshot_seqno": -1}
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 390 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 590 KiB/s rd, 5.0 MiB/s wr, 137 op/s
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10461 keys, 11763207 bytes, temperature: kUnknown
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850349207507, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 11763207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11698011, "index_size": 37947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26181, "raw_key_size": 275896, "raw_average_key_size": 26, "raw_value_size": 11517677, "raw_average_value_size": 1101, "num_data_blocks": 1436, "num_entries": 10461, "num_filter_entries": 10461, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850349, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.208084) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11763207 bytes
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.213075) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 94.1 rd, 81.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.2 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(8.8) write-amplify(4.1) OK, records in: 10985, records dropped: 524 output_compression: NoCompression
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.213100) EVENT_LOG_v1 {"time_micros": 1769850349213089, "job": 110, "event": "compaction_finished", "compaction_time_micros": 144341, "compaction_time_cpu_micros": 21679, "output_level": 6, "num_output_files": 1, "total_output_size": 11763207, "num_input_records": 10985, "num_output_records": 10461, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850349213498, "job": 110, "event": "table_file_deletion", "file_number": 178}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850349214396, "job": 110, "event": "table_file_deletion", "file_number": 176}
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.063130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.214447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.214451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.214453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.214455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:49 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:05:49.214457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:05:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:49.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:49.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00216759433742788 of space, bias 1.0, pg target 0.650278301228364 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0059437087055648615 of space, bias 1.0, pg target 1.7831126116694584 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:05:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:05:50 np0005603621 nova_compute[247399]: 2026-01-31 09:05:50.310 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 373 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 938 KiB/s rd, 6.3 MiB/s wr, 231 op/s
Jan 31 04:05:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:51.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:52 np0005603621 nova_compute[247399]: 2026-01-31 09:05:52.041 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 943 KiB/s rd, 5.8 MiB/s wr, 224 op/s
Jan 31 04:05:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:53.127 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:05:53 np0005603621 nova_compute[247399]: 2026-01-31 09:05:53.127 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:53 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:05:53.128 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:05:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:53.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 950 KiB/s rd, 5.8 MiB/s wr, 233 op/s
Jan 31 04:05:55 np0005603621 nova_compute[247399]: 2026-01-31 09:05:55.314 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:05:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:05:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:05:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:55.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:05:57 np0005603621 nova_compute[247399]: 2026-01-31 09:05:57.043 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:05:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 381 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 1.2 MiB/s wr, 126 op/s
Jan 31 04:05:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:57.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:05:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Jan 31 04:05:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Jan 31 04:05:57 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Jan 31 04:05:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 395 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 387 KiB/s rd, 1.5 MiB/s wr, 147 op/s
Jan 31 04:05:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:05:59.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:05:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:05:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:05:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:05:59.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:00 np0005603621 nova_compute[247399]: 2026-01-31 09:06:00.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:01 np0005603621 nova_compute[247399]: 2026-01-31 09:06:01.009 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 427 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 927 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 31 04:06:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb4196f0 =====
Jan 31 04:06:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb4196f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb4196f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:01.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:02 np0005603621 podman[396283]: 2026-01-31 09:06:02.017766026 +0000 UTC m=+0.072602424 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 04:06:02 np0005603621 podman[396282]: 2026-01-31 09:06:02.020292135 +0000 UTC m=+0.075129903 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 31 04:06:02 np0005603621 nova_compute[247399]: 2026-01-31 09:06:02.045 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:02.130 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:06:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Jan 31 04:06:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 427 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 79 op/s
Jan 31 04:06:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Jan 31 04:06:03 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Jan 31 04:06:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:03.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:03.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Jan 31 04:06:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Jan 31 04:06:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Jan 31 04:06:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 392 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.5 MiB/s wr, 165 op/s
Jan 31 04:06:05 np0005603621 nova_compute[247399]: 2026-01-31 09:06:05.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:06:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2104648191' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:06:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:06:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2104648191' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:06:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:05.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:07 np0005603621 nova_compute[247399]: 2026-01-31 09:06:07.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 392 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 31 04:06:07 np0005603621 nova_compute[247399]: 2026-01-31 09:06:07.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:07.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:06:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:06:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 287 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.1 MiB/s wr, 254 op/s
Jan 31 04:06:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:09.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:09.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:10 np0005603621 nova_compute[247399]: 2026-01-31 09:06:10.322 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 239 op/s
Jan 31 04:06:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:06:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:11.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:06:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:11.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:12 np0005603621 nova_compute[247399]: 2026-01-31 09:06:12.091 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:12 np0005603621 ovn_controller[149152]: 2026-01-31T09:06:12Z|00874|binding|INFO|Releasing lport bd8594ef-0ca2-4242-9821-52605e8c82a6 from this chassis (sb_readonly=0)
Jan 31 04:06:12 np0005603621 nova_compute[247399]: 2026-01-31 09:06:12.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Jan 31 04:06:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Jan 31 04:06:12 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Jan 31 04:06:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 292 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 523 KiB/s rd, 2.4 MiB/s wr, 128 op/s
Jan 31 04:06:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:13.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 2.3 MiB/s wr, 124 op/s
Jan 31 04:06:15 np0005603621 nova_compute[247399]: 2026-01-31 09:06:15.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:15.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:15.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:16 np0005603621 nova_compute[247399]: 2026-01-31 09:06:16.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:06:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1484847298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:06:17 np0005603621 nova_compute[247399]: 2026-01-31 09:06:17.093 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 295 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 2.3 MiB/s wr, 124 op/s
Jan 31 04:06:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:17.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2192afe7-4567-4ad2-8d5d-f8137cae60de does not exist
Jan 31 04:06:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6a720e87-60ed-4815-b213-23cd671b3ba0 does not exist
Jan 31 04:06:18 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 088a9358-ddcb-4e84-87bb-210ad62bb02f does not exist
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:06:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:06:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 362 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 221 KiB/s rd, 4.5 MiB/s wr, 73 op/s
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.313818698 +0000 UTC m=+0.041035396 container create b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 04:06:19 np0005603621 systemd[1]: Started libpod-conmon-b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78.scope.
Jan 31 04:06:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.382038631 +0000 UTC m=+0.109255359 container init b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.387046099 +0000 UTC m=+0.114262797 container start b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.294214199 +0000 UTC m=+0.021430937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.391119868 +0000 UTC m=+0.118336616 container attach b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elbakyan, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:06:19 np0005603621 festive_elbakyan[396746]: 167 167
Jan 31 04:06:19 np0005603621 systemd[1]: libpod-b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78.scope: Deactivated successfully.
Jan 31 04:06:19 np0005603621 conmon[396746]: conmon b72555f97652becf22e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78.scope/container/memory.events
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.393939537 +0000 UTC m=+0.121156225 container died b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 04:06:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d69391c5903aea175ea37150713bff1f02211549fddbfcb38c68dd64144a8b28-merged.mount: Deactivated successfully.
Jan 31 04:06:19 np0005603621 podman[396730]: 2026-01-31 09:06:19.425493613 +0000 UTC m=+0.152710311 container remove b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:06:19 np0005603621 systemd[1]: libpod-conmon-b72555f97652becf22e262a51a5a255755e984f8edf7d02a3944b03e722f0b78.scope: Deactivated successfully.
Jan 31 04:06:19 np0005603621 podman[396769]: 2026-01-31 09:06:19.544909712 +0000 UTC m=+0.034093507 container create 9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:06:19 np0005603621 systemd[1]: Started libpod-conmon-9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666.scope.
Jan 31 04:06:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:06:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172a5968a6da5e17863b004fcde5fd9f60fc327981dce855de0950f66aef11ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172a5968a6da5e17863b004fcde5fd9f60fc327981dce855de0950f66aef11ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172a5968a6da5e17863b004fcde5fd9f60fc327981dce855de0950f66aef11ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172a5968a6da5e17863b004fcde5fd9f60fc327981dce855de0950f66aef11ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/172a5968a6da5e17863b004fcde5fd9f60fc327981dce855de0950f66aef11ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:19 np0005603621 podman[396769]: 2026-01-31 09:06:19.617381289 +0000 UTC m=+0.106565104 container init 9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:06:19 np0005603621 podman[396769]: 2026-01-31 09:06:19.623889475 +0000 UTC m=+0.113073300 container start 9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:06:19 np0005603621 podman[396769]: 2026-01-31 09:06:19.530710334 +0000 UTC m=+0.019894159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:06:19 np0005603621 podman[396769]: 2026-01-31 09:06:19.627621962 +0000 UTC m=+0.116805797 container attach 9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:06:19 np0005603621 ovn_controller[149152]: 2026-01-31T09:06:19Z|00875|binding|INFO|Releasing lport bd8594ef-0ca2-4242-9821-52605e8c82a6 from this chassis (sb_readonly=0)
Jan 31 04:06:19 np0005603621 nova_compute[247399]: 2026-01-31 09:06:19.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:19.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:20 np0005603621 nova_compute[247399]: 2026-01-31 09:06:20.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:20 np0005603621 reverent_archimedes[396785]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:06:20 np0005603621 reverent_archimedes[396785]: --> relative data size: 1.0
Jan 31 04:06:20 np0005603621 reverent_archimedes[396785]: --> All data devices are unavailable
Jan 31 04:06:20 np0005603621 systemd[1]: libpod-9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666.scope: Deactivated successfully.
Jan 31 04:06:20 np0005603621 podman[396801]: 2026-01-31 09:06:20.388154676 +0000 UTC m=+0.022761050 container died 9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:06:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-172a5968a6da5e17863b004fcde5fd9f60fc327981dce855de0950f66aef11ba-merged.mount: Deactivated successfully.
Jan 31 04:06:20 np0005603621 podman[396801]: 2026-01-31 09:06:20.431042739 +0000 UTC m=+0.065649083 container remove 9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:06:20 np0005603621 systemd[1]: libpod-conmon-9b796a11afcb619ddea095545b8c36bb0927f1c6fa50deff33df47deabbed666.scope: Deactivated successfully.
Jan 31 04:06:20 np0005603621 podman[396956]: 2026-01-31 09:06:20.905917137 +0000 UTC m=+0.035540533 container create 8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:06:20 np0005603621 systemd[1]: Started libpod-conmon-8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9.scope.
Jan 31 04:06:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:06:20 np0005603621 podman[396956]: 2026-01-31 09:06:20.979076956 +0000 UTC m=+0.108700352 container init 8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:06:20 np0005603621 podman[396956]: 2026-01-31 09:06:20.985198369 +0000 UTC m=+0.114821765 container start 8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:06:20 np0005603621 podman[396956]: 2026-01-31 09:06:20.890314934 +0000 UTC m=+0.019938350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:06:20 np0005603621 romantic_proskuriakova[396972]: 167 167
Jan 31 04:06:20 np0005603621 podman[396956]: 2026-01-31 09:06:20.988051989 +0000 UTC m=+0.117675385 container attach 8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 04:06:20 np0005603621 systemd[1]: libpod-8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9.scope: Deactivated successfully.
Jan 31 04:06:20 np0005603621 podman[396956]: 2026-01-31 09:06:20.989183665 +0000 UTC m=+0.118807061 container died 8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:06:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7b369bcfebfdae453177c668511662215f3ee5e955dbb36d8d88d64f53684cab-merged.mount: Deactivated successfully.
Jan 31 04:06:21 np0005603621 podman[396956]: 2026-01-31 09:06:21.01976519 +0000 UTC m=+0.149388586 container remove 8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 04:06:21 np0005603621 systemd[1]: libpod-conmon-8d1e29c334f4b60146455ff90b299006bcf6441fba58288ed40d018019b4d8a9.scope: Deactivated successfully.
Jan 31 04:06:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 427 KiB/s rd, 4.7 MiB/s wr, 110 op/s
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.136519404 +0000 UTC m=+0.037571846 container create 0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_knuth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 04:06:21 np0005603621 systemd[1]: Started libpod-conmon-0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44.scope.
Jan 31 04:06:21 np0005603621 nova_compute[247399]: 2026-01-31 09:06:21.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:06:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81cd4d574eb67f03fbfa61d0d4c9ce2f649c3418fa5a58ba6f0e1d902d38d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.119517218 +0000 UTC m=+0.020569680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:06:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81cd4d574eb67f03fbfa61d0d4c9ce2f649c3418fa5a58ba6f0e1d902d38d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81cd4d574eb67f03fbfa61d0d4c9ce2f649c3418fa5a58ba6f0e1d902d38d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:21 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f81cd4d574eb67f03fbfa61d0d4c9ce2f649c3418fa5a58ba6f0e1d902d38d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.227517277 +0000 UTC m=+0.128569749 container init 0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.233123344 +0000 UTC m=+0.134175786 container start 0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_knuth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.237276444 +0000 UTC m=+0.138328906 container attach 0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:06:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:21.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:21 np0005603621 charming_knuth[397013]: {
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:    "0": [
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:        {
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "devices": [
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "/dev/loop3"
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            ],
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "lv_name": "ceph_lv0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "lv_size": "7511998464",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "name": "ceph_lv0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "tags": {
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.cluster_name": "ceph",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.crush_device_class": "",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.encrypted": "0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.osd_id": "0",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.type": "block",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:                "ceph.vdo": "0"
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            },
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "type": "block",
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:            "vg_name": "ceph_vg0"
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:        }
Jan 31 04:06:21 np0005603621 charming_knuth[397013]:    ]
Jan 31 04:06:21 np0005603621 charming_knuth[397013]: }
Jan 31 04:06:21 np0005603621 systemd[1]: libpod-0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44.scope: Deactivated successfully.
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.934292433 +0000 UTC m=+0.835344875 container died 0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 04:06:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1f81cd4d574eb67f03fbfa61d0d4c9ce2f649c3418fa5a58ba6f0e1d902d38d1-merged.mount: Deactivated successfully.
Jan 31 04:06:21 np0005603621 podman[396996]: 2026-01-31 09:06:21.978970373 +0000 UTC m=+0.880022815 container remove 0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_knuth, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:06:21 np0005603621 systemd[1]: libpod-conmon-0711516ed4429b093bcef2e0fafcef82f719ef0e2e43735a1823b62995e8ea44.scope: Deactivated successfully.
Jan 31 04:06:22 np0005603621 nova_compute[247399]: 2026-01-31 09:06:22.096 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:06:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1712964226' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.464760196 +0000 UTC m=+0.031241787 container create 1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ganguly, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:06:22 np0005603621 systemd[1]: Started libpod-conmon-1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7.scope.
Jan 31 04:06:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.522088585 +0000 UTC m=+0.088570176 container init 1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.527710782 +0000 UTC m=+0.094192373 container start 1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.530964605 +0000 UTC m=+0.097446226 container attach 1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:06:22 np0005603621 ecstatic_ganguly[397242]: 167 167
Jan 31 04:06:22 np0005603621 systemd[1]: libpod-1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7.scope: Deactivated successfully.
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.532002888 +0000 UTC m=+0.098484479 container died 1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ganguly, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.451491737 +0000 UTC m=+0.017973338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:06:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4086d0a6015a0c3f2e48ae72ee6646abd1b0e648aa672e28667721d3d0af1bc3-merged.mount: Deactivated successfully.
Jan 31 04:06:22 np0005603621 podman[397226]: 2026-01-31 09:06:22.564635388 +0000 UTC m=+0.131116979 container remove 1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 04:06:22 np0005603621 systemd[1]: libpod-conmon-1cf16277e6027b393a6490d64bd0c6f25224946a9a86775f2073cfbcd51904f7.scope: Deactivated successfully.
Jan 31 04:06:22 np0005603621 podman[397267]: 2026-01-31 09:06:22.682780916 +0000 UTC m=+0.036211353 container create 77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:06:22 np0005603621 systemd[1]: Started libpod-conmon-77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e.scope.
Jan 31 04:06:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:06:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a396f25b83cf2deb4d0c004ac5f534f64e3bbcb5268967f121eb320511142c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a396f25b83cf2deb4d0c004ac5f534f64e3bbcb5268967f121eb320511142c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a396f25b83cf2deb4d0c004ac5f534f64e3bbcb5268967f121eb320511142c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a396f25b83cf2deb4d0c004ac5f534f64e3bbcb5268967f121eb320511142c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:06:22 np0005603621 podman[397267]: 2026-01-31 09:06:22.761048777 +0000 UTC m=+0.114479214 container init 77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:06:22 np0005603621 podman[397267]: 2026-01-31 09:06:22.667311619 +0000 UTC m=+0.020742056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:06:22 np0005603621 podman[397267]: 2026-01-31 09:06:22.767606234 +0000 UTC m=+0.121036671 container start 77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:06:22 np0005603621 podman[397267]: 2026-01-31 09:06:22.770596458 +0000 UTC m=+0.124026895 container attach 77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:06:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 421 KiB/s rd, 4.6 MiB/s wr, 108 op/s
Jan 31 04:06:23 np0005603621 nova_compute[247399]: 2026-01-31 09:06:23.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]: {
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:        "osd_id": 0,
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:        "type": "bluestore"
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]:    }
Jan 31 04:06:23 np0005603621 elastic_neumann[397283]: }
Jan 31 04:06:23 np0005603621 systemd[1]: libpod-77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e.scope: Deactivated successfully.
Jan 31 04:06:23 np0005603621 podman[397267]: 2026-01-31 09:06:23.553411784 +0000 UTC m=+0.906842241 container died 77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:06:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-39a396f25b83cf2deb4d0c004ac5f534f64e3bbcb5268967f121eb320511142c-merged.mount: Deactivated successfully.
Jan 31 04:06:23 np0005603621 podman[397267]: 2026-01-31 09:06:23.600219271 +0000 UTC m=+0.953649708 container remove 77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_neumann, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 04:06:23 np0005603621 systemd[1]: libpod-conmon-77d26e8d72b061d799aa54cf46fb99a3c8be7ed31871f8fe08233800426b679e.scope: Deactivated successfully.
Jan 31 04:06:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:06:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:06:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0ff3671e-de4c-42fe-8805-621fe8645d09 does not exist
Jan 31 04:06:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1b1ef8c0-2bcf-4e37-a54d-40ff309856fc does not exist
Jan 31 04:06:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4d31c5f1-a4c2-4666-bace-2825e08a4e0c does not exist
Jan 31 04:06:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:23.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:06:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 31 04:06:25 np0005603621 nova_compute[247399]: 2026-01-31 09:06:25.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:25 np0005603621 nova_compute[247399]: 2026-01-31 09:06:25.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:06:25 np0005603621 nova_compute[247399]: 2026-01-31 09:06:25.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:06:25 np0005603621 nova_compute[247399]: 2026-01-31 09:06:25.329 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:06:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3657784007' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:06:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:25.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:26 np0005603621 nova_compute[247399]: 2026-01-31 09:06:26.702 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:06:26 np0005603621 nova_compute[247399]: 2026-01-31 09:06:26.702 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:06:26 np0005603621 nova_compute[247399]: 2026-01-31 09:06:26.703 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:06:26 np0005603621 nova_compute[247399]: 2026-01-31 09:06:26.703 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 043a7f0d-cf1d-4486-9f27-6191777451e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:06:27 np0005603621 nova_compute[247399]: 2026-01-31 09:06:27.098 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 332 KiB/s rd, 3.8 MiB/s wr, 84 op/s
Jan 31 04:06:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:27.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.8 MiB/s wr, 138 op/s
Jan 31 04:06:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:29.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:06:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:29.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:06:30 np0005603621 nova_compute[247399]: 2026-01-31 09:06:30.334 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:30.547 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:06:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:30.548 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:06:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:30.548 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.102 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updating instance_info_cache with network_info: [{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:06:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 609 KiB/s wr, 114 op/s
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.137 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.138 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.138 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.138 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.139 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.182 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.182 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.182 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.182 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.183 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:06:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:06:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380840176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.585 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.687 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.687 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000cb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.852 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.854 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3931MB free_disk=20.87627410888672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.854 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:06:31 np0005603621 nova_compute[247399]: 2026-01-31 09:06:31.855 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:06:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:31.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.002 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 043a7f0d-cf1d-4486-9f27-6191777451e5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.002 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.003 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.068 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.099 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447598443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.486 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:06:32 np0005603621 podman[397415]: 2026-01-31 09:06:32.489784844 +0000 UTC m=+0.046790868 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.491 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:06:32 np0005603621 podman[397416]: 2026-01-31 09:06:32.515180365 +0000 UTC m=+0.071364794 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.522 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.563 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.564 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.565 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.565 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:06:32 np0005603621 nova_compute[247399]: 2026-01-31 09:06:32.623 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.930338) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850392930388, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 781, "num_deletes": 259, "total_data_size": 1009602, "memory_usage": 1024424, "flush_reason": "Manual Compaction"}
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850392937866, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 987887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78061, "largest_seqno": 78841, "table_properties": {"data_size": 983844, "index_size": 1758, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9779, "raw_average_key_size": 20, "raw_value_size": 975365, "raw_average_value_size": 2011, "num_data_blocks": 75, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850349, "oldest_key_time": 1769850349, "file_creation_time": 1769850392, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 7556 microseconds, and 2376 cpu microseconds.
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.937902) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 987887 bytes OK
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.937919) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.941413) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.941460) EVENT_LOG_v1 {"time_micros": 1769850392941451, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.941484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 1005580, prev total WAL file size 1005580, number of live WAL files 2.
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.942137) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323634' seq:72057594037927935, type:22 .. '6C6F676D0033353135' seq:0, type:0; will stop at (end)
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(964KB)], [179(11MB)]
Jan 31 04:06:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850392942195, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 12751094, "oldest_snapshot_seqno": -1}
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10408 keys, 12593867 bytes, temperature: kUnknown
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850393113879, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 12593867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12527752, "index_size": 38994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 275804, "raw_average_key_size": 26, "raw_value_size": 12347210, "raw_average_value_size": 1186, "num_data_blocks": 1477, "num_entries": 10408, "num_filter_entries": 10408, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850392, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.114168) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 12593867 bytes
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.115151) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.2 rd, 73.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(25.7) write-amplify(12.7) OK, records in: 10946, records dropped: 538 output_compression: NoCompression
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.115173) EVENT_LOG_v1 {"time_micros": 1769850393115162, "job": 112, "event": "compaction_finished", "compaction_time_micros": 171762, "compaction_time_cpu_micros": 22582, "output_level": 6, "num_output_files": 1, "total_output_size": 12593867, "num_input_records": 10946, "num_output_records": 10408, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850393115332, "job": 112, "event": "table_file_deletion", "file_number": 181}
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850393116204, "job": 112, "event": "table_file_deletion", "file_number": 179}
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:32.942045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.116396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.116405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.116406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.116408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:06:33 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:06:33.116410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:06:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 330 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 43 KiB/s wr, 164 op/s
Jan 31 04:06:33 np0005603621 nova_compute[247399]: 2026-01-31 09:06:33.682 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:33.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:33.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:34 np0005603621 nova_compute[247399]: 2026-01-31 09:06:34.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:34 np0005603621 nova_compute[247399]: 2026-01-31 09:06:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 42 KiB/s wr, 176 op/s
Jan 31 04:06:35 np0005603621 nova_compute[247399]: 2026-01-31 09:06:35.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:35.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:35.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:37 np0005603621 nova_compute[247399]: 2026-01-31 09:06:37.101 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 175 op/s
Jan 31 04:06:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:37.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:37.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:06:38
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log']
Jan 31 04:06:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 314 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:06:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:06:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:39.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:39.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:40 np0005603621 nova_compute[247399]: 2026-01-31 09:06:40.340 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 318 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 169 op/s
Jan 31 04:06:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:41.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:41.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:42 np0005603621 nova_compute[247399]: 2026-01-31 09:06:42.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:42 np0005603621 ovn_controller[149152]: 2026-01-31T09:06:42Z|00876|binding|INFO|Releasing lport bd8594ef-0ca2-4242-9821-52605e8c82a6 from this chassis (sb_readonly=0)
Jan 31 04:06:42 np0005603621 nova_compute[247399]: 2026-01-31 09:06:42.236 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 355 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Jan 31 04:06:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:43.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:43.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 858 KiB/s rd, 4.3 MiB/s wr, 141 op/s
Jan 31 04:06:45 np0005603621 nova_compute[247399]: 2026-01-31 09:06:45.343 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:45.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:46 np0005603621 nova_compute[247399]: 2026-01-31 09:06:46.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:06:46 np0005603621 nova_compute[247399]: 2026-01-31 09:06:46.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:06:47 np0005603621 nova_compute[247399]: 2026-01-31 09:06:47.103 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 31 04:06:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:48 np0005603621 nova_compute[247399]: 2026-01-31 09:06:48.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004347911432160804 of space, bias 1.0, pg target 1.3043734296482412 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004325192222687512 of space, bias 1.0, pg target 1.2932324745835662 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:06:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 04:06:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:50 np0005603621 nova_compute[247399]: 2026-01-31 09:06:50.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 2.5 MiB/s wr, 96 op/s
Jan 31 04:06:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:06:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:51.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:06:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:52 np0005603621 nova_compute[247399]: 2026-01-31 09:06:52.105 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 473 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 31 04:06:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:53.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:54.141 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:06:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:54.142 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:06:54 np0005603621 nova_compute[247399]: 2026-01-31 09:06:54.162 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 136 KiB/s rd, 391 KiB/s wr, 36 op/s
Jan 31 04:06:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:06:55.144 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:06:55 np0005603621 nova_compute[247399]: 2026-01-31 09:06:55.349 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:55.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:55.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:56 np0005603621 nova_compute[247399]: 2026-01-31 09:06:56.896 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:57 np0005603621 nova_compute[247399]: 2026-01-31 09:06:57.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:06:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 55 KiB/s wr, 18 op/s
Jan 31 04:06:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:06:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:57.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 58 KiB/s wr, 19 op/s
Jan 31 04:06:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:06:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:06:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:06:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:06:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:06:59.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:00 np0005603621 nova_compute[247399]: 2026-01-31 09:07:00.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 26 KiB/s wr, 17 op/s
Jan 31 04:07:01 np0005603621 nova_compute[247399]: 2026-01-31 09:07:01.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:01.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:02 np0005603621 nova_compute[247399]: 2026-01-31 09:07:02.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 289 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 29 KiB/s wr, 42 op/s
Jan 31 04:07:03 np0005603621 podman[397577]: 2026-01-31 09:07:03.488340946 +0000 UTC m=+0.044294929 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:07:03 np0005603621 podman[397578]: 2026-01-31 09:07:03.50491038 +0000 UTC m=+0.060083608 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:07:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:03.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:03.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 27 KiB/s wr, 32 op/s
Jan 31 04:07:05 np0005603621 nova_compute[247399]: 2026-01-31 09:07:05.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:05.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:07 np0005603621 nova_compute[247399]: 2026-01-31 09:07:07.109 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 31 04:07:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:07.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:07:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:07:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 31 04:07:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:09.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:10 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:10Z|00877|binding|INFO|Releasing lport bd8594ef-0ca2-4242-9821-52605e8c82a6 from this chassis (sb_readonly=0)
Jan 31 04:07:10 np0005603621 nova_compute[247399]: 2026-01-31 09:07:10.139 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:10 np0005603621 nova_compute[247399]: 2026-01-31 09:07:10.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Jan 31 04:07:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:11.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:11.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:12 np0005603621 nova_compute[247399]: 2026-01-31 09:07:12.111 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 3.5 KiB/s wr, 27 op/s
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.180 247403 DEBUG nova.compute.manager [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-changed-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.180 247403 DEBUG nova.compute.manager [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Refreshing instance network info cache due to event network-changed-df02fc02-d5d2-434a-bbaa-ec067a59a165. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.180 247403 DEBUG oslo_concurrency.lockutils [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.180 247403 DEBUG oslo_concurrency.lockutils [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.181 247403 DEBUG nova.network.neutron [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Refreshing network info cache for port df02fc02-d5d2-434a-bbaa-ec067a59a165 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.485 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.486 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.486 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.486 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.486 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.487 247403 INFO nova.compute.manager [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Terminating instance#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.488 247403 DEBUG nova.compute.manager [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:07:13 np0005603621 kernel: tapdf02fc02-d5 (unregistering): left promiscuous mode
Jan 31 04:07:13 np0005603621 NetworkManager[49013]: <info>  [1769850433.5380] device (tapdf02fc02-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:07:13 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:13Z|00878|binding|INFO|Releasing lport df02fc02-d5d2-434a-bbaa-ec067a59a165 from this chassis (sb_readonly=0)
Jan 31 04:07:13 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:13Z|00879|binding|INFO|Setting lport df02fc02-d5d2-434a-bbaa-ec067a59a165 down in Southbound
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.545 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:13Z|00880|binding|INFO|Removing iface tapdf02fc02-d5 ovn-installed in OVS
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.547 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.555 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000cb.scope: Deactivated successfully.
Jan 31 04:07:13 np0005603621 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000cb.scope: Consumed 15.303s CPU time.
Jan 31 04:07:13 np0005603621 systemd-machined[212769]: Machine qemu-100-instance-000000cb terminated.
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.633 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:69:03 10.100.0.12'], port_security=['fa:16:3e:58:69:03 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '043a7f0d-cf1d-4486-9f27-6191777451e5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e443d0c-908b-4b97-93d0-cb59d6beeccd c2f34d7d-4ab1-44e0-94fb-fd650b894cb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d1940d0-cb6c-4d17-bfb8-17698f086013, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=df02fc02-d5d2-434a-bbaa-ec067a59a165) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.635 159734 INFO neutron.agent.ovn.metadata.agent [-] Port df02fc02-d5d2-434a-bbaa-ec067a59a165 in datapath 16fc744d-a1ee-45f8-ba60-eca1d72dccd9 unbound from our chassis#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.636 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16fc744d-a1ee-45f8-ba60-eca1d72dccd9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.638 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3dcc860d-44d8-44cb-a4ca-dedcb55f00cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.639 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9 namespace which is not needed anymore#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.702 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.713 247403 INFO nova.virt.libvirt.driver [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Instance destroyed successfully.#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.714 247403 DEBUG nova.objects.instance [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'resources' on Instance uuid 043a7f0d-cf1d-4486-9f27-6191777451e5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.728 247403 DEBUG nova.virt.libvirt.vif [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:05:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-997359536',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ac',id=203,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLsM8ucbcLk7gHnPNXCFsfbSILu282eZqKZSJV5U/2sSlklHOZS+gPhRNh+sslA5BGbKks4QdgEA5arD5QttOnofBo05fYkaik2+ZtO+xPryR+haHMGCwxK1z5EvdE26yQ==',key_name='tempest-TestSecurityGroupsBasicOps-1996952146',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:05:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-z5mz1fvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:05:33Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=043a7f0d-cf1d-4486-9f27-6191777451e5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.729 247403 DEBUG nova.network.os_vif_util [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.730 247403 DEBUG nova.network.os_vif_util [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.730 247403 DEBUG os_vif [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.732 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.732 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf02fc02-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.747 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.750 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.752 247403 INFO os_vif [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:69:03,bridge_name='br-int',has_traffic_filtering=True,id=df02fc02-d5d2-434a-bbaa-ec067a59a165,network=Network(16fc744d-a1ee-45f8-ba60-eca1d72dccd9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf02fc02-d5')#033[00m
Jan 31 04:07:13 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [NOTICE]   (396155) : haproxy version is 2.8.14-c23fe91
Jan 31 04:07:13 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [NOTICE]   (396155) : path to executable is /usr/sbin/haproxy
Jan 31 04:07:13 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [WARNING]  (396155) : Exiting Master process...
Jan 31 04:07:13 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [WARNING]  (396155) : Exiting Master process...
Jan 31 04:07:13 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [ALERT]    (396155) : Current worker (396157) exited with code 143 (Terminated)
Jan 31 04:07:13 np0005603621 neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9[396151]: [WARNING]  (396155) : All workers exited. Exiting... (0)
Jan 31 04:07:13 np0005603621 systemd[1]: libpod-a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854.scope: Deactivated successfully.
Jan 31 04:07:13 np0005603621 podman[397654]: 2026-01-31 09:07:13.764869044 +0000 UTC m=+0.048814522 container died a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:07:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854-userdata-shm.mount: Deactivated successfully.
Jan 31 04:07:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8ffab1346322ed705e861418ef45e48ae42c65a0251727aa1aaa321d0a860321-merged.mount: Deactivated successfully.
Jan 31 04:07:13 np0005603621 podman[397654]: 2026-01-31 09:07:13.803038308 +0000 UTC m=+0.086983786 container cleanup a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:07:13 np0005603621 systemd[1]: libpod-conmon-a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854.scope: Deactivated successfully.
Jan 31 04:07:13 np0005603621 podman[397711]: 2026-01-31 09:07:13.851320202 +0000 UTC m=+0.033445906 container remove a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.854 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[889c8b74-910b-4a6a-916d-28f80078c6bd]: (4, ('Sat Jan 31 09:07:13 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9 (a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854)\na8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854\nSat Jan 31 09:07:13 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9 (a8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854)\na8943ce14fd361cad42742433a211f0c965335a79c2879b19ce4dfd504219854\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.856 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b968e8da-4c0e-499d-84cc-0b314dd37c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.856 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16fc744d-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.858 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 kernel: tap16fc744d-a0: left promiscuous mode
Jan 31 04:07:13 np0005603621 nova_compute[247399]: 2026-01-31 09:07:13.864 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.866 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b77cf6-d2f8-491d-97d4-62b8614bdc9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.878 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e826b39a-1b49-4eea-91b3-635ab524eedd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.880 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7ba7f858-6977-4121-a09e-4e0d7e7d15db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.895 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[df1c30ff-bee9-4d9a-a2c3-13224aced080]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 963736, 'reachable_time': 38290, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397726, 'error': None, 'target': 'ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 systemd[1]: run-netns-ovnmeta\x2d16fc744d\x2da1ee\x2d45f8\x2dba60\x2deca1d72dccd9.mount: Deactivated successfully.
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.898 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16fc744d-a1ee-45f8-ba60-eca1d72dccd9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:07:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:13.899 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[24a9b75e-c1b4-40d6-93f1-f991073b1c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:13.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:14 np0005603621 nova_compute[247399]: 2026-01-31 09:07:14.475 247403 INFO nova.virt.libvirt.driver [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Deleting instance files /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5_del#033[00m
Jan 31 04:07:14 np0005603621 nova_compute[247399]: 2026-01-31 09:07:14.476 247403 INFO nova.virt.libvirt.driver [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Deletion of /var/lib/nova/instances/043a7f0d-cf1d-4486-9f27-6191777451e5_del complete#033[00m
Jan 31 04:07:14 np0005603621 nova_compute[247399]: 2026-01-31 09:07:14.754 247403 INFO nova.compute.manager [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:07:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:07:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753603475' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:07:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:07:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1753603475' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:07:14 np0005603621 nova_compute[247399]: 2026-01-31 09:07:14.877 247403 DEBUG oslo.service.loopingcall [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:07:14 np0005603621 nova_compute[247399]: 2026-01-31 09:07:14.878 247403 DEBUG nova.compute.manager [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:07:14 np0005603621 nova_compute[247399]: 2026-01-31 09:07:14.878 247403 DEBUG nova.network.neutron [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:07:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 286 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 395 KiB/s wr, 22 op/s
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.422 247403 DEBUG nova.compute.manager [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-vif-unplugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.423 247403 DEBUG oslo_concurrency.lockutils [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.423 247403 DEBUG oslo_concurrency.lockutils [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.423 247403 DEBUG oslo_concurrency.lockutils [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.423 247403 DEBUG nova.compute.manager [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] No waiting events found dispatching network-vif-unplugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 DEBUG nova.compute.manager [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-vif-unplugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 DEBUG nova.compute.manager [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 DEBUG oslo_concurrency.lockutils [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 DEBUG oslo_concurrency.lockutils [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 DEBUG oslo_concurrency.lockutils [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 DEBUG nova.compute.manager [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] No waiting events found dispatching network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:07:15 np0005603621 nova_compute[247399]: 2026-01-31 09:07:15.424 247403 WARNING nova.compute.manager [req-6367f809-99c3-4984-bb2d-86fb07e62d70 req-a2bacc1e-9790-4c4f-9856-cc7cd5b510c4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received unexpected event network-vif-plugged-df02fc02-d5d2-434a-bbaa-ec067a59a165 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:07:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:15.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.275 247403 DEBUG nova.network.neutron [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.303 247403 INFO nova.compute.manager [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Took 1.43 seconds to deallocate network for instance.#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.371 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.372 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.435 247403 DEBUG nova.network.neutron [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updated VIF entry in instance network info cache for port df02fc02-d5d2-434a-bbaa-ec067a59a165. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.436 247403 DEBUG nova.network.neutron [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Updating instance_info_cache with network_info: [{"id": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "address": "fa:16:3e:58:69:03", "network": {"id": "16fc744d-a1ee-45f8-ba60-eca1d72dccd9", "bridge": "br-int", "label": "tempest-network-smoke--1323156779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf02fc02-d5", "ovs_interfaceid": "df02fc02-d5d2-434a-bbaa-ec067a59a165", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.461 247403 DEBUG oslo_concurrency.lockutils [req-df2abe33-3a03-47a1-a0e4-27b0b530cd45 req-6a195e21-4ece-4b59-a15c-c5d6b72d5608 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-043a7f0d-cf1d-4486-9f27-6191777451e5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.484 247403 DEBUG oslo_concurrency.processutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3216738378' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.906 247403 DEBUG oslo_concurrency.processutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.911 247403 DEBUG nova.compute.provider_tree [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.938 247403 DEBUG nova.scheduler.client.report [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:16 np0005603621 nova_compute[247399]: 2026-01-31 09:07:16.982 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.012 247403 INFO nova.scheduler.client.report [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Deleted allocations for instance 043a7f0d-cf1d-4486-9f27-6191777451e5#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.112 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 280 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 806 KiB/s rd, 896 KiB/s wr, 50 op/s
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.148 247403 DEBUG oslo_concurrency.lockutils [None req-c88fcff2-e649-46a9-9d56-f5b2cca46834 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "043a7f0d-cf1d-4486-9f27-6191777451e5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.219 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.680 247403 DEBUG nova.compute.manager [req-7ab72ba9-1c57-4310-bb5c-24e14e0031f5 req-68606c57-c7c5-4ef0-a1d2-c5482d690048 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Received event network-vif-deleted-df02fc02-d5d2-434a-bbaa-ec067a59a165 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.680 247403 INFO nova.compute.manager [req-7ab72ba9-1c57-4310-bb5c-24e14e0031f5 req-68606c57-c7c5-4ef0-a1d2-c5482d690048 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Neutron deleted interface df02fc02-d5d2-434a-bbaa-ec067a59a165; detaching it from the instance and deleting it from the info cache#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.680 247403 DEBUG nova.network.neutron [req-7ab72ba9-1c57-4310-bb5c-24e14e0031f5 req-68606c57-c7c5-4ef0-a1d2-c5482d690048 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Jan 31 04:07:17 np0005603621 nova_compute[247399]: 2026-01-31 09:07:17.683 247403 DEBUG nova.compute.manager [req-7ab72ba9-1c57-4310-bb5c-24e14e0031f5 req-68606c57-c7c5-4ef0-a1d2-c5482d690048 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Detach interface failed, port_id=df02fc02-d5d2-434a-bbaa-ec067a59a165, reason: Instance 043a7f0d-cf1d-4486-9f27-6191777451e5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 31 04:07:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:17.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:17.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:18 np0005603621 nova_compute[247399]: 2026-01-31 09:07:18.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 31 04:07:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:19.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:19.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 31 04:07:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:21.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:21.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:22 np0005603621 nova_compute[247399]: 2026-01-31 09:07:22.114 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 31 04:07:23 np0005603621 nova_compute[247399]: 2026-01-31 09:07:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:23 np0005603621 nova_compute[247399]: 2026-01-31 09:07:23.750 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:23.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:24 np0005603621 nova_compute[247399]: 2026-01-31 09:07:24.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:07:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a3a1cbad-1d7a-41bf-a4d4-8edf3bd99aa8 does not exist
Jan 31 04:07:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8aa6e133-5a7b-4e8d-a86b-dc291491b88e does not exist
Jan 31 04:07:24 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7622a774-5546-4365-a354-716ec1ce1e07 does not exist
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:07:24 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.146024056 +0000 UTC m=+0.051795615 container create 52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 04:07:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 31 04:07:25 np0005603621 systemd[1]: Started libpod-conmon-52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f.scope.
Jan 31 04:07:25 np0005603621 nova_compute[247399]: 2026-01-31 09:07:25.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:25 np0005603621 nova_compute[247399]: 2026-01-31 09:07:25.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:07:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.131301842 +0000 UTC m=+0.037073421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.223453571 +0000 UTC m=+0.129225160 container init 52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.228634704 +0000 UTC m=+0.134406263 container start 52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.232035292 +0000 UTC m=+0.137806851 container attach 52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:07:25 np0005603621 sweet_easley[398095]: 167 167
Jan 31 04:07:25 np0005603621 systemd[1]: libpod-52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f.scope: Deactivated successfully.
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.234407146 +0000 UTC m=+0.140178705 container died 52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:07:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cde896a6f773ca1bba6b6ce0182002de45cf3895a86e6467cb543e9d4c2aad4c-merged.mount: Deactivated successfully.
Jan 31 04:07:25 np0005603621 podman[398078]: 2026-01-31 09:07:25.266354054 +0000 UTC m=+0.172125613 container remove 52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:07:25 np0005603621 systemd[1]: libpod-conmon-52c274ee0c70eefc1c4d59f1a0d90005d596d17484b3f12d56a3f702cc591a2f.scope: Deactivated successfully.
Jan 31 04:07:25 np0005603621 podman[398118]: 2026-01-31 09:07:25.417252337 +0000 UTC m=+0.046705665 container create f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:07:25 np0005603621 systemd[1]: Started libpod-conmon-f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5.scope.
Jan 31 04:07:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2751132169634dcf781339a65a681821ec527b2d62064f42a00669cc7069cb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2751132169634dcf781339a65a681821ec527b2d62064f42a00669cc7069cb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2751132169634dcf781339a65a681821ec527b2d62064f42a00669cc7069cb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2751132169634dcf781339a65a681821ec527b2d62064f42a00669cc7069cb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2751132169634dcf781339a65a681821ec527b2d62064f42a00669cc7069cb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:25 np0005603621 podman[398118]: 2026-01-31 09:07:25.397153373 +0000 UTC m=+0.026606741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:07:25 np0005603621 podman[398118]: 2026-01-31 09:07:25.506266746 +0000 UTC m=+0.135720094 container init f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:07:25 np0005603621 podman[398118]: 2026-01-31 09:07:25.517172751 +0000 UTC m=+0.146626069 container start f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:07:25 np0005603621 podman[398118]: 2026-01-31 09:07:25.520812175 +0000 UTC m=+0.150265493 container attach f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:07:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:25.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:25.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:26 np0005603621 nova_compute[247399]: 2026-01-31 09:07:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:26 np0005603621 nova_compute[247399]: 2026-01-31 09:07:26.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:07:26 np0005603621 nova_compute[247399]: 2026-01-31 09:07:26.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:07:26 np0005603621 youthful_lamarr[398135]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:07:26 np0005603621 youthful_lamarr[398135]: --> relative data size: 1.0
Jan 31 04:07:26 np0005603621 youthful_lamarr[398135]: --> All data devices are unavailable
Jan 31 04:07:26 np0005603621 nova_compute[247399]: 2026-01-31 09:07:26.229 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:07:26 np0005603621 systemd[1]: libpod-f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5.scope: Deactivated successfully.
Jan 31 04:07:26 np0005603621 podman[398118]: 2026-01-31 09:07:26.256929078 +0000 UTC m=+0.886382396 container died f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:07:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a2751132169634dcf781339a65a681821ec527b2d62064f42a00669cc7069cb7-merged.mount: Deactivated successfully.
Jan 31 04:07:26 np0005603621 podman[398118]: 2026-01-31 09:07:26.304306734 +0000 UTC m=+0.933760052 container remove f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 31 04:07:26 np0005603621 systemd[1]: libpod-conmon-f7b56a3662a5b64f121d41179ea4e4924faf0a939265a7453ee72486581a29e5.scope: Deactivated successfully.
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.800679009 +0000 UTC m=+0.043372490 container create 8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:07:26 np0005603621 systemd[1]: Started libpod-conmon-8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520.scope.
Jan 31 04:07:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.778003603 +0000 UTC m=+0.020697154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.875945474 +0000 UTC m=+0.118639015 container init 8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.880756007 +0000 UTC m=+0.123449478 container start 8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:07:26 np0005603621 elastic_blackwell[398321]: 167 167
Jan 31 04:07:26 np0005603621 systemd[1]: libpod-8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520.scope: Deactivated successfully.
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.884481744 +0000 UTC m=+0.127175335 container attach 8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:07:26 np0005603621 conmon[398321]: conmon 8bb8251aecb67bb09d92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520.scope/container/memory.events
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.885567748 +0000 UTC m=+0.128261279 container died 8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:07:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bd020bb91eba11c315dd3067b2e9fc1c0d4a33dc21d3e510d91328e05b6449a0-merged.mount: Deactivated successfully.
Jan 31 04:07:26 np0005603621 podman[398304]: 2026-01-31 09:07:26.930151345 +0000 UTC m=+0.172844836 container remove 8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:07:26 np0005603621 systemd[1]: libpod-conmon-8bb8251aecb67bb09d925ef4fea9422d8901b70affcbc2fe62365e4c09d9d520.scope: Deactivated successfully.
Jan 31 04:07:27 np0005603621 podman[398345]: 2026-01-31 09:07:27.050875395 +0000 UTC m=+0.030743521 container create 5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:07:27 np0005603621 systemd[1]: Started libpod-conmon-5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf.scope.
Jan 31 04:07:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbba0fd90225c05e0c908552b2bfb76a8813c519cf05b0e7cab4ba644c4d4058/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbba0fd90225c05e0c908552b2bfb76a8813c519cf05b0e7cab4ba644c4d4058/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbba0fd90225c05e0c908552b2bfb76a8813c519cf05b0e7cab4ba644c4d4058/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbba0fd90225c05e0c908552b2bfb76a8813c519cf05b0e7cab4ba644c4d4058/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:27 np0005603621 podman[398345]: 2026-01-31 09:07:27.104396235 +0000 UTC m=+0.084264381 container init 5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:07:27 np0005603621 podman[398345]: 2026-01-31 09:07:27.110958181 +0000 UTC m=+0.090826307 container start 5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:07:27 np0005603621 podman[398345]: 2026-01-31 09:07:27.114671499 +0000 UTC m=+0.094539625 container attach 5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.116 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:27 np0005603621 podman[398345]: 2026-01-31 09:07:27.037044699 +0000 UTC m=+0.016912855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:07:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.4 MiB/s wr, 143 op/s
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.240 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.241 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.242 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.243 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.243 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/293534406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.650 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:27 np0005603621 kind_jang[398362]: {
Jan 31 04:07:27 np0005603621 kind_jang[398362]:    "0": [
Jan 31 04:07:27 np0005603621 kind_jang[398362]:        {
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "devices": [
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "/dev/loop3"
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            ],
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "lv_name": "ceph_lv0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "lv_size": "7511998464",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "name": "ceph_lv0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "tags": {
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.cluster_name": "ceph",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.crush_device_class": "",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.encrypted": "0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.osd_id": "0",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.type": "block",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:                "ceph.vdo": "0"
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            },
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "type": "block",
Jan 31 04:07:27 np0005603621 kind_jang[398362]:            "vg_name": "ceph_vg0"
Jan 31 04:07:27 np0005603621 kind_jang[398362]:        }
Jan 31 04:07:27 np0005603621 kind_jang[398362]:    ]
Jan 31 04:07:27 np0005603621 kind_jang[398362]: }
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.793 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.794 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4045MB free_disk=20.967235565185547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.794 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.794 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:27 np0005603621 systemd[1]: libpod-5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf.scope: Deactivated successfully.
Jan 31 04:07:27 np0005603621 podman[398394]: 2026-01-31 09:07:27.846435954 +0000 UTC m=+0.022395268 container died 5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:07:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cbba0fd90225c05e0c908552b2bfb76a8813c519cf05b0e7cab4ba644c4d4058-merged.mount: Deactivated successfully.
Jan 31 04:07:27 np0005603621 podman[398394]: 2026-01-31 09:07:27.907198602 +0000 UTC m=+0.083157915 container remove 5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jang, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:07:27 np0005603621 systemd[1]: libpod-conmon-5868f6f88e7f6a1e08d320958e161e0b215cc655fb996369dbc22578c6428bcf.scope: Deactivated successfully.
Jan 31 04:07:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.938 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.938 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:07:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:27.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:27 np0005603621 nova_compute[247399]: 2026-01-31 09:07:27.977 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:27.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.333463396 +0000 UTC m=+0.031913839 container create 01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:07:28 np0005603621 systemd[1]: Started libpod-conmon-01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3.scope.
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.377 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.382 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.394025457 +0000 UTC m=+0.092475930 container init 01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.398047204 +0000 UTC m=+0.096497657 container start 01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.400992807 +0000 UTC m=+0.099443270 container attach 01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:07:28 np0005603621 nostalgic_carver[398588]: 167 167
Jan 31 04:07:28 np0005603621 systemd[1]: libpod-01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3.scope: Deactivated successfully.
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.403276769 +0000 UTC m=+0.101727232 container died 01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.403 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.320033552 +0000 UTC m=+0.018484015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:07:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-204b8ff9a5bc8fae787d3cb0230ae6f2c109ea12d527d321838357d0dc7bc54f-merged.mount: Deactivated successfully.
Jan 31 04:07:28 np0005603621 podman[398570]: 2026-01-31 09:07:28.434627429 +0000 UTC m=+0.133077882 container remove 01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.439 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.439 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:28 np0005603621 systemd[1]: libpod-conmon-01c333f3405213e0c12c746b2e7b3a168791e1f9d57babd8c7cce3cc3f4a42f3.scope: Deactivated successfully.
Jan 31 04:07:28 np0005603621 podman[398612]: 2026-01-31 09:07:28.545153116 +0000 UTC m=+0.035333965 container create 5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:07:28 np0005603621 systemd[1]: Started libpod-conmon-5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230.scope.
Jan 31 04:07:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d239175830ccbb6827c0703071090888d2910770f712cae9880945dd4da7f2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d239175830ccbb6827c0703071090888d2910770f712cae9880945dd4da7f2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d239175830ccbb6827c0703071090888d2910770f712cae9880945dd4da7f2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:28 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d239175830ccbb6827c0703071090888d2910770f712cae9880945dd4da7f2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:28 np0005603621 podman[398612]: 2026-01-31 09:07:28.609789096 +0000 UTC m=+0.099969955 container init 5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamport, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:07:28 np0005603621 podman[398612]: 2026-01-31 09:07:28.615241819 +0000 UTC m=+0.105422688 container start 5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamport, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:07:28 np0005603621 podman[398612]: 2026-01-31 09:07:28.618535083 +0000 UTC m=+0.108715962 container attach 5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:07:28 np0005603621 podman[398612]: 2026-01-31 09:07:28.528934175 +0000 UTC m=+0.019115064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.711 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850433.7104394, 043a7f0d-cf1d-4486-9f27-6191777451e5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.713 247403 INFO nova.compute.manager [-] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.752 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.756 247403 DEBUG nova.compute.manager [None req-c8388dee-8047-4d69-ab42-7be7a604e460 - - - - - -] [instance: 043a7f0d-cf1d-4486-9f27-6191777451e5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:28 np0005603621 nova_compute[247399]: 2026-01-31 09:07:28.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:29 np0005603621 nova_compute[247399]: 2026-01-31 09:07:29.035 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 958 KiB/s wr, 192 op/s
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]: {
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:        "osd_id": 0,
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:        "type": "bluestore"
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]:    }
Jan 31 04:07:29 np0005603621 elegant_lamport[398629]: }
Jan 31 04:07:29 np0005603621 systemd[1]: libpod-5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230.scope: Deactivated successfully.
Jan 31 04:07:29 np0005603621 podman[398612]: 2026-01-31 09:07:29.389250298 +0000 UTC m=+0.879431157 container died 5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:07:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1d239175830ccbb6827c0703071090888d2910770f712cae9880945dd4da7f2d-merged.mount: Deactivated successfully.
Jan 31 04:07:29 np0005603621 podman[398612]: 2026-01-31 09:07:29.431927504 +0000 UTC m=+0.922108363 container remove 5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lamport, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:07:29 np0005603621 systemd[1]: libpod-conmon-5cd2d1aa3e4e5f900271522fe14c0bb0d67d04ab77e86046172fd52b04913230.scope: Deactivated successfully.
Jan 31 04:07:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:07:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:07:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:07:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:07:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a385b617-b388-4316-beed-2399116b53cd does not exist
Jan 31 04:07:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4ea133d1-e286-4d48-ae3e-cb557771410d does not exist
Jan 31 04:07:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 58c6c214-c6f8-4bfa-9dd9-94f4e2d9920d does not exist
Jan 31 04:07:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:07:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:29.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:07:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:07:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:07:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:30.549 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:30.549 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:30.549 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 25 KiB/s wr, 114 op/s
Jan 31 04:07:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:31.770 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:31 np0005603621 nova_compute[247399]: 2026-01-31 09:07:31.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:31 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:31.771 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:07:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:31.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:32 np0005603621 nova_compute[247399]: 2026-01-31 09:07:32.117 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 35 KiB/s wr, 119 op/s
Jan 31 04:07:33 np0005603621 nova_compute[247399]: 2026-01-31 09:07:33.783 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:33.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:34 np0005603621 podman[398716]: 2026-01-31 09:07:34.492566483 +0000 UTC m=+0.049800243 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:07:34 np0005603621 podman[398717]: 2026-01-31 09:07:34.513968028 +0000 UTC m=+0.071176647 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:07:34 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:34.772 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 35 KiB/s wr, 119 op/s
Jan 31 04:07:35 np0005603621 nova_compute[247399]: 2026-01-31 09:07:35.440 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:35 np0005603621 nova_compute[247399]: 2026-01-31 09:07:35.440 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:35 np0005603621 nova_compute[247399]: 2026-01-31 09:07:35.441 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:35.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:37 np0005603621 nova_compute[247399]: 2026-01-31 09:07:37.118 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 261 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 975 KiB/s wr, 131 op/s
Jan 31 04:07:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:37.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:37.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:38 np0005603621 nova_compute[247399]: 2026-01-31 09:07:38.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:07:38
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images']
Jan 31 04:07:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:07:38 np0005603621 nova_compute[247399]: 2026-01-31 09:07:38.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:07:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:07:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:39.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:40.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 04:07:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:41.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:42 np0005603621 nova_compute[247399]: 2026-01-31 09:07:42.119 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Jan 31 04:07:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Jan 31 04:07:42 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Jan 31 04:07:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 645 KiB/s rd, 2.6 MiB/s wr, 85 op/s
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.306 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.308 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.347 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.465 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.466 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.480 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.481 247403 INFO nova.compute.claims [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.655 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:43 np0005603621 nova_compute[247399]: 2026-01-31 09:07:43.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:44.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2206325779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.213 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.219 247403 DEBUG nova.compute.provider_tree [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.238 247403 DEBUG nova.scheduler.client.report [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.276 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.277 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.327 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.328 247403 DEBUG nova.network.neutron [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.346 247403 INFO nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.365 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.465 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.467 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.467 247403 INFO nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Creating image(s)#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.502 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.533 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.771 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.775 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.842 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.843 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.844 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.845 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.878 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.882 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:44 np0005603621 nova_compute[247399]: 2026-01-31 09:07:44.947 247403 DEBUG nova.policy [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd442c7ba12ed444ca6d4dcc5cfd36150', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:07:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Jan 31 04:07:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:46.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.436 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.505 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] resizing rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.725 247403 DEBUG nova.objects.instance [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'migration_context' on Instance uuid a3d51eec-c79b-4863-bbb8-d9c2e39742e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.743 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.744 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Ensure instance console log exists: /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.744 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.745 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:46 np0005603621 nova_compute[247399]: 2026-01-31 09:07:46.745 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:47 np0005603621 nova_compute[247399]: 2026-01-31 09:07:47.120 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 289 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 565 KiB/s rd, 1.6 MiB/s wr, 98 op/s
Jan 31 04:07:47 np0005603621 nova_compute[247399]: 2026-01-31 09:07:47.161 247403 DEBUG nova.network.neutron [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Successfully created port: d0168a86-f128-46b0-bf1e-62edd18e00cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:07:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:48.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.465 247403 DEBUG nova.network.neutron [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Successfully updated port: d0168a86-f128-46b0-bf1e-62edd18e00cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.511 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.511 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquired lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.511 247403 DEBUG nova.network.neutron [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.658 247403 DEBUG nova.compute.manager [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-changed-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.659 247403 DEBUG nova.compute.manager [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Refreshing instance network info cache due to event network-changed-d0168a86-f128-46b0-bf1e-62edd18e00cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.659 247403 DEBUG oslo_concurrency.lockutils [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.789 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:48 np0005603621 nova_compute[247399]: 2026-01-31 09:07:48.892 247403 DEBUG nova.network.neutron [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003164876756467523 of space, bias 1.0, pg target 0.9494630269402569 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004334279906476828 of space, bias 1.0, pg target 1.3002839719430486 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:07:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:07:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:50.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.259 247403 DEBUG nova.network.neutron [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updating instance_info_cache with network_info: [{"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.283 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Releasing lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.283 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Instance network_info: |[{"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.284 247403 DEBUG oslo_concurrency.lockutils [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.284 247403 DEBUG nova.network.neutron [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Refreshing network info cache for port d0168a86-f128-46b0-bf1e-62edd18e00cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.286 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Start _get_guest_xml network_info=[{"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.291 247403 WARNING nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.295 247403 DEBUG nova.virt.libvirt.host [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.296 247403 DEBUG nova.virt.libvirt.host [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.302 247403 DEBUG nova.virt.libvirt.host [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.303 247403 DEBUG nova.virt.libvirt.host [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.304 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.304 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.304 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.304 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.305 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.305 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.305 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.305 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.306 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.306 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.306 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.306 247403 DEBUG nova.virt.hardware [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.309 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.644 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.645 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.668 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.779 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.779 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.788 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.788 247403 INFO nova.compute.claims [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:07:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:07:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242413072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.813 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.837 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.841 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:50 np0005603621 nova_compute[247399]: 2026-01-31 09:07:50.977 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 295 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 31 04:07:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:07:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1076654967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.252 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.253 247403 DEBUG nova.virt.libvirt.vif [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:07:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2110560441',display_name='tempest-TestNetworkBasicOps-server-2110560441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2110560441',id=210,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCIzSudM5DGjK6sh7eqAzv26tzRV10gIcCHo4Jdd4eIN5Sfg7kKCXWRdxAz6k1fAJgWtDytVNXIQZGlR6XTZaXKlaF+q8CvU7iwVA2F2jmkDdYuMSJB+YxEBYVq7+ZNZxQ==',key_name='tempest-TestNetworkBasicOps-737960295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-az4u8wpw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:07:44Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=a3d51eec-c79b-4863-bbb8-d9c2e39742e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.254 247403 DEBUG nova.network.os_vif_util [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.254 247403 DEBUG nova.network.os_vif_util [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.255 247403 DEBUG nova.objects.instance [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3d51eec-c79b-4863-bbb8-d9c2e39742e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.270 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <uuid>a3d51eec-c79b-4863-bbb8-d9c2e39742e6</uuid>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <name>instance-000000d2</name>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestNetworkBasicOps-server-2110560441</nova:name>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:07:50</nova:creationTime>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:user uuid="d442c7ba12ed444ca6d4dcc5cfd36150">tempest-TestNetworkBasicOps-104417095-project-member</nova:user>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:project uuid="abf9393aa2b646feb00a3d887a9dee14">tempest-TestNetworkBasicOps-104417095</nova:project>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <nova:port uuid="d0168a86-f128-46b0-bf1e-62edd18e00cd">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <entry name="serial">a3d51eec-c79b-4863-bbb8-d9c2e39742e6</entry>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <entry name="uuid">a3d51eec-c79b-4863-bbb8-d9c2e39742e6</entry>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk.config">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:7b:f2:5e"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <target dev="tapd0168a86-f1"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/console.log" append="off"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:07:51 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:07:51 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:07:51 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:07:51 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.272 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Preparing to wait for external event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.272 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.272 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.273 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.273 247403 DEBUG nova.virt.libvirt.vif [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:07:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2110560441',display_name='tempest-TestNetworkBasicOps-server-2110560441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2110560441',id=210,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCIzSudM5DGjK6sh7eqAzv26tzRV10gIcCHo4Jdd4eIN5Sfg7kKCXWRdxAz6k1fAJgWtDytVNXIQZGlR6XTZaXKlaF+q8CvU7iwVA2F2jmkDdYuMSJB+YxEBYVq7+ZNZxQ==',key_name='tempest-TestNetworkBasicOps-737960295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-az4u8wpw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:07:44Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=a3d51eec-c79b-4863-bbb8-d9c2e39742e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.274 247403 DEBUG nova.network.os_vif_util [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.274 247403 DEBUG nova.network.os_vif_util [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.275 247403 DEBUG os_vif [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.276 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.276 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.277 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.279 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.279 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0168a86-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.280 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0168a86-f1, col_values=(('external_ids', {'iface-id': 'd0168a86-f128-46b0-bf1e-62edd18e00cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:f2:5e', 'vm-uuid': 'a3d51eec-c79b-4863-bbb8-d9c2e39742e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:51 np0005603621 NetworkManager[49013]: <info>  [1769850471.2826] manager: (tapd0168a86-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.285 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.290 247403 INFO os_vif [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1')#033[00m
Jan 31 04:07:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:07:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759097893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.380 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.385 247403 DEBUG nova.compute.provider_tree [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.413 247403 DEBUG nova.scheduler.client.report [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.441 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.442 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.478 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.479 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.479 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] No VIF found with MAC fa:16:3e:7b:f2:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.480 247403 INFO nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Using config drive#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.507 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.527 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.528 247403 DEBUG nova.network.neutron [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.565 247403 INFO nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.597 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.665 247403 INFO nova.virt.block_device [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Booting with volume 64b37453-b9c7-425d-8ee1-936836c623a8 at /dev/vda#033[00m
Jan 31 04:07:51 np0005603621 nova_compute[247399]: 2026-01-31 09:07:51.841 247403 DEBUG nova.policy [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dc42b92a5dd34d32b6b184bdc7acb092', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '76ce367a834b49dfb5b436848118b860', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:07:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:51.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:52.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.032 247403 DEBUG os_brick.utils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.034 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.044 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.045 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[d34e5579-57d6-48b1-b653-ddcb97b95f12]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.046 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.053 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.053 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[faf80f36-d05e-45f0-abf0-48608da2b040]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.056 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.062 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.062 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[6df40804-85ee-4001-b88c-b6d12d9f6002]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.063 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0b613b-e32e-444c-9e94-aee8b98f6172]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.067 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.093 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.098 247403 DEBUG os_brick.initiator.connectors.lightos [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.098 247403 DEBUG os_brick.initiator.connectors.lightos [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.099 247403 DEBUG os_brick.initiator.connectors.lightos [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.099 247403 DEBUG os_brick.utils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.099 247403 DEBUG nova.virt.block_device [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating existing volume attachment record: 1e8a006a-1f38-4a26-bda1-eb725330d2b7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.121 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.133 247403 INFO nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Creating config drive at /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/disk.config#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.138 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_1e5neoj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.264 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp_1e5neoj" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.290 247403 DEBUG nova.storage.rbd_utils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] rbd image a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.294 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/disk.config a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.441 247403 DEBUG oslo_concurrency.processutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/disk.config a3d51eec-c79b-4863-bbb8-d9c2e39742e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.442 247403 INFO nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Deleting local config drive /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6/disk.config because it was imported into RBD.#033[00m
Jan 31 04:07:52 np0005603621 kernel: tapd0168a86-f1: entered promiscuous mode
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.4920] manager: (tapd0168a86-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.492 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:52Z|00881|binding|INFO|Claiming lport d0168a86-f128-46b0-bf1e-62edd18e00cd for this chassis.
Jan 31 04:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:52Z|00882|binding|INFO|d0168a86-f128-46b0-bf1e-62edd18e00cd: Claiming fa:16:3e:7b:f2:5e 10.100.0.13
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.495 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.501 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 systemd-udevd[399169]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.5190] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.5196] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.517 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.5290] device (tapd0168a86-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.5296] device (tapd0168a86-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.529 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:f2:5e 10.100.0.13'], port_security=['fa:16:3e:7b:f2:5e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a3d51eec-c79b-4863-bbb8-d9c2e39742e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b575155-2651-409f-ab96-5a6cf52f7f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '2', 'neutron:security_group_ids': '62dc32c6-48fe-4646-a68f-9ac332b3fbdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18ce706-6e0f-4295-834a-6178a6f7a4c4, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d0168a86-f128-46b0-bf1e-62edd18e00cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.530 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d0168a86-f128-46b0-bf1e-62edd18e00cd in datapath 6b575155-2651-409f-ab96-5a6cf52f7f88 bound to our chassis#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.532 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b575155-2651-409f-ab96-5a6cf52f7f88#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.538 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9e92f08e-4919-432f-a5f8-9b51d201526c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.539 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b575155-21 in ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.540 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b575155-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.541 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[d9036112-dc49-4192-ab36-8e7e78446ffb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.541 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0d0eb6-2c67-495e-965e-9f04d42ad42f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.549 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[e3134942-cbe0-41cc-a0d3-bc5ccd965f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 systemd-machined[212769]: New machine qemu-101-instance-000000d2.
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.569 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[576d861e-ef30-4a91-8831-9de20e614c3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 systemd[1]: Started Virtual Machine qemu-101-instance-000000d2.
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.593 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6c9b2f-0057-40ee-a95c-1eb97a21e208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.595 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.597 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0e15d531-955c-4987-8510-614e5d61f998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.5990] manager: (tap6b575155-20): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.621 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.622 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c76f7c79-6b02-4924-b2ce-a34fca0a6cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:52Z|00883|binding|INFO|Setting lport d0168a86-f128-46b0-bf1e-62edd18e00cd ovn-installed in OVS
Jan 31 04:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:52Z|00884|binding|INFO|Setting lport d0168a86-f128-46b0-bf1e-62edd18e00cd up in Southbound
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.624 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.626 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[288490cd-c449-4dd3-8e92-c96d747bf50b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.6396] device (tap6b575155-20): carrier: link connected
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.641 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[590177bb-17b7-49d1-9422-998010844414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.652 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0a11f38e-bd4c-4448-8d31-e935139102c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b575155-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:3f:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977815, 'reachable_time': 15645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399205, 'error': None, 'target': 'ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.661 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8b67d8b0-075c-48ac-9641-4c2f3192a417]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:3f5d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977815, 'tstamp': 977815}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399206, 'error': None, 'target': 'ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.670 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ceee71-dd87-400d-a094-3ac0f7b48468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b575155-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:3f:5d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977815, 'reachable_time': 15645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399207, 'error': None, 'target': 'ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.689 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[646a0beb-f1c1-4052-9afb-a5dec71904f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.725 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[66f75dcd-7fdb-4355-9f5a-96b041e1ba70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.727 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b575155-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.727 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.728 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b575155-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 NetworkManager[49013]: <info>  [1769850472.7303] manager: (tap6b575155-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Jan 31 04:07:52 np0005603621 kernel: tap6b575155-20: entered promiscuous mode
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.732 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.736 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b575155-20, col_values=(('external_ids', {'iface-id': '49cdbdd6-d7fe-466c-bd27-e6f93335cffd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.737 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:52Z|00885|binding|INFO|Releasing lport 49cdbdd6-d7fe-466c-bd27-e6f93335cffd from this chassis (sb_readonly=0)
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.740 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b575155-2651-409f-ab96-5a6cf52f7f88.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b575155-2651-409f-ab96-5a6cf52f7f88.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.741 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[912cdbe7-9418-45d5-9166-e1fd42272619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.743 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-6b575155-2651-409f-ab96-5a6cf52f7f88
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/6b575155-2651-409f-ab96-5a6cf52f7f88.pid.haproxy
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 6b575155-2651-409f-ab96-5a6cf52f7f88
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:07:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:52.744 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88', 'env', 'PROCESS_TAG=haproxy-6b575155-2651-409f-ab96-5a6cf52f7f88', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b575155-2651-409f-ab96-5a6cf52f7f88.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.879 247403 DEBUG nova.network.neutron [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updated VIF entry in instance network info cache for port d0168a86-f128-46b0-bf1e-62edd18e00cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.879 247403 DEBUG nova.network.neutron [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updating instance_info_cache with network_info: [{"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:52 np0005603621 nova_compute[247399]: 2026-01-31 09:07:52.915 247403 DEBUG oslo_concurrency.lockutils [req-3448fc2a-2428-4d5e-94ff-5778e673118c req-839663dc-ff1f-43ff-a94d-143d65eaa33c fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:53 np0005603621 podman[399275]: 2026-01-31 09:07:53.058333576 +0000 UTC m=+0.050500765 container create 06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 04:07:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:07:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/182111337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:07:53 np0005603621 systemd[1]: Started libpod-conmon-06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa.scope.
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.093 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850473.0930245, a3d51eec-c79b-4863-bbb8-d9c2e39742e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.094 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] VM Started (Lifecycle Event)#033[00m
Jan 31 04:07:53 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:53 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09a0dcfe8d8fc1c5f1545c4bb6e66f54f427a272722a16025c0d7d076d4d3a7c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:53 np0005603621 podman[399275]: 2026-01-31 09:07:53.110581005 +0000 UTC m=+0.102748244 container init 06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Jan 31 04:07:53 np0005603621 podman[399275]: 2026-01-31 09:07:53.115036926 +0000 UTC m=+0.107204135 container start 06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:07:53 np0005603621 podman[399275]: 2026-01-31 09:07:53.027824503 +0000 UTC m=+0.019991712 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.123 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.127 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850473.0932612, a3d51eec-c79b-4863-bbb8-d9c2e39742e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.127 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:07:53 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [NOTICE]   (399301) : New worker (399303) forked
Jan 31 04:07:53 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [NOTICE]   (399301) : Loading success.
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.136 247403 DEBUG nova.compute.manager [req-40866175-337f-4bd2-a473-f6f50ad5b298 req-407e00e8-7ae3-4a90-bcd9-705c4944e7c3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.137 247403 DEBUG oslo_concurrency.lockutils [req-40866175-337f-4bd2-a473-f6f50ad5b298 req-407e00e8-7ae3-4a90-bcd9-705c4944e7c3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.137 247403 DEBUG oslo_concurrency.lockutils [req-40866175-337f-4bd2-a473-f6f50ad5b298 req-407e00e8-7ae3-4a90-bcd9-705c4944e7c3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.137 247403 DEBUG oslo_concurrency.lockutils [req-40866175-337f-4bd2-a473-f6f50ad5b298 req-407e00e8-7ae3-4a90-bcd9-705c4944e7c3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.138 247403 DEBUG nova.compute.manager [req-40866175-337f-4bd2-a473-f6f50ad5b298 req-407e00e8-7ae3-4a90-bcd9-705c4944e7c3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Processing event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.138 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.145 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.148 247403 INFO nova.virt.libvirt.driver [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Instance spawned successfully.#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.148 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:07:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 370 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 184 KiB/s rd, 3.7 MiB/s wr, 86 op/s
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.176 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.183 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850473.1420872, a3d51eec-c79b-4863-bbb8-d9c2e39742e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.183 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.190 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.190 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.191 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.192 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.193 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.193 247403 DEBUG nova.virt.libvirt.driver [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.227 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.229 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.271 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.304 247403 INFO nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Took 8.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.305 247403 DEBUG nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.316 247403 DEBUG nova.network.neutron [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Successfully created port: 22f7384a-ca75-447f-a16d-e1519837d337 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.413 247403 INFO nova.compute.manager [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Took 10.00 seconds to build instance.#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.443 247403 DEBUG oslo_concurrency.lockutils [None req-712da468-07a1-4521-b0cd-4801f8dbf089 d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.646 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.647 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.647 247403 INFO nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Creating image(s)#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.648 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.648 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Ensure instance console log exists: /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.648 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.648 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:53 np0005603621 nova_compute[247399]: 2026-01-31 09:07:53.649 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:54.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:54 np0005603621 nova_compute[247399]: 2026-01-31 09:07:54.489 247403 DEBUG nova.network.neutron [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Successfully updated port: 22f7384a-ca75-447f-a16d-e1519837d337 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:07:54 np0005603621 nova_compute[247399]: 2026-01-31 09:07:54.519 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:54 np0005603621 nova_compute[247399]: 2026-01-31 09:07:54.520 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquired lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:54 np0005603621 nova_compute[247399]: 2026-01-31 09:07:54.520 247403 DEBUG nova.network.neutron [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:07:54 np0005603621 nova_compute[247399]: 2026-01-31 09:07:54.778 247403 DEBUG nova.network.neutron [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:07:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 576 KiB/s rd, 3.6 MiB/s wr, 93 op/s
Jan 31 04:07:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:55.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.057 247403 DEBUG nova.compute.manager [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-changed-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.058 247403 DEBUG nova.compute.manager [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Refreshing instance network info cache due to event network-changed-22f7384a-ca75-447f-a16d-e1519837d337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.058 247403 DEBUG oslo_concurrency.lockutils [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.067 247403 DEBUG nova.compute.manager [req-60f6668e-7537-4c4f-a60f-ed6a1d80fe89 req-3f9dcc51-e6e1-43c9-8065-8a6aac654869 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.068 247403 DEBUG oslo_concurrency.lockutils [req-60f6668e-7537-4c4f-a60f-ed6a1d80fe89 req-3f9dcc51-e6e1-43c9-8065-8a6aac654869 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.068 247403 DEBUG oslo_concurrency.lockutils [req-60f6668e-7537-4c4f-a60f-ed6a1d80fe89 req-3f9dcc51-e6e1-43c9-8065-8a6aac654869 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.068 247403 DEBUG oslo_concurrency.lockutils [req-60f6668e-7537-4c4f-a60f-ed6a1d80fe89 req-3f9dcc51-e6e1-43c9-8065-8a6aac654869 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.069 247403 DEBUG nova.compute.manager [req-60f6668e-7537-4c4f-a60f-ed6a1d80fe89 req-3f9dcc51-e6e1-43c9-8065-8a6aac654869 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] No waiting events found dispatching network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.069 247403 WARNING nova.compute.manager [req-60f6668e-7537-4c4f-a60f-ed6a1d80fe89 req-3f9dcc51-e6e1-43c9-8065-8a6aac654869 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received unexpected event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd for instance with vm_state active and task_state None.#033[00m
Jan 31 04:07:56 np0005603621 nova_compute[247399]: 2026-01-31 09:07:56.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.122 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 668 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.185 247403 DEBUG nova.network.neutron [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating instance_info_cache with network_info: [{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.209 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Releasing lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.210 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Instance network_info: |[{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.210 247403 DEBUG oslo_concurrency.lockutils [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.210 247403 DEBUG nova.network.neutron [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Refreshing network info cache for port 22f7384a-ca75-447f-a16d-e1519837d337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.213 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Start _get_guest_xml network_info=[{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-64b37453-b9c7-425d-8ee1-936836c623a8', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '64b37453-b9c7-425d-8ee1-936836c623a8', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e5867604-e99e-4623-8c07-a3ca7d95fe78', 'attached_at': '', 'detached_at': '', 'volume_id': '64b37453-b9c7-425d-8ee1-936836c623a8', 'serial': '64b37453-b9c7-425d-8ee1-936836c623a8'}, 'device_type': 'disk', 'boot_index': 0, 'mount_device': '/dev/vda', 'delete_on_termination': False, 'attachment_id': '1e8a006a-1f38-4a26-bda1-eb725330d2b7', 'disk_bus': 'virtio', 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.217 247403 WARNING nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.223 247403 DEBUG nova.virt.libvirt.host [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.223 247403 DEBUG nova.virt.libvirt.host [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.227 247403 DEBUG nova.virt.libvirt.host [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.227 247403 DEBUG nova.virt.libvirt.host [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.228 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.228 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.228 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.229 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.229 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.229 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.229 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.229 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.229 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.230 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.230 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.230 247403 DEBUG nova.virt.hardware [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.257 247403 DEBUG nova.storage.rbd_utils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] rbd image e5867604-e99e-4623-8c07-a3ca7d95fe78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.261 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:07:57 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360519326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.688 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.717 247403 DEBUG nova.virt.libvirt.vif [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-922331869',display_name='tempest-TestVolumeBootPattern-server-922331869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-922331869',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWvTxk1zh2OCmPH3tumEbxR7y880uhj4vJDAspX9r3EATf0w5oe5DG3NVBcNRbWTPcgVwlnXcyaRQZseLc7edDTe4kwfjogsRoplvkAsMWW9sCSaJlX0XBkMxl/Ghv8Fw==',key_name='tempest-TestVolumeBootPattern-11482540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76ce367a834b49dfb5b436848118b860',ramdisk_id='',reservation_id='r-e82erhrp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1392945362',owner_user_name='tempest-TestVolumeBootPattern-1392945362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:07:51Z,user_data=None,user_id='dc42b92a5dd34d32b6b184bdc7acb092',uuid=e5867604-e99e-4623-8c07-a3ca7d95fe78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.718 247403 DEBUG nova.network.os_vif_util [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Converting VIF {"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.719 247403 DEBUG nova.network.os_vif_util [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.720 247403 DEBUG nova.objects.instance [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lazy-loading 'pci_devices' on Instance uuid e5867604-e99e-4623-8c07-a3ca7d95fe78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.739 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <uuid>e5867604-e99e-4623-8c07-a3ca7d95fe78</uuid>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <name>instance-000000d4</name>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestVolumeBootPattern-server-922331869</nova:name>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:07:57</nova:creationTime>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:user uuid="dc42b92a5dd34d32b6b184bdc7acb092">tempest-TestVolumeBootPattern-1392945362-project-member</nova:user>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:project uuid="76ce367a834b49dfb5b436848118b860">tempest-TestVolumeBootPattern-1392945362</nova:project>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <nova:port uuid="22f7384a-ca75-447f-a16d-e1519837d337">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <entry name="serial">e5867604-e99e-4623-8c07-a3ca7d95fe78</entry>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <entry name="uuid">e5867604-e99e-4623-8c07-a3ca7d95fe78</entry>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/e5867604-e99e-4623-8c07-a3ca7d95fe78_disk.config">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="volumes/volume-64b37453-b9c7-425d-8ee1-936836c623a8">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <serial>64b37453-b9c7-425d-8ee1-936836c623a8</serial>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:c6:0e:b4"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <target dev="tap22f7384a-ca"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/console.log" append="off"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:07:57 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:07:57 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:07:57 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:07:57 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.741 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Preparing to wait for external event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.741 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.741 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.742 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.742 247403 DEBUG nova.virt.libvirt.vif [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-922331869',display_name='tempest-TestVolumeBootPattern-server-922331869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-922331869',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWvTxk1zh2OCmPH3tumEbxR7y880uhj4vJDAspX9r3EATf0w5oe5DG3NVBcNRbWTPcgVwlnXcyaRQZseLc7edDTe4kwfjogsRoplvkAsMWW9sCSaJlX0XBkMxl/Ghv8Fw==',key_name='tempest-TestVolumeBootPattern-11482540',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='76ce367a834b49dfb5b436848118b860',ramdisk_id='',reservation_id='r-e82erhrp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1392945362',owner_user_name='tempest-TestVolumeBootPattern-1392945362-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:07:51Z,user_data=None,user_id='dc42b92a5dd34d32b6b184bdc7acb092',uuid=e5867604-e99e-4623-8c07-a3ca7d95fe78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.743 247403 DEBUG nova.network.os_vif_util [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Converting VIF {"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.743 247403 DEBUG nova.network.os_vif_util [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.744 247403 DEBUG os_vif [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.744 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.745 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.745 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.747 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.748 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap22f7384a-ca, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.748 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap22f7384a-ca, col_values=(('external_ids', {'iface-id': '22f7384a-ca75-447f-a16d-e1519837d337', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:0e:b4', 'vm-uuid': 'e5867604-e99e-4623-8c07-a3ca7d95fe78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.749 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603621 NetworkManager[49013]: <info>  [1769850477.7507] manager: (tap22f7384a-ca): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.752 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.754 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.755 247403 INFO os_vif [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca')#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.808 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.808 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.809 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] No VIF found with MAC fa:16:3e:c6:0e:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.809 247403 INFO nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Using config drive#033[00m
Jan 31 04:07:57 np0005603621 nova_compute[247399]: 2026-01-31 09:07:57.845 247403 DEBUG nova.storage.rbd_utils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] rbd image e5867604-e99e-4623-8c07-a3ca7d95fe78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:07:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:07:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:07:57.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:07:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:07:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:07:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:07:58.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:07:58 np0005603621 nova_compute[247399]: 2026-01-31 09:07:58.967 247403 INFO nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Creating config drive at /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/disk.config#033[00m
Jan 31 04:07:58 np0005603621 nova_compute[247399]: 2026-01-31 09:07:58.970 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1rhfkkoj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.107 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmp1rhfkkoj" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.144 247403 DEBUG nova.storage.rbd_utils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] rbd image e5867604-e99e-4623-8c07-a3ca7d95fe78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.150 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/disk.config e5867604-e99e-4623-8c07-a3ca7d95fe78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:07:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 129 op/s
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.295 247403 DEBUG oslo_concurrency.processutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/disk.config e5867604-e99e-4623-8c07-a3ca7d95fe78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.295 247403 INFO nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Deleting local config drive /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78/disk.config because it was imported into RBD.#033[00m
Jan 31 04:07:59 np0005603621 NetworkManager[49013]: <info>  [1769850479.3267] manager: (tap22f7384a-ca): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Jan 31 04:07:59 np0005603621 kernel: tap22f7384a-ca: entered promiscuous mode
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:59Z|00886|binding|INFO|Claiming lport 22f7384a-ca75-447f-a16d-e1519837d337 for this chassis.
Jan 31 04:07:59 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:59Z|00887|binding|INFO|22f7384a-ca75-447f-a16d-e1519837d337: Claiming fa:16:3e:c6:0e:b4 10.100.0.8
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.340 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:0e:b4 10.100.0.8'], port_security=['fa:16:3e:c6:0e:b4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e5867604-e99e-4623-8c07-a3ca7d95fe78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-650eb345-8346-4e8f-8e83-eeb0117654f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76ce367a834b49dfb5b436848118b860', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cea15428-ed6f-44a7-98e5-24c0fab7b796', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ecdc171-9d09-4cba-9bb9-cd2f8ef8e6c3, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=22f7384a-ca75-447f-a16d-e1519837d337) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.342 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 22f7384a-ca75-447f-a16d-e1519837d337 in datapath 650eb345-8346-4e8f-8e83-eeb0117654f6 bound to our chassis#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.343 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 650eb345-8346-4e8f-8e83-eeb0117654f6#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.346 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:59Z|00888|binding|INFO|Setting lport 22f7384a-ca75-447f-a16d-e1519837d337 ovn-installed in OVS
Jan 31 04:07:59 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:59Z|00889|binding|INFO|Setting lport 22f7384a-ca75-447f-a16d-e1519837d337 up in Southbound
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.350 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.352 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[97ed0f64-b9bd-41e8-a6a3-6940a18f994f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.354 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap650eb345-81 in ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.355 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap650eb345-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.355 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[18fc01c9-0ea8-4adb-86ff-43757ec53252]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 systemd-udevd[399427]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.357 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e924bc97-501c-4753-a4ca-05b35d431473]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.368 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d9eb29c3-d513-45dc-b829-16ac438a42d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 systemd-machined[212769]: New machine qemu-102-instance-000000d4.
Jan 31 04:07:59 np0005603621 NetworkManager[49013]: <info>  [1769850479.3745] device (tap22f7384a-ca): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:07:59 np0005603621 NetworkManager[49013]: <info>  [1769850479.3755] device (tap22f7384a-ca): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.381 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4f56eb-985a-47fb-8c33-bcb127b0e993]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 systemd[1]: Started Virtual Machine qemu-102-instance-000000d4.
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.404 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[5c67e575-5ac8-4eb6-bdaf-7ea001bd6e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.411 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[caaeb50f-06f1-4297-92fa-f6fc5035ad2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 NetworkManager[49013]: <info>  [1769850479.4121] manager: (tap650eb345-80): new Veth device (/org/freedesktop/NetworkManager/Devices/411)
Jan 31 04:07:59 np0005603621 systemd-udevd[399433]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.432 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7fedd24a-b8a7-4cc5-a659-fd433e71df2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.435 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[d6bac130-8c15-4c6f-95d2-eaf84374b4e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 NetworkManager[49013]: <info>  [1769850479.4512] device (tap650eb345-80): carrier: link connected
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.457 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a20ccabd-17a1-46be-ad21-812b0dfb3df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.471 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b605b8bf-2431-4e96-986b-591bd2ebe6c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap650eb345-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:27:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978496, 'reachable_time': 40234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399462, 'error': None, 'target': 'ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.483 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[240234b5-3a57-41f7-ad38-7d734ce9613e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:27ec'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 978496, 'tstamp': 978496}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399463, 'error': None, 'target': 'ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.497 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11e201ed-4056-4c22-a52c-31ff438ba68b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap650eb345-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:27:ec'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 268], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978496, 'reachable_time': 40234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399464, 'error': None, 'target': 'ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.522 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5af70692-57d0-42fc-b7d0-91fafedca690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.567 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[468294b3-7a51-4d9b-a82d-4ce13efc6d76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.568 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap650eb345-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.569 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.569 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap650eb345-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.570 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 NetworkManager[49013]: <info>  [1769850479.5714] manager: (tap650eb345-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Jan 31 04:07:59 np0005603621 kernel: tap650eb345-80: entered promiscuous mode
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.572 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.575 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap650eb345-80, col_values=(('external_ids', {'iface-id': '74bde109-0188-4ce3-87c3-02a3eb853dc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.576 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 ovn_controller[149152]: 2026-01-31T09:07:59Z|00890|binding|INFO|Releasing lport 74bde109-0188-4ce3-87c3-02a3eb853dc2 from this chassis (sb_readonly=0)
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.578 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/650eb345-8346-4e8f-8e83-eeb0117654f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/650eb345-8346-4e8f-8e83-eeb0117654f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.579 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2d87b1-ac2e-4145-b2ca-2e21630ff202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.579 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-650eb345-8346-4e8f-8e83-eeb0117654f6
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/650eb345-8346-4e8f-8e83-eeb0117654f6.pid.haproxy
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 650eb345-8346-4e8f-8e83-eeb0117654f6
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:07:59 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:07:59.580 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6', 'env', 'PROCESS_TAG=haproxy-650eb345-8346-4e8f-8e83-eeb0117654f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/650eb345-8346-4e8f-8e83-eeb0117654f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.581 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.807 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850479.8068101, e5867604-e99e-4623-8c07-a3ca7d95fe78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.807 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] VM Started (Lifecycle Event)#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.828 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.831 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850479.8077004, e5867604-e99e-4623-8c07-a3ca7d95fe78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.831 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.854 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.857 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.885 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:07:59 np0005603621 podman[399538]: 2026-01-31 09:07:59.892492981 +0000 UTC m=+0.044809515 container create ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.924 247403 DEBUG nova.compute.manager [req-6e97fa66-0c29-4475-ac95-e27fae051f92 req-cea1c754-775b-4fe2-9348-5731c6cb907b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.925 247403 DEBUG oslo_concurrency.lockutils [req-6e97fa66-0c29-4475-ac95-e27fae051f92 req-cea1c754-775b-4fe2-9348-5731c6cb907b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.925 247403 DEBUG oslo_concurrency.lockutils [req-6e97fa66-0c29-4475-ac95-e27fae051f92 req-cea1c754-775b-4fe2-9348-5731c6cb907b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.926 247403 DEBUG oslo_concurrency.lockutils [req-6e97fa66-0c29-4475-ac95-e27fae051f92 req-cea1c754-775b-4fe2-9348-5731c6cb907b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.926 247403 DEBUG nova.compute.manager [req-6e97fa66-0c29-4475-ac95-e27fae051f92 req-cea1c754-775b-4fe2-9348-5731c6cb907b fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Processing event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.926 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:07:59 np0005603621 systemd[1]: Started libpod-conmon-ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a.scope.
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.936 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850479.9291124, e5867604-e99e-4623-8c07-a3ca7d95fe78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.937 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.938 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.941 247403 INFO nova.virt.libvirt.driver [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Instance spawned successfully.#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.941 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:07:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:07:59 np0005603621 podman[399538]: 2026-01-31 09:07:59.870113915 +0000 UTC m=+0.022430479 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:07:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48702f4af4a565939b62ed21e3a80e6443f6b668bb883aac3de207cf0f684f41/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.976 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:07:59 np0005603621 podman[399538]: 2026-01-31 09:07:59.980806788 +0000 UTC m=+0.133123352 container init ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.980 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.981 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.981 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.981 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.982 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.982 247403 DEBUG nova.virt.libvirt.driver [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:07:59 np0005603621 nova_compute[247399]: 2026-01-31 09:07:59.985 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:07:59 np0005603621 podman[399538]: 2026-01-31 09:07:59.986836338 +0000 UTC m=+0.139152872 container start ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 31 04:08:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:00.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:00 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [NOTICE]   (399558) : New worker (399560) forked
Jan 31 04:08:00 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [NOTICE]   (399558) : Loading success.
Jan 31 04:08:00 np0005603621 nova_compute[247399]: 2026-01-31 09:08:00.020 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:08:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:08:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:08:00 np0005603621 nova_compute[247399]: 2026-01-31 09:08:00.052 247403 INFO nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Took 6.41 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:08:00 np0005603621 nova_compute[247399]: 2026-01-31 09:08:00.052 247403 DEBUG nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:00 np0005603621 nova_compute[247399]: 2026-01-31 09:08:00.131 247403 INFO nova.compute.manager [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Took 9.39 seconds to build instance.#033[00m
Jan 31 04:08:00 np0005603621 nova_compute[247399]: 2026-01-31 09:08:00.156 247403 DEBUG oslo_concurrency.lockutils [None req-3dbf2985-9154-48ee-a45e-573175ef260e dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:01 np0005603621 nova_compute[247399]: 2026-01-31 09:08:01.042 247403 DEBUG nova.network.neutron [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updated VIF entry in instance network info cache for port 22f7384a-ca75-447f-a16d-e1519837d337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:01 np0005603621 nova_compute[247399]: 2026-01-31 09:08:01.043 247403 DEBUG nova.network.neutron [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating instance_info_cache with network_info: [{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:01 np0005603621 nova_compute[247399]: 2026-01-31 09:08:01.066 247403 DEBUG oslo_concurrency.lockutils [req-3002a972-5217-4eca-baed-d5e9644799cf req-aa5b661b-1562-4a33-a201-86b266587e55 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 31 04:08:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:02.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.125 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.271 247403 DEBUG nova.compute.manager [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.271 247403 DEBUG oslo_concurrency.lockutils [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.272 247403 DEBUG oslo_concurrency.lockutils [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.272 247403 DEBUG oslo_concurrency.lockutils [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.272 247403 DEBUG nova.compute.manager [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] No waiting events found dispatching network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.273 247403 WARNING nova.compute.manager [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received unexpected event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.273 247403 DEBUG nova.compute.manager [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-changed-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.273 247403 DEBUG nova.compute.manager [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Refreshing instance network info cache due to event network-changed-d0168a86-f128-46b0-bf1e-62edd18e00cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.273 247403 DEBUG oslo_concurrency.lockutils [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.274 247403 DEBUG oslo_concurrency.lockutils [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.274 247403 DEBUG nova.network.neutron [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Refreshing network info cache for port d0168a86-f128-46b0-bf1e-62edd18e00cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:02 np0005603621 nova_compute[247399]: 2026-01-31 09:08:02.750 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.8 MiB/s wr, 223 op/s
Jan 31 04:08:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:04 np0005603621 nova_compute[247399]: 2026-01-31 09:08:04.989 247403 DEBUG nova.network.neutron [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updated VIF entry in instance network info cache for port d0168a86-f128-46b0-bf1e-62edd18e00cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:04 np0005603621 nova_compute[247399]: 2026-01-31 09:08:04.990 247403 DEBUG nova.network.neutron [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updating instance_info_cache with network_info: [{"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:05 np0005603621 nova_compute[247399]: 2026-01-31 09:08:05.017 247403 DEBUG oslo_concurrency.lockutils [req-c3494a07-6ecb-4458-8c03-8960ead903c3 req-4d4838f1-475a-4773-b464-f3978e7b87df fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 374 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 285 KiB/s wr, 223 op/s
Jan 31 04:08:05 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:05Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7b:f2:5e 10.100.0.13
Jan 31 04:08:05 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:05Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7b:f2:5e 10.100.0.13
Jan 31 04:08:05 np0005603621 podman[399623]: 2026-01-31 09:08:05.519416483 +0000 UTC m=+0.068470151 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:08:05 np0005603621 podman[399622]: 2026-01-31 09:08:05.53040199 +0000 UTC m=+0.078596042 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 04:08:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:06.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:06 np0005603621 nova_compute[247399]: 2026-01-31 09:08:06.022 247403 DEBUG nova.compute.manager [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-changed-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:06 np0005603621 nova_compute[247399]: 2026-01-31 09:08:06.022 247403 DEBUG nova.compute.manager [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Refreshing instance network info cache due to event network-changed-22f7384a-ca75-447f-a16d-e1519837d337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:06 np0005603621 nova_compute[247399]: 2026-01-31 09:08:06.023 247403 DEBUG oslo_concurrency.lockutils [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:06 np0005603621 nova_compute[247399]: 2026-01-31 09:08:06.023 247403 DEBUG oslo_concurrency.lockutils [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:06 np0005603621 nova_compute[247399]: 2026-01-31 09:08:06.024 247403 DEBUG nova.network.neutron [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Refreshing network info cache for port 22f7384a-ca75-447f-a16d-e1519837d337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:06.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:07 np0005603621 nova_compute[247399]: 2026-01-31 09:08:07.127 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 387 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 834 KiB/s wr, 214 op/s
Jan 31 04:08:07 np0005603621 nova_compute[247399]: 2026-01-31 09:08:07.752 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:07 np0005603621 nova_compute[247399]: 2026-01-31 09:08:07.939 247403 DEBUG nova.network.neutron [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updated VIF entry in instance network info cache for port 22f7384a-ca75-447f-a16d-e1519837d337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:07 np0005603621 nova_compute[247399]: 2026-01-31 09:08:07.940 247403 DEBUG nova.network.neutron [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating instance_info_cache with network_info: [{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:07 np0005603621 nova_compute[247399]: 2026-01-31 09:08:07.964 247403 DEBUG oslo_concurrency.lockutils [req-f7d773f1-b348-4c4b-8d0a-62f2e11bcb9c req-27eed805-d054-48dc-b896-9d2c3192afd3 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:08.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:08.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:08:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:08:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.2 MiB/s wr, 259 op/s
Jan 31 04:08:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:10.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 407 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 206 op/s
Jan 31 04:08:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:12.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:12.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:12 np0005603621 nova_compute[247399]: 2026-01-31 09:08:12.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:12 np0005603621 nova_compute[247399]: 2026-01-31 09:08:12.753 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 433 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.0 MiB/s wr, 301 op/s
Jan 31 04:08:13 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:13Z|00118|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.8
Jan 31 04:08:13 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:13Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c6:0e:b4 10.100.0.8
Jan 31 04:08:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:14.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 454 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 200 op/s
Jan 31 04:08:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:16.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:16 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:16Z|00120|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.8
Jan 31 04:08:16 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:16Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c6:0e:b4 10.100.0.8
Jan 31 04:08:17 np0005603621 nova_compute[247399]: 2026-01-31 09:08:17.132 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 454 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Jan 31 04:08:17 np0005603621 nova_compute[247399]: 2026-01-31 09:08:17.756 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:18 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:18Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:0e:b4 10.100.0.8
Jan 31 04:08:18 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:18Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:0e:b4 10.100.0.8
Jan 31 04:08:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.1 MiB/s wr, 170 op/s
Jan 31 04:08:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:19.181 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:08:19 np0005603621 nova_compute[247399]: 2026-01-31 09:08:19.181 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:19 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:19.182 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:08:19 np0005603621 nova_compute[247399]: 2026-01-31 09:08:19.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:20.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 118 op/s
Jan 31 04:08:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:22.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:22 np0005603621 nova_compute[247399]: 2026-01-31 09:08:22.135 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:22 np0005603621 nova_compute[247399]: 2026-01-31 09:08:22.758 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 118 op/s
Jan 31 04:08:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:24.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.126 247403 DEBUG nova.compute.manager [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-changed-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.126 247403 DEBUG nova.compute.manager [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Refreshing instance network info cache due to event network-changed-d0168a86-f128-46b0-bf1e-62edd18e00cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.127 247403 DEBUG oslo_concurrency.lockutils [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.127 247403 DEBUG oslo_concurrency.lockutils [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.128 247403 DEBUG nova.network.neutron [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Refreshing network info cache for port d0168a86-f128-46b0-bf1e-62edd18e00cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.256 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.256 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.257 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.257 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.257 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.258 247403 INFO nova.compute.manager [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Terminating instance#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.259 247403 DEBUG nova.compute.manager [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:08:24 np0005603621 kernel: tapd0168a86-f1 (unregistering): left promiscuous mode
Jan 31 04:08:24 np0005603621 NetworkManager[49013]: <info>  [1769850504.3063] device (tapd0168a86-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:08:24 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:24Z|00891|binding|INFO|Releasing lport d0168a86-f128-46b0-bf1e-62edd18e00cd from this chassis (sb_readonly=0)
Jan 31 04:08:24 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:24Z|00892|binding|INFO|Setting lport d0168a86-f128-46b0-bf1e-62edd18e00cd down in Southbound
Jan 31 04:08:24 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:24Z|00893|binding|INFO|Removing iface tapd0168a86-f1 ovn-installed in OVS
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.311 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.313 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.319 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:f2:5e 10.100.0.13'], port_security=['fa:16:3e:7b:f2:5e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a3d51eec-c79b-4863-bbb8-d9c2e39742e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b575155-2651-409f-ab96-5a6cf52f7f88', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'abf9393aa2b646feb00a3d887a9dee14', 'neutron:revision_number': '4', 'neutron:security_group_ids': '62dc32c6-48fe-4646-a68f-9ac332b3fbdb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d18ce706-6e0f-4295-834a-6178a6f7a4c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=d0168a86-f128-46b0-bf1e-62edd18e00cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.321 159734 INFO neutron.agent.ovn.metadata.agent [-] Port d0168a86-f128-46b0-bf1e-62edd18e00cd in datapath 6b575155-2651-409f-ab96-5a6cf52f7f88 unbound from our chassis#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.323 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b575155-2651-409f-ab96-5a6cf52f7f88, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.324 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[762ab2f0-56bd-47d8-844e-ec483917e4e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.325 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88 namespace which is not needed anymore#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Jan 31 04:08:24 np0005603621 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000d2.scope: Consumed 13.347s CPU time.
Jan 31 04:08:24 np0005603621 systemd-machined[212769]: Machine qemu-101-instance-000000d2 terminated.
Jan 31 04:08:24 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [NOTICE]   (399301) : haproxy version is 2.8.14-c23fe91
Jan 31 04:08:24 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [NOTICE]   (399301) : path to executable is /usr/sbin/haproxy
Jan 31 04:08:24 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [WARNING]  (399301) : Exiting Master process...
Jan 31 04:08:24 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [WARNING]  (399301) : Exiting Master process...
Jan 31 04:08:24 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [ALERT]    (399301) : Current worker (399303) exited with code 143 (Terminated)
Jan 31 04:08:24 np0005603621 neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88[399297]: [WARNING]  (399301) : All workers exited. Exiting... (0)
Jan 31 04:08:24 np0005603621 systemd[1]: libpod-06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa.scope: Deactivated successfully.
Jan 31 04:08:24 np0005603621 podman[399752]: 2026-01-31 09:08:24.43109685 +0000 UTC m=+0.039176348 container died 06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:08:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa-userdata-shm.mount: Deactivated successfully.
Jan 31 04:08:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-09a0dcfe8d8fc1c5f1545c4bb6e66f54f427a272722a16025c0d7d076d4d3a7c-merged.mount: Deactivated successfully.
Jan 31 04:08:24 np0005603621 podman[399752]: 2026-01-31 09:08:24.461681195 +0000 UTC m=+0.069760713 container cleanup 06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.476 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 systemd[1]: libpod-conmon-06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa.scope: Deactivated successfully.
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.480 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.490 247403 INFO nova.virt.libvirt.driver [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Instance destroyed successfully.#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.491 247403 DEBUG nova.objects.instance [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lazy-loading 'resources' on Instance uuid a3d51eec-c79b-4863-bbb8-d9c2e39742e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.511 247403 DEBUG nova.virt.libvirt.vif [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:07:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2110560441',display_name='tempest-TestNetworkBasicOps-server-2110560441',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2110560441',id=210,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCIzSudM5DGjK6sh7eqAzv26tzRV10gIcCHo4Jdd4eIN5Sfg7kKCXWRdxAz6k1fAJgWtDytVNXIQZGlR6XTZaXKlaF+q8CvU7iwVA2F2jmkDdYuMSJB+YxEBYVq7+ZNZxQ==',key_name='tempest-TestNetworkBasicOps-737960295',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:07:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='abf9393aa2b646feb00a3d887a9dee14',ramdisk_id='',reservation_id='r-az4u8wpw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-104417095',owner_user_name='tempest-TestNetworkBasicOps-104417095-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:07:53Z,user_data=None,user_id='d442c7ba12ed444ca6d4dcc5cfd36150',uuid=a3d51eec-c79b-4863-bbb8-d9c2e39742e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.511 247403 DEBUG nova.network.os_vif_util [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converting VIF {"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.512 247403 DEBUG nova.network.os_vif_util [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.513 247403 DEBUG os_vif [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.515 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.515 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0168a86-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.518 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:08:24 np0005603621 podman[399778]: 2026-01-31 09:08:24.518928451 +0000 UTC m=+0.041270063 container remove 06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.521 247403 INFO os_vif [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:f2:5e,bridge_name='br-int',has_traffic_filtering=True,id=d0168a86-f128-46b0-bf1e-62edd18e00cd,network=Network(6b575155-2651-409f-ab96-5a6cf52f7f88),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0168a86-f1')#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.523 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6331f89a-5d45-474e-b2b9-4500314cd74f]: (4, ('Sat Jan 31 09:08:24 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88 (06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa)\n06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa\nSat Jan 31 09:08:24 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88 (06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa)\n06630bda00b46d65e609982206447b7f8e94ea8e8134f2813ec612c1b7a2dcfa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.524 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f6546c-a352-4901-b315-38ccc881e232]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.525 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b575155-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:24 np0005603621 kernel: tap6b575155-20: left promiscuous mode
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.538 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[863324f8-a58a-44f6-9aca-01410c352283]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 nova_compute[247399]: 2026-01-31 09:08:24.540 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.555 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[eca5fcc7-b3fc-4af1-b51b-308481a27516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.557 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c53a40fb-a474-419d-8773-b29061c9711a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.569 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[46ad121e-e83c-4c5d-9d90-7167b5eafc39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977810, 'reachable_time': 31925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399818, 'error': None, 'target': 'ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:24 np0005603621 systemd[1]: run-netns-ovnmeta\x2d6b575155\x2d2651\x2d409f\x2dab96\x2d5a6cf52f7f88.mount: Deactivated successfully.
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.573 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b575155-2651-409f-ab96-5a6cf52f7f88 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:08:24 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:24.573 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[44cedf9f-a1c2-47f3-a25d-906374b7230d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:25 np0005603621 nova_compute[247399]: 2026-01-31 09:08:25.135 247403 INFO nova.virt.libvirt.driver [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Deleting instance files /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6_del#033[00m
Jan 31 04:08:25 np0005603621 nova_compute[247399]: 2026-01-31 09:08:25.136 247403 INFO nova.virt.libvirt.driver [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Deletion of /var/lib/nova/instances/a3d51eec-c79b-4863-bbb8-d9c2e39742e6_del complete#033[00m
Jan 31 04:08:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 458 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 438 KiB/s rd, 939 KiB/s wr, 24 op/s
Jan 31 04:08:25 np0005603621 nova_compute[247399]: 2026-01-31 09:08:25.190 247403 INFO nova.compute.manager [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:08:25 np0005603621 nova_compute[247399]: 2026-01-31 09:08:25.191 247403 DEBUG oslo.service.loopingcall [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:08:25 np0005603621 nova_compute[247399]: 2026-01-31 09:08:25.191 247403 DEBUG nova.compute.manager [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:08:25 np0005603621 nova_compute[247399]: 2026-01-31 09:08:25.191 247403 DEBUG nova.network.neutron [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:08:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:26.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.236 247403 DEBUG nova.compute.manager [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-vif-unplugged-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.237 247403 DEBUG oslo_concurrency.lockutils [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.237 247403 DEBUG oslo_concurrency.lockutils [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.237 247403 DEBUG oslo_concurrency.lockutils [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.237 247403 DEBUG nova.compute.manager [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] No waiting events found dispatching network-vif-unplugged-d0168a86-f128-46b0-bf1e-62edd18e00cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.238 247403 DEBUG nova.compute.manager [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-vif-unplugged-d0168a86-f128-46b0-bf1e-62edd18e00cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.238 247403 DEBUG nova.compute.manager [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.238 247403 DEBUG oslo_concurrency.lockutils [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.238 247403 DEBUG oslo_concurrency.lockutils [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.238 247403 DEBUG oslo_concurrency.lockutils [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.239 247403 DEBUG nova.compute.manager [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] No waiting events found dispatching network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:26 np0005603621 nova_compute[247399]: 2026-01-31 09:08:26.239 247403 WARNING nova.compute.manager [req-6ff51b4a-8269-4e0d-bb59-540655822bfa req-4e95f79e-00aa-4679-a305-53f65e58f469 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received unexpected event network-vif-plugged-d0168a86-f128-46b0-bf1e-62edd18e00cd for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.137 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 437 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 149 KiB/s rd, 106 KiB/s wr, 20 op/s
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.320 247403 DEBUG nova.network.neutron [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.341 247403 INFO nova.compute.manager [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Took 2.15 seconds to deallocate network for instance.#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.407 247403 DEBUG nova.compute.manager [req-6346242a-2bee-4c2d-a224-e4af88c45ba7 req-5070170f-63ce-4bba-98d4-be6978635292 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Received event network-vif-deleted-d0168a86-f128-46b0-bf1e-62edd18e00cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.419 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.419 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.565 247403 DEBUG oslo_concurrency.processutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.724 247403 DEBUG nova.network.neutron [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updated VIF entry in instance network info cache for port d0168a86-f128-46b0-bf1e-62edd18e00cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.725 247403 DEBUG nova.network.neutron [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Updating instance_info_cache with network_info: [{"id": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "address": "fa:16:3e:7b:f2:5e", "network": {"id": "6b575155-2651-409f-ab96-5a6cf52f7f88", "bridge": "br-int", "label": "tempest-network-smoke--862739051", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "abf9393aa2b646feb00a3d887a9dee14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0168a86-f1", "ovs_interfaceid": "d0168a86-f128-46b0-bf1e-62edd18e00cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.760 247403 DEBUG oslo_concurrency.lockutils [req-16690bbd-91df-490d-ab04-6f86f6658a08 req-734398fc-92bc-4e28-820c-4bc210a60f45 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-a3d51eec-c79b-4863-bbb8-d9c2e39742e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094093664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.960 247403 DEBUG oslo_concurrency.processutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.965 247403 DEBUG nova.compute.provider_tree [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:08:27 np0005603621 nova_compute[247399]: 2026-01-31 09:08:27.986 247403 DEBUG nova.scheduler.client.report [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.019 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.049 247403 INFO nova.scheduler.client.report [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Deleted allocations for instance a3d51eec-c79b-4863-bbb8-d9c2e39742e6#033[00m
Jan 31 04:08:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.142 247403 DEBUG oslo_concurrency.lockutils [None req-487b3317-276e-4070-9975-a8bdacfb2e9e d442c7ba12ed444ca6d4dcc5cfd36150 abf9393aa2b646feb00a3d887a9dee14 - - default default] Lock "a3d51eec-c79b-4863-bbb8-d9c2e39742e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:28 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:28.183 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.996 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.997 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.997 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:08:28 np0005603621 nova_compute[247399]: 2026-01-31 09:08:28.998 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e5867604-e99e-4623-8c07-a3ca7d95fe78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:08:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 415 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Jan 31 04:08:29 np0005603621 nova_compute[247399]: 2026-01-31 09:08:29.517 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:30.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:30.550 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:30.551 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:30.551 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:08:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d942b71e-0d2a-4a4b-9dfe-03aa6442b24d does not exist
Jan 31 04:08:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 412eb0a5-3ad3-4422-a6a6-eb852b530526 does not exist
Jan 31 04:08:30 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 75079985-511a-4f46-b5b5-5a6b799a928d does not exist
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:08:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.111472842 +0000 UTC m=+0.041932844 container create 76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yonath, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:08:31 np0005603621 systemd[1]: Started libpod-conmon-76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5.scope.
Jan 31 04:08:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:08:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 415 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 1.4 MiB/s wr, 57 op/s
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.1757252 +0000 UTC m=+0.106185232 container init 76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.185346033 +0000 UTC m=+0.115806035 container start 76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.092784032 +0000 UTC m=+0.023244054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:08:31 np0005603621 quirky_yonath[400135]: 167 167
Jan 31 04:08:31 np0005603621 systemd[1]: libpod-76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5.scope: Deactivated successfully.
Jan 31 04:08:31 np0005603621 conmon[400135]: conmon 76fc3263adb6e73e6fa1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5.scope/container/memory.events
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.201024909 +0000 UTC m=+0.131484911 container attach 76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yonath, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.202111442 +0000 UTC m=+0.132571434 container died 76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:08:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-64c388b8d1ef38f5bdf65d70095952ed54a32bf37a41f6e47cf27ab26e8da6d7-merged.mount: Deactivated successfully.
Jan 31 04:08:31 np0005603621 podman[400119]: 2026-01-31 09:08:31.319225319 +0000 UTC m=+0.249685321 container remove 76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_yonath, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:08:31 np0005603621 systemd[1]: libpod-conmon-76fc3263adb6e73e6fa1c06ddf5c74334f8fc34f72338c96b7250493b4974de5.scope: Deactivated successfully.
Jan 31 04:08:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:08:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:08:31 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:08:31 np0005603621 podman[400162]: 2026-01-31 09:08:31.442928333 +0000 UTC m=+0.038869358 container create 80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 31 04:08:31 np0005603621 systemd[1]: Started libpod-conmon-80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0.scope.
Jan 31 04:08:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:08:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711ed2514c3bcc1f74bbe29e354c9880ed71da88390e22091de25c4b298baa80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711ed2514c3bcc1f74bbe29e354c9880ed71da88390e22091de25c4b298baa80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711ed2514c3bcc1f74bbe29e354c9880ed71da88390e22091de25c4b298baa80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711ed2514c3bcc1f74bbe29e354c9880ed71da88390e22091de25c4b298baa80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/711ed2514c3bcc1f74bbe29e354c9880ed71da88390e22091de25c4b298baa80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:31 np0005603621 podman[400162]: 2026-01-31 09:08:31.424088229 +0000 UTC m=+0.020029274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:08:31 np0005603621 podman[400162]: 2026-01-31 09:08:31.522308719 +0000 UTC m=+0.118249744 container init 80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:08:31 np0005603621 podman[400162]: 2026-01-31 09:08:31.53152539 +0000 UTC m=+0.127466425 container start 80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 04:08:31 np0005603621 podman[400162]: 2026-01-31 09:08:31.536900069 +0000 UTC m=+0.132841114 container attach 80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:08:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:32.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:32 np0005603621 nova_compute[247399]: 2026-01-31 09:08:32.139 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:32 np0005603621 vigorous_williamson[400179]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:08:32 np0005603621 vigorous_williamson[400179]: --> relative data size: 1.0
Jan 31 04:08:32 np0005603621 vigorous_williamson[400179]: --> All data devices are unavailable
Jan 31 04:08:32 np0005603621 systemd[1]: libpod-80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0.scope: Deactivated successfully.
Jan 31 04:08:32 np0005603621 conmon[400179]: conmon 80f7256d306f8e5391af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0.scope/container/memory.events
Jan 31 04:08:32 np0005603621 podman[400162]: 2026-01-31 09:08:32.339231201 +0000 UTC m=+0.935172236 container died 80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:08:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-711ed2514c3bcc1f74bbe29e354c9880ed71da88390e22091de25c4b298baa80-merged.mount: Deactivated successfully.
Jan 31 04:08:32 np0005603621 podman[400162]: 2026-01-31 09:08:32.932375961 +0000 UTC m=+1.528317026 container remove 80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:08:32 np0005603621 systemd[1]: libpod-conmon-80f7256d306f8e5391af4ba8cfcc5fc4ceffe71011cc231e2ccd54a3adbce1b0.scope: Deactivated successfully.
Jan 31 04:08:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.039 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating instance_info_cache with network_info: [{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.067 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.067 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.067 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.118 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.119 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.119 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.119 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.119 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 363 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.447713056 +0000 UTC m=+0.039047673 container create fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_buck, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:08:33 np0005603621 systemd[1]: Started libpod-conmon-fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f.scope.
Jan 31 04:08:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.429822552 +0000 UTC m=+0.021157189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.531853542 +0000 UTC m=+0.123188179 container init fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.540801564 +0000 UTC m=+0.132136181 container start fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:08:33 np0005603621 eager_buck[400386]: 167 167
Jan 31 04:08:33 np0005603621 systemd[1]: libpod-fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f.scope: Deactivated successfully.
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.54829587 +0000 UTC m=+0.139630507 container attach fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.548781906 +0000 UTC m=+0.140116543 container died fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 04:08:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-29c26f3b848b4b03c03e8b762ae82b0b622e2863da90fc993372a7810eb6c94f-merged.mount: Deactivated successfully.
Jan 31 04:08:33 np0005603621 podman[400369]: 2026-01-31 09:08:33.604527105 +0000 UTC m=+0.195861722 container remove fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_buck, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:08:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2212740430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:33 np0005603621 systemd[1]: libpod-conmon-fd49bc14594b6151cf0982b888fe3e8d83a1367177d69b0248956bf37050a39f.scope: Deactivated successfully.
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.628 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.704 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.704 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000d4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:08:33 np0005603621 podman[400412]: 2026-01-31 09:08:33.748630624 +0000 UTC m=+0.048941416 container create 270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 04:08:33 np0005603621 systemd[1]: Started libpod-conmon-270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021.scope.
Jan 31 04:08:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:08:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c0072d8248a90b6de330d1e7672066ca2819df7387ec623552c70547678dbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c0072d8248a90b6de330d1e7672066ca2819df7387ec623552c70547678dbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c0072d8248a90b6de330d1e7672066ca2819df7387ec623552c70547678dbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c0072d8248a90b6de330d1e7672066ca2819df7387ec623552c70547678dbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:33 np0005603621 podman[400412]: 2026-01-31 09:08:33.724619045 +0000 UTC m=+0.024929857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:08:33 np0005603621 podman[400412]: 2026-01-31 09:08:33.833557514 +0000 UTC m=+0.133868336 container init 270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:08:33 np0005603621 podman[400412]: 2026-01-31 09:08:33.841054111 +0000 UTC m=+0.141364903 container start 270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:08:33 np0005603621 podman[400412]: 2026-01-31 09:08:33.846335517 +0000 UTC m=+0.146646309 container attach 270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mccarthy, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.852 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.854 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3843MB free_disk=20.88067626953125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.855 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.855 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.970 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance e5867604-e99e-4623-8c07-a3ca7d95fe78 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.971 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:08:33 np0005603621 nova_compute[247399]: 2026-01-31 09:08:33.971 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:08:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:34.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.042 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:34.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097570353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.482 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.487 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.515 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.520 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.546 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:08:34 np0005603621 nova_compute[247399]: 2026-01-31 09:08:34.547 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]: {
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:    "0": [
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:        {
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "devices": [
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "/dev/loop3"
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            ],
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "lv_name": "ceph_lv0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "lv_size": "7511998464",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "name": "ceph_lv0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "tags": {
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.cluster_name": "ceph",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.crush_device_class": "",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.encrypted": "0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.osd_id": "0",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.type": "block",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:                "ceph.vdo": "0"
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            },
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "type": "block",
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:            "vg_name": "ceph_vg0"
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:        }
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]:    ]
Jan 31 04:08:34 np0005603621 stoic_mccarthy[400428]: }
Jan 31 04:08:34 np0005603621 systemd[1]: libpod-270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021.scope: Deactivated successfully.
Jan 31 04:08:34 np0005603621 podman[400412]: 2026-01-31 09:08:34.740410985 +0000 UTC m=+1.040721777 container died 270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:08:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-76c0072d8248a90b6de330d1e7672066ca2819df7387ec623552c70547678dbe-merged.mount: Deactivated successfully.
Jan 31 04:08:34 np0005603621 podman[400412]: 2026-01-31 09:08:34.805110767 +0000 UTC m=+1.105421559 container remove 270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 31 04:08:34 np0005603621 systemd[1]: libpod-conmon-270433da797338a702fb011d1a3913b0aa6fdc9f1040faa5a9736bfb895c3021.scope: Deactivated successfully.
Jan 31 04:08:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.378071461 +0000 UTC m=+0.048942646 container create 1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 04:08:35 np0005603621 systemd[1]: Started libpod-conmon-1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc.scope.
Jan 31 04:08:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.353921128 +0000 UTC m=+0.024792343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.456710293 +0000 UTC m=+0.127581488 container init 1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.463134075 +0000 UTC m=+0.134005260 container start 1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.466504521 +0000 UTC m=+0.137375736 container attach 1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:08:35 np0005603621 wonderful_vaughan[400630]: 167 167
Jan 31 04:08:35 np0005603621 systemd[1]: libpod-1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc.scope: Deactivated successfully.
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.469433384 +0000 UTC m=+0.140304569 container died 1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 04:08:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-62ee35d1f802fde80a1f386342c239a7fbbe073d96de0e7441bba18d52db114b-merged.mount: Deactivated successfully.
Jan 31 04:08:35 np0005603621 podman[400614]: 2026-01-31 09:08:35.523592854 +0000 UTC m=+0.194464039 container remove 1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_vaughan, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:08:35 np0005603621 systemd[1]: libpod-conmon-1f0fef1daf9037853c13e742bf458b227684809b477d220a6a7c8d36a108accc.scope: Deactivated successfully.
Jan 31 04:08:35 np0005603621 podman[400649]: 2026-01-31 09:08:35.643661133 +0000 UTC m=+0.063396122 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:08:35 np0005603621 podman[400651]: 2026-01-31 09:08:35.673445843 +0000 UTC m=+0.091391475 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 04:08:35 np0005603621 nova_compute[247399]: 2026-01-31 09:08:35.677 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:35 np0005603621 nova_compute[247399]: 2026-01-31 09:08:35.678 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:35 np0005603621 nova_compute[247399]: 2026-01-31 09:08:35.678 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:08:35 np0005603621 podman[400687]: 2026-01-31 09:08:35.689682025 +0000 UTC m=+0.055250515 container create 00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:08:35 np0005603621 systemd[1]: Started libpod-conmon-00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601.scope.
Jan 31 04:08:35 np0005603621 podman[400687]: 2026-01-31 09:08:35.665375848 +0000 UTC m=+0.030944368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:08:35 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:08:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ff18fbda07bd2a788b8ad5c9dfd4c2356f4863fb644a94b7335f17da2997d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ff18fbda07bd2a788b8ad5c9dfd4c2356f4863fb644a94b7335f17da2997d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ff18fbda07bd2a788b8ad5c9dfd4c2356f4863fb644a94b7335f17da2997d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:35 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39ff18fbda07bd2a788b8ad5c9dfd4c2356f4863fb644a94b7335f17da2997d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:08:35 np0005603621 podman[400687]: 2026-01-31 09:08:35.795392542 +0000 UTC m=+0.160961062 container init 00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:08:35 np0005603621 podman[400687]: 2026-01-31 09:08:35.803064804 +0000 UTC m=+0.168633304 container start 00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 04:08:35 np0005603621 podman[400687]: 2026-01-31 09:08:35.811369856 +0000 UTC m=+0.176938376 container attach 00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:08:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]: {
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:        "osd_id": 0,
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:        "type": "bluestore"
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]:    }
Jan 31 04:08:36 np0005603621 vigilant_visvesvaraya[400717]: }
Jan 31 04:08:36 np0005603621 systemd[1]: libpod-00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601.scope: Deactivated successfully.
Jan 31 04:08:36 np0005603621 podman[400739]: 2026-01-31 09:08:36.616365542 +0000 UTC m=+0.021362755 container died 00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:08:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-39ff18fbda07bd2a788b8ad5c9dfd4c2356f4863fb644a94b7335f17da2997d3-merged.mount: Deactivated successfully.
Jan 31 04:08:36 np0005603621 podman[400739]: 2026-01-31 09:08:36.686475195 +0000 UTC m=+0.091472368 container remove 00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 04:08:36 np0005603621 systemd[1]: libpod-conmon-00bf5f4297567f2922e0054a6f8dac919a3a463caa3d9581831d5e2c2f2f1601.scope: Deactivated successfully.
Jan 31 04:08:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:08:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:08:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:08:37 np0005603621 nova_compute[247399]: 2026-01-31 09:08:37.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 346 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 93 KiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 31 04:08:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:08:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 77999ead-7b07-4e1e-b095-b4575bc87a0b does not exist
Jan 31 04:08:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f07ee88d-8bf5-4b4c-bbd0-a1178d021afa does not exist
Jan 31 04:08:37 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 612bc992-cc3c-4bb9-9f55-77d368052cd8 does not exist
Jan 31 04:08:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:08:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.003000096s ======
Jan 31 04:08:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000096s
Jan 31 04:08:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:38.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:38 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:08:38
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'backups', 'volumes', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 04:08:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.033 247403 DEBUG nova.compute.manager [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-changed-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.034 247403 DEBUG nova.compute.manager [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Refreshing instance network info cache due to event network-changed-22f7384a-ca75-447f-a16d-e1519837d337. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.034 247403 DEBUG oslo_concurrency.lockutils [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.035 247403 DEBUG oslo_concurrency.lockutils [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.035 247403 DEBUG nova.network.neutron [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Refreshing network info cache for port 22f7384a-ca75-447f-a16d-e1519837d337 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.116 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.117 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.117 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.118 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.118 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.119 247403 INFO nova.compute.manager [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Terminating instance#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.120 247403 DEBUG nova.compute.manager [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 112 KiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:08:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.488 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850504.4870296, a3d51eec-c79b-4863-bbb8-d9c2e39742e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.488 247403 INFO nova.compute.manager [-] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.517 247403 DEBUG nova.compute.manager [None req-d4a54a42-4b26-4dbd-9fa8-18210abe468d - - - - - -] [instance: a3d51eec-c79b-4863-bbb8-d9c2e39742e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.522 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603621 kernel: tap22f7384a-ca (unregistering): left promiscuous mode
Jan 31 04:08:39 np0005603621 NetworkManager[49013]: <info>  [1769850519.8198] device (tap22f7384a-ca): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.824 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:39Z|00894|binding|INFO|Releasing lport 22f7384a-ca75-447f-a16d-e1519837d337 from this chassis (sb_readonly=0)
Jan 31 04:08:39 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:39Z|00895|binding|INFO|Setting lport 22f7384a-ca75-447f-a16d-e1519837d337 down in Southbound
Jan 31 04:08:39 np0005603621 ovn_controller[149152]: 2026-01-31T09:08:39Z|00896|binding|INFO|Removing iface tap22f7384a-ca ovn-installed in OVS
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.826 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.834 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:39.837 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:0e:b4 10.100.0.8'], port_security=['fa:16:3e:c6:0e:b4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e5867604-e99e-4623-8c07-a3ca7d95fe78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-650eb345-8346-4e8f-8e83-eeb0117654f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '76ce367a834b49dfb5b436848118b860', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cea15428-ed6f-44a7-98e5-24c0fab7b796', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8ecdc171-9d09-4cba-9bb9-cd2f8ef8e6c3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=22f7384a-ca75-447f-a16d-e1519837d337) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:08:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:39.839 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 22f7384a-ca75-447f-a16d-e1519837d337 in datapath 650eb345-8346-4e8f-8e83-eeb0117654f6 unbound from our chassis#033[00m
Jan 31 04:08:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:39.840 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 650eb345-8346-4e8f-8e83-eeb0117654f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:08:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:39.841 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[37df5b83-26f1-410b-a84e-a64f29396a97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:39 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:39.841 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6 namespace which is not needed anymore#033[00m
Jan 31 04:08:39 np0005603621 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d4.scope: Deactivated successfully.
Jan 31 04:08:39 np0005603621 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000d4.scope: Consumed 13.713s CPU time.
Jan 31 04:08:39 np0005603621 systemd-machined[212769]: Machine qemu-102-instance-000000d4 terminated.
Jan 31 04:08:39 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [NOTICE]   (399558) : haproxy version is 2.8.14-c23fe91
Jan 31 04:08:39 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [NOTICE]   (399558) : path to executable is /usr/sbin/haproxy
Jan 31 04:08:39 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [WARNING]  (399558) : Exiting Master process...
Jan 31 04:08:39 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [WARNING]  (399558) : Exiting Master process...
Jan 31 04:08:39 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [ALERT]    (399558) : Current worker (399560) exited with code 143 (Terminated)
Jan 31 04:08:39 np0005603621 neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6[399554]: [WARNING]  (399558) : All workers exited. Exiting... (0)
Jan 31 04:08:39 np0005603621 systemd[1]: libpod-ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a.scope: Deactivated successfully.
Jan 31 04:08:39 np0005603621 podman[400830]: 2026-01-31 09:08:39.948686133 +0000 UTC m=+0.043020809 container died ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.954 247403 INFO nova.virt.libvirt.driver [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Instance destroyed successfully.#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.955 247403 DEBUG nova.objects.instance [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lazy-loading 'resources' on Instance uuid e5867604-e99e-4623-8c07-a3ca7d95fe78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:08:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a-userdata-shm.mount: Deactivated successfully.
Jan 31 04:08:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-48702f4af4a565939b62ed21e3a80e6443f6b668bb883aac3de207cf0f684f41-merged.mount: Deactivated successfully.
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.984 247403 DEBUG nova.virt.libvirt.vif [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:07:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-922331869',display_name='tempest-TestVolumeBootPattern-server-922331869',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-922331869',id=212,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWvTxk1zh2OCmPH3tumEbxR7y880uhj4vJDAspX9r3EATf0w5oe5DG3NVBcNRbWTPcgVwlnXcyaRQZseLc7edDTe4kwfjogsRoplvkAsMWW9sCSaJlX0XBkMxl/Ghv8Fw==',key_name='tempest-TestVolumeBootPattern-11482540',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:08:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='76ce367a834b49dfb5b436848118b860',ramdisk_id='',reservation_id='r-e82erhrp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1392945362',owner_user_name='tempest-TestVolumeBootPattern-1392945362-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:08:00Z,user_data=None,user_id='dc42b92a5dd34d32b6b184bdc7acb092',uuid=e5867604-e99e-4623-8c07-a3ca7d95fe78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:08:39 np0005603621 podman[400830]: 2026-01-31 09:08:39.985697071 +0000 UTC m=+0.080031717 container cleanup ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.985 247403 DEBUG nova.network.os_vif_util [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Converting VIF {"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.987 247403 DEBUG nova.network.os_vif_util [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.988 247403 DEBUG os_vif [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.991 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.991 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap22f7384a-ca, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:39 np0005603621 systemd[1]: libpod-conmon-ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a.scope: Deactivated successfully.
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.993 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.996 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:08:39 np0005603621 nova_compute[247399]: 2026-01-31 09:08:39.998 247403 INFO os_vif [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:0e:b4,bridge_name='br-int',has_traffic_filtering=True,id=22f7384a-ca75-447f-a16d-e1519837d337,network=Network(650eb345-8346-4e8f-8e83-eeb0117654f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap22f7384a-ca')#033[00m
Jan 31 04:08:40 np0005603621 podman[400867]: 2026-01-31 09:08:40.042493364 +0000 UTC m=+0.042158651 container remove ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.045 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[045cffa3-2414-4509-b5d2-167ae44690ad]: (4, ('Sat Jan 31 09:08:39 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6 (ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a)\ned74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a\nSat Jan 31 09:08:39 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6 (ed74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a)\ned74bc8b3ad82d4a3625201c192345cba7c228a20d52442bcfee2658a6cdf17a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.047 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[346e6206-4a4c-4b15-a68f-e0e205e2a48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.049 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap650eb345-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:08:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:40 np0005603621 kernel: tap650eb345-80: left promiscuous mode
Jan 31 04:08:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:40.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.052 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.054 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcc878c-3fad-493b-842e-e372cfbc4def]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.056 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.072 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4f28a8-93e9-4471-a478-7fe64aaa476d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.074 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[74255da6-46da-43ea-9e57-c31228d83735]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.092 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[4980b144-c927-4135-b637-c97e0696c540]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 978491, 'reachable_time': 43335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400900, 'error': None, 'target': 'ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.095 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-650eb345-8346-4e8f-8e83-eeb0117654f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:08:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:08:40.095 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[58ca4865-c725-44f5-9d51-36f6dc684e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:08:40 np0005603621 systemd[1]: run-netns-ovnmeta\x2d650eb345\x2d8346\x2d4e8f\x2d8e83\x2deeb0117654f6.mount: Deactivated successfully.
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.402 247403 DEBUG nova.compute.manager [req-7e5f6481-9f88-4b3f-a1ac-b8d073e5c311 req-4d921c04-c562-44e6-91ab-229de6b544a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-vif-unplugged-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.403 247403 DEBUG oslo_concurrency.lockutils [req-7e5f6481-9f88-4b3f-a1ac-b8d073e5c311 req-4d921c04-c562-44e6-91ab-229de6b544a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.403 247403 DEBUG oslo_concurrency.lockutils [req-7e5f6481-9f88-4b3f-a1ac-b8d073e5c311 req-4d921c04-c562-44e6-91ab-229de6b544a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.403 247403 DEBUG oslo_concurrency.lockutils [req-7e5f6481-9f88-4b3f-a1ac-b8d073e5c311 req-4d921c04-c562-44e6-91ab-229de6b544a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.403 247403 DEBUG nova.compute.manager [req-7e5f6481-9f88-4b3f-a1ac-b8d073e5c311 req-4d921c04-c562-44e6-91ab-229de6b544a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] No waiting events found dispatching network-vif-unplugged-22f7384a-ca75-447f-a16d-e1519837d337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.403 247403 DEBUG nova.compute.manager [req-7e5f6481-9f88-4b3f-a1ac-b8d073e5c311 req-4d921c04-c562-44e6-91ab-229de6b544a0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-vif-unplugged-22f7384a-ca75-447f-a16d-e1519837d337 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.725 247403 INFO nova.virt.libvirt.driver [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Deleting instance files /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78_del#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.726 247403 INFO nova.virt.libvirt.driver [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Deletion of /var/lib/nova/instances/e5867604-e99e-4623-8c07-a3ca7d95fe78_del complete#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.792 247403 INFO nova.compute.manager [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Took 1.67 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.793 247403 DEBUG oslo.service.loopingcall [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.793 247403 DEBUG nova.compute.manager [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.793 247403 DEBUG nova.network.neutron [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:40 np0005603621 nova_compute[247399]: 2026-01-31 09:08:40.957 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 452 KiB/s wr, 42 op/s
Jan 31 04:08:41 np0005603621 nova_compute[247399]: 2026-01-31 09:08:41.638 247403 DEBUG nova.network.neutron [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updated VIF entry in instance network info cache for port 22f7384a-ca75-447f-a16d-e1519837d337. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:08:41 np0005603621 nova_compute[247399]: 2026-01-31 09:08:41.639 247403 DEBUG nova.network.neutron [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating instance_info_cache with network_info: [{"id": "22f7384a-ca75-447f-a16d-e1519837d337", "address": "fa:16:3e:c6:0e:b4", "network": {"id": "650eb345-8346-4e8f-8e83-eeb0117654f6", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1550438709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76ce367a834b49dfb5b436848118b860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap22f7384a-ca", "ovs_interfaceid": "22f7384a-ca75-447f-a16d-e1519837d337", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:41 np0005603621 nova_compute[247399]: 2026-01-31 09:08:41.665 247403 DEBUG oslo_concurrency.lockutils [req-a65942a2-3ec6-4a77-93f4-5a4dabdf8bc1 req-4b48496a-91aa-438e-a824-e5d9c6af8892 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-e5867604-e99e-4623-8c07-a3ca7d95fe78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:08:41 np0005603621 nova_compute[247399]: 2026-01-31 09:08:41.865 247403 DEBUG nova.network.neutron [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:08:41 np0005603621 nova_compute[247399]: 2026-01-31 09:08:41.894 247403 INFO nova.compute.manager [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Took 1.10 seconds to deallocate network for instance.#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.029 247403 DEBUG nova.compute.manager [req-c60811a9-61e8-4f8b-b8e5-e3a236521a19 req-5ec30b53-d135-401f-bddf-56e16601f0f9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-vif-deleted-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:42.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:08:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:42.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.165 247403 INFO nova.compute.manager [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Took 0.27 seconds to detach 1 volumes for instance.#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.211 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.212 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.266 247403 DEBUG oslo_concurrency.processutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.554 247403 DEBUG nova.compute.manager [req-7e588b16-7431-4899-949c-b9550957d4cc req-b03a6912-a313-4313-b3b0-244eca541a47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.554 247403 DEBUG oslo_concurrency.lockutils [req-7e588b16-7431-4899-949c-b9550957d4cc req-b03a6912-a313-4313-b3b0-244eca541a47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.555 247403 DEBUG oslo_concurrency.lockutils [req-7e588b16-7431-4899-949c-b9550957d4cc req-b03a6912-a313-4313-b3b0-244eca541a47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.555 247403 DEBUG oslo_concurrency.lockutils [req-7e588b16-7431-4899-949c-b9550957d4cc req-b03a6912-a313-4313-b3b0-244eca541a47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.555 247403 DEBUG nova.compute.manager [req-7e588b16-7431-4899-949c-b9550957d4cc req-b03a6912-a313-4313-b3b0-244eca541a47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] No waiting events found dispatching network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.555 247403 WARNING nova.compute.manager [req-7e588b16-7431-4899-949c-b9550957d4cc req-b03a6912-a313-4313-b3b0-244eca541a47 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Received unexpected event network-vif-plugged-22f7384a-ca75-447f-a16d-e1519837d337 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 04:08:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:08:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750199618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.644 247403 DEBUG oslo_concurrency.processutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.650 247403 DEBUG nova.compute.provider_tree [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.669 247403 DEBUG nova.scheduler.client.report [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.704 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:42 np0005603621 nova_compute[247399]: 2026-01-31 09:08:42.766 247403 INFO nova.scheduler.client.report [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Deleted allocations for instance e5867604-e99e-4623-8c07-a3ca7d95fe78#033[00m
Jan 31 04:08:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:43 np0005603621 nova_compute[247399]: 2026-01-31 09:08:43.069 247403 DEBUG oslo_concurrency.lockutils [None req-ddaa8cee-2b45-4bd9-aa24-4bf917d31f1d dc42b92a5dd34d32b6b184bdc7acb092 76ce367a834b49dfb5b436848118b860 - - default default] Lock "e5867604-e99e-4623-8c07-a3ca7d95fe78" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:08:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 349 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 467 KiB/s wr, 114 op/s
Jan 31 04:08:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:44.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:44.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:44 np0005603621 nova_compute[247399]: 2026-01-31 09:08:44.995 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Jan 31 04:08:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Jan 31 04:08:45 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Jan 31 04:08:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 342 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 75 KiB/s wr, 129 op/s
Jan 31 04:08:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:46.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:46.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:47 np0005603621 nova_compute[247399]: 2026-01-31 09:08:47.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 333 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 70 KiB/s wr, 147 op/s
Jan 31 04:08:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:48.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 23 KiB/s wr, 149 op/s
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003175781977014703 of space, bias 1.0, pg target 0.9527345931044109 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004333916399125256 of space, bias 1.0, pg target 1.3001749197375767 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:08:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:08:50 np0005603621 nova_compute[247399]: 2026-01-31 09:08:49.999 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:50.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 327 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 23 KiB/s wr, 149 op/s
Jan 31 04:08:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:52.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:52 np0005603621 nova_compute[247399]: 2026-01-31 09:08:52.146 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Jan 31 04:08:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Jan 31 04:08:52 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Jan 31 04:08:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 350 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 31 04:08:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:08:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:08:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:54.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:54 np0005603621 nova_compute[247399]: 2026-01-31 09:08:54.951 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850519.9495645, e5867604-e99e-4623-8c07-a3ca7d95fe78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:08:54 np0005603621 nova_compute[247399]: 2026-01-31 09:08:54.951 247403 INFO nova.compute.manager [-] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:08:54 np0005603621 nova_compute[247399]: 2026-01-31 09:08:54.972 247403 DEBUG nova.compute.manager [None req-a21d80d6-922b-4c38-98b9-754b4cd20497 - - - - - -] [instance: e5867604-e99e-4623-8c07-a3ca7d95fe78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:08:55 np0005603621 nova_compute[247399]: 2026-01-31 09:08:55.002 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 343 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 406 KiB/s rd, 2.6 MiB/s wr, 115 op/s
Jan 31 04:08:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:08:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:56.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:08:57 np0005603621 nova_compute[247399]: 2026-01-31 09:08:57.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:08:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 289 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 404 KiB/s rd, 2.6 MiB/s wr, 113 op/s
Jan 31 04:08:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:08:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:08:58.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:08:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:08:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:08:58.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:08:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 31 04:09:00 np0005603621 nova_compute[247399]: 2026-01-31 09:09:00.006 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:00.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 417 KiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 31 04:09:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:02.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:02.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:02 np0005603621 nova_compute[247399]: 2026-01-31 09:09:02.150 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 172 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 147 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 31 04:09:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:04.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:04.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:05 np0005603621 nova_compute[247399]: 2026-01-31 09:09:05.010 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 180 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 134 KiB/s rd, 2.3 MiB/s wr, 96 op/s
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.491635) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850545491685, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1629, "num_deletes": 252, "total_data_size": 2748596, "memory_usage": 2800448, "flush_reason": "Manual Compaction"}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850545507318, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2706276, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78842, "largest_seqno": 80470, "table_properties": {"data_size": 2698889, "index_size": 4329, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15990, "raw_average_key_size": 20, "raw_value_size": 2683866, "raw_average_value_size": 3405, "num_data_blocks": 190, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850393, "oldest_key_time": 1769850393, "file_creation_time": 1769850545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 15742 microseconds, and 4874 cpu microseconds.
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.507373) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2706276 bytes OK
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.507408) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.509329) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.509360) EVENT_LOG_v1 {"time_micros": 1769850545509351, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.509394) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2741693, prev total WAL file size 2741693, number of live WAL files 2.
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.510615) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2642KB)], [182(12MB)]
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850545510701, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 15300143, "oldest_snapshot_seqno": -1}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10675 keys, 13445447 bytes, temperature: kUnknown
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850545603338, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 13445447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13376951, "index_size": 40735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26693, "raw_key_size": 282130, "raw_average_key_size": 26, "raw_value_size": 13191045, "raw_average_value_size": 1235, "num_data_blocks": 1546, "num_entries": 10675, "num_filter_entries": 10675, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.603902) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13445447 bytes
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.605392) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 145.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 12.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(10.6) write-amplify(5.0) OK, records in: 11196, records dropped: 521 output_compression: NoCompression
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.605430) EVENT_LOG_v1 {"time_micros": 1769850545605412, "job": 114, "event": "compaction_finished", "compaction_time_micros": 92609, "compaction_time_cpu_micros": 32890, "output_level": 6, "num_output_files": 1, "total_output_size": 13445447, "num_input_records": 11196, "num_output_records": 10675, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850545606460, "job": 114, "event": "table_file_deletion", "file_number": 184}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850545609454, "job": 114, "event": "table_file_deletion", "file_number": 182}
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.510518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.609572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.609579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.609582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.609584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:05 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:09:05.609586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:09:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:06.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:06.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:06 np0005603621 podman[401038]: 2026-01-31 09:09:06.53434241 +0000 UTC m=+0.075286066 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 31 04:09:06 np0005603621 podman[401039]: 2026-01-31 09:09:06.606679923 +0000 UTC m=+0.146624919 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 04:09:07 np0005603621 nova_compute[247399]: 2026-01-31 09:09:07.152 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 31 04:09:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:08.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:09:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:09:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 31 04:09:10 np0005603621 nova_compute[247399]: 2026-01-31 09:09:10.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:10.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 31 04:09:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:12.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:12.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:12 np0005603621 nova_compute[247399]: 2026-01-31 09:09:12.153 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 31 04:09:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:14.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:14.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:09:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176011879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:09:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:09:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3176011879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:09:15 np0005603621 nova_compute[247399]: 2026-01-31 09:09:15.016 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 812 KiB/s rd, 1.0 MiB/s wr, 63 op/s
Jan 31 04:09:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:16.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:16.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:17 np0005603621 nova_compute[247399]: 2026-01-31 09:09:17.154 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 293 KiB/s wr, 62 op/s
Jan 31 04:09:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:18.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:18.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 04:09:20 np0005603621 nova_compute[247399]: 2026-01-31 09:09:20.019 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:09:20.036 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:09:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:09:20.036 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:09:20 np0005603621 nova_compute[247399]: 2026-01-31 09:09:20.036 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:20.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:20.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:09:21.039 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:09:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 04:09:21 np0005603621 nova_compute[247399]: 2026-01-31 09:09:21.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:22.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:22.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:22 np0005603621 nova_compute[247399]: 2026-01-31 09:09:22.156 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 04:09:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:24.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:24.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:25 np0005603621 nova_compute[247399]: 2026-01-31 09:09:25.022 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 172 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 836 KiB/s wr, 77 op/s
Jan 31 04:09:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:26.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:09:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2169149247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:09:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:26.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:26 np0005603621 nova_compute[247399]: 2026-01-31 09:09:26.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:27 np0005603621 nova_compute[247399]: 2026-01-31 09:09:27.158 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 173 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 996 KiB/s wr, 52 op/s
Jan 31 04:09:27 np0005603621 nova_compute[247399]: 2026-01-31 09:09:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:27 np0005603621 nova_compute[247399]: 2026-01-31 09:09:27.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:09:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:28.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:28.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:28 np0005603621 nova_compute[247399]: 2026-01-31 09:09:28.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 76 op/s
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.025 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:30.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:30.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.223 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.223 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.259 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.260 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:09:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:09:30.551 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:09:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:09:30.551 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:09:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:09:30.551 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:09:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:09:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1931947847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.662 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.779 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.780 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4110MB free_disk=20.930259704589844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.780 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.780 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.846 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.846 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:09:30 np0005603621 nova_compute[247399]: 2026-01-31 09:09:30.861 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:09:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 210 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 211 KiB/s rd, 3.2 MiB/s wr, 50 op/s
Jan 31 04:09:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:09:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211007149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:09:31 np0005603621 nova_compute[247399]: 2026-01-31 09:09:31.294 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:09:31 np0005603621 nova_compute[247399]: 2026-01-31 09:09:31.300 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:09:31 np0005603621 nova_compute[247399]: 2026-01-31 09:09:31.315 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:09:31 np0005603621 nova_compute[247399]: 2026-01-31 09:09:31.339 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:09:31 np0005603621 nova_compute[247399]: 2026-01-31 09:09:31.340 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:09:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:32.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:32.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:32 np0005603621 nova_compute[247399]: 2026-01-31 09:09:32.159 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 244 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 3.9 MiB/s wr, 81 op/s
Jan 31 04:09:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:34.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:34.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:35 np0005603621 nova_compute[247399]: 2026-01-31 09:09:35.027 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 31 04:09:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:36.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:36.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:36 np0005603621 nova_compute[247399]: 2026-01-31 09:09:36.315 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:36 np0005603621 nova_compute[247399]: 2026-01-31 09:09:36.316 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:37 np0005603621 nova_compute[247399]: 2026-01-31 09:09:37.161 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:37 np0005603621 nova_compute[247399]: 2026-01-31 09:09:37.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 335 KiB/s rd, 3.1 MiB/s wr, 78 op/s
Jan 31 04:09:37 np0005603621 podman[401191]: 2026-01-31 09:09:37.486454444 +0000 UTC m=+0.045070333 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 04:09:37 np0005603621 podman[401192]: 2026-01-31 09:09:37.550077482 +0000 UTC m=+0.107055860 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:09:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:38.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:38.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:09:38
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'vms', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.control']
Jan 31 04:09:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:09:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 134 op/s
Jan 31 04:09:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:09:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:09:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:40 np0005603621 nova_compute[247399]: 2026-01-31 09:09:40.032 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:40.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:40.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev eee80d40-8009-4341-8ab6-9cda16e989ed does not exist
Jan 31 04:09:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6d0d8bc5-5c15-41ec-9351-4179db35aa37 does not exist
Jan 31 04:09:40 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ca09f403-6a85-4542-9b63-5af802b07897 does not exist
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:40 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.767618702 +0000 UTC m=+0.030365129 container create d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 04:09:40 np0005603621 systemd[1]: Started libpod-conmon-d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140.scope.
Jan 31 04:09:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.817442994 +0000 UTC m=+0.080189431 container init d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.821781181 +0000 UTC m=+0.084527618 container start d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:09:40 np0005603621 condescending_chatterjee[401527]: 167 167
Jan 31 04:09:40 np0005603621 systemd[1]: libpod-d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140.scope: Deactivated successfully.
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.825279451 +0000 UTC m=+0.088025928 container attach d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.826397777 +0000 UTC m=+0.089144214 container died d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.75363061 +0000 UTC m=+0.016377067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4a0b55a5cbd7a76add84191e0d6a450389d0a15d81e103086ebe13fbec17402c-merged.mount: Deactivated successfully.
Jan 31 04:09:40 np0005603621 podman[401510]: 2026-01-31 09:09:40.903403737 +0000 UTC m=+0.166150164 container remove d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:09:40 np0005603621 systemd[1]: libpod-conmon-d53c9f3d8f6619ed954064f4112e9e51ace7e5e134879612bc7b3ce69dd16140.scope: Deactivated successfully.
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.035704693 +0000 UTC m=+0.038451874 container create 8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:09:41 np0005603621 systemd[1]: Started libpod-conmon-8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643.scope.
Jan 31 04:09:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:09:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eed5a99d315358a5ef3141ac30093379ecad98a27d4575058807fbf4274eef6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eed5a99d315358a5ef3141ac30093379ecad98a27d4575058807fbf4274eef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eed5a99d315358a5ef3141ac30093379ecad98a27d4575058807fbf4274eef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eed5a99d315358a5ef3141ac30093379ecad98a27d4575058807fbf4274eef6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:41 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eed5a99d315358a5ef3141ac30093379ecad98a27d4575058807fbf4274eef6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.019131 +0000 UTC m=+0.021878201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.125416654 +0000 UTC m=+0.128163875 container init 8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.133677855 +0000 UTC m=+0.136425036 container start 8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.137235438 +0000 UTC m=+0.139982639 container attach 8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldstine, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:09:41 np0005603621 nova_compute[247399]: 2026-01-31 09:09:41.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:09:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 745 KiB/s wr, 97 op/s
Jan 31 04:09:41 np0005603621 crazy_goldstine[401567]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:09:41 np0005603621 crazy_goldstine[401567]: --> relative data size: 1.0
Jan 31 04:09:41 np0005603621 crazy_goldstine[401567]: --> All data devices are unavailable
Jan 31 04:09:41 np0005603621 systemd[1]: libpod-8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643.scope: Deactivated successfully.
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.926625582 +0000 UTC m=+0.929372763 container died 8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 04:09:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0eed5a99d315358a5ef3141ac30093379ecad98a27d4575058807fbf4274eef6-merged.mount: Deactivated successfully.
Jan 31 04:09:41 np0005603621 podman[401551]: 2026-01-31 09:09:41.969795154 +0000 UTC m=+0.972542335 container remove 8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldstine, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 04:09:41 np0005603621 systemd[1]: libpod-conmon-8b62e1eabd8d259380c89bccb881bc4e3ee11e5fab137a5f0e400bb6ab1e9643.scope: Deactivated successfully.
Jan 31 04:09:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:42 np0005603621 nova_compute[247399]: 2026-01-31 09:09:42.162 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.434657895 +0000 UTC m=+0.033989074 container create 5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:09:42 np0005603621 systemd[1]: Started libpod-conmon-5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524.scope.
Jan 31 04:09:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.498927684 +0000 UTC m=+0.098258863 container init 5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.504546791 +0000 UTC m=+0.103877970 container start 5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khorana, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:09:42 np0005603621 systemd[1]: libpod-5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524.scope: Deactivated successfully.
Jan 31 04:09:42 np0005603621 vigorous_khorana[401753]: 167 167
Jan 31 04:09:42 np0005603621 conmon[401753]: conmon 5f8b85602a8a0c1a393a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524.scope/container/memory.events
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.511338176 +0000 UTC m=+0.110669355 container attach 5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.511907104 +0000 UTC m=+0.111238283 container died 5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.420086965 +0000 UTC m=+0.019418164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-68ac61f4cbaabd108447cfa6b98cf1003bc69f733fb98987ead3e7778474c49e-merged.mount: Deactivated successfully.
Jan 31 04:09:42 np0005603621 podman[401738]: 2026-01-31 09:09:42.544606336 +0000 UTC m=+0.143937505 container remove 5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khorana, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:09:42 np0005603621 systemd[1]: libpod-conmon-5f8b85602a8a0c1a393ae45eaa5dd7981d0177a4b80a9615c7eeb81cb7a7d524.scope: Deactivated successfully.
Jan 31 04:09:42 np0005603621 podman[401778]: 2026-01-31 09:09:42.655894968 +0000 UTC m=+0.036103251 container create aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:09:42 np0005603621 systemd[1]: Started libpod-conmon-aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4.scope.
Jan 31 04:09:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:09:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0dde572f53bb44f2d0c662486f34ea66e5b0f886b332c350c69d0d20f8e3737/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0dde572f53bb44f2d0c662486f34ea66e5b0f886b332c350c69d0d20f8e3737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0dde572f53bb44f2d0c662486f34ea66e5b0f886b332c350c69d0d20f8e3737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0dde572f53bb44f2d0c662486f34ea66e5b0f886b332c350c69d0d20f8e3737/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:42 np0005603621 podman[401778]: 2026-01-31 09:09:42.64012004 +0000 UTC m=+0.020328343 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:42 np0005603621 podman[401778]: 2026-01-31 09:09:42.751276508 +0000 UTC m=+0.131484861 container init aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:09:42 np0005603621 podman[401778]: 2026-01-31 09:09:42.757815375 +0000 UTC m=+0.138023668 container start aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wright, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:09:42 np0005603621 podman[401778]: 2026-01-31 09:09:42.76179001 +0000 UTC m=+0.141998293 container attach aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wright, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:09:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 746 KiB/s wr, 136 op/s
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]: {
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:    "0": [
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:        {
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "devices": [
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "/dev/loop3"
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            ],
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "lv_name": "ceph_lv0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "lv_size": "7511998464",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "name": "ceph_lv0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "tags": {
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.cluster_name": "ceph",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.crush_device_class": "",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.encrypted": "0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.osd_id": "0",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.type": "block",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:                "ceph.vdo": "0"
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            },
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "type": "block",
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:            "vg_name": "ceph_vg0"
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:        }
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]:    ]
Jan 31 04:09:43 np0005603621 ecstatic_wright[401794]: }
Jan 31 04:09:43 np0005603621 systemd[1]: libpod-aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4.scope: Deactivated successfully.
Jan 31 04:09:43 np0005603621 podman[401778]: 2026-01-31 09:09:43.498801541 +0000 UTC m=+0.879009834 container died aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wright, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:09:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a0dde572f53bb44f2d0c662486f34ea66e5b0f886b332c350c69d0d20f8e3737-merged.mount: Deactivated successfully.
Jan 31 04:09:43 np0005603621 podman[401778]: 2026-01-31 09:09:43.546505116 +0000 UTC m=+0.926713399 container remove aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:09:43 np0005603621 systemd[1]: libpod-conmon-aff442ec6f26c6a513385096078c1fabe2747437688ad78559f59e3b3a408bd4.scope: Deactivated successfully.
Jan 31 04:09:43 np0005603621 ovn_controller[149152]: 2026-01-31T09:09:43Z|00897|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.044730441 +0000 UTC m=+0.055154521 container create f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:09:44 np0005603621 systemd[1]: Started libpod-conmon-f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030.scope.
Jan 31 04:09:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.025624838 +0000 UTC m=+0.036048958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:44.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.128841316 +0000 UTC m=+0.139265426 container init f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.13562268 +0000 UTC m=+0.146046750 container start f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.139005496 +0000 UTC m=+0.149429576 container attach f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 04:09:44 np0005603621 lucid_mcclintock[402025]: 167 167
Jan 31 04:09:44 np0005603621 systemd[1]: libpod-f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030.scope: Deactivated successfully.
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.142452045 +0000 UTC m=+0.152876125 container died f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:09:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6aa1fb64c46d9badc12e18be3afe83e290eb4e39034dfba310328adc06c0365f-merged.mount: Deactivated successfully.
Jan 31 04:09:44 np0005603621 podman[402008]: 2026-01-31 09:09:44.177365408 +0000 UTC m=+0.187789478 container remove f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:09:44 np0005603621 systemd[1]: libpod-conmon-f4ac7156bcfc4932f2316d68783c43c4dd970127a3ea444cc67d8401c51ed030.scope: Deactivated successfully.
Jan 31 04:09:44 np0005603621 podman[402049]: 2026-01-31 09:09:44.370645377 +0000 UTC m=+0.081466732 container create c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:09:44 np0005603621 podman[402049]: 2026-01-31 09:09:44.328798147 +0000 UTC m=+0.039619572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:09:44 np0005603621 systemd[1]: Started libpod-conmon-c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121.scope.
Jan 31 04:09:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90875cfd31387cb25a0770ea39b6f26e191516a985c65020020990f05ea31f09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90875cfd31387cb25a0770ea39b6f26e191516a985c65020020990f05ea31f09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90875cfd31387cb25a0770ea39b6f26e191516a985c65020020990f05ea31f09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90875cfd31387cb25a0770ea39b6f26e191516a985c65020020990f05ea31f09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:09:44 np0005603621 podman[402049]: 2026-01-31 09:09:44.487246087 +0000 UTC m=+0.198067442 container init c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:09:44 np0005603621 podman[402049]: 2026-01-31 09:09:44.502094776 +0000 UTC m=+0.212916121 container start c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:09:44 np0005603621 podman[402049]: 2026-01-31 09:09:44.505769682 +0000 UTC m=+0.216591067 container attach c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:09:45 np0005603621 nova_compute[247399]: 2026-01-31 09:09:45.035 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 38 KiB/s wr, 108 op/s
Jan 31 04:09:45 np0005603621 distracted_napier[402066]: {
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:        "osd_id": 0,
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:        "type": "bluestore"
Jan 31 04:09:45 np0005603621 distracted_napier[402066]:    }
Jan 31 04:09:45 np0005603621 distracted_napier[402066]: }
Jan 31 04:09:45 np0005603621 systemd[1]: libpod-c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121.scope: Deactivated successfully.
Jan 31 04:09:45 np0005603621 podman[402049]: 2026-01-31 09:09:45.348504609 +0000 UTC m=+1.059326014 container died c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:09:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-90875cfd31387cb25a0770ea39b6f26e191516a985c65020020990f05ea31f09-merged.mount: Deactivated successfully.
Jan 31 04:09:45 np0005603621 podman[402049]: 2026-01-31 09:09:45.402355449 +0000 UTC m=+1.113176804 container remove c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 31 04:09:45 np0005603621 systemd[1]: libpod-conmon-c99c38a6921aa2239c792db64569197d82a4541585b3df0b5d9e4e35f08cb121.scope: Deactivated successfully.
Jan 31 04:09:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:09:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:09:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1b684a42-9d09-45cf-b701-b8011209e292 does not exist
Jan 31 04:09:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 074ff122-2df6-4ae6-be13-e35f92928fad does not exist
Jan 31 04:09:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 39b1fcc5-c601-4d0d-8b11-41c3ad88e159 does not exist
Jan 31 04:09:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:46.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:46 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:09:47 np0005603621 nova_compute[247399]: 2026-01-31 09:09:47.164 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 101 op/s
Jan 31 04:09:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:09:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:48.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:09:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:48.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:09:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 18K writes, 80K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1639 writes, 7332 keys, 1638 commit groups, 1.0 writes per commit group, ingest: 10.82 MB, 0.02 MB/s#012Interval WAL: 1638 writes, 1637 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     28.9      3.82              0.26        57    0.067       0      0       0.0       0.0#012  L6      1/0   12.82 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.3     53.6     45.9     12.67              1.27        56    0.226    433K    30K       0.0       0.0#012 Sum      1/0   12.82 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.3     41.2     42.0     16.49              1.53       113    0.146    433K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0     77.1     78.4      1.05              0.17        12    0.087     65K   3146       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     53.6     45.9     12.67              1.27        56    0.226    433K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     28.9      3.81              0.26        56    0.068       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.108, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.68 GB write, 0.10 MB/s write, 0.66 GB read, 0.10 MB/s read, 16.5 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 74.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.001099 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4296,71.66 MB,23.5736%) FilterBlock(114,1.17 MB,0.383934%) IndexBlock(114,1.94 MB,0.637782%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 563 KiB/s wr, 111 op/s
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012940861715987344 of space, bias 1.0, pg target 0.3882258514796203 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:09:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:09:50 np0005603621 nova_compute[247399]: 2026-01-31 09:09:50.039 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:50.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 174 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 504 KiB/s rd, 549 KiB/s wr, 54 op/s
Jan 31 04:09:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:52.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:52 np0005603621 nova_compute[247399]: 2026-01-31 09:09:52.165 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:52.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 195 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 723 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 31 04:09:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:09:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:54.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:09:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:54.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:55 np0005603621 nova_compute[247399]: 2026-01-31 09:09:55.042 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 31 04:09:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:56.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:56.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:57 np0005603621 nova_compute[247399]: 2026-01-31 09:09:57.166 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:09:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 04:09:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:09:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:09:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:09:58.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:09:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:09:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:09:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:09:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:09:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 367 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 04:10:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 04:10:00 np0005603621 nova_compute[247399]: 2026-01-31 09:10:00.044 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:00.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 04:10:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 355 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Jan 31 04:10:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:02.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:02 np0005603621 nova_compute[247399]: 2026-01-31 09:10:02.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 149 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 362 KiB/s rd, 1.6 MiB/s wr, 64 op/s
Jan 31 04:10:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:04.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:04.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:05 np0005603621 nova_compute[247399]: 2026-01-31 09:10:05.047 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 154 KiB/s rd, 73 KiB/s wr, 43 op/s
Jan 31 04:10:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:06.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:06.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:07 np0005603621 nova_compute[247399]: 2026-01-31 09:10:07.170 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 31 04:10:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:08.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:08.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:08 np0005603621 podman[402213]: 2026-01-31 09:10:08.501615681 +0000 UTC m=+0.055622617 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:10:08 np0005603621 podman[402214]: 2026-01-31 09:10:08.526479015 +0000 UTC m=+0.080311005 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:10:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:10:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 31 04:10:10 np0005603621 nova_compute[247399]: 2026-01-31 09:10:10.050 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:10.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 04:10:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:12.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:12 np0005603621 nova_compute[247399]: 2026-01-31 09:10:12.172 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:12.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 31 04:10:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:14.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:10:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241246889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:10:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:10:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2241246889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:10:15 np0005603621 nova_compute[247399]: 2026-01-31 09:10:15.054 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Jan 31 04:10:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:16.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:17 np0005603621 nova_compute[247399]: 2026-01-31 09:10:17.173 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:10:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:18.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:10:20 np0005603621 nova_compute[247399]: 2026-01-31 09:10:20.057 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:20.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:20.168 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:10:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:20.169 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:10:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:20 np0005603621 nova_compute[247399]: 2026-01-31 09:10:20.221 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:21 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:21.171 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:21 np0005603621 nova_compute[247399]: 2026-01-31 09:10:21.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:10:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:22.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:22 np0005603621 nova_compute[247399]: 2026-01-31 09:10:22.176 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:22.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:10:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:24.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:10:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:10:25 np0005603621 nova_compute[247399]: 2026-01-31 09:10:25.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:10:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:26.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:27 np0005603621 nova_compute[247399]: 2026-01-31 09:10:27.178 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:27 np0005603621 nova_compute[247399]: 2026-01-31 09:10:27.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 136 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 467 KiB/s wr, 0 op/s
Jan 31 04:10:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:28.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:28.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:28 np0005603621 nova_compute[247399]: 2026-01-31 09:10:28.621 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:28 np0005603621 nova_compute[247399]: 2026-01-31 09:10:28.621 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:28 np0005603621 nova_compute[247399]: 2026-01-31 09:10:28.639 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.050 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.051 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.057 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.057 247403 INFO nova.compute.claims [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.218 247403 DEBUG nova.scheduler.client.report [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:10:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.295 247403 DEBUG nova.scheduler.client.report [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.296 247403 DEBUG nova.compute.provider_tree [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.314 247403 DEBUG nova.scheduler.client.report [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.344 247403 DEBUG nova.scheduler.client.report [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.379 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:10:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626993907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.783 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.788 247403 DEBUG nova.compute.provider_tree [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.829 247403 DEBUG nova.scheduler.client.report [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.868 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.869 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.923 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.923 247403 DEBUG nova.network.neutron [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.948 247403 INFO nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:10:29 np0005603621 nova_compute[247399]: 2026-01-31 09:10:29.984 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.063 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.087 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.088 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.088 247403 INFO nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Creating image(s)#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.113 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.137 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.160 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.163 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:30.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.209 247403 DEBUG nova.policy [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ebd43008d7a64b8bbf97a2304b1f78b6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:10:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:30.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.243 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.244 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.245 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.245 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.270 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:30 np0005603621 nova_compute[247399]: 2026-01-31 09:10:30.273 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:30.552 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:10:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.241 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.241 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.242 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.265 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.265 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.266 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.266 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.266 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:10:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69440004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.669 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.801 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.802 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.802 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.802 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.906 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.906 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.906 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:10:31 np0005603621 nova_compute[247399]: 2026-01-31 09:10:31.967 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.116 247403 DEBUG nova.network.neutron [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Successfully created port: bc656551-d5b1-4d24-a64b-f6713c6cd8f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:10:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:32.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.180 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:32.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:10:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3851815265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.391 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.398 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.478 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.506 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.590 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] resizing rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.686 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:10:32 np0005603621 nova_compute[247399]: 2026-01-31 09:10:32.687 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 199 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 3.3 MiB/s wr, 39 op/s
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.567 247403 DEBUG nova.network.neutron [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Successfully updated port: bc656551-d5b1-4d24-a64b-f6713c6cd8f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.629 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.630 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquired lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.630 247403 DEBUG nova.network.neutron [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.709 247403 DEBUG nova.compute.manager [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-changed-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.710 247403 DEBUG nova.compute.manager [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Refreshing instance network info cache due to event network-changed-bc656551-d5b1-4d24-a64b-f6713c6cd8f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.710 247403 DEBUG oslo_concurrency.lockutils [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.718 247403 DEBUG nova.objects.instance [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'migration_context' on Instance uuid 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.737 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.738 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Ensure instance console log exists: /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.738 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.739 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:33 np0005603621 nova_compute[247399]: 2026-01-31 09:10:33.739 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:34 np0005603621 nova_compute[247399]: 2026-01-31 09:10:34.085 247403 DEBUG nova.network.neutron [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:10:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:34.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:34.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:35 np0005603621 nova_compute[247399]: 2026-01-31 09:10:35.066 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 52 op/s
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.030 247403 DEBUG nova.network.neutron [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updating instance_info_cache with network_info: [{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.088 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Releasing lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.089 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Instance network_info: |[{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.091 247403 DEBUG oslo_concurrency.lockutils [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.092 247403 DEBUG nova.network.neutron [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Refreshing network info cache for port bc656551-d5b1-4d24-a64b-f6713c6cd8f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.095 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Start _get_guest_xml network_info=[{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.115 247403 WARNING nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.131 247403 DEBUG nova.virt.libvirt.host [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.132 247403 DEBUG nova.virt.libvirt.host [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.136 247403 DEBUG nova.virt.libvirt.host [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.137 247403 DEBUG nova.virt.libvirt.host [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.138 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.139 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.139 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.139 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.140 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.140 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.140 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.141 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.141 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.141 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.141 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.142 247403 DEBUG nova.virt.hardware [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.145 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:36.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:36.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:10:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169355672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.570 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.599 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:36 np0005603621 nova_compute[247399]: 2026-01-31 09:10:36.603 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:10:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3540364892' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.017 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.023 247403 DEBUG nova.virt.libvirt.vif [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ac',id=217,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKIc4iaayDmLhEo3vI6YGRzp7m9GW6fzslqwq++gP9ecVHJRq1tSjzVnTPtJw3RUxXTQDWiA7Ya9j/CawC++Id9BLZED+RHeJDZ4JXh3gvgziK3fUhGR6gajupFnKxcV3w==',key_name='tempest-TestSecurityGroupsBasicOps-762998316',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-4knsd0dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:10:30Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=098c6cd0-6927-41a0-9b9f-6d2cfd743dd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.024 247403 DEBUG nova.network.os_vif_util [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.025 247403 DEBUG nova.network.os_vif_util [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.027 247403 DEBUG nova.objects.instance [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'pci_devices' on Instance uuid 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.059 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <uuid>098c6cd0-6927-41a0-9b9f-6d2cfd743dd3</uuid>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <name>instance-000000d9</name>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091</nova:name>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:10:36</nova:creationTime>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:user uuid="ebd43008d7a64b8bbf97a2304b1f78b6">tempest-TestSecurityGroupsBasicOps-1802479850-project-member</nova:user>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:project uuid="0c7930b92fc3471f87d9fe78ee56e71e">tempest-TestSecurityGroupsBasicOps-1802479850</nova:project>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <nova:port uuid="bc656551-d5b1-4d24-a64b-f6713c6cd8f5">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <entry name="serial">098c6cd0-6927-41a0-9b9f-6d2cfd743dd3</entry>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <entry name="uuid">098c6cd0-6927-41a0-9b9f-6d2cfd743dd3</entry>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk.config">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:0b:98:1e"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <target dev="tapbc656551-d5"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/console.log" append="off"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:10:37 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:10:37 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:10:37 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:10:37 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.061 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Preparing to wait for external event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.062 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.062 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.062 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.063 247403 DEBUG nova.virt.libvirt.vif [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ac',id=217,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKIc4iaayDmLhEo3vI6YGRzp7m9GW6fzslqwq++gP9ecVHJRq1tSjzVnTPtJw3RUxXTQDWiA7Ya9j/CawC++Id9BLZED+RHeJDZ4JXh3gvgziK3fUhGR6gajupFnKxcV3w==',key_name='tempest-TestSecurityGroupsBasicOps-762998316',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-4knsd0dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:10:30Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=098c6cd0-6927-41a0-9b9f-6d2cfd743dd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.063 247403 DEBUG nova.network.os_vif_util [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.064 247403 DEBUG nova.network.os_vif_util [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.064 247403 DEBUG os_vif [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.065 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.065 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.066 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.069 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.069 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc656551-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.070 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc656551-d5, col_values=(('external_ids', {'iface-id': 'bc656551-d5b1-4d24-a64b-f6713c6cd8f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:98:1e', 'vm-uuid': '098c6cd0-6927-41a0-9b9f-6d2cfd743dd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.071 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:37 np0005603621 NetworkManager[49013]: <info>  [1769850637.0723] manager: (tapbc656551-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.073 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.080 247403 INFO os_vif [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5')#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.151 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.152 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.152 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No VIF found with MAC fa:16:3e:0b:98:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.152 247403 INFO nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Using config drive#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.182 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.219 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.640 247403 INFO nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Creating config drive at /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/disk.config#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.648 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb08per9q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.777 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpb08per9q" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.804 247403 DEBUG nova.storage.rbd_utils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.808 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/disk.config 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.977 247403 DEBUG nova.network.neutron [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updated VIF entry in instance network info cache for port bc656551-d5b1-4d24-a64b-f6713c6cd8f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:10:37 np0005603621 nova_compute[247399]: 2026-01-31 09:10:37.978 247403 DEBUG nova.network.neutron [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updating instance_info_cache with network_info: [{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:10:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:38 np0005603621 nova_compute[247399]: 2026-01-31 09:10:38.144 247403 DEBUG oslo_concurrency.lockutils [req-56330fdd-1b74-4c98-aa93-96d6278ab4da req-4b000071-009a-45a5-8c48-98966144a226 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:10:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:38.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:38.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:10:38 np0005603621 nova_compute[247399]: 2026-01-31 09:10:38.644 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:38 np0005603621 nova_compute[247399]: 2026-01-31 09:10:38.645 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:38 np0005603621 nova_compute[247399]: 2026-01-31 09:10:38.645 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:10:38
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'vms', '.mgr']
Jan 31 04:10:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:10:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 3.1 MiB/s wr, 53 op/s
Jan 31 04:10:39 np0005603621 podman[402673]: 2026-01-31 09:10:39.528168544 +0000 UTC m=+0.085294883 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 31 04:10:39 np0005603621 podman[402674]: 2026-01-31 09:10:39.531507569 +0000 UTC m=+0.088465453 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.066 247403 DEBUG oslo_concurrency.processutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/disk.config 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.067 247403 INFO nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Deleting local config drive /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3/disk.config because it was imported into RBD.#033[00m
Jan 31 04:10:40 np0005603621 kernel: tapbc656551-d5: entered promiscuous mode
Jan 31 04:10:40 np0005603621 NetworkManager[49013]: <info>  [1769850640.1274] manager: (tapbc656551-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/414)
Jan 31 04:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:40Z|00898|binding|INFO|Claiming lport bc656551-d5b1-4d24-a64b-f6713c6cd8f5 for this chassis.
Jan 31 04:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:40Z|00899|binding|INFO|bc656551-d5b1-4d24-a64b-f6713c6cd8f5: Claiming fa:16:3e:0b:98:1e 10.100.0.12
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.130 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.140 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:40 np0005603621 systemd-udevd[402730]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.163 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:40Z|00900|binding|INFO|Setting lport bc656551-d5b1-4d24-a64b-f6713c6cd8f5 ovn-installed in OVS
Jan 31 04:10:40 np0005603621 systemd-machined[212769]: New machine qemu-103-instance-000000d9.
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.167 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:40 np0005603621 NetworkManager[49013]: <info>  [1769850640.1763] device (tapbc656551-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:10:40 np0005603621 NetworkManager[49013]: <info>  [1769850640.1774] device (tapbc656551-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:10:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:40 np0005603621 systemd[1]: Started Virtual Machine qemu-103-instance-000000d9.
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.202 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:98:1e 10.100.0.12'], port_security=['fa:16:3e:0b:98:1e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '098c6cd0-6927-41a0-9b9f-6d2cfd743dd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a940e54a-7391-4617-93be-b1a956d3558c b27284c8-1498-47c6-abbd-80dc28bbe6f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be7ac7a5-1e86-4304-8ddd-d276d05956e0, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bc656551-d5b1-4d24-a64b-f6713c6cd8f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:40Z|00901|binding|INFO|Setting lport bc656551-d5b1-4d24-a64b-f6713c6cd8f5 up in Southbound
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.203 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bc656551-d5b1-4d24-a64b-f6713c6cd8f5 in datapath 919288ff-a51c-4b6d-81b3-cc76704eca9e bound to our chassis#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.205 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 919288ff-a51c-4b6d-81b3-cc76704eca9e#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.212 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc6d0c2-f32b-480a-a70d-c3458573336b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.213 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap919288ff-a1 in ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.215 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap919288ff-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.215 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[098229a7-a5c8-4792-9e77-ed4137ff4cd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.216 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fe31f108-a223-4b90-80c5-28c38a5a3d96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.225 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[9c725abf-c1f3-4e5d-bac9-0de4f03da612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.237 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2fc39d-1d80-49a5-a3a1-0926b8ef080d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:40.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.265 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8e6271-6833-48e6-9061-e097dd3b1656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 NetworkManager[49013]: <info>  [1769850640.2706] manager: (tap919288ff-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/415)
Jan 31 04:10:40 np0005603621 systemd-udevd[402732]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.270 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0922a7c4-6d84-479b-9c48-b1b5050830d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.292 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[99592405-0117-4244-aa86-7d77fd0c7d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.298 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[b28f961e-ff0a-4b5a-89d9-cf60d42d0415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 NetworkManager[49013]: <info>  [1769850640.3155] device (tap919288ff-a0): carrier: link connected
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.321 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cbfe15-8223-46db-87ca-9aab4343ca51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.332 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9155e1e2-f852-49c8-bc85-229ec50f5a0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap919288ff-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:27:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 994583, 'reachable_time': 42203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402764, 'error': None, 'target': 'ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.344 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[759709f7-47c5-4b73-9a35-8881bb680250]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:2710'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 994583, 'tstamp': 994583}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402765, 'error': None, 'target': 'ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.356 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a882ce4b-5163-4eeb-a7ca-b3b1b84648b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap919288ff-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:27:10'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 994583, 'reachable_time': 42203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402766, 'error': None, 'target': 'ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.379 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1277ec7f-cd43-421c-a612-1df0f2a35f5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.420 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[77a7c1f7-4832-45f6-97aa-4248ab55b392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.422 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap919288ff-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.422 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.423 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap919288ff-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:40 np0005603621 NetworkManager[49013]: <info>  [1769850640.4256] manager: (tap919288ff-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 31 04:10:40 np0005603621 kernel: tap919288ff-a0: entered promiscuous mode
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.427 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap919288ff-a0, col_values=(('external_ids', {'iface-id': 'da214d50-142b-42ea-aa25-6c69e8caf69b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:10:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:40Z|00902|binding|INFO|Releasing lport da214d50-142b-42ea-aa25-6c69e8caf69b from this chassis (sb_readonly=0)
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.425 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.434 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/919288ff-a51c-4b6d-81b3-cc76704eca9e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/919288ff-a51c-4b6d-81b3-cc76704eca9e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.434 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ba7e78-8b03-485e-bc27-b7bc087afb79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.435 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-919288ff-a51c-4b6d-81b3-cc76704eca9e
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/919288ff-a51c-4b6d-81b3-cc76704eca9e.pid.haproxy
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 919288ff-a51c-4b6d-81b3-cc76704eca9e
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:10:40 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:10:40.437 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'env', 'PROCESS_TAG=haproxy-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/919288ff-a51c-4b6d-81b3-cc76704eca9e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.693 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850640.6931846, 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.694 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] VM Started (Lifecycle Event)#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.720 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.725 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850640.6934075, 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.726 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:10:40 np0005603621 podman[402838]: 2026-01-31 09:10:40.759628901 +0000 UTC m=+0.041715408 container create 589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.761 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.767 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:10:40 np0005603621 systemd[1]: Started libpod-conmon-589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291.scope.
Jan 31 04:10:40 np0005603621 nova_compute[247399]: 2026-01-31 09:10:40.802 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:10:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcdef539a468ee356bd1bca32bfcf2d1cde9e64c2f362609d15b7e22ee3cc6e8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:40 np0005603621 podman[402838]: 2026-01-31 09:10:40.83061434 +0000 UTC m=+0.112700867 container init 589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:10:40 np0005603621 podman[402838]: 2026-01-31 09:10:40.73555232 +0000 UTC m=+0.017638847 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:10:40 np0005603621 podman[402838]: 2026-01-31 09:10:40.835524396 +0000 UTC m=+0.117610903 container start 589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:10:40 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [NOTICE]   (402858) : New worker (402860) forked
Jan 31 04:10:40 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [NOTICE]   (402858) : Loading success.
Jan 31 04:10:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.474 247403 DEBUG nova.compute.manager [req-b3ce00ec-06da-41f9-b914-53926175f982 req-1aea18b7-6ce6-4c27-80c4-8fc639c15f36 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.474 247403 DEBUG oslo_concurrency.lockutils [req-b3ce00ec-06da-41f9-b914-53926175f982 req-1aea18b7-6ce6-4c27-80c4-8fc639c15f36 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.474 247403 DEBUG oslo_concurrency.lockutils [req-b3ce00ec-06da-41f9-b914-53926175f982 req-1aea18b7-6ce6-4c27-80c4-8fc639c15f36 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.475 247403 DEBUG oslo_concurrency.lockutils [req-b3ce00ec-06da-41f9-b914-53926175f982 req-1aea18b7-6ce6-4c27-80c4-8fc639c15f36 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.475 247403 DEBUG nova.compute.manager [req-b3ce00ec-06da-41f9-b914-53926175f982 req-1aea18b7-6ce6-4c27-80c4-8fc639c15f36 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Processing event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.475 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.478 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850641.478071, 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.479 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.481 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.483 247403 INFO nova.virt.libvirt.driver [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Instance spawned successfully.#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.484 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.520 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.523 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.530 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.531 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.531 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.531 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.532 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.532 247403 DEBUG nova.virt.libvirt.driver [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.580 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.713 247403 INFO nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Took 11.63 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.714 247403 DEBUG nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.880 247403 INFO nova.compute.manager [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Took 13.19 seconds to build instance.#033[00m
Jan 31 04:10:41 np0005603621 nova_compute[247399]: 2026-01-31 09:10:41.945 247403 DEBUG oslo_concurrency.lockutils [None req-f750a6c8-6e89-4828-a298-5fd7c59e5e33 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:42 np0005603621 nova_compute[247399]: 2026-01-31 09:10:42.072 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:42.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:42 np0005603621 nova_compute[247399]: 2026-01-31 09:10:42.218 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:42.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 816 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Jan 31 04:10:43 np0005603621 nova_compute[247399]: 2026-01-31 09:10:43.632 247403 DEBUG nova.compute.manager [req-c7e5f20a-1556-4361-9c63-38443bf3eda0 req-b850550b-c305-4c83-a674-ad485a054d1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:10:43 np0005603621 nova_compute[247399]: 2026-01-31 09:10:43.633 247403 DEBUG oslo_concurrency.lockutils [req-c7e5f20a-1556-4361-9c63-38443bf3eda0 req-b850550b-c305-4c83-a674-ad485a054d1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:10:43 np0005603621 nova_compute[247399]: 2026-01-31 09:10:43.633 247403 DEBUG oslo_concurrency.lockutils [req-c7e5f20a-1556-4361-9c63-38443bf3eda0 req-b850550b-c305-4c83-a674-ad485a054d1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:10:43 np0005603621 nova_compute[247399]: 2026-01-31 09:10:43.633 247403 DEBUG oslo_concurrency.lockutils [req-c7e5f20a-1556-4361-9c63-38443bf3eda0 req-b850550b-c305-4c83-a674-ad485a054d1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:10:43 np0005603621 nova_compute[247399]: 2026-01-31 09:10:43.633 247403 DEBUG nova.compute.manager [req-c7e5f20a-1556-4361-9c63-38443bf3eda0 req-b850550b-c305-4c83-a674-ad485a054d1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] No waiting events found dispatching network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:10:43 np0005603621 nova_compute[247399]: 2026-01-31 09:10:43.633 247403 WARNING nova.compute.manager [req-c7e5f20a-1556-4361-9c63-38443bf3eda0 req-b850550b-c305-4c83-a674-ad485a054d1f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received unexpected event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:10:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:44.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:44.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 326 KiB/s wr, 81 op/s
Jan 31 04:10:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:46.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:46.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:46 np0005603621 podman[403095]: 2026-01-31 09:10:46.562464124 +0000 UTC m=+0.051149495 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:10:46 np0005603621 podman[403095]: 2026-01-31 09:10:46.658825475 +0000 UTC m=+0.147510816 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:10:47 np0005603621 nova_compute[247399]: 2026-01-31 09:10:47.074 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:10:47 np0005603621 podman[403245]: 2026-01-31 09:10:47.113911149 +0000 UTC m=+0.047670206 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:10:47 np0005603621 podman[403266]: 2026-01-31 09:10:47.173912962 +0000 UTC m=+0.048322226 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:10:47 np0005603621 podman[403245]: 2026-01-31 09:10:47.179124057 +0000 UTC m=+0.112883114 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:10:47 np0005603621 nova_compute[247399]: 2026-01-31 09:10:47.220 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:10:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 25 KiB/s wr, 117 op/s
Jan 31 04:10:47 np0005603621 podman[403311]: 2026-01-31 09:10:47.35855452 +0000 UTC m=+0.050379451 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, release=1793, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived)
Jan 31 04:10:47 np0005603621 podman[403328]: 2026-01-31 09:10:47.42285854 +0000 UTC m=+0.050051892 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 31 04:10:47 np0005603621 podman[403311]: 2026-01-31 09:10:47.426565376 +0000 UTC m=+0.118390287 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph)
Jan 31 04:10:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:10:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:10:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6ae974d9-bd31-4350-8209-715c0fd06e5f does not exist
Jan 31 04:10:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 79be5a74-7fac-4db8-817c-4123c3b13064 does not exist
Jan 31 04:10:48 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7fcd4963-d3d2-4db0-84bb-a6a71b5e3fad does not exist
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:10:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:48.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.515935278 +0000 UTC m=+0.040824979 container create ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:10:48 np0005603621 systemd[1]: Started libpod-conmon-ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932.scope.
Jan 31 04:10:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.59456154 +0000 UTC m=+0.119451281 container init ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.501244884 +0000 UTC m=+0.026134615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.599727402 +0000 UTC m=+0.124617113 container start ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.60311985 +0000 UTC m=+0.128009561 container attach ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 04:10:48 np0005603621 condescending_shockley[403632]: 167 167
Jan 31 04:10:48 np0005603621 systemd[1]: libpod-ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932.scope: Deactivated successfully.
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.60471987 +0000 UTC m=+0.129609571 container died ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:10:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-68f35170c9457527e0a20bd7c7b141d193126893d838841d51fd5444069f4cbe-merged.mount: Deactivated successfully.
Jan 31 04:10:48 np0005603621 podman[403615]: 2026-01-31 09:10:48.645324662 +0000 UTC m=+0.170214363 container remove ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:10:48 np0005603621 systemd[1]: libpod-conmon-ec2833569bdfb1b533f473a554d805625e631b8c7a53b8fb9183aca75a1f8932.scope: Deactivated successfully.
Jan 31 04:10:48 np0005603621 podman[403656]: 2026-01-31 09:10:48.782538732 +0000 UTC m=+0.033289411 container create 5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 04:10:48 np0005603621 systemd[1]: Started libpod-conmon-5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992.scope.
Jan 31 04:10:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799271bb2b84d6773e0691727889f13e1b541170c4afdf6842fff7723ea921af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799271bb2b84d6773e0691727889f13e1b541170c4afdf6842fff7723ea921af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799271bb2b84d6773e0691727889f13e1b541170c4afdf6842fff7723ea921af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799271bb2b84d6773e0691727889f13e1b541170c4afdf6842fff7723ea921af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:48 np0005603621 podman[403656]: 2026-01-31 09:10:48.768802799 +0000 UTC m=+0.019553498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:10:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/799271bb2b84d6773e0691727889f13e1b541170c4afdf6842fff7723ea921af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:48 np0005603621 podman[403656]: 2026-01-31 09:10:48.918636098 +0000 UTC m=+0.169386797 container init 5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:10:48 np0005603621 podman[403656]: 2026-01-31 09:10:48.923756779 +0000 UTC m=+0.174507458 container start 5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 04:10:48 np0005603621 podman[403656]: 2026-01-31 09:10:48.937627147 +0000 UTC m=+0.188377856 container attach 5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:10:48 np0005603621 NetworkManager[49013]: <info>  [1769850648.9671] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Jan 31 04:10:48 np0005603621 nova_compute[247399]: 2026-01-31 09:10:48.966 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:10:48 np0005603621 NetworkManager[49013]: <info>  [1769850648.9682] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.005 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:10:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:49Z|00903|binding|INFO|Releasing lport da214d50-142b-42ea-aa25-6c69e8caf69b from this chassis (sb_readonly=0)
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.019 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 148 op/s
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.364 247403 DEBUG nova.compute.manager [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-changed-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.364 247403 DEBUG nova.compute.manager [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Refreshing instance network info cache due to event network-changed-bc656551-d5b1-4d24-a64b-f6713c6cd8f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.365 247403 DEBUG oslo_concurrency.lockutils [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.365 247403 DEBUG oslo_concurrency.lockutils [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:10:49 np0005603621 nova_compute[247399]: 2026-01-31 09:10:49.365 247403 DEBUG nova.network.neutron [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Refreshing network info cache for port bc656551-d5b1-4d24-a64b-f6713c6cd8f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 04:10:49 np0005603621 thirsty_snyder[403672]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:10:49 np0005603621 thirsty_snyder[403672]: --> relative data size: 1.0
Jan 31 04:10:49 np0005603621 thirsty_snyder[403672]: --> All data devices are unavailable
Jan 31 04:10:49 np0005603621 systemd[1]: libpod-5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992.scope: Deactivated successfully.
Jan 31 04:10:49 np0005603621 podman[403656]: 2026-01-31 09:10:49.749872112 +0000 UTC m=+1.000622791 container died 5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:10:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-799271bb2b84d6773e0691727889f13e1b541170c4afdf6842fff7723ea921af-merged.mount: Deactivated successfully.
Jan 31 04:10:49 np0005603621 podman[403656]: 2026-01-31 09:10:49.795331597 +0000 UTC m=+1.046082276 container remove 5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_snyder, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:10:49 np0005603621 systemd[1]: libpod-conmon-5cc9eb9c05c06250d9a4bfec3e38fe83f5d3ce5f3ca5f7787156d2beae83d992.scope: Deactivated successfully.
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019914750255909173 of space, bias 1.0, pg target 0.5974425076772752 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:10:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:10:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:50.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.305245461 +0000 UTC m=+0.035717718 container create 43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 04:10:50 np0005603621 systemd[1]: Started libpod-conmon-43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9.scope.
Jan 31 04:10:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.369098266 +0000 UTC m=+0.099570523 container init 43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.373253448 +0000 UTC m=+0.103725705 container start 43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:10:50 np0005603621 hungry_kowalevski[403859]: 167 167
Jan 31 04:10:50 np0005603621 systemd[1]: libpod-43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9.scope: Deactivated successfully.
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.377509771 +0000 UTC m=+0.107982048 container attach 43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.378427971 +0000 UTC m=+0.108900238 container died 43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.28780364 +0000 UTC m=+0.018275927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:10:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cfce5afee880618eb07e820c9916323c98b5904a69a9aac7ed09a76bfa775a3a-merged.mount: Deactivated successfully.
Jan 31 04:10:50 np0005603621 podman[403842]: 2026-01-31 09:10:50.417807194 +0000 UTC m=+0.148279451 container remove 43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_kowalevski, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:10:50 np0005603621 systemd[1]: libpod-conmon-43bd63b48a3a05b9a06e81e39a501dba22faf5ab4d3400ada37cb6d6b310f0c9.scope: Deactivated successfully.
Jan 31 04:10:50 np0005603621 podman[403882]: 2026-01-31 09:10:50.553667351 +0000 UTC m=+0.049372319 container create a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:10:50 np0005603621 systemd[1]: Started libpod-conmon-a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac.scope.
Jan 31 04:10:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ca8a85c6c3ad17b1d9668b6c2ed82a62ccaf4a3e3200ceadc5e6ae8e7b0ce0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ca8a85c6c3ad17b1d9668b6c2ed82a62ccaf4a3e3200ceadc5e6ae8e7b0ce0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ca8a85c6c3ad17b1d9668b6c2ed82a62ccaf4a3e3200ceadc5e6ae8e7b0ce0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ca8a85c6c3ad17b1d9668b6c2ed82a62ccaf4a3e3200ceadc5e6ae8e7b0ce0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:50 np0005603621 podman[403882]: 2026-01-31 09:10:50.527510406 +0000 UTC m=+0.023215404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:10:50 np0005603621 podman[403882]: 2026-01-31 09:10:50.625483348 +0000 UTC m=+0.121188316 container init a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:10:50 np0005603621 podman[403882]: 2026-01-31 09:10:50.632926123 +0000 UTC m=+0.128631071 container start a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:10:50 np0005603621 podman[403882]: 2026-01-31 09:10:50.636684572 +0000 UTC m=+0.132389550 container attach a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:10:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 213 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 31 04:10:51 np0005603621 nova_compute[247399]: 2026-01-31 09:10:51.335 247403 DEBUG nova.network.neutron [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updated VIF entry in instance network info cache for port bc656551-d5b1-4d24-a64b-f6713c6cd8f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:10:51 np0005603621 nova_compute[247399]: 2026-01-31 09:10:51.337 247403 DEBUG nova.network.neutron [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updating instance_info_cache with network_info: [{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:10:51 np0005603621 pensive_galois[403899]: {
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:    "0": [
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:        {
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "devices": [
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "/dev/loop3"
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            ],
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "lv_name": "ceph_lv0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "lv_size": "7511998464",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "name": "ceph_lv0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "tags": {
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.cluster_name": "ceph",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.crush_device_class": "",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.encrypted": "0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.osd_id": "0",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.type": "block",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:                "ceph.vdo": "0"
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            },
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "type": "block",
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:            "vg_name": "ceph_vg0"
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:        }
Jan 31 04:10:51 np0005603621 pensive_galois[403899]:    ]
Jan 31 04:10:51 np0005603621 pensive_galois[403899]: }
Jan 31 04:10:51 np0005603621 nova_compute[247399]: 2026-01-31 09:10:51.373 247403 DEBUG oslo_concurrency.lockutils [req-8f718e12-d09b-4694-870e-36689a83a4a6 req-f712fdce-e3cc-45f4-b4c1-3d622906bfb6 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:10:51 np0005603621 systemd[1]: libpod-a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac.scope: Deactivated successfully.
Jan 31 04:10:51 np0005603621 podman[403882]: 2026-01-31 09:10:51.38946659 +0000 UTC m=+0.885171528 container died a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:10:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b7ca8a85c6c3ad17b1d9668b6c2ed82a62ccaf4a3e3200ceadc5e6ae8e7b0ce0-merged.mount: Deactivated successfully.
Jan 31 04:10:51 np0005603621 podman[403882]: 2026-01-31 09:10:51.438480477 +0000 UTC m=+0.934185435 container remove a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:10:51 np0005603621 systemd[1]: libpod-conmon-a8dafd5a7ca609810d273a1ed67cd211a546ad42fe675e305d95971466357dac.scope: Deactivated successfully.
Jan 31 04:10:51 np0005603621 podman[404059]: 2026-01-31 09:10:51.927965005 +0000 UTC m=+0.033710665 container create 9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:10:51 np0005603621 systemd[1]: Started libpod-conmon-9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2.scope.
Jan 31 04:10:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:52 np0005603621 podman[404059]: 2026-01-31 09:10:52.005891976 +0000 UTC m=+0.111637666 container init 9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:10:52 np0005603621 podman[404059]: 2026-01-31 09:10:51.914339385 +0000 UTC m=+0.020085065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:10:52 np0005603621 podman[404059]: 2026-01-31 09:10:52.012363129 +0000 UTC m=+0.118108789 container start 9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:10:52 np0005603621 podman[404059]: 2026-01-31 09:10:52.015810068 +0000 UTC m=+0.121555728 container attach 9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:10:52 np0005603621 keen_moore[404076]: 167 167
Jan 31 04:10:52 np0005603621 systemd[1]: libpod-9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2.scope: Deactivated successfully.
Jan 31 04:10:52 np0005603621 podman[404059]: 2026-01-31 09:10:52.016990566 +0000 UTC m=+0.122736226 container died 9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:10:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-83cd900152b5b950d05350aeb7bcef830e1802ed1cd5ee21916abae1592464c7-merged.mount: Deactivated successfully.
Jan 31 04:10:52 np0005603621 podman[404059]: 2026-01-31 09:10:52.046560729 +0000 UTC m=+0.152306389 container remove 9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_moore, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:10:52 np0005603621 systemd[1]: libpod-conmon-9dcae9f489dc737f69dcadfadbd4f419600b7ac26e90067ae7913c7987b456e2.scope: Deactivated successfully.
Jan 31 04:10:52 np0005603621 nova_compute[247399]: 2026-01-31 09:10:52.076 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:52 np0005603621 podman[404100]: 2026-01-31 09:10:52.163878651 +0000 UTC m=+0.035061297 container create 9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hertz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:10:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:52.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:52 np0005603621 systemd[1]: Started libpod-conmon-9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014.scope.
Jan 31 04:10:52 np0005603621 nova_compute[247399]: 2026-01-31 09:10:52.222 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:52 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:10:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a8e5919ca5630b9d65954197d02121c0797633ce87f7b57ba2591211af50e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a8e5919ca5630b9d65954197d02121c0797633ce87f7b57ba2591211af50e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a8e5919ca5630b9d65954197d02121c0797633ce87f7b57ba2591211af50e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:52 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e35a8e5919ca5630b9d65954197d02121c0797633ce87f7b57ba2591211af50e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:10:52 np0005603621 podman[404100]: 2026-01-31 09:10:52.14830245 +0000 UTC m=+0.019485116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:10:52 np0005603621 podman[404100]: 2026-01-31 09:10:52.259343554 +0000 UTC m=+0.130526220 container init 9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:10:52 np0005603621 podman[404100]: 2026-01-31 09:10:52.265782798 +0000 UTC m=+0.136965444 container start 9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:10:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:52 np0005603621 podman[404100]: 2026-01-31 09:10:52.272281952 +0000 UTC m=+0.143464628 container attach 9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:10:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]: {
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:        "osd_id": 0,
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:        "type": "bluestore"
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]:    }
Jan 31 04:10:53 np0005603621 agitated_hertz[404117]: }
Jan 31 04:10:53 np0005603621 systemd[1]: libpod-9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014.scope: Deactivated successfully.
Jan 31 04:10:53 np0005603621 podman[404100]: 2026-01-31 09:10:53.125223983 +0000 UTC m=+0.996406629 container died 9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hertz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 31 04:10:53 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e35a8e5919ca5630b9d65954197d02121c0797633ce87f7b57ba2591211af50e-merged.mount: Deactivated successfully.
Jan 31 04:10:53 np0005603621 podman[404100]: 2026-01-31 09:10:53.171181233 +0000 UTC m=+1.042363879 container remove 9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hertz, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:10:53 np0005603621 systemd[1]: libpod-conmon-9539167f82b91681c421f56e50f04482a40f2ee653eee4ebd90bb586aaead014.scope: Deactivated successfully.
Jan 31 04:10:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:10:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:10:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:53 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ba836395-abd8-4d94-80d7-6eb09fc8fb72 does not exist
Jan 31 04:10:53 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5aa5ba08-e9a6-4e96-843e-152d843690e9 does not exist
Jan 31 04:10:53 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b59f0c3a-53b3-4c72-af5a-dddb249ecea2 does not exist
Jan 31 04:10:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 228 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.0 MiB/s wr, 160 op/s
Jan 31 04:10:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:54.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:10:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:54Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:98:1e 10.100.0.12
Jan 31 04:10:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:10:54Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:98:1e 10.100.0.12
Jan 31 04:10:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 243 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Jan 31 04:10:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:56.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:10:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:10:57 np0005603621 nova_compute[247399]: 2026-01-31 09:10:57.079 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:57 np0005603621 nova_compute[247399]: 2026-01-31 09:10:57.225 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:10:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.3 MiB/s wr, 156 op/s
Jan 31 04:10:57 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:10:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:10:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:10:58.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:10:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:10:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:10:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:10:58.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:10:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 156 op/s
Jan 31 04:11:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:00.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:00.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 4.3 MiB/s wr, 124 op/s
Jan 31 04:11:02 np0005603621 nova_compute[247399]: 2026-01-31 09:11:02.081 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:02.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:02 np0005603621 nova_compute[247399]: 2026-01-31 09:11:02.225 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:02.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 4.3 MiB/s wr, 125 op/s
Jan 31 04:11:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:04.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:04.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 621 KiB/s rd, 3.3 MiB/s wr, 112 op/s
Jan 31 04:11:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:06.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:07 np0005603621 nova_compute[247399]: 2026-01-31 09:11:07.085 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:07 np0005603621 nova_compute[247399]: 2026-01-31 09:11:07.227 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 472 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Jan 31 04:11:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Jan 31 04:11:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Jan 31 04:11:08 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Jan 31 04:11:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:08.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:08.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:11:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:11:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 303 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 909 KiB/s rd, 1.7 MiB/s wr, 44 op/s
Jan 31 04:11:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Jan 31 04:11:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Jan 31 04:11:10 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Jan 31 04:11:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:10.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:10.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:10 np0005603621 podman[404261]: 2026-01-31 09:11:10.493576497 +0000 UTC m=+0.050225476 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:11:10 np0005603621 podman[404262]: 2026-01-31 09:11:10.516552782 +0000 UTC m=+0.073034846 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 04:11:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Jan 31 04:11:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Jan 31 04:11:11 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Jan 31 04:11:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 303 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 73 op/s
Jan 31 04:11:12 np0005603621 nova_compute[247399]: 2026-01-31 09:11:12.087 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:12 np0005603621 nova_compute[247399]: 2026-01-31 09:11:12.228 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:12.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 358 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 141 op/s
Jan 31 04:11:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:14.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 375 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.5 MiB/s wr, 162 op/s
Jan 31 04:11:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 04:11:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:16.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:17 np0005603621 nova_compute[247399]: 2026-01-31 09:11:17.089 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:17 np0005603621 nova_compute[247399]: 2026-01-31 09:11:17.230 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 382 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 101 op/s
Jan 31 04:11:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Jan 31 04:11:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Jan 31 04:11:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Jan 31 04:11:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:18.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:18.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 404 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.4 MiB/s wr, 120 op/s
Jan 31 04:11:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:20.281 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:11:20 np0005603621 nova_compute[247399]: 2026-01-31 09:11:20.282 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:20 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:20.283 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:11:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:20.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 404 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 97 op/s
Jan 31 04:11:22 np0005603621 nova_compute[247399]: 2026-01-31 09:11:22.092 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:22.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:22 np0005603621 nova_compute[247399]: 2026-01-31 09:11:22.232 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:22 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:22.285 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:23 np0005603621 nova_compute[247399]: 2026-01-31 09:11:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 404 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Jan 31 04:11:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000064s ======
Jan 31 04:11:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:24.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000064s
Jan 31 04:11:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:11:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:24.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:11:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 404 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 1.4 MiB/s wr, 67 op/s
Jan 31 04:11:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:26.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:26.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:27 np0005603621 nova_compute[247399]: 2026-01-31 09:11:27.096 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:27 np0005603621 nova_compute[247399]: 2026-01-31 09:11:27.234 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 405 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 666 KiB/s rd, 1.0 MiB/s wr, 84 op/s
Jan 31 04:11:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:28 np0005603621 nova_compute[247399]: 2026-01-31 09:11:28.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:28.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:28.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:29 np0005603621 nova_compute[247399]: 2026-01-31 09:11:29.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 405 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 153 op/s
Jan 31 04:11:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:30.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:30.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:30.552 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:11:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 405 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 28 KiB/s wr, 142 op/s
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.522 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.522 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.522 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:11:31 np0005603621 nova_compute[247399]: 2026-01-31 09:11:31.523 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:11:32 np0005603621 nova_compute[247399]: 2026-01-31 09:11:32.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:32.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:32 np0005603621 nova_compute[247399]: 2026-01-31 09:11:32.244 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:32.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 405 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 178 op/s
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.301 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updating instance_info_cache with network_info: [{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.324 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.325 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.325 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.326 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.326 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.349 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.349 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.350 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.350 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.350 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:11:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460215443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.781 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.849 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.849 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000d9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.993 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.994 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3891MB free_disk=20.876033782958984GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.994 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:33 np0005603621 nova_compute[247399]: 2026-01-31 09:11:33.994 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.065 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.065 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.066 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.097 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:34.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:34.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:11:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3107901476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.525 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.531 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.550 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.575 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:11:34 np0005603621 nova_compute[247399]: 2026-01-31 09:11:34.575 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:34 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 04:11:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 405 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 176 op/s
Jan 31 04:11:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:36.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:37 np0005603621 nova_compute[247399]: 2026-01-31 09:11:37.145 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:37 np0005603621 nova_compute[247399]: 2026-01-31 09:11:37.246 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 409 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 583 KiB/s wr, 152 op/s
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.035016) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850698035049, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1588, "num_deletes": 253, "total_data_size": 2763516, "memory_usage": 2803288, "flush_reason": "Manual Compaction"}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850698047667, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1654898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80471, "largest_seqno": 82058, "table_properties": {"data_size": 1649225, "index_size": 2812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15106, "raw_average_key_size": 21, "raw_value_size": 1636655, "raw_average_value_size": 2298, "num_data_blocks": 125, "num_entries": 712, "num_filter_entries": 712, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850546, "oldest_key_time": 1769850546, "file_creation_time": 1769850698, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 12695 microseconds, and 2961 cpu microseconds.
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.047706) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1654898 bytes OK
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.047728) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.050352) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.050365) EVENT_LOG_v1 {"time_micros": 1769850698050361, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.050382) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2756683, prev total WAL file size 2756683, number of live WAL files 2.
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.051036) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303038' seq:72057594037927935, type:22 .. '6D6772737461740033323631' seq:0, type:0; will stop at (end)
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1616KB)], [185(12MB)]
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850698051101, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 15100345, "oldest_snapshot_seqno": -1}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 10927 keys, 12261317 bytes, temperature: kUnknown
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850698158174, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 12261317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12193695, "index_size": 39221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 287701, "raw_average_key_size": 26, "raw_value_size": 12006081, "raw_average_value_size": 1098, "num_data_blocks": 1486, "num_entries": 10927, "num_filter_entries": 10927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850698, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.158487) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12261317 bytes
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.159604) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.9 rd, 114.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 12.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(16.5) write-amplify(7.4) OK, records in: 11387, records dropped: 460 output_compression: NoCompression
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.159626) EVENT_LOG_v1 {"time_micros": 1769850698159616, "job": 116, "event": "compaction_finished", "compaction_time_micros": 107182, "compaction_time_cpu_micros": 24018, "output_level": 6, "num_output_files": 1, "total_output_size": 12261317, "num_input_records": 11387, "num_output_records": 10927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850698159916, "job": 116, "event": "table_file_deletion", "file_number": 187}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850698161660, "job": 116, "event": "table_file_deletion", "file_number": 185}
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.050910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.161798) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.161804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.161806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.161808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:11:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:11:38.161811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:11:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:38 np0005603621 nova_compute[247399]: 2026-01-31 09:11:38.448 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:11:38
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'images', 'backups']
Jan 31 04:11:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:11:39 np0005603621 nova_compute[247399]: 2026-01-31 09:11:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:39 np0005603621 nova_compute[247399]: 2026-01-31 09:11:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:11:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 435 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Jan 31 04:11:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:40.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 435 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Jan 31 04:11:41 np0005603621 podman[404414]: 2026-01-31 09:11:41.485511788 +0000 UTC m=+0.043760952 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 31 04:11:41 np0005603621 podman[404415]: 2026-01-31 09:11:41.555563989 +0000 UTC m=+0.111502781 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127)
Jan 31 04:11:42 np0005603621 nova_compute[247399]: 2026-01-31 09:11:42.146 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:42 np0005603621 nova_compute[247399]: 2026-01-31 09:11:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:42 np0005603621 nova_compute[247399]: 2026-01-31 09:11:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:11:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:42 np0005603621 nova_compute[247399]: 2026-01-31 09:11:42.248 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:42 np0005603621 nova_compute[247399]: 2026-01-31 09:11:42.297 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:11:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 452 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 151 op/s
Jan 31 04:11:43 np0005603621 nova_compute[247399]: 2026-01-31 09:11:43.292 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:44.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 452 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 119 op/s
Jan 31 04:11:46 np0005603621 nova_compute[247399]: 2026-01-31 09:11:46.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:11:46 np0005603621 nova_compute[247399]: 2026-01-31 09:11:46.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:11:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:46.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:47 np0005603621 nova_compute[247399]: 2026-01-31 09:11:47.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:47 np0005603621 nova_compute[247399]: 2026-01-31 09:11:47.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 440 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Jan 31 04:11:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:48.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 376 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.576 247403 DEBUG nova.compute.manager [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-changed-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.577 247403 DEBUG nova.compute.manager [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Refreshing instance network info cache due to event network-changed-bc656551-d5b1-4d24-a64b-f6713c6cd8f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.577 247403 DEBUG oslo_concurrency.lockutils [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.578 247403 DEBUG oslo_concurrency.lockutils [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.578 247403 DEBUG nova.network.neutron [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Refreshing network info cache for port bc656551-d5b1-4d24-a64b-f6713c6cd8f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.641 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.642 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.642 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.643 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.643 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.644 247403 INFO nova.compute.manager [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Terminating instance#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.646 247403 DEBUG nova.compute.manager [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:11:49 np0005603621 kernel: tapbc656551-d5 (unregistering): left promiscuous mode
Jan 31 04:11:49 np0005603621 NetworkManager[49013]: <info>  [1769850709.7420] device (tapbc656551-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.749 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:11:49Z|00904|binding|INFO|Releasing lport bc656551-d5b1-4d24-a64b-f6713c6cd8f5 from this chassis (sb_readonly=0)
Jan 31 04:11:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:11:49Z|00905|binding|INFO|Setting lport bc656551-d5b1-4d24-a64b-f6713c6cd8f5 down in Southbound
Jan 31 04:11:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:11:49Z|00906|binding|INFO|Removing iface tapbc656551-d5 ovn-installed in OVS
Jan 31 04:11:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:49.760 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:98:1e 10.100.0.12'], port_security=['fa:16:3e:0b:98:1e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '098c6cd0-6927-41a0-9b9f-6d2cfd743dd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a940e54a-7391-4617-93be-b1a956d3558c b27284c8-1498-47c6-abbd-80dc28bbe6f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be7ac7a5-1e86-4304-8ddd-d276d05956e0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=bc656551-d5b1-4d24-a64b-f6713c6cd8f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.761 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:49.762 159734 INFO neutron.agent.ovn.metadata.agent [-] Port bc656551-d5b1-4d24-a64b-f6713c6cd8f5 in datapath 919288ff-a51c-4b6d-81b3-cc76704eca9e unbound from our chassis#033[00m
Jan 31 04:11:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:49.764 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 919288ff-a51c-4b6d-81b3-cc76704eca9e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:11:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:49.766 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcf7c36-d8fa-42c0-bcff-675d758fe501]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:49.767 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e namespace which is not needed anymore#033[00m
Jan 31 04:11:49 np0005603621 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d9.scope: Deactivated successfully.
Jan 31 04:11:49 np0005603621 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000d9.scope: Consumed 14.564s CPU time.
Jan 31 04:11:49 np0005603621 systemd-machined[212769]: Machine qemu-103-instance-000000d9 terminated.
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004644169923692537 of space, bias 1.0, pg target 1.393250977107761 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6469151312116136 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0040691012935045595 of space, bias 1.0, pg target 1.2166612867578632 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032497557230597434 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:11:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.877 247403 INFO nova.virt.libvirt.driver [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Instance destroyed successfully.#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.877 247403 DEBUG nova.objects.instance [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'resources' on Instance uuid 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.898 247403 DEBUG nova.virt.libvirt.vif [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-access_point-1179545091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ac',id=217,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKIc4iaayDmLhEo3vI6YGRzp7m9GW6fzslqwq++gP9ecVHJRq1tSjzVnTPtJw3RUxXTQDWiA7Ya9j/CawC++Id9BLZED+RHeJDZ4JXh3gvgziK3fUhGR6gajupFnKxcV3w==',key_name='tempest-TestSecurityGroupsBasicOps-762998316',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:10:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-4knsd0dz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:10:41Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=098c6cd0-6927-41a0-9b9f-6d2cfd743dd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.898 247403 DEBUG nova.network.os_vif_util [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.899 247403 DEBUG nova.network.os_vif_util [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.899 247403 DEBUG os_vif [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.901 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.902 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc656551-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.903 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:49 np0005603621 nova_compute[247399]: 2026-01-31 09:11:49.907 247403 INFO os_vif [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:98:1e,bridge_name='br-int',has_traffic_filtering=True,id=bc656551-d5b1-4d24-a64b-f6713c6cd8f5,network=Network(919288ff-a51c-4b6d-81b3-cc76704eca9e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc656551-d5')#033[00m
Jan 31 04:11:49 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [NOTICE]   (402858) : haproxy version is 2.8.14-c23fe91
Jan 31 04:11:49 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [NOTICE]   (402858) : path to executable is /usr/sbin/haproxy
Jan 31 04:11:49 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [WARNING]  (402858) : Exiting Master process...
Jan 31 04:11:49 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [ALERT]    (402858) : Current worker (402860) exited with code 143 (Terminated)
Jan 31 04:11:49 np0005603621 neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e[402854]: [WARNING]  (402858) : All workers exited. Exiting... (0)
Jan 31 04:11:49 np0005603621 systemd[1]: libpod-589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291.scope: Deactivated successfully.
Jan 31 04:11:49 np0005603621 podman[404535]: 2026-01-31 09:11:49.92065676 +0000 UTC m=+0.061215534 container died 589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:11:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291-userdata-shm.mount: Deactivated successfully.
Jan 31 04:11:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-dcdef539a468ee356bd1bca32bfcf2d1cde9e64c2f362609d15b7e22ee3cc6e8-merged.mount: Deactivated successfully.
Jan 31 04:11:50 np0005603621 podman[404535]: 2026-01-31 09:11:50.014033866 +0000 UTC m=+0.154592640 container cleanup 589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:11:50 np0005603621 systemd[1]: libpod-conmon-589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291.scope: Deactivated successfully.
Jan 31 04:11:50 np0005603621 podman[404595]: 2026-01-31 09:11:50.136432529 +0000 UTC m=+0.102639060 container remove 589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.141 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe82c0d-4d4c-485f-bc26-b03a884f8958]: (4, ('Sat Jan 31 09:11:49 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e (589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291)\n589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291\nSat Jan 31 09:11:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e (589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291)\n589678fcbf83519b05106ca3c486b0abedad9c47927f1b3ba179cba4a71f3291\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.144 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3269a7f0-a7bb-41af-bfe8-406c42a939f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.145 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap919288ff-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.146 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:50 np0005603621 kernel: tap919288ff-a0: left promiscuous mode
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.152 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.152 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.155 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e541334d-f6be-46bc-a287-ca30057ff268]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.172 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1b067f09-68be-4d5a-b9a0-277d2bcb7f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.174 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ad392480-676f-400a-9820-5cc40c046bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.188 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c56213b5-97c1-4f21-b21c-a2bc1d33071e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 994577, 'reachable_time': 19062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404610, 'error': None, 'target': 'ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.194 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-919288ff-a51c-4b6d-81b3-cc76704eca9e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:11:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:11:50.194 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[d92a289b-ee65-4d0b-9c0e-ec7fc5faadb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:11:50 np0005603621 systemd[1]: run-netns-ovnmeta\x2d919288ff\x2da51c\x2d4b6d\x2d81b3\x2dcc76704eca9e.mount: Deactivated successfully.
Jan 31 04:11:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:50.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:50.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.682 247403 INFO nova.virt.libvirt.driver [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Deleting instance files /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_del#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.683 247403 INFO nova.virt.libvirt.driver [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Deletion of /var/lib/nova/instances/098c6cd0-6927-41a0-9b9f-6d2cfd743dd3_del complete#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.752 247403 INFO nova.compute.manager [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.752 247403 DEBUG oslo.service.loopingcall [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.753 247403 DEBUG nova.compute.manager [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:11:50 np0005603621 nova_compute[247399]: 2026-01-31 09:11:50.754 247403 DEBUG nova.network.neutron [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:11:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 376 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 948 KiB/s rd, 563 KiB/s wr, 78 op/s
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.385 247403 DEBUG nova.network.neutron [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updated VIF entry in instance network info cache for port bc656551-d5b1-4d24-a64b-f6713c6cd8f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.385 247403 DEBUG nova.network.neutron [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updating instance_info_cache with network_info: [{"id": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "address": "fa:16:3e:0b:98:1e", "network": {"id": "919288ff-a51c-4b6d-81b3-cc76704eca9e", "bridge": "br-int", "label": "tempest-network-smoke--53537063", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc656551-d5", "ovs_interfaceid": "bc656551-d5b1-4d24-a64b-f6713c6cd8f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.409 247403 DEBUG oslo_concurrency.lockutils [req-2d059944-ea35-4eba-8dc8-73b1626bf46f req-fdc0ef20-10a1-4b35-a2ac-46aec3b66a69 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.624 247403 DEBUG nova.network.neutron [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.661 247403 INFO nova.compute.manager [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Took 0.91 seconds to deallocate network for instance.#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.696 247403 DEBUG nova.compute.manager [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-vif-unplugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.696 247403 DEBUG oslo_concurrency.lockutils [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.696 247403 DEBUG oslo_concurrency.lockutils [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.696 247403 DEBUG oslo_concurrency.lockutils [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.697 247403 DEBUG nova.compute.manager [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] No waiting events found dispatching network-vif-unplugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.697 247403 DEBUG nova.compute.manager [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-vif-unplugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.697 247403 DEBUG nova.compute.manager [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.697 247403 DEBUG oslo_concurrency.lockutils [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.697 247403 DEBUG oslo_concurrency.lockutils [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.698 247403 DEBUG oslo_concurrency.lockutils [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.698 247403 DEBUG nova.compute.manager [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] No waiting events found dispatching network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.698 247403 WARNING nova.compute.manager [req-463bb691-e4fe-4093-888e-11fd9f5d414e req-c17dbca2-5cb6-4a2d-93fa-da954fcdefd4 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received unexpected event network-vif-plugged-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.704 247403 DEBUG nova.compute.manager [req-8d3c2931-0271-44f8-94bc-795f1ff816e2 req-b3856bae-b4b5-4978-b3cb-6a3ff7c6936a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Received event network-vif-deleted-bc656551-d5b1-4d24-a64b-f6713c6cd8f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.712 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.713 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:11:51 np0005603621 nova_compute[247399]: 2026-01-31 09:11:51.773 247403 DEBUG oslo_concurrency.processutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:11:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:11:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4101733249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.221 247403 DEBUG oslo_concurrency.processutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.231 247403 DEBUG nova.compute.provider_tree [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:11:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:52.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.253 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.257 247403 DEBUG nova.scheduler.client.report [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.283 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.305 247403 INFO nova.scheduler.client.report [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Deleted allocations for instance 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3#033[00m
Jan 31 04:11:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:52.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:52 np0005603621 nova_compute[247399]: 2026-01-31 09:11:52.371 247403 DEBUG oslo_concurrency.lockutils [None req-dcddab6a-d2f2-46a7-b488-cf484418958a ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "098c6cd0-6927-41a0-9b9f-6d2cfd743dd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:11:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 324 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 952 KiB/s rd, 568 KiB/s wr, 87 op/s
Jan 31 04:11:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:54.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:11:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 06adbd42-24fb-44fa-b37a-8562adb53a1b does not exist
Jan 31 04:11:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 732b3b8c-7a51-4a30-b2f6-079b5189b506 does not exist
Jan 31 04:11:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4b8f5139-accd-4de7-9a7e-ed84d00de9a4 does not exist
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:11:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:11:54 np0005603621 nova_compute[247399]: 2026-01-31 09:11:54.904 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:54.998653077 +0000 UTC m=+0.023296337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:55.136235479 +0000 UTC m=+0.160878739 container create a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:11:55 np0005603621 systemd[1]: Started libpod-conmon-a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42.scope.
Jan 31 04:11:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:55.260290964 +0000 UTC m=+0.284934214 container init a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:55.267722259 +0000 UTC m=+0.292365489 container start a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:55.270717583 +0000 UTC m=+0.295360833 container attach a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 31 04:11:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 144 KiB/s rd, 54 KiB/s wr, 61 op/s
Jan 31 04:11:55 np0005603621 flamboyant_jemison[404926]: 167 167
Jan 31 04:11:55 np0005603621 systemd[1]: libpod-a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42.scope: Deactivated successfully.
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:55.27412703 +0000 UTC m=+0.298770260 container died a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 04:11:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4fe351a2a55bbbc2c94364856487f3bbba25648f1c0ac2ea432add3b9a692c46-merged.mount: Deactivated successfully.
Jan 31 04:11:55 np0005603621 podman[404910]: 2026-01-31 09:11:55.444444556 +0000 UTC m=+0.469087786 container remove a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:11:55 np0005603621 systemd[1]: libpod-conmon-a7f47a56b497c3d4e0ad12358726959b0c7102923a170bb7d6371bea0ef80c42.scope: Deactivated successfully.
Jan 31 04:11:55 np0005603621 podman[404952]: 2026-01-31 09:11:55.541285143 +0000 UTC m=+0.023612477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:11:55 np0005603621 podman[404952]: 2026-01-31 09:11:55.650876041 +0000 UTC m=+0.133203355 container create 11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 31 04:11:55 np0005603621 systemd[1]: Started libpod-conmon-11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66.scope.
Jan 31 04:11:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:11:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78a3f32a8007befd7c4644dac594b12113b5405673814a356bc266e7ee8552e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78a3f32a8007befd7c4644dac594b12113b5405673814a356bc266e7ee8552e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78a3f32a8007befd7c4644dac594b12113b5405673814a356bc266e7ee8552e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78a3f32a8007befd7c4644dac594b12113b5405673814a356bc266e7ee8552e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78a3f32a8007befd7c4644dac594b12113b5405673814a356bc266e7ee8552e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:55 np0005603621 podman[404952]: 2026-01-31 09:11:55.731597769 +0000 UTC m=+0.213925083 container init 11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:11:55 np0005603621 podman[404952]: 2026-01-31 09:11:55.740579703 +0000 UTC m=+0.222907017 container start 11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:11:55 np0005603621 podman[404952]: 2026-01-31 09:11:55.743749253 +0000 UTC m=+0.226076577 container attach 11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lederberg, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:11:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:56.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:56.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:56 np0005603621 focused_lederberg[404968]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:11:56 np0005603621 focused_lederberg[404968]: --> relative data size: 1.0
Jan 31 04:11:56 np0005603621 focused_lederberg[404968]: --> All data devices are unavailable
Jan 31 04:11:56 np0005603621 systemd[1]: libpod-11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66.scope: Deactivated successfully.
Jan 31 04:11:56 np0005603621 podman[404952]: 2026-01-31 09:11:56.445300824 +0000 UTC m=+0.927628148 container died 11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lederberg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:11:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f78a3f32a8007befd7c4644dac594b12113b5405673814a356bc266e7ee8552e-merged.mount: Deactivated successfully.
Jan 31 04:11:56 np0005603621 podman[404952]: 2026-01-31 09:11:56.495107986 +0000 UTC m=+0.977435300 container remove 11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:11:56 np0005603621 systemd[1]: libpod-conmon-11f5b5f2cdf7910c9b20b479dc7c85c7d6a78011b3ac54a211c6fd65406fbb66.scope: Deactivated successfully.
Jan 31 04:11:56 np0005603621 podman[405134]: 2026-01-31 09:11:56.992165794 +0000 UTC m=+0.032123035 container create a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:11:57 np0005603621 systemd[1]: Started libpod-conmon-a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17.scope.
Jan 31 04:11:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:11:57 np0005603621 podman[405134]: 2026-01-31 09:11:57.047297714 +0000 UTC m=+0.087254985 container init a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:11:57 np0005603621 podman[405134]: 2026-01-31 09:11:57.051798566 +0000 UTC m=+0.091755807 container start a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:11:57 np0005603621 nice_turing[405150]: 167 167
Jan 31 04:11:57 np0005603621 systemd[1]: libpod-a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17.scope: Deactivated successfully.
Jan 31 04:11:57 np0005603621 podman[405134]: 2026-01-31 09:11:57.05510446 +0000 UTC m=+0.095061701 container attach a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_turing, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:11:57 np0005603621 conmon[405150]: conmon a3262eb41127851fb57b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17.scope/container/memory.events
Jan 31 04:11:57 np0005603621 podman[405134]: 2026-01-31 09:11:57.055575835 +0000 UTC m=+0.095533086 container died a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_turing, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 04:11:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-635a2680450fde82565ce2fcdff5c4fe2140423514d433c4a6a02d3e0a62ca89-merged.mount: Deactivated successfully.
Jan 31 04:11:57 np0005603621 podman[405134]: 2026-01-31 09:11:56.97938095 +0000 UTC m=+0.019338201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:11:57 np0005603621 podman[405134]: 2026-01-31 09:11:57.084971433 +0000 UTC m=+0.124928674 container remove a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_turing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:11:57 np0005603621 systemd[1]: libpod-conmon-a3262eb41127851fb57bde88dd6e10dbb473694707c00735a85e17a3d58fcd17.scope: Deactivated successfully.
Jan 31 04:11:57 np0005603621 podman[405174]: 2026-01-31 09:11:57.226415717 +0000 UTC m=+0.045741935 container create 426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lalande, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:11:57 np0005603621 nova_compute[247399]: 2026-01-31 09:11:57.255 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:57 np0005603621 systemd[1]: Started libpod-conmon-426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720.scope.
Jan 31 04:11:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 70 KiB/s rd, 53 KiB/s wr, 57 op/s
Jan 31 04:11:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:11:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18133164f0b3bc3442fe7245ef8dd8d4dcb2cb1a28eda173174af486d65a8da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18133164f0b3bc3442fe7245ef8dd8d4dcb2cb1a28eda173174af486d65a8da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18133164f0b3bc3442fe7245ef8dd8d4dcb2cb1a28eda173174af486d65a8da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18133164f0b3bc3442fe7245ef8dd8d4dcb2cb1a28eda173174af486d65a8da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:57 np0005603621 podman[405174]: 2026-01-31 09:11:57.20433424 +0000 UTC m=+0.023660508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:11:57 np0005603621 podman[405174]: 2026-01-31 09:11:57.31809128 +0000 UTC m=+0.137417528 container init 426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:11:57 np0005603621 podman[405174]: 2026-01-31 09:11:57.324163962 +0000 UTC m=+0.143490180 container start 426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:11:57 np0005603621 podman[405174]: 2026-01-31 09:11:57.328665854 +0000 UTC m=+0.147992072 container attach 426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:11:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:11:58 np0005603621 confident_lalande[405191]: {
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:    "0": [
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:        {
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "devices": [
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "/dev/loop3"
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            ],
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "lv_name": "ceph_lv0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "lv_size": "7511998464",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "name": "ceph_lv0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "tags": {
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.cluster_name": "ceph",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.crush_device_class": "",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.encrypted": "0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.osd_id": "0",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.type": "block",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:                "ceph.vdo": "0"
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            },
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "type": "block",
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:            "vg_name": "ceph_vg0"
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:        }
Jan 31 04:11:58 np0005603621 confident_lalande[405191]:    ]
Jan 31 04:11:58 np0005603621 confident_lalande[405191]: }
Jan 31 04:11:58 np0005603621 systemd[1]: libpod-426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720.scope: Deactivated successfully.
Jan 31 04:11:58 np0005603621 podman[405174]: 2026-01-31 09:11:58.098312485 +0000 UTC m=+0.917638813 container died 426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lalande, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 04:11:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f18133164f0b3bc3442fe7245ef8dd8d4dcb2cb1a28eda173174af486d65a8da-merged.mount: Deactivated successfully.
Jan 31 04:11:58 np0005603621 podman[405174]: 2026-01-31 09:11:58.160648392 +0000 UTC m=+0.979974610 container remove 426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:11:58 np0005603621 systemd[1]: libpod-conmon-426c07d7539b0dca4c03caaff0944d512617068f8b023deddc4855355ab79720.scope: Deactivated successfully.
Jan 31 04:11:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:11:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:11:58.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:11:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:11:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:11:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:11:58.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:11:58 np0005603621 podman[405355]: 2026-01-31 09:11:58.857401983 +0000 UTC m=+0.046793748 container create 427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mahavira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:11:58 np0005603621 systemd[1]: Started libpod-conmon-427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759.scope.
Jan 31 04:11:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:11:58 np0005603621 podman[405355]: 2026-01-31 09:11:58.837311289 +0000 UTC m=+0.026703034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:11:58 np0005603621 podman[405355]: 2026-01-31 09:11:58.942204699 +0000 UTC m=+0.131596464 container init 427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mahavira, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:11:58 np0005603621 podman[405355]: 2026-01-31 09:11:58.952905497 +0000 UTC m=+0.142297262 container start 427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 04:11:58 np0005603621 elated_mahavira[405371]: 167 167
Jan 31 04:11:58 np0005603621 systemd[1]: libpod-427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759.scope: Deactivated successfully.
Jan 31 04:11:58 np0005603621 podman[405355]: 2026-01-31 09:11:58.958475203 +0000 UTC m=+0.147866948 container attach 427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 04:11:58 np0005603621 podman[405355]: 2026-01-31 09:11:58.959036961 +0000 UTC m=+0.148428706 container died 427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:11:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e5dcdd71227f526225eb2bfb5ff4c2ae5e14982dfb6b199d49f55c00ae9d7941-merged.mount: Deactivated successfully.
Jan 31 04:11:59 np0005603621 podman[405355]: 2026-01-31 09:11:59.014519882 +0000 UTC m=+0.203911617 container remove 427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 04:11:59 np0005603621 systemd[1]: libpod-conmon-427cdc2face6d693ad11ac6bddb8e9fdd76ff301a83acc147749cdf43832b759.scope: Deactivated successfully.
Jan 31 04:11:59 np0005603621 podman[405393]: 2026-01-31 09:11:59.196395652 +0000 UTC m=+0.055197893 container create ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:11:59 np0005603621 systemd[1]: Started libpod-conmon-ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3.scope.
Jan 31 04:11:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:11:59 np0005603621 podman[405393]: 2026-01-31 09:11:59.171957721 +0000 UTC m=+0.030759952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:11:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b263f1389ca1c1fb0d736b3313f6d51943730d69f22155ce66ada83cbd842126/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b263f1389ca1c1fb0d736b3313f6d51943730d69f22155ce66ada83cbd842126/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b263f1389ca1c1fb0d736b3313f6d51943730d69f22155ce66ada83cbd842126/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b263f1389ca1c1fb0d736b3313f6d51943730d69f22155ce66ada83cbd842126/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:11:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 70 KiB/s rd, 20 KiB/s wr, 56 op/s
Jan 31 04:11:59 np0005603621 podman[405393]: 2026-01-31 09:11:59.293458276 +0000 UTC m=+0.152260547 container init ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:11:59 np0005603621 podman[405393]: 2026-01-31 09:11:59.304614668 +0000 UTC m=+0.163416909 container start ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:11:59 np0005603621 podman[405393]: 2026-01-31 09:11:59.31008444 +0000 UTC m=+0.168886641 container attach ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:11:59 np0005603621 nova_compute[247399]: 2026-01-31 09:11:59.549 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:59 np0005603621 nova_compute[247399]: 2026-01-31 09:11:59.602 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:11:59 np0005603621 nova_compute[247399]: 2026-01-31 09:11:59.906 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]: {
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:        "osd_id": 0,
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:        "type": "bluestore"
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]:    }
Jan 31 04:12:00 np0005603621 distracted_agnesi[405410]: }
Jan 31 04:12:00 np0005603621 systemd[1]: libpod-ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3.scope: Deactivated successfully.
Jan 31 04:12:00 np0005603621 podman[405393]: 2026-01-31 09:12:00.257203042 +0000 UTC m=+1.116005233 container died ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 04:12:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:00.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b263f1389ca1c1fb0d736b3313f6d51943730d69f22155ce66ada83cbd842126-merged.mount: Deactivated successfully.
Jan 31 04:12:00 np0005603621 podman[405393]: 2026-01-31 09:12:00.355225076 +0000 UTC m=+1.214027267 container remove ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_agnesi, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:12:00 np0005603621 systemd[1]: libpod-conmon-ff4df38e4a8e75914887d50dd2ccb7f0e0684ecb2ec9a377e07509cda29f36d3.scope: Deactivated successfully.
Jan 31 04:12:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:12:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:12:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:12:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:12:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev dcd78a21-2bef-4b72-880e-d5e849c0f8b7 does not exist
Jan 31 04:12:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7fef073e-6ecc-4443-8e35-143610b7d2f2 does not exist
Jan 31 04:12:00 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7d4ff2a7-0439-4070-8350-19f7223f88f7 does not exist
Jan 31 04:12:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:12:00 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:12:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 8.8 KiB/s wr, 28 op/s
Jan 31 04:12:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:12:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.3 total, 600.0 interval#012Cumulative writes: 59K writes, 227K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.76 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5202 writes, 20K keys, 5202 commit groups, 1.0 writes per commit group, ingest: 20.24 MB, 0.03 MB/s#012Interval WAL: 5202 writes, 2078 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:12:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:02.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:02 np0005603621 nova_compute[247399]: 2026-01-31 09:12:02.295 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:02.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Jan 31 04:12:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Jan 31 04:12:02 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Jan 31 04:12:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 297 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 4.6 KiB/s wr, 29 op/s
Jan 31 04:12:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:03.660 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:12:03 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:03.661 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:12:03 np0005603621 nova_compute[247399]: 2026-01-31 09:12:03.661 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:04.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Jan 31 04:12:04 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Jan 31 04:12:04 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Jan 31 04:12:04 np0005603621 nova_compute[247399]: 2026-01-31 09:12:04.875 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850709.8747103, 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:12:04 np0005603621 nova_compute[247399]: 2026-01-31 09:12:04.876 247403 INFO nova.compute.manager [-] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:12:04 np0005603621 nova_compute[247399]: 2026-01-31 09:12:04.901 247403 DEBUG nova.compute.manager [None req-79ce5bd1-efa1-4631-979d-fe0914cab25f - - - - - -] [instance: 098c6cd0-6927-41a0-9b9f-6d2cfd743dd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:12:04 np0005603621 nova_compute[247399]: 2026-01-31 09:12:04.910 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 334 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 72 op/s
Jan 31 04:12:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Jan 31 04:12:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Jan 31 04:12:05 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Jan 31 04:12:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:06.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 362 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 9.6 MiB/s wr, 182 op/s
Jan 31 04:12:07 np0005603621 nova_compute[247399]: 2026-01-31 09:12:07.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Jan 31 04:12:07 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Jan 31 04:12:07 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Jan 31 04:12:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:08.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:12:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:12:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 361 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 194 op/s
Jan 31 04:12:09 np0005603621 nova_compute[247399]: 2026-01-31 09:12:09.912 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:10.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 361 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 9.0 MiB/s wr, 101 op/s
Jan 31 04:12:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 04:12:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:12.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:12 np0005603621 nova_compute[247399]: 2026-01-31 09:12:12.302 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:12 np0005603621 podman[405553]: 2026-01-31 09:12:12.505490723 +0000 UTC m=+0.055277276 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:12:12 np0005603621 podman[405554]: 2026-01-31 09:12:12.56558098 +0000 UTC m=+0.115375173 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:12:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Jan 31 04:12:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Jan 31 04:12:13 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Jan 31 04:12:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 8.2 MiB/s wr, 151 op/s
Jan 31 04:12:13 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:13.665 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:12:13 np0005603621 nova_compute[247399]: 2026-01-31 09:12:13.976 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:14.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:14.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:12:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3441511055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:12:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:12:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3441511055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:12:14 np0005603621 nova_compute[247399]: 2026-01-31 09:12:14.915 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Jan 31 04:12:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Jan 31 04:12:15 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Jan 31 04:12:15 np0005603621 nova_compute[247399]: 2026-01-31 09:12:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 359 KiB/s rd, 309 KiB/s wr, 88 op/s
Jan 31 04:12:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 270 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 331 KiB/s rd, 1.0 MiB/s wr, 83 op/s
Jan 31 04:12:17 np0005603621 nova_compute[247399]: 2026-01-31 09:12:17.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Jan 31 04:12:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Jan 31 04:12:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Jan 31 04:12:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:18.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 192 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 103 KiB/s rd, 3.4 MiB/s wr, 153 op/s
Jan 31 04:12:19 np0005603621 nova_compute[247399]: 2026-01-31 09:12:19.918 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:20.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 192 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 2.7 MiB/s wr, 119 op/s
Jan 31 04:12:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:22.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:22 np0005603621 nova_compute[247399]: 2026-01-31 09:12:22.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:22.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:12:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3375999005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:12:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Jan 31 04:12:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Jan 31 04:12:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Jan 31 04:12:23 np0005603621 nova_compute[247399]: 2026-01-31 09:12:23.226 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Jan 31 04:12:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:24.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:24.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:24 np0005603621 nova_compute[247399]: 2026-01-31 09:12:24.921 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 75 KiB/s rd, 1.9 MiB/s wr, 110 op/s
Jan 31 04:12:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:26.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:26.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 1.2 MiB/s wr, 84 op/s
Jan 31 04:12:27 np0005603621 nova_compute[247399]: 2026-01-31 09:12:27.342 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:28.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:29 np0005603621 nova_compute[247399]: 2026-01-31 09:12:29.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:29 np0005603621 nova_compute[247399]: 2026-01-31 09:12:29.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 99 op/s
Jan 31 04:12:29 np0005603621 nova_compute[247399]: 2026-01-31 09:12:29.924 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:30.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:30.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:30.553 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 99 op/s
Jan 31 04:12:32 np0005603621 nova_compute[247399]: 2026-01-31 09:12:32.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:32 np0005603621 nova_compute[247399]: 2026-01-31 09:12:32.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:12:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:32.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:32 np0005603621 nova_compute[247399]: 2026-01-31 09:12:32.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:12:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:32.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:12:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:33 np0005603621 nova_compute[247399]: 2026-01-31 09:12:33.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:33 np0005603621 nova_compute[247399]: 2026-01-31 09:12:33.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:12:33 np0005603621 nova_compute[247399]: 2026-01-31 09:12:33.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:12:33 np0005603621 nova_compute[247399]: 2026-01-31 09:12:33.228 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:12:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.223 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.223 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.223 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.224 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:34.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:12:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1945253179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.709 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.919 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.921 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4105MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.922 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.922 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:34 np0005603621 nova_compute[247399]: 2026-01-31 09:12:34.926 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.030 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.030 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.048 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 04:12:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:12:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/345242904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.589 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.597 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.668 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.701 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:12:35 np0005603621 nova_compute[247399]: 2026-01-31 09:12:35.702 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.262263) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850756262318, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 848, "num_deletes": 254, "total_data_size": 1150961, "memory_usage": 1167696, "flush_reason": "Manual Compaction"}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850756274835, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 1137546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82059, "largest_seqno": 82906, "table_properties": {"data_size": 1133183, "index_size": 2014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9995, "raw_average_key_size": 20, "raw_value_size": 1124337, "raw_average_value_size": 2275, "num_data_blocks": 88, "num_entries": 494, "num_filter_entries": 494, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850699, "oldest_key_time": 1769850699, "file_creation_time": 1769850756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 12629 microseconds, and 5188 cpu microseconds.
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.274890) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 1137546 bytes OK
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.274919) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.277213) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.277254) EVENT_LOG_v1 {"time_micros": 1769850756277241, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.277293) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 1146775, prev total WAL file size 1146775, number of live WAL files 2.
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.278050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(1110KB)], [188(11MB)]
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850756278093, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 13398863, "oldest_snapshot_seqno": -1}
Jan 31 04:12:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:36.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:36.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 10898 keys, 11469409 bytes, temperature: kUnknown
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850756428100, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11469409, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11402573, "index_size": 38499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27269, "raw_key_size": 287915, "raw_average_key_size": 26, "raw_value_size": 11215947, "raw_average_value_size": 1029, "num_data_blocks": 1449, "num_entries": 10898, "num_filter_entries": 10898, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850756, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.428608) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11469409 bytes
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.432484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.2 rd, 76.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(21.9) write-amplify(10.1) OK, records in: 11421, records dropped: 523 output_compression: NoCompression
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.432520) EVENT_LOG_v1 {"time_micros": 1769850756432501, "job": 118, "event": "compaction_finished", "compaction_time_micros": 150158, "compaction_time_cpu_micros": 37140, "output_level": 6, "num_output_files": 1, "total_output_size": 11469409, "num_input_records": 11421, "num_output_records": 10898, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850756432927, "job": 118, "event": "table_file_deletion", "file_number": 190}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850756435368, "job": 118, "event": "table_file_deletion", "file_number": 188}
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.277967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.435458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.435469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.435471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.435473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:36 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:12:36.435475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:12:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 171 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 880 KiB/s wr, 80 op/s
Jan 31 04:12:37 np0005603621 nova_compute[247399]: 2026-01-31 09:12:37.347 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:38.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:12:38
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'images', 'vms', 'cephfs.cephfs.meta', '.mgr']
Jan 31 04:12:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:12:38 np0005603621 nova_compute[247399]: 2026-01-31 09:12:38.704 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:39 np0005603621 nova_compute[247399]: 2026-01-31 09:12:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:12:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 185 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 87 op/s
Jan 31 04:12:39 np0005603621 nova_compute[247399]: 2026-01-31 09:12:39.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:40 np0005603621 nova_compute[247399]: 2026-01-31 09:12:40.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:12:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:40.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 185 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 146 KiB/s rd, 1.9 MiB/s wr, 34 op/s
Jan 31 04:12:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:42 np0005603621 nova_compute[247399]: 2026-01-31 09:12:42.349 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:42.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 199 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 04:12:43 np0005603621 podman[405708]: 2026-01-31 09:12:43.518497159 +0000 UTC m=+0.066353585 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 31 04:12:43 np0005603621 podman[405709]: 2026-01-31 09:12:43.547383241 +0000 UTC m=+0.094953648 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:12:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:44.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:44 np0005603621 nova_compute[247399]: 2026-01-31 09:12:44.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 04:12:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:46.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 04:12:47 np0005603621 nova_compute[247399]: 2026-01-31 09:12:47.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:47.451 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:12:47 np0005603621 nova_compute[247399]: 2026-01-31 09:12:47.452 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:47 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:47.454 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:12:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:48.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 381 KiB/s rd, 1.3 MiB/s wr, 59 op/s
Jan 31 04:12:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:12:49.456 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:12:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:12:49 np0005603621 nova_compute[247399]: 2026-01-31 09:12:49.942 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:12:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:50.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:50.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 245 KiB/s rd, 284 KiB/s wr, 31 op/s
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.392 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.392 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.417 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.498 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.499 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.506 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.506 247403 INFO nova.compute.claims [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:12:51 np0005603621 nova_compute[247399]: 2026-01-31 09:12:51.598 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:51 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:12:51 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196966992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.013 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.018 247403 DEBUG nova.compute.provider_tree [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.036 247403 DEBUG nova.scheduler.client.report [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.058 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.058 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.114 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.115 247403 DEBUG nova.network.neutron [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.137 247403 INFO nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.159 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.269 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.271 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.271 247403 INFO nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Creating image(s)
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.300 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:12:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.328 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.353 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.356 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.374 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.408 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.409 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.410 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.410 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:12:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:52.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.433 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 31 04:12:52 np0005603621 nova_compute[247399]: 2026-01-31 09:12:52.439 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:12:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.089 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.164 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] resizing rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.215 247403 DEBUG nova.policy [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ebd43008d7a64b8bbf97a2304b1f78b6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.280 247403 DEBUG nova.objects.instance [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'migration_context' on Instance uuid 8080c31c-59d3-417c-b3d5-642d18ab56b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.298 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.298 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Ensure instance console log exists: /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 31 04:12:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 245 KiB/s rd, 284 KiB/s wr, 31 op/s
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.299 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.299 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:12:53 np0005603621 nova_compute[247399]: 2026-01-31 09:12:53.299 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:12:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:54.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:54 np0005603621 nova_compute[247399]: 2026-01-31 09:12:54.985 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 215 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 548 KiB/s wr, 5 op/s
Jan 31 04:12:56 np0005603621 nova_compute[247399]: 2026-01-31 09:12:56.235 247403 DEBUG nova.network.neutron [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Successfully created port: 76e5e961-a600-484a-9b32-4a00defa2cf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 31 04:12:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:56.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 231 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 996 KiB/s wr, 15 op/s
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.354 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.831 247403 DEBUG nova.network.neutron [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Successfully updated port: 76e5e961-a600-484a-9b32-4a00defa2cf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.856 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.856 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquired lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.856 247403 DEBUG nova.network.neutron [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.939 247403 DEBUG nova.compute.manager [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-changed-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.939 247403 DEBUG nova.compute.manager [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Refreshing instance network info cache due to event network-changed-76e5e961-a600-484a-9b32-4a00defa2cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.940 247403 DEBUG oslo_concurrency.lockutils [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 31 04:12:57 np0005603621 nova_compute[247399]: 2026-01-31 09:12:57.993 247403 DEBUG nova.network.neutron [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 31 04:12:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:12:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:12:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:12:58.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:12:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:12:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:12:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:12:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:12:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.689 247403 DEBUG nova.network.neutron [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Updating instance_info_cache with network_info: [{"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.717 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Releasing lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.718 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Instance network_info: |[{"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.718 247403 DEBUG oslo_concurrency.lockutils [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.719 247403 DEBUG nova.network.neutron [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Refreshing network info cache for port 76e5e961-a600-484a-9b32-4a00defa2cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.722 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Start _get_guest_xml network_info=[{"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.726 247403 WARNING nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.732 247403 DEBUG nova.virt.libvirt.host [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.733 247403 DEBUG nova.virt.libvirt.host [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.738 247403 DEBUG nova.virt.libvirt.host [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.739 247403 DEBUG nova.virt.libvirt.host [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.741 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.741 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.742 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.743 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.743 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.743 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.744 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.744 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.745 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.745 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.746 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.746 247403 DEBUG nova.virt.hardware [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.752 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:12:59 np0005603621 nova_compute[247399]: 2026-01-31 09:12:59.988 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:13:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087371013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.173 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.201 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.205 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:00.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:00.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:00 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:13:00 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358291106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.599 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.600 247403 DEBUG nova.virt.libvirt.vif [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ge',id=221,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFC9ERZJ6vqwSOhb+BxoLuCTPY4zXIPbOdYQjYf18qK5EvFlLu3Fd6dU0UfukMij7wWnpSqWAkqu0LocOazNCHHb52PIeAWKGpoQVtLv/Sw5DcBQogLeHH3fNNhS1TtHkw==',key_name='tempest-TestSecurityGroupsBasicOps-763844494',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-n28vnegu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:12:52Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=8080c31c-59d3-417c-b3d5-642d18ab56b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.600 247403 DEBUG nova.network.os_vif_util [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.601 247403 DEBUG nova.network.os_vif_util [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.602 247403 DEBUG nova.objects.instance [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'pci_devices' on Instance uuid 8080c31c-59d3-417c-b3d5-642d18ab56b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.621 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <uuid>8080c31c-59d3-417c-b3d5-642d18ab56b2</uuid>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <name>instance-000000dd</name>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204</nova:name>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:12:59</nova:creationTime>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:user uuid="ebd43008d7a64b8bbf97a2304b1f78b6">tempest-TestSecurityGroupsBasicOps-1802479850-project-member</nova:user>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:project uuid="0c7930b92fc3471f87d9fe78ee56e71e">tempest-TestSecurityGroupsBasicOps-1802479850</nova:project>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <nova:port uuid="76e5e961-a600-484a-9b32-4a00defa2cf4">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <entry name="serial">8080c31c-59d3-417c-b3d5-642d18ab56b2</entry>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <entry name="uuid">8080c31c-59d3-417c-b3d5-642d18ab56b2</entry>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/8080c31c-59d3-417c-b3d5-642d18ab56b2_disk">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/8080c31c-59d3-417c-b3d5-642d18ab56b2_disk.config">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:1c:16:1d"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <target dev="tap76e5e961-a6"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/console.log" append="off"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:13:00 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:13:00 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:13:00 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:13:00 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.623 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Preparing to wait for external event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.623 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.624 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.624 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.624 247403 DEBUG nova.virt.libvirt.vif [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ge',id=221,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFC9ERZJ6vqwSOhb+BxoLuCTPY4zXIPbOdYQjYf18qK5EvFlLu3Fd6dU0UfukMij7wWnpSqWAkqu0LocOazNCHHb52PIeAWKGpoQVtLv/Sw5DcBQogLeHH3fNNhS1TtHkw==',key_name='tempest-TestSecurityGroupsBasicOps-763844494',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-n28vnegu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:12:52Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=8080c31c-59d3-417c-b3d5-642d18ab56b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.625 247403 DEBUG nova.network.os_vif_util [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.625 247403 DEBUG nova.network.os_vif_util [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.626 247403 DEBUG os_vif [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.626 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.627 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.627 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.630 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76e5e961-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.630 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76e5e961-a6, col_values=(('external_ids', {'iface-id': '76e5e961-a600-484a-9b32-4a00defa2cf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1c:16:1d', 'vm-uuid': '8080c31c-59d3-417c-b3d5-642d18ab56b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.631 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:00 np0005603621 NetworkManager[49013]: <info>  [1769850780.6324] manager: (tap76e5e961-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.634 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.694 247403 INFO os_vif [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6')#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.737 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.738 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.738 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] No VIF found with MAC fa:16:3e:1c:16:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.738 247403 INFO nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Using config drive#033[00m
Jan 31 04:13:00 np0005603621 nova_compute[247399]: 2026-01-31 09:13:00.768 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.148 247403 INFO nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Creating config drive at /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/disk.config#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.153 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo53ddizl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.175 247403 DEBUG nova.network.neutron [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Updated VIF entry in instance network info cache for port 76e5e961-a600-484a-9b32-4a00defa2cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.176 247403 DEBUG nova.network.neutron [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Updating instance_info_cache with network_info: [{"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.192 247403 DEBUG oslo_concurrency.lockutils [req-4ae12c48-9faa-4180-936b-7d36489b9295 req-b0ad83d2-9c80-46af-b2ea-b2274cb093c1 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.280 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpo53ddizl" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.309 247403 DEBUG nova.storage.rbd_utils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] rbd image 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.312 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/disk.config 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.809 247403 DEBUG oslo_concurrency.processutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/disk.config 8080c31c-59d3-417c-b3d5-642d18ab56b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.811 247403 INFO nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Deleting local config drive /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2/disk.config because it was imported into RBD.#033[00m
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.8545] manager: (tap76e5e961-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Jan 31 04:13:01 np0005603621 kernel: tap76e5e961-a6: entered promiscuous mode
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:01 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:01Z|00907|binding|INFO|Claiming lport 76e5e961-a600-484a-9b32-4a00defa2cf4 for this chassis.
Jan 31 04:13:01 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:01Z|00908|binding|INFO|76e5e961-a600-484a-9b32-4a00defa2cf4: Claiming fa:16:3e:1c:16:1d 10.100.0.11
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.886 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.8866] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.8873] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Jan 31 04:13:01 np0005603621 systemd-udevd[406266]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:13:01 np0005603621 systemd-machined[212769]: New machine qemu-104-instance-000000dd.
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.897 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:16:1d 10.100.0.11'], port_security=['fa:16:3e:1c:16:1d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8080c31c-59d3-417c-b3d5-642d18ab56b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514e8c9e-2a14-4959-839a-40965c82f800', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1a7e4e75-e92f-4762-8e77-5a420647206a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cde7baed-a23e-44c1-8411-520889d37122, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=76e5e961-a600-484a-9b32-4a00defa2cf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.898 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 76e5e961-a600-484a-9b32-4a00defa2cf4 in datapath 514e8c9e-2a14-4959-839a-40965c82f800 bound to our chassis#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.899 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 514e8c9e-2a14-4959-839a-40965c82f800#033[00m
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.9031] device (tap76e5e961-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.9037] device (tap76e5e961-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:13:01 np0005603621 systemd[1]: Started Virtual Machine qemu-104-instance-000000dd.
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.906 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[16e6cbe5-aec3-481d-a8a1-4fd983c7c96e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.907 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap514e8c9e-21 in ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.909 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap514e8c9e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.909 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebe985a-52f9-4790-b766-a1395a76afa3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.910 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d3443d-885c-4adc-a9d6-dca30482fe6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.917 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:01 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:01Z|00909|binding|INFO|Setting lport 76e5e961-a600-484a-9b32-4a00defa2cf4 ovn-installed in OVS
Jan 31 04:13:01 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:01Z|00910|binding|INFO|Setting lport 76e5e961-a600-484a-9b32-4a00defa2cf4 up in Southbound
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.919 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[f0de0b36-ee8a-4220-967d-ab73576dab23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 nova_compute[247399]: 2026-01-31 09:13:01.919 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.929 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c142a612-b4ab-4223-9f44-fb0100a80a69]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.950 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f1b40a-9e3f-43bc-af6f-d2201c21f4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.955 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[799873e0-c114-4bb6-a0d2-4fed38e143ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.9560] manager: (tap514e8c9e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Jan 31 04:13:01 np0005603621 systemd-udevd[406268]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.977 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[37384e03-6a7a-48eb-9104-be1dacfbedbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:01.980 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[9554e310-0d87-46b2-945b-2483d596873c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:01 np0005603621 NetworkManager[49013]: <info>  [1769850781.9972] device (tap514e8c9e-20): carrier: link connected
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.003 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dbca23-debc-4ef2-b177-5a4c12f031db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.015 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f0487057-7fc0-4ae9-9224-bfdcd8d5ac3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514e8c9e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:f6:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1008751, 'reachable_time': 31380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 406299, 'error': None, 'target': 'ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.032 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[86d3919a-37ce-4adb-8fd2-d6dd544776b3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:f624'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1008751, 'tstamp': 1008751}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 406300, 'error': None, 'target': 'ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.047 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[348e17bf-9f71-4ea6-8c6f-a14f541f69be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap514e8c9e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:f6:24'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 275], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1008751, 'reachable_time': 31380, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 406301, 'error': None, 'target': 'ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.073 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6825b849-2bfd-4fe9-9c16-899dd68620b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.117 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[415f4f77-06ad-4509-aa2b-0c755d244ca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.118 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514e8c9e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.118 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.119 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap514e8c9e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.121 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:02 np0005603621 NetworkManager[49013]: <info>  [1769850782.1222] manager: (tap514e8c9e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 31 04:13:02 np0005603621 kernel: tap514e8c9e-20: entered promiscuous mode
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.124 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap514e8c9e-20, col_values=(('external_ids', {'iface-id': '170f7083-da86-4db5-bf6e-3dc3a556c3c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:02 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:02Z|00911|binding|INFO|Releasing lport 170f7083-da86-4db5-bf6e-3dc3a556c3c4 from this chassis (sb_readonly=0)
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.124 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.130 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/514e8c9e-2a14-4959-839a-40965c82f800.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/514e8c9e-2a14-4959-839a-40965c82f800.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.131 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[11d91b25-0db7-42c1-8253-590b50df6b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.131 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-514e8c9e-2a14-4959-839a-40965c82f800
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/514e8c9e-2a14-4959-839a-40965c82f800.pid.haproxy
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 514e8c9e-2a14-4959-839a-40965c82f800
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:13:02 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:02.132 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800', 'env', 'PROCESS_TAG=haproxy-514e8c9e-2a14-4959-839a-40965c82f800', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/514e8c9e-2a14-4959-839a-40965c82f800.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:13:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:13:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:13:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.342 247403 DEBUG nova.compute.manager [req-31b60d3c-357d-49ac-9344-acc40bb2b3a5 req-35bfb41d-809a-4612-bb3c-d244cd1915a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.342 247403 DEBUG oslo_concurrency.lockutils [req-31b60d3c-357d-49ac-9344-acc40bb2b3a5 req-35bfb41d-809a-4612-bb3c-d244cd1915a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.343 247403 DEBUG oslo_concurrency.lockutils [req-31b60d3c-357d-49ac-9344-acc40bb2b3a5 req-35bfb41d-809a-4612-bb3c-d244cd1915a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.343 247403 DEBUG oslo_concurrency.lockutils [req-31b60d3c-357d-49ac-9344-acc40bb2b3a5 req-35bfb41d-809a-4612-bb3c-d244cd1915a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.343 247403 DEBUG nova.compute.manager [req-31b60d3c-357d-49ac-9344-acc40bb2b3a5 req-35bfb41d-809a-4612-bb3c-d244cd1915a9 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Processing event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:02.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:02 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:13:02 np0005603621 podman[406371]: 2026-01-31 09:13:02.461189581 +0000 UTC m=+0.050399232 container create 55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:13:02 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:02 np0005603621 systemd[1]: Started libpod-conmon-55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d.scope.
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.499 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850782.4992483, 8080c31c-59d3-417c-b3d5-642d18ab56b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.500 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] VM Started (Lifecycle Event)#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.502 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.505 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:13:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.509 247403 INFO nova.virt.libvirt.driver [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Instance spawned successfully.#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.509 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:13:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3065ee4f0016bbef5a835b044823915cc8ecbb1cf51ce30f1bbd3c8247c355/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:02 np0005603621 podman[406371]: 2026-01-31 09:13:02.527177034 +0000 UTC m=+0.116386665 container init 55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:13:02 np0005603621 podman[406371]: 2026-01-31 09:13:02.430571834 +0000 UTC m=+0.019781485 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.530 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:13:02 np0005603621 podman[406371]: 2026-01-31 09:13:02.533516803 +0000 UTC m=+0.122726434 container start 55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.537 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.542 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.543 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.544 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.544 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.545 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.545 247403 DEBUG nova.virt.libvirt.driver [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:13:02 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [NOTICE]   (406394) : New worker (406396) forked
Jan 31 04:13:02 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [NOTICE]   (406394) : Loading success.
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.569 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.570 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850782.5005846, 8080c31c-59d3-417c-b3d5-642d18ab56b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.570 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.597 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.603 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850782.504597, 8080c31c-59d3-417c-b3d5-642d18ab56b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.603 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.608 247403 INFO nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.608 247403 DEBUG nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.623 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.627 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.659 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.688 247403 INFO nova.compute.manager [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Took 11.22 seconds to build instance.#033[00m
Jan 31 04:13:02 np0005603621 nova_compute[247399]: 2026-01-31 09:13:02.722 247403 DEBUG oslo_concurrency.lockutils [None req-96aca5db-414b-4854-9db4-2d5563092284 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f4e4bb4d-0ed8-4404-a593-542a88e3871f does not exist
Jan 31 04:13:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d9ac70b1-e252-4008-a566-11a7b9b30b1d does not exist
Jan 31 04:13:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f45ac303-7629-4cec-9efc-51a0e9ba45d8 does not exist
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:13:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:03 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:13:03 np0005603621 podman[406545]: 2026-01-31 09:13:03.907806107 +0000 UTC m=+0.057122333 container create 272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:13:03 np0005603621 systemd[1]: Started libpod-conmon-272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c.scope.
Jan 31 04:13:03 np0005603621 podman[406545]: 2026-01-31 09:13:03.876868122 +0000 UTC m=+0.026184398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:13:03 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:04 np0005603621 podman[406545]: 2026-01-31 09:13:04.002937941 +0000 UTC m=+0.152254137 container init 272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 04:13:04 np0005603621 podman[406545]: 2026-01-31 09:13:04.010459428 +0000 UTC m=+0.159775624 container start 272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:13:04 np0005603621 podman[406545]: 2026-01-31 09:13:04.014053361 +0000 UTC m=+0.163369677 container attach 272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 04:13:04 np0005603621 agitated_hypatia[406561]: 167 167
Jan 31 04:13:04 np0005603621 conmon[406561]: conmon 272d7f1137b61d543134 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c.scope/container/memory.events
Jan 31 04:13:04 np0005603621 systemd[1]: libpod-272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c.scope: Deactivated successfully.
Jan 31 04:13:04 np0005603621 podman[406545]: 2026-01-31 09:13:04.018983247 +0000 UTC m=+0.168299443 container died 272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:13:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4967e924f25467e26aa86374a1ae0c399066826bce84d159f1f228a6d57a6496-merged.mount: Deactivated successfully.
Jan 31 04:13:04 np0005603621 podman[406545]: 2026-01-31 09:13:04.060409024 +0000 UTC m=+0.209725210 container remove 272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:13:04 np0005603621 systemd[1]: libpod-conmon-272d7f1137b61d543134f1f4179276ba350706e6feda1671a040e7d70e90bd6c.scope: Deactivated successfully.
Jan 31 04:13:04 np0005603621 podman[406586]: 2026-01-31 09:13:04.227972742 +0000 UTC m=+0.049768961 container create 33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shtern, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:13:04 np0005603621 systemd[1]: Started libpod-conmon-33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f.scope.
Jan 31 04:13:04 np0005603621 podman[406586]: 2026-01-31 09:13:04.205824583 +0000 UTC m=+0.027620862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:13:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a8c4f270454184100b7dec90886d6951f9533d615fa3d1531ea29fc93d24f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a8c4f270454184100b7dec90886d6951f9533d615fa3d1531ea29fc93d24f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a8c4f270454184100b7dec90886d6951f9533d615fa3d1531ea29fc93d24f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a8c4f270454184100b7dec90886d6951f9533d615fa3d1531ea29fc93d24f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:04 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7a8c4f270454184100b7dec90886d6951f9533d615fa3d1531ea29fc93d24f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:04.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:04 np0005603621 podman[406586]: 2026-01-31 09:13:04.334462413 +0000 UTC m=+0.156258632 container init 33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:13:04 np0005603621 podman[406586]: 2026-01-31 09:13:04.345035237 +0000 UTC m=+0.166831456 container start 33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:13:04 np0005603621 podman[406586]: 2026-01-31 09:13:04.348500226 +0000 UTC m=+0.170296475 container attach 33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:13:04 np0005603621 nova_compute[247399]: 2026-01-31 09:13:04.445 247403 DEBUG nova.compute.manager [req-ec1caa7e-f28c-4a84-aedb-955b5ee7f773 req-dcf3d52f-1a9a-4db3-94fb-1df8e36d32d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:13:04 np0005603621 nova_compute[247399]: 2026-01-31 09:13:04.447 247403 DEBUG oslo_concurrency.lockutils [req-ec1caa7e-f28c-4a84-aedb-955b5ee7f773 req-dcf3d52f-1a9a-4db3-94fb-1df8e36d32d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:04 np0005603621 nova_compute[247399]: 2026-01-31 09:13:04.447 247403 DEBUG oslo_concurrency.lockutils [req-ec1caa7e-f28c-4a84-aedb-955b5ee7f773 req-dcf3d52f-1a9a-4db3-94fb-1df8e36d32d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:04 np0005603621 nova_compute[247399]: 2026-01-31 09:13:04.447 247403 DEBUG oslo_concurrency.lockutils [req-ec1caa7e-f28c-4a84-aedb-955b5ee7f773 req-dcf3d52f-1a9a-4db3-94fb-1df8e36d32d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:04 np0005603621 nova_compute[247399]: 2026-01-31 09:13:04.448 247403 DEBUG nova.compute.manager [req-ec1caa7e-f28c-4a84-aedb-955b5ee7f773 req-dcf3d52f-1a9a-4db3-94fb-1df8e36d32d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] No waiting events found dispatching network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:13:04 np0005603621 nova_compute[247399]: 2026-01-31 09:13:04.448 247403 WARNING nova.compute.manager [req-ec1caa7e-f28c-4a84-aedb-955b5ee7f773 req-dcf3d52f-1a9a-4db3-94fb-1df8e36d32d2 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received unexpected event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:13:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:04.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 710 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 31 04:13:05 np0005603621 infallible_shtern[406603]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:13:05 np0005603621 infallible_shtern[406603]: --> relative data size: 1.0
Jan 31 04:13:05 np0005603621 infallible_shtern[406603]: --> All data devices are unavailable
Jan 31 04:13:05 np0005603621 systemd[1]: libpod-33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f.scope: Deactivated successfully.
Jan 31 04:13:05 np0005603621 podman[406586]: 2026-01-31 09:13:05.378696431 +0000 UTC m=+1.200492670 container died 33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shtern, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 04:13:05 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f7a8c4f270454184100b7dec90886d6951f9533d615fa3d1531ea29fc93d24f1-merged.mount: Deactivated successfully.
Jan 31 04:13:05 np0005603621 podman[406586]: 2026-01-31 09:13:05.432613282 +0000 UTC m=+1.254409501 container remove 33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_shtern, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:13:05 np0005603621 systemd[1]: libpod-conmon-33246a58ae0d638afd992bb7f27821332fe81fde31c0f30112e6c7234a34949f.scope: Deactivated successfully.
Jan 31 04:13:05 np0005603621 nova_compute[247399]: 2026-01-31 09:13:05.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.059911941 +0000 UTC m=+0.055949707 container create 5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:13:06 np0005603621 systemd[1]: Started libpod-conmon-5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6.scope.
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.032345111 +0000 UTC m=+0.028382957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:13:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.197888745 +0000 UTC m=+0.193926511 container init 5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.204246336 +0000 UTC m=+0.200284082 container start 5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.208543442 +0000 UTC m=+0.204581208 container attach 5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:13:06 np0005603621 gracious_napier[406840]: 167 167
Jan 31 04:13:06 np0005603621 systemd[1]: libpod-5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6.scope: Deactivated successfully.
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.214457649 +0000 UTC m=+0.210495445 container died 5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:13:06 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e325cdad85a200f2be27a0cc3f9aabc6fd6ec9b3a39375763cf112ea8af77b4e-merged.mount: Deactivated successfully.
Jan 31 04:13:06 np0005603621 podman[406823]: 2026-01-31 09:13:06.268832924 +0000 UTC m=+0.264870670 container remove 5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:13:06 np0005603621 systemd[1]: libpod-conmon-5fb80b12f3e997da2cb4a77a351c7fbf27ea25a624ae80df8fad889049681cf6.scope: Deactivated successfully.
Jan 31 04:13:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:06.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:06.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:06 np0005603621 podman[406868]: 2026-01-31 09:13:06.478219713 +0000 UTC m=+0.065039613 container create 2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:13:06 np0005603621 systemd[1]: Started libpod-conmon-2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26.scope.
Jan 31 04:13:06 np0005603621 podman[406868]: 2026-01-31 09:13:06.454427952 +0000 UTC m=+0.041247832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:13:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830e0dbf3728c8bdb6a057e7eefa82d472ecfcd0e488fe703dfd7c9f3e8c4286/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830e0dbf3728c8bdb6a057e7eefa82d472ecfcd0e488fe703dfd7c9f3e8c4286/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830e0dbf3728c8bdb6a057e7eefa82d472ecfcd0e488fe703dfd7c9f3e8c4286/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/830e0dbf3728c8bdb6a057e7eefa82d472ecfcd0e488fe703dfd7c9f3e8c4286/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:06 np0005603621 podman[406868]: 2026-01-31 09:13:06.585503209 +0000 UTC m=+0.172323129 container init 2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:13:06 np0005603621 podman[406868]: 2026-01-31 09:13:06.600278586 +0000 UTC m=+0.187098476 container start 2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wing, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:13:06 np0005603621 podman[406868]: 2026-01-31 09:13:06.603541359 +0000 UTC m=+0.190361279 container attach 2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:13:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 72 op/s
Jan 31 04:13:07 np0005603621 nova_compute[247399]: 2026-01-31 09:13:07.360 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]: {
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:    "0": [
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:        {
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "devices": [
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "/dev/loop3"
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            ],
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "lv_name": "ceph_lv0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "lv_size": "7511998464",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "name": "ceph_lv0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "tags": {
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.cluster_name": "ceph",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.crush_device_class": "",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.encrypted": "0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.osd_id": "0",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.type": "block",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:                "ceph.vdo": "0"
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            },
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "type": "block",
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:            "vg_name": "ceph_vg0"
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:        }
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]:    ]
Jan 31 04:13:07 np0005603621 pedantic_wing[406884]: }
Jan 31 04:13:07 np0005603621 systemd[1]: libpod-2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26.scope: Deactivated successfully.
Jan 31 04:13:07 np0005603621 podman[406893]: 2026-01-31 09:13:07.452154501 +0000 UTC m=+0.024354419 container died 2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wing, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:13:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-830e0dbf3728c8bdb6a057e7eefa82d472ecfcd0e488fe703dfd7c9f3e8c4286-merged.mount: Deactivated successfully.
Jan 31 04:13:07 np0005603621 podman[406893]: 2026-01-31 09:13:07.506886379 +0000 UTC m=+0.079086297 container remove 2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:13:07 np0005603621 systemd[1]: libpod-conmon-2e1868fd26b4bbda4f753547b6436c38a5f9560fa835d3a1c85ad89532255f26.scope: Deactivated successfully.
Jan 31 04:13:07 np0005603621 nova_compute[247399]: 2026-01-31 09:13:07.617 247403 DEBUG nova.compute.manager [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-changed-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:13:07 np0005603621 nova_compute[247399]: 2026-01-31 09:13:07.619 247403 DEBUG nova.compute.manager [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Refreshing instance network info cache due to event network-changed-76e5e961-a600-484a-9b32-4a00defa2cf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:13:07 np0005603621 nova_compute[247399]: 2026-01-31 09:13:07.620 247403 DEBUG oslo_concurrency.lockutils [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:13:07 np0005603621 nova_compute[247399]: 2026-01-31 09:13:07.620 247403 DEBUG oslo_concurrency.lockutils [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:13:07 np0005603621 nova_compute[247399]: 2026-01-31 09:13:07.621 247403 DEBUG nova.network.neutron [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Refreshing network info cache for port 76e5e961-a600-484a-9b32-4a00defa2cf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:13:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.136614294 +0000 UTC m=+0.046420036 container create b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 31 04:13:08 np0005603621 systemd[1]: Started libpod-conmon-b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33.scope.
Jan 31 04:13:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.115035693 +0000 UTC m=+0.024841465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.227054039 +0000 UTC m=+0.136859801 container init b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.239093138 +0000 UTC m=+0.148898880 container start b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.243969772 +0000 UTC m=+0.153775524 container attach b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:13:08 np0005603621 eager_chatterjee[407066]: 167 167
Jan 31 04:13:08 np0005603621 systemd[1]: libpod-b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33.scope: Deactivated successfully.
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.24740072 +0000 UTC m=+0.157206492 container died b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:13:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a4b459fc666ac14ae6fef41935dc607f799ab093996438a3cf8eb24dc42aad49-merged.mount: Deactivated successfully.
Jan 31 04:13:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:08.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:08 np0005603621 podman[407049]: 2026-01-31 09:13:08.359144587 +0000 UTC m=+0.268950369 container remove b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_chatterjee, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:13:08 np0005603621 systemd[1]: libpod-conmon-b17585cc4e223184f4820efc12ba1bcd63d21aed377e2110fd92a3700e54fc33.scope: Deactivated successfully.
Jan 31 04:13:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:08.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:08 np0005603621 podman[407092]: 2026-01-31 09:13:08.510931997 +0000 UTC m=+0.064639391 container create 75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:13:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:13:08 np0005603621 podman[407092]: 2026-01-31 09:13:08.474391604 +0000 UTC m=+0.028099048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:13:08 np0005603621 systemd[1]: Started libpod-conmon-75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c.scope.
Jan 31 04:13:08 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c139c19cb2d400d4bac0a25593b8afc6336a11be56588df0106101f88be6fd48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c139c19cb2d400d4bac0a25593b8afc6336a11be56588df0106101f88be6fd48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c139c19cb2d400d4bac0a25593b8afc6336a11be56588df0106101f88be6fd48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:08 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c139c19cb2d400d4bac0a25593b8afc6336a11be56588df0106101f88be6fd48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:13:08 np0005603621 podman[407092]: 2026-01-31 09:13:08.616922493 +0000 UTC m=+0.170629877 container init 75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:13:08 np0005603621 podman[407092]: 2026-01-31 09:13:08.624528493 +0000 UTC m=+0.178235847 container start 75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:13:08 np0005603621 podman[407092]: 2026-01-31 09:13:08.627613021 +0000 UTC m=+0.181320395 container attach 75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:13:08 np0005603621 nova_compute[247399]: 2026-01-31 09:13:08.810 247403 DEBUG nova.network.neutron [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Updated VIF entry in instance network info cache for port 76e5e961-a600-484a-9b32-4a00defa2cf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:13:08 np0005603621 nova_compute[247399]: 2026-01-31 09:13:08.811 247403 DEBUG nova.network.neutron [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Updating instance_info_cache with network_info: [{"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:13:08 np0005603621 nova_compute[247399]: 2026-01-31 09:13:08.850 247403 DEBUG oslo_concurrency.lockutils [req-89751c45-5035-4359-9301-e50bacb4868f req-e95f1c85-3b23-467a-ade5-86c4543d0661 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-8080c31c-59d3-417c-b3d5-642d18ab56b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:13:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 846 KiB/s wr, 86 op/s
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]: {
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:        "osd_id": 0,
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:        "type": "bluestore"
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]:    }
Jan 31 04:13:09 np0005603621 intelligent_williams[407109]: }
Jan 31 04:13:09 np0005603621 systemd[1]: libpod-75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c.scope: Deactivated successfully.
Jan 31 04:13:09 np0005603621 podman[407092]: 2026-01-31 09:13:09.399098579 +0000 UTC m=+0.952805933 container died 75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:13:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c139c19cb2d400d4bac0a25593b8afc6336a11be56588df0106101f88be6fd48-merged.mount: Deactivated successfully.
Jan 31 04:13:09 np0005603621 podman[407092]: 2026-01-31 09:13:09.445070961 +0000 UTC m=+0.998778315 container remove 75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:13:09 np0005603621 systemd[1]: libpod-conmon-75ff33ee096feab365fa4d625d84fbb4f6609b26f81abada971fec7afbddc21c.scope: Deactivated successfully.
Jan 31 04:13:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:13:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:09 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:13:09 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d94ab9de-24d7-4838-9e48-f5f81b56c1bd does not exist
Jan 31 04:13:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f7d3e9ca-993b-44a8-8e02-ce506559622d does not exist
Jan 31 04:13:09 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3046aa8f-acbe-4c15-898a-c913fe49de3a does not exist
Jan 31 04:13:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:10.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:10.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:13:10 np0005603621 nova_compute[247399]: 2026-01-31 09:13:10.680 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 31 04:13:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:12.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:12 np0005603621 nova_compute[247399]: 2026-01-31 09:13:12.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:12.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 31 04:13:13 np0005603621 nova_compute[247399]: 2026-01-31 09:13:13.384 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:14.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:14.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:14 np0005603621 podman[407196]: 2026-01-31 09:13:14.543019547 +0000 UTC m=+0.087470762 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 04:13:14 np0005603621 podman[407195]: 2026-01-31 09:13:14.546073294 +0000 UTC m=+0.090523839 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Jan 31 04:13:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 246 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.2 KiB/s wr, 69 op/s
Jan 31 04:13:15 np0005603621 nova_compute[247399]: 2026-01-31 09:13:15.685 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 252 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 731 KiB/s wr, 48 op/s
Jan 31 04:13:17 np0005603621 nova_compute[247399]: 2026-01-31 09:13:17.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:18.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:18.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:18 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:18Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1c:16:1d 10.100.0.11
Jan 31 04:13:18 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:18Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1c:16:1d 10.100.0.11
Jan 31 04:13:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 268 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 904 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 31 04:13:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:20.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:13:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:20.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:13:20 np0005603621 nova_compute[247399]: 2026-01-31 09:13:20.688 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 268 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 115 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Jan 31 04:13:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:22 np0005603621 nova_compute[247399]: 2026-01-31 09:13:22.368 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:22.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:23 np0005603621 nova_compute[247399]: 2026-01-31 09:13:23.216 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 277 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 261 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 31 04:13:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:24.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.899 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.900 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.900 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.900 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.901 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.902 247403 INFO nova.compute.manager [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Terminating instance#033[00m
Jan 31 04:13:24 np0005603621 nova_compute[247399]: 2026-01-31 09:13:24.903 247403 DEBUG nova.compute.manager [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:13:24 np0005603621 kernel: tap76e5e961-a6 (unregistering): left promiscuous mode
Jan 31 04:13:24 np0005603621 NetworkManager[49013]: <info>  [1769850804.9607] device (tap76e5e961-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:13:25 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:25Z|00912|binding|INFO|Releasing lport 76e5e961-a600-484a-9b32-4a00defa2cf4 from this chassis (sb_readonly=0)
Jan 31 04:13:25 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:25Z|00913|binding|INFO|Setting lport 76e5e961-a600-484a-9b32-4a00defa2cf4 down in Southbound
Jan 31 04:13:25 np0005603621 ovn_controller[149152]: 2026-01-31T09:13:25Z|00914|binding|INFO|Removing iface tap76e5e961-a6 ovn-installed in OVS
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.018 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:25 np0005603621 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000dd.scope: Deactivated successfully.
Jan 31 04:13:25 np0005603621 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000dd.scope: Consumed 13.651s CPU time.
Jan 31 04:13:25 np0005603621 systemd-machined[212769]: Machine qemu-104-instance-000000dd terminated.
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.058 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:16:1d 10.100.0.11'], port_security=['fa:16:3e:1c:16:1d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8080c31c-59d3-417c-b3d5-642d18ab56b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-514e8c9e-2a14-4959-839a-40965c82f800', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0c7930b92fc3471f87d9fe78ee56e71e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1df50b40-0aa7-444c-b661-16a9de28f983', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cde7baed-a23e-44c1-8411-520889d37122, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=76e5e961-a600-484a-9b32-4a00defa2cf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.060 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 76e5e961-a600-484a-9b32-4a00defa2cf4 in datapath 514e8c9e-2a14-4959-839a-40965c82f800 unbound from our chassis#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.061 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 514e8c9e-2a14-4959-839a-40965c82f800, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.062 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[c30e693f-2a4f-46ae-8888-dd46556ffb8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.062 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800 namespace which is not needed anymore#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.140 247403 INFO nova.virt.libvirt.driver [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Instance destroyed successfully.#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.142 247403 DEBUG nova.objects.instance [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lazy-loading 'resources' on Instance uuid 8080c31c-59d3-417c-b3d5-642d18ab56b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:13:25 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [NOTICE]   (406394) : haproxy version is 2.8.14-c23fe91
Jan 31 04:13:25 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [NOTICE]   (406394) : path to executable is /usr/sbin/haproxy
Jan 31 04:13:25 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [WARNING]  (406394) : Exiting Master process...
Jan 31 04:13:25 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [ALERT]    (406394) : Current worker (406396) exited with code 143 (Terminated)
Jan 31 04:13:25 np0005603621 neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800[406390]: [WARNING]  (406394) : All workers exited. Exiting... (0)
Jan 31 04:13:25 np0005603621 systemd[1]: libpod-55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d.scope: Deactivated successfully.
Jan 31 04:13:25 np0005603621 podman[407323]: 2026-01-31 09:13:25.186482795 +0000 UTC m=+0.050440493 container died 55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:13:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d-userdata-shm.mount: Deactivated successfully.
Jan 31 04:13:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2c3065ee4f0016bbef5a835b044823915cc8ecbb1cf51ce30f1bbd3c8247c355-merged.mount: Deactivated successfully.
Jan 31 04:13:25 np0005603621 podman[407323]: 2026-01-31 09:13:25.219150876 +0000 UTC m=+0.083108574 container cleanup 55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:13:25 np0005603621 systemd[1]: libpod-conmon-55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d.scope: Deactivated successfully.
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.279 247403 DEBUG nova.virt.libvirt.vif [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:12:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1802479850-gen-1-1912755204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1802479850-ge',id=221,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFC9ERZJ6vqwSOhb+BxoLuCTPY4zXIPbOdYQjYf18qK5EvFlLu3Fd6dU0UfukMij7wWnpSqWAkqu0LocOazNCHHb52PIeAWKGpoQVtLv/Sw5DcBQogLeHH3fNNhS1TtHkw==',key_name='tempest-TestSecurityGroupsBasicOps-763844494',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:13:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0c7930b92fc3471f87d9fe78ee56e71e',ramdisk_id='',reservation_id='r-n28vnegu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1802479850',owner_user_name='tempest-TestSecurityGroupsBasicOps-1802479850-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:13:02Z,user_data=None,user_id='ebd43008d7a64b8bbf97a2304b1f78b6',uuid=8080c31c-59d3-417c-b3d5-642d18ab56b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.280 247403 DEBUG nova.network.os_vif_util [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converting VIF {"id": "76e5e961-a600-484a-9b32-4a00defa2cf4", "address": "fa:16:3e:1c:16:1d", "network": {"id": "514e8c9e-2a14-4959-839a-40965c82f800", "bridge": "br-int", "label": "tempest-network-smoke--422685355", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0c7930b92fc3471f87d9fe78ee56e71e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76e5e961-a6", "ovs_interfaceid": "76e5e961-a600-484a-9b32-4a00defa2cf4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.281 247403 DEBUG nova.network.os_vif_util [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.281 247403 DEBUG os_vif [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.283 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.283 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76e5e961-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:25 np0005603621 podman[407362]: 2026-01-31 09:13:25.284338903 +0000 UTC m=+0.044973340 container remove 55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.284 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.286 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.288 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b5846b-ae09-45ed-b435-47271d1c1640]: (4, ('Sat Jan 31 09:13:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800 (55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d)\n55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d\nSat Jan 31 09:13:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800 (55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d)\n55126d17d6f68bb6853e307fc3607960af4c50eb4ca7c38cbe6ea4e9e38f564d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.289 247403 INFO os_vif [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1c:16:1d,bridge_name='br-int',has_traffic_filtering=True,id=76e5e961-a600-484a-9b32-4a00defa2cf4,network=Network(514e8c9e-2a14-4959-839a-40965c82f800),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76e5e961-a6')#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.290 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[17abb94f-eda6-4a10-8447-ca635e1c344c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.291 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap514e8c9e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:25 np0005603621 kernel: tap514e8c9e-20: left promiscuous mode
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.300 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a254695b-68af-4e22-836d-18fec89db632]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 279 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.320 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a79c4a16-3cad-4258-8900-3d211f465986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.321 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[337ff290-cd8c-40cf-800c-9e75f79e7ac9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.334 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[30161667-ac8b-4ea7-952c-8f787695a20c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1008746, 'reachable_time': 32714, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407393, 'error': None, 'target': 'ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 systemd[1]: run-netns-ovnmeta\x2d514e8c9e\x2d2a14\x2d4959\x2d839a\x2d40965c82f800.mount: Deactivated successfully.
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.339 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-514e8c9e-2a14-4959-839a-40965c82f800 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:13:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:25.339 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[931baed4-0381-4480-92f0-883f2f6fe2a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.616 247403 DEBUG nova.compute.manager [req-4f5226b0-a352-446c-80ce-288ccfb60602 req-f1d14ea5-071e-4f5c-98d5-e4004ecc837f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-vif-unplugged-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.616 247403 DEBUG oslo_concurrency.lockutils [req-4f5226b0-a352-446c-80ce-288ccfb60602 req-f1d14ea5-071e-4f5c-98d5-e4004ecc837f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.617 247403 DEBUG oslo_concurrency.lockutils [req-4f5226b0-a352-446c-80ce-288ccfb60602 req-f1d14ea5-071e-4f5c-98d5-e4004ecc837f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.617 247403 DEBUG oslo_concurrency.lockutils [req-4f5226b0-a352-446c-80ce-288ccfb60602 req-f1d14ea5-071e-4f5c-98d5-e4004ecc837f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.617 247403 DEBUG nova.compute.manager [req-4f5226b0-a352-446c-80ce-288ccfb60602 req-f1d14ea5-071e-4f5c-98d5-e4004ecc837f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] No waiting events found dispatching network-vif-unplugged-76e5e961-a600-484a-9b32-4a00defa2cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:13:25 np0005603621 nova_compute[247399]: 2026-01-31 09:13:25.617 247403 DEBUG nova.compute.manager [req-4f5226b0-a352-446c-80ce-288ccfb60602 req-f1d14ea5-071e-4f5c-98d5-e4004ecc837f fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-vif-unplugged-76e5e961-a600-484a-9b32-4a00defa2cf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:13:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:26.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 256 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.370 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.711 247403 DEBUG nova.compute.manager [req-1d81eb18-317f-45ec-bcb3-a26cb58f07b6 req-55dce664-2f90-4a07-a9c0-781e8a5a2790 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.711 247403 DEBUG oslo_concurrency.lockutils [req-1d81eb18-317f-45ec-bcb3-a26cb58f07b6 req-55dce664-2f90-4a07-a9c0-781e8a5a2790 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.712 247403 DEBUG oslo_concurrency.lockutils [req-1d81eb18-317f-45ec-bcb3-a26cb58f07b6 req-55dce664-2f90-4a07-a9c0-781e8a5a2790 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.713 247403 DEBUG oslo_concurrency.lockutils [req-1d81eb18-317f-45ec-bcb3-a26cb58f07b6 req-55dce664-2f90-4a07-a9c0-781e8a5a2790 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.713 247403 DEBUG nova.compute.manager [req-1d81eb18-317f-45ec-bcb3-a26cb58f07b6 req-55dce664-2f90-4a07-a9c0-781e8a5a2790 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] No waiting events found dispatching network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:13:27 np0005603621 nova_compute[247399]: 2026-01-31 09:13:27.714 247403 WARNING nova.compute.manager [req-1d81eb18-317f-45ec-bcb3-a26cb58f07b6 req-55dce664-2f90-4a07-a9c0-781e8a5a2790 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received unexpected event network-vif-plugged-76e5e961-a600-484a-9b32-4a00defa2cf4 for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:13:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:13:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:13:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:28.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 283 KiB/s rd, 1.4 MiB/s wr, 83 op/s
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.407 247403 INFO nova.virt.libvirt.driver [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Deleting instance files /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2_del#033[00m
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.408 247403 INFO nova.virt.libvirt.driver [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Deletion of /var/lib/nova/instances/8080c31c-59d3-417c-b3d5-642d18ab56b2_del complete#033[00m
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.747 247403 INFO nova.compute.manager [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Took 4.84 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.747 247403 DEBUG oslo.service.loopingcall [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.748 247403 DEBUG nova.compute.manager [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:13:29 np0005603621 nova_compute[247399]: 2026-01-31 09:13:29.748 247403 DEBUG nova.network.neutron [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.286 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:30.554 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:30.555 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:30.555 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:30.750 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.750 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:30.751 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:13:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:13:30.752 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.791 247403 DEBUG nova.network.neutron [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.822 247403 INFO nova.compute.manager [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Took 1.07 seconds to deallocate network for instance.#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.884 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.885 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.889 247403 DEBUG nova.compute.manager [req-4e187645-73d0-44ed-949a-61782d0a6e09 req-fa40607c-c42f-4465-aa03-d2c0b96b0983 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Received event network-vif-deleted-76e5e961-a600-484a-9b32-4a00defa2cf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:13:30 np0005603621 nova_compute[247399]: 2026-01-31 09:13:30.948 247403 DEBUG oslo_concurrency.processutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 67 KiB/s wr, 54 op/s
Jan 31 04:13:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:13:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/981120941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.380 247403 DEBUG oslo_concurrency.processutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.384 247403 DEBUG nova.compute.provider_tree [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.399 247403 DEBUG nova.scheduler.client.report [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.427 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.453 247403 INFO nova.scheduler.client.report [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Deleted allocations for instance 8080c31c-59d3-417c-b3d5-642d18ab56b2#033[00m
Jan 31 04:13:31 np0005603621 nova_compute[247399]: 2026-01-31 09:13:31.533 247403 DEBUG oslo_concurrency.lockutils [None req-1794154a-840d-454d-9ba5-ba5a4a38e7a3 ebd43008d7a64b8bbf97a2304b1f78b6 0c7930b92fc3471f87d9fe78ee56e71e - - default default] Lock "8080c31c-59d3-417c-b3d5-642d18ab56b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:32 np0005603621 nova_compute[247399]: 2026-01-31 09:13:32.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:33 np0005603621 nova_compute[247399]: 2026-01-31 09:13:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:33 np0005603621 nova_compute[247399]: 2026-01-31 09:13:33.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:13:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 178 KiB/s rd, 67 KiB/s wr, 63 op/s
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.227 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.228 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.228 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.229 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.229 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:34.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:34.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:13:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446075897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.694 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.829 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.830 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4085MB free_disk=20.94265365600586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.830 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.831 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.897 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.898 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:13:34 np0005603621 nova_compute[247399]: 2026-01-31 09:13:34.914 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:13:35 np0005603621 nova_compute[247399]: 2026-01-31 09:13:35.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 176 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 56 KiB/s wr, 67 op/s
Jan 31 04:13:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:13:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2829410940' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:13:35 np0005603621 nova_compute[247399]: 2026-01-31 09:13:35.357 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:13:35 np0005603621 nova_compute[247399]: 2026-01-31 09:13:35.361 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:13:35 np0005603621 nova_compute[247399]: 2026-01-31 09:13:35.377 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:13:35 np0005603621 nova_compute[247399]: 2026-01-31 09:13:35.399 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:13:35 np0005603621 nova_compute[247399]: 2026-01-31 09:13:35.399 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:13:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:36.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:36 np0005603621 nova_compute[247399]: 2026-01-31 09:13:36.399 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:36 np0005603621 nova_compute[247399]: 2026-01-31 09:13:36.399 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:13:36 np0005603621 nova_compute[247399]: 2026-01-31 09:13:36.400 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:13:36 np0005603621 nova_compute[247399]: 2026-01-31 09:13:36.418 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:13:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:36.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 151 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 89 KiB/s rd, 2.7 KiB/s wr, 139 op/s
Jan 31 04:13:37 np0005603621 nova_compute[247399]: 2026-01-31 09:13:37.374 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.064295) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850818064355, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 779, "num_deletes": 257, "total_data_size": 1128814, "memory_usage": 1151904, "flush_reason": "Manual Compaction"}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850818073709, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1117811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82907, "largest_seqno": 83685, "table_properties": {"data_size": 1113742, "index_size": 1784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8905, "raw_average_key_size": 19, "raw_value_size": 1105644, "raw_average_value_size": 2382, "num_data_blocks": 78, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850757, "oldest_key_time": 1769850757, "file_creation_time": 1769850818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 9470 microseconds, and 4535 cpu microseconds.
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.073767) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1117811 bytes OK
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.073790) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.075967) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.075983) EVENT_LOG_v1 {"time_micros": 1769850818075978, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.076002) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1124916, prev total WAL file size 1124916, number of live WAL files 2.
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.076634) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353134' seq:72057594037927935, type:22 .. '6C6F676D0033373637' seq:0, type:0; will stop at (end)
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1091KB)], [191(10MB)]
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850818076666, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12587220, "oldest_snapshot_seqno": -1}
Jan 31 04:13:38 np0005603621 nova_compute[247399]: 2026-01-31 09:13:38.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 10833 keys, 12446494 bytes, temperature: kUnknown
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850818221940, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 12446494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12378610, "index_size": 39675, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 287499, "raw_average_key_size": 26, "raw_value_size": 12191756, "raw_average_value_size": 1125, "num_data_blocks": 1498, "num_entries": 10833, "num_filter_entries": 10833, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850818, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.222352) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 12446494 bytes
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.224471) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.6 rd, 85.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(22.4) write-amplify(11.1) OK, records in: 11362, records dropped: 529 output_compression: NoCompression
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.224506) EVENT_LOG_v1 {"time_micros": 1769850818224491, "job": 120, "event": "compaction_finished", "compaction_time_micros": 145375, "compaction_time_cpu_micros": 23353, "output_level": 6, "num_output_files": 1, "total_output_size": 12446494, "num_input_records": 11362, "num_output_records": 10833, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850818224975, "job": 120, "event": "table_file_deletion", "file_number": 193}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850818227026, "job": 120, "event": "table_file_deletion", "file_number": 191}
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.076493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.227132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.227138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.227141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.227144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:38 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:13:38.227148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:13:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:13:38
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'vms', 'default.rgw.control']
Jan 31 04:13:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:13:39 np0005603621 nova_compute[247399]: 2026-01-31 09:13:39.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:13:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 132 KiB/s rd, 3.3 KiB/s wr, 214 op/s
Jan 31 04:13:40 np0005603621 nova_compute[247399]: 2026-01-31 09:13:40.141 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850805.1386445, 8080c31c-59d3-417c-b3d5-642d18ab56b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:13:40 np0005603621 nova_compute[247399]: 2026-01-31 09:13:40.141 247403 INFO nova.compute.manager [-] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:13:40 np0005603621 nova_compute[247399]: 2026-01-31 09:13:40.183 247403 DEBUG nova.compute.manager [None req-71e600c7-17b9-4beb-87d2-b88b74567c1e - - - - - -] [instance: 8080c31c-59d3-417c-b3d5-642d18ab56b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:13:40 np0005603621 nova_compute[247399]: 2026-01-31 09:13:40.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:40 np0005603621 nova_compute[247399]: 2026-01-31 09:13:40.292 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:40.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 122 KiB/s rd, 1.2 KiB/s wr, 198 op/s
Jan 31 04:13:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:42 np0005603621 nova_compute[247399]: 2026-01-31 09:13:42.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 124 KiB/s rd, 1.2 KiB/s wr, 202 op/s
Jan 31 04:13:44 np0005603621 nova_compute[247399]: 2026-01-31 09:13:44.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:13:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:44.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:44.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:44 np0005603621 podman[407498]: 2026-01-31 09:13:44.891552489 +0000 UTC m=+0.049213194 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:13:44 np0005603621 podman[407499]: 2026-01-31 09:13:44.921527595 +0000 UTC m=+0.077908860 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:13:45 np0005603621 nova_compute[247399]: 2026-01-31 09:13:45.293 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 1.2 KiB/s wr, 192 op/s
Jan 31 04:13:46 np0005603621 nova_compute[247399]: 2026-01-31 09:13:46.285 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:46 np0005603621 nova_compute[247399]: 2026-01-31 09:13:46.334 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 99 KiB/s rd, 682 B/s wr, 164 op/s
Jan 31 04:13:47 np0005603621 nova_compute[247399]: 2026-01-31 09:13:47.380 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 682 B/s wr, 87 op/s
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:13:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:13:50 np0005603621 nova_compute[247399]: 2026-01-31 09:13:50.298 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:50.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:50.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 04:13:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:52.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:52 np0005603621 nova_compute[247399]: 2026-01-31 09:13:52.383 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 31 04:13:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:13:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:54.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:13:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:55 np0005603621 nova_compute[247399]: 2026-01-31 09:13:55.301 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 31 04:13:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:56.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 31 04:13:57 np0005603621 nova_compute[247399]: 2026-01-31 09:13:57.385 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:13:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:13:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:13:58.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:13:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:13:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:13:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:13:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:00 np0005603621 nova_compute[247399]: 2026-01-31 09:14:00.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:00.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:02 np0005603621 nova_compute[247399]: 2026-01-31 09:14:02.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:02.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:04.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:04.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:05 np0005603621 nova_compute[247399]: 2026-01-31 09:14:05.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:06.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:06.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:07 np0005603621 nova_compute[247399]: 2026-01-31 09:14:07.388 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:08.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:08.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:14:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:14:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:10 np0005603621 nova_compute[247399]: 2026-01-31 09:14:10.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:10.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:14:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1120ae7e-7bfb-4236-95d0-075ebed705d6 does not exist
Jan 31 04:14:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 105083df-539c-4271-a16a-964967867e11 does not exist
Jan 31 04:14:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev eeabc616-6749-431a-bf72-91fbcef15631 does not exist
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:14:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.248225725 +0000 UTC m=+0.037827765 container create 3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_elion, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:14:11 np0005603621 systemd[1]: Started libpod-conmon-3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f.scope.
Jan 31 04:14:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.320435114 +0000 UTC m=+0.110037164 container init 3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.325110451 +0000 UTC m=+0.114712491 container start 3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.231977532 +0000 UTC m=+0.021579572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.329389547 +0000 UTC m=+0.118991617 container attach 3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:14:11 np0005603621 jolly_elion[407920]: 167 167
Jan 31 04:14:11 np0005603621 systemd[1]: libpod-3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f.scope: Deactivated successfully.
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.332225606 +0000 UTC m=+0.121827646 container died 3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_elion, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 04:14:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:11 np0005603621 systemd[1]: var-lib-containers-storage-overlay-410cd8c540dc98d4dc4b65ca6e89b3575dd7e9bc30859b9231bd8a4c81a82952-merged.mount: Deactivated successfully.
Jan 31 04:14:11 np0005603621 podman[407905]: 2026-01-31 09:14:11.3665748 +0000 UTC m=+0.156176840 container remove 3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:14:11 np0005603621 systemd[1]: libpod-conmon-3118394abec16cfd307b26415367c29954d17d2f5a0fc8a6221ca8c7f104bf6f.scope: Deactivated successfully.
Jan 31 04:14:11 np0005603621 podman[407945]: 2026-01-31 09:14:11.472753231 +0000 UTC m=+0.035500061 container create 90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:14:11 np0005603621 systemd[1]: Started libpod-conmon-90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44.scope.
Jan 31 04:14:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:14:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5023121c2d77e359acfa388489d1cc7014c7b69a6473c777423c77b7e49d6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5023121c2d77e359acfa388489d1cc7014c7b69a6473c777423c77b7e49d6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5023121c2d77e359acfa388489d1cc7014c7b69a6473c777423c77b7e49d6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5023121c2d77e359acfa388489d1cc7014c7b69a6473c777423c77b7e49d6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e5023121c2d77e359acfa388489d1cc7014c7b69a6473c777423c77b7e49d6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:11 np0005603621 podman[407945]: 2026-01-31 09:14:11.457397807 +0000 UTC m=+0.020144657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:14:11 np0005603621 podman[407945]: 2026-01-31 09:14:11.573343367 +0000 UTC m=+0.136090197 container init 90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcnulty, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:14:11 np0005603621 podman[407945]: 2026-01-31 09:14:11.580546293 +0000 UTC m=+0.143293163 container start 90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 04:14:11 np0005603621 podman[407945]: 2026-01-31 09:14:11.585102627 +0000 UTC m=+0.147849497 container attach 90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcnulty, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:14:11 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:14:12 np0005603621 funny_mcnulty[407962]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:14:12 np0005603621 funny_mcnulty[407962]: --> relative data size: 1.0
Jan 31 04:14:12 np0005603621 funny_mcnulty[407962]: --> All data devices are unavailable
Jan 31 04:14:12 np0005603621 systemd[1]: libpod-90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44.scope: Deactivated successfully.
Jan 31 04:14:12 np0005603621 podman[407945]: 2026-01-31 09:14:12.330015987 +0000 UTC m=+0.892762817 container died 90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 31 04:14:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1e5023121c2d77e359acfa388489d1cc7014c7b69a6473c777423c77b7e49d6a-merged.mount: Deactivated successfully.
Jan 31 04:14:12 np0005603621 podman[407945]: 2026-01-31 09:14:12.374837512 +0000 UTC m=+0.937584342 container remove 90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:14:12 np0005603621 systemd[1]: libpod-conmon-90a29b897cc7238540d9f7ca7aea37b1f8a47018c94c3b2f1b463c22dd21aa44.scope: Deactivated successfully.
Jan 31 04:14:12 np0005603621 nova_compute[247399]: 2026-01-31 09:14:12.399 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:12.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:12.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.807017122 +0000 UTC m=+0.034802830 container create ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poincare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:14:12 np0005603621 systemd[1]: Started libpod-conmon-ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8.scope.
Jan 31 04:14:12 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.870475505 +0000 UTC m=+0.098261233 container init ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.87635895 +0000 UTC m=+0.104144678 container start ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.879580442 +0000 UTC m=+0.107366150 container attach ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poincare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:14:12 np0005603621 determined_poincare[408144]: 167 167
Jan 31 04:14:12 np0005603621 systemd[1]: libpod-ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8.scope: Deactivated successfully.
Jan 31 04:14:12 np0005603621 conmon[408144]: conmon ed23a9a3c814dd487564 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8.scope/container/memory.events
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.88172791 +0000 UTC m=+0.109513668 container died ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.79077739 +0000 UTC m=+0.018563138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:14:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-934a9306462944014ffd3cd4b7a437928f841346cadd070bd71533a6ec1f4ab7-merged.mount: Deactivated successfully.
Jan 31 04:14:12 np0005603621 podman[408128]: 2026-01-31 09:14:12.917403265 +0000 UTC m=+0.145188973 container remove ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_poincare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:14:12 np0005603621 systemd[1]: libpod-conmon-ed23a9a3c814dd4875649d149cb7940266d56c4ee9322315d79b3e4983e821e8.scope: Deactivated successfully.
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.022701249 +0000 UTC m=+0.032453886 container create aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:14:13 np0005603621 systemd[1]: Started libpod-conmon-aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76.scope.
Jan 31 04:14:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:14:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6d2b36155c4c06c2a7adf90c9e7c02803e3ec6616f767b7e97c00f0b9be13c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6d2b36155c4c06c2a7adf90c9e7c02803e3ec6616f767b7e97c00f0b9be13c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6d2b36155c4c06c2a7adf90c9e7c02803e3ec6616f767b7e97c00f0b9be13c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6d2b36155c4c06c2a7adf90c9e7c02803e3ec6616f767b7e97c00f0b9be13c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.094612898 +0000 UTC m=+0.104365535 container init aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.099217483 +0000 UTC m=+0.108970120 container start aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.101796186 +0000 UTC m=+0.111548843 container attach aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.008957205 +0000 UTC m=+0.018709862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:14:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:13 np0005603621 strange_tharp[408183]: {
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:    "0": [
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:        {
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "devices": [
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "/dev/loop3"
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            ],
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "lv_name": "ceph_lv0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "lv_size": "7511998464",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "name": "ceph_lv0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "tags": {
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.cluster_name": "ceph",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.crush_device_class": "",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.encrypted": "0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.osd_id": "0",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.type": "block",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:                "ceph.vdo": "0"
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            },
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "type": "block",
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:            "vg_name": "ceph_vg0"
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:        }
Jan 31 04:14:13 np0005603621 strange_tharp[408183]:    ]
Jan 31 04:14:13 np0005603621 strange_tharp[408183]: }
Jan 31 04:14:13 np0005603621 systemd[1]: libpod-aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76.scope: Deactivated successfully.
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.792436662 +0000 UTC m=+0.802189299 container died aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:14:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ed6d2b36155c4c06c2a7adf90c9e7c02803e3ec6616f767b7e97c00f0b9be13c-merged.mount: Deactivated successfully.
Jan 31 04:14:13 np0005603621 podman[408167]: 2026-01-31 09:14:13.839053513 +0000 UTC m=+0.848806150 container remove aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:14:13 np0005603621 systemd[1]: libpod-conmon-aab278e8d74464977599e862c2a36315ceef9eb1e1cc8361167fdfa5a9c51d76.scope: Deactivated successfully.
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.280898128 +0000 UTC m=+0.030819504 container create d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pare, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:14:14 np0005603621 systemd[1]: Started libpod-conmon-d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5.scope.
Jan 31 04:14:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.347096028 +0000 UTC m=+0.097017434 container init d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.351674742 +0000 UTC m=+0.101596118 container start d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pare, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:14:14 np0005603621 stupefied_pare[408363]: 167 167
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.354918545 +0000 UTC m=+0.104839951 container attach d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 31 04:14:14 np0005603621 systemd[1]: libpod-d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5.scope: Deactivated successfully.
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.355646748 +0000 UTC m=+0.105568124 container died d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.267904668 +0000 UTC m=+0.017826064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:14:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6767de7a86a9c75c3fa0b00cfc905f22dd36cc1ec4613a0d129d43050bac5317-merged.mount: Deactivated successfully.
Jan 31 04:14:14 np0005603621 podman[408347]: 2026-01-31 09:14:14.392188811 +0000 UTC m=+0.142110177 container remove d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_pare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:14:14 np0005603621 systemd[1]: libpod-conmon-d0d30a5af798f4c8c9f5586c94f51542c7a7798bb62845392dca9f8f39d75ef5.scope: Deactivated successfully.
Jan 31 04:14:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:14.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:14 np0005603621 podman[408386]: 2026-01-31 09:14:14.502252174 +0000 UTC m=+0.032632261 container create 62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_engelbart, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:14:14 np0005603621 systemd[1]: Started libpod-conmon-62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1.scope.
Jan 31 04:14:14 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b83c350880ad5d12ee9e343fc4f3bb8415fce7dce1e346c56d5b3731957185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b83c350880ad5d12ee9e343fc4f3bb8415fce7dce1e346c56d5b3731957185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b83c350880ad5d12ee9e343fc4f3bb8415fce7dce1e346c56d5b3731957185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:14 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b83c350880ad5d12ee9e343fc4f3bb8415fce7dce1e346c56d5b3731957185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:14:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:14.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:14 np0005603621 podman[408386]: 2026-01-31 09:14:14.558586292 +0000 UTC m=+0.088966389 container init 62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:14:14 np0005603621 podman[408386]: 2026-01-31 09:14:14.564455037 +0000 UTC m=+0.094835124 container start 62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:14:14 np0005603621 podman[408386]: 2026-01-31 09:14:14.567800243 +0000 UTC m=+0.098180330 container attach 62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:14:14 np0005603621 podman[408386]: 2026-01-31 09:14:14.489361808 +0000 UTC m=+0.019741895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]: {
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:        "osd_id": 0,
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:        "type": "bluestore"
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]:    }
Jan 31 04:14:15 np0005603621 intelligent_engelbart[408403]: }
Jan 31 04:14:15 np0005603621 systemd[1]: libpod-62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1.scope: Deactivated successfully.
Jan 31 04:14:15 np0005603621 conmon[408403]: conmon 62e6cf75d776e8bac72c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1.scope/container/memory.events
Jan 31 04:14:15 np0005603621 nova_compute[247399]: 2026-01-31 09:14:15.312 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:15 np0005603621 podman[408424]: 2026-01-31 09:14:15.34081549 +0000 UTC m=+0.020740756 container died 62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:14:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-48b83c350880ad5d12ee9e343fc4f3bb8415fce7dce1e346c56d5b3731957185-merged.mount: Deactivated successfully.
Jan 31 04:14:15 np0005603621 podman[408424]: 2026-01-31 09:14:15.388293459 +0000 UTC m=+0.068218705 container remove 62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 04:14:15 np0005603621 systemd[1]: libpod-conmon-62e6cf75d776e8bac72ca4a5ad8c021bc35e9b02e7aca0ee79d7f8a9c5364cd1.scope: Deactivated successfully.
Jan 31 04:14:15 np0005603621 podman[408426]: 2026-01-31 09:14:15.410601543 +0000 UTC m=+0.073138210 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Jan 31 04:14:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:14:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:14:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:14:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:14:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cb103602-5df3-4fb8-a30d-6301149b1fc5 does not exist
Jan 31 04:14:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fdabd135-a4ae-4fae-ab7c-51b832ae419a does not exist
Jan 31 04:14:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ec447be6-a0be-410a-b74c-c94527315611 does not exist
Jan 31 04:14:15 np0005603621 podman[408433]: 2026-01-31 09:14:15.451846324 +0000 UTC m=+0.114254447 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:14:15 np0005603621 nova_compute[247399]: 2026-01-31 09:14:15.947 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:15.947 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:14:15 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:15.948 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:14:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:16.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:14:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:14:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:16.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:17 np0005603621 nova_compute[247399]: 2026-01-31 09:14:17.401 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:18.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:20 np0005603621 nova_compute[247399]: 2026-01-31 09:14:20.314 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:22 np0005603621 nova_compute[247399]: 2026-01-31 09:14:22.403 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:14:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:14:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:22.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:24 np0005603621 nova_compute[247399]: 2026-01-31 09:14:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:24.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:25 np0005603621 nova_compute[247399]: 2026-01-31 09:14:25.317 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:25 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:25.950 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:14:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:26.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:26.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:27 np0005603621 nova_compute[247399]: 2026-01-31 09:14:27.405 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:30 np0005603621 nova_compute[247399]: 2026-01-31 09:14:30.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:30.555 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:14:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:30.555 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:14:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:30.556 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:14:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:31 np0005603621 nova_compute[247399]: 2026-01-31 09:14:31.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:31 np0005603621 nova_compute[247399]: 2026-01-31 09:14:31.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:32 np0005603621 nova_compute[247399]: 2026-01-31 09:14:32.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:34 np0005603621 nova_compute[247399]: 2026-01-31 09:14:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:34 np0005603621 nova_compute[247399]: 2026-01-31 09:14:34.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:14:34 np0005603621 nova_compute[247399]: 2026-01-31 09:14:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:34.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:35 np0005603621 nova_compute[247399]: 2026-01-31 09:14:35.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:36.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:37 np0005603621 nova_compute[247399]: 2026-01-31 09:14:37.408 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:37 np0005603621 nova_compute[247399]: 2026-01-31 09:14:37.928 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:14:37 np0005603621 nova_compute[247399]: 2026-01-31 09:14:37.928 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:14:37 np0005603621 nova_compute[247399]: 2026-01-31 09:14:37.928 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:14:37 np0005603621 nova_compute[247399]: 2026-01-31 09:14:37.929 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:14:37 np0005603621 nova_compute[247399]: 2026-01-31 09:14:37.929 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:14:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:14:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431896015' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.382 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:14:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.513 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.514 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4110MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.514 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.514 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:14:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:38.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.626 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.627 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:14:38 np0005603621 nova_compute[247399]: 2026-01-31 09:14:38.649 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:14:38
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', 'volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log']
Jan 31 04:14:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:14:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:14:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3572322961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:14:39 np0005603621 nova_compute[247399]: 2026-01-31 09:14:39.080 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:14:39 np0005603621 nova_compute[247399]: 2026-01-31 09:14:39.085 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:14:39 np0005603621 nova_compute[247399]: 2026-01-31 09:14:39.119 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:14:39 np0005603621 nova_compute[247399]: 2026-01-31 09:14:39.175 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:14:39 np0005603621 nova_compute[247399]: 2026-01-31 09:14:39.176 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:14:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.176 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.177 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.177 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.239 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.239 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.239 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:40 np0005603621 nova_compute[247399]: 2026-01-31 09:14:40.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:40.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:40.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:41 np0005603621 nova_compute[247399]: 2026-01-31 09:14:41.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:14:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:42 np0005603621 nova_compute[247399]: 2026-01-31 09:14:42.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:42.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:42.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:44.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:44.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:45 np0005603621 nova_compute[247399]: 2026-01-31 09:14:45.328 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:45 np0005603621 podman[408691]: 2026-01-31 09:14:45.509650255 +0000 UTC m=+0.068747361 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:14:45 np0005603621 podman[408710]: 2026-01-31 09:14:45.571656012 +0000 UTC m=+0.072384646 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:14:46 np0005603621 ovn_controller[149152]: 2026-01-31T09:14:46Z|00915|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 31 04:14:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:46.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:46.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:47 np0005603621 nova_compute[247399]: 2026-01-31 09:14:47.412 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:48.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:14:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:14:50 np0005603621 nova_compute[247399]: 2026-01-31 09:14:50.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:50.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:14:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:50.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:14:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:52 np0005603621 nova_compute[247399]: 2026-01-31 09:14:52.413 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:52.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:52.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:54.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:55 np0005603621 nova_compute[247399]: 2026-01-31 09:14:55.333 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:55.676 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:14:55 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:14:55.677 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:14:55 np0005603621 nova_compute[247399]: 2026-01-31 09:14:55.677 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:56.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:14:57 np0005603621 nova_compute[247399]: 2026-01-31 09:14:57.414 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:14:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:14:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:14:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:14:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:14:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:14:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:14:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:00 np0005603621 nova_compute[247399]: 2026-01-31 09:15:00.336 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:15:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:00.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:15:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:00.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:00.679 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:02 np0005603621 nova_compute[247399]: 2026-01-31 09:15:02.416 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:02.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:05 np0005603621 nova_compute[247399]: 2026-01-31 09:15:05.339 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:07 np0005603621 nova_compute[247399]: 2026-01-31 09:15:07.419 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:15:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:15:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:08.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:10 np0005603621 nova_compute[247399]: 2026-01-31 09:15:10.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:10.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:12 np0005603621 nova_compute[247399]: 2026-01-31 09:15:12.421 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:12.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:12.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:14.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:15 np0005603621 nova_compute[247399]: 2026-01-31 09:15:15.391 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:15 np0005603621 podman[408825]: 2026-01-31 09:15:15.879926415 +0000 UTC m=+0.040203219 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 04:15:15 np0005603621 podman[408826]: 2026-01-31 09:15:15.907675501 +0000 UTC m=+0.065642692 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Jan 31 04:15:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:15:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 876a5493-1ffa-4118-86e0-45956591a796 does not exist
Jan 31 04:15:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f9ff9326-3a17-4b18-88b0-8c8881e7d65f does not exist
Jan 31 04:15:16 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c74aa388-f720-41fe-a2b3-a406bd107893 does not exist
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:15:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:15:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:15:17 np0005603621 podman[409119]: 2026-01-31 09:15:17.056867531 +0000 UTC m=+0.033315682 container create 50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:15:17 np0005603621 systemd[1]: Started libpod-conmon-50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3.scope.
Jan 31 04:15:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:17 np0005603621 podman[409119]: 2026-01-31 09:15:17.125947881 +0000 UTC m=+0.102396092 container init 50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 04:15:17 np0005603621 podman[409119]: 2026-01-31 09:15:17.132815028 +0000 UTC m=+0.109263189 container start 50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:15:17 np0005603621 podman[409119]: 2026-01-31 09:15:17.041604989 +0000 UTC m=+0.018053160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:15:17 np0005603621 systemd[1]: libpod-50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3.scope: Deactivated successfully.
Jan 31 04:15:17 np0005603621 podman[409119]: 2026-01-31 09:15:17.13858664 +0000 UTC m=+0.115034851 container attach 50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:15:17 np0005603621 vigilant_khayyam[409135]: 167 167
Jan 31 04:15:17 np0005603621 conmon[409135]: conmon 50f5b6dedcbb0750984a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3.scope/container/memory.events
Jan 31 04:15:17 np0005603621 podman[409140]: 2026-01-31 09:15:17.193867125 +0000 UTC m=+0.036394760 container died 50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:15:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0ed4a22fb19e83a36264a99b2c1b07ab6417760c2e6ec61a7af388bd3a38b8f0-merged.mount: Deactivated successfully.
Jan 31 04:15:17 np0005603621 podman[409140]: 2026-01-31 09:15:17.245286788 +0000 UTC m=+0.087814423 container remove 50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_khayyam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 04:15:17 np0005603621 systemd[1]: libpod-conmon-50f5b6dedcbb0750984aaecb5d422d938bc0f42a0240014092bb3f0634daaae3.scope: Deactivated successfully.
Jan 31 04:15:17 np0005603621 podman[409161]: 2026-01-31 09:15:17.40315633 +0000 UTC m=+0.040904762 container create d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:15:17 np0005603621 nova_compute[247399]: 2026-01-31 09:15:17.423 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:17 np0005603621 systemd[1]: Started libpod-conmon-d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61.scope.
Jan 31 04:15:17 np0005603621 podman[409161]: 2026-01-31 09:15:17.380764313 +0000 UTC m=+0.018512755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:15:17 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c6673368427ee2a8d22aa95407045f02b377850ceb364cfd2c32356cd401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c6673368427ee2a8d22aa95407045f02b377850ceb364cfd2c32356cd401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c6673368427ee2a8d22aa95407045f02b377850ceb364cfd2c32356cd401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c6673368427ee2a8d22aa95407045f02b377850ceb364cfd2c32356cd401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:17 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c6673368427ee2a8d22aa95407045f02b377850ceb364cfd2c32356cd401/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:17 np0005603621 podman[409161]: 2026-01-31 09:15:17.511610764 +0000 UTC m=+0.149359296 container init d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:15:17 np0005603621 podman[409161]: 2026-01-31 09:15:17.520000139 +0000 UTC m=+0.157748561 container start d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:15:17 np0005603621 podman[409161]: 2026-01-31 09:15:17.522985422 +0000 UTC m=+0.160733844 container attach d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:15:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:18 np0005603621 tender_khayyam[409177]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:15:18 np0005603621 tender_khayyam[409177]: --> relative data size: 1.0
Jan 31 04:15:18 np0005603621 tender_khayyam[409177]: --> All data devices are unavailable
Jan 31 04:15:18 np0005603621 systemd[1]: libpod-d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61.scope: Deactivated successfully.
Jan 31 04:15:18 np0005603621 podman[409161]: 2026-01-31 09:15:18.242396048 +0000 UTC m=+0.880144470 container died d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:15:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4f82c6673368427ee2a8d22aa95407045f02b377850ceb364cfd2c32356cd401-merged.mount: Deactivated successfully.
Jan 31 04:15:18 np0005603621 podman[409161]: 2026-01-31 09:15:18.300574274 +0000 UTC m=+0.938322696 container remove d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:15:18 np0005603621 systemd[1]: libpod-conmon-d88384af8abbc20f3f7362543754c9ba694a2cf4febb776a7823e9ce9237da61.scope: Deactivated successfully.
Jan 31 04:15:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:18.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:18 np0005603621 podman[409346]: 2026-01-31 09:15:18.898708592 +0000 UTC m=+0.041382667 container create 5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:15:18 np0005603621 systemd[1]: Started libpod-conmon-5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f.scope.
Jan 31 04:15:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:18 np0005603621 podman[409346]: 2026-01-31 09:15:18.881190519 +0000 UTC m=+0.023864594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:15:18 np0005603621 podman[409346]: 2026-01-31 09:15:18.980033458 +0000 UTC m=+0.122707543 container init 5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:15:18 np0005603621 podman[409346]: 2026-01-31 09:15:18.98609244 +0000 UTC m=+0.128766495 container start 5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:15:18 np0005603621 podman[409346]: 2026-01-31 09:15:18.990490958 +0000 UTC m=+0.133165023 container attach 5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_austin, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 04:15:18 np0005603621 epic_austin[409362]: 167 167
Jan 31 04:15:18 np0005603621 systemd[1]: libpod-5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f.scope: Deactivated successfully.
Jan 31 04:15:18 np0005603621 podman[409346]: 2026-01-31 09:15:18.992486872 +0000 UTC m=+0.135160937 container died 5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:15:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a34779dfd226413b15e97e8b5a33be38d303c517fb8fea781d68d875b8f68606-merged.mount: Deactivated successfully.
Jan 31 04:15:19 np0005603621 podman[409346]: 2026-01-31 09:15:19.059531818 +0000 UTC m=+0.202205893 container remove 5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_austin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:15:19 np0005603621 systemd[1]: libpod-conmon-5bc8b7f7274fa20c985dbd8bbedfcc6c378d9cf667118970f68e006121c9052f.scope: Deactivated successfully.
Jan 31 04:15:19 np0005603621 podman[409386]: 2026-01-31 09:15:19.190795941 +0000 UTC m=+0.038916619 container create 4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 04:15:19 np0005603621 systemd[1]: Started libpod-conmon-4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153.scope.
Jan 31 04:15:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259a2fa4b1877795b42cde98fc2ca67a98103db4aeb373b5dda8c1cdaf63b7f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259a2fa4b1877795b42cde98fc2ca67a98103db4aeb373b5dda8c1cdaf63b7f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259a2fa4b1877795b42cde98fc2ca67a98103db4aeb373b5dda8c1cdaf63b7f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259a2fa4b1877795b42cde98fc2ca67a98103db4aeb373b5dda8c1cdaf63b7f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:19 np0005603621 podman[409386]: 2026-01-31 09:15:19.171916024 +0000 UTC m=+0.020036732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:15:19 np0005603621 podman[409386]: 2026-01-31 09:15:19.280047417 +0000 UTC m=+0.128168115 container init 4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:15:19 np0005603621 podman[409386]: 2026-01-31 09:15:19.285397416 +0000 UTC m=+0.133518084 container start 4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:15:19 np0005603621 podman[409386]: 2026-01-31 09:15:19.290065763 +0000 UTC m=+0.138186441 container attach 4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.511 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.513 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.568 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.699 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.700 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.708 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.709 247403 INFO nova.compute.claims [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:15:19 np0005603621 nova_compute[247399]: 2026-01-31 09:15:19.956 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:20 np0005603621 happy_keller[409403]: {
Jan 31 04:15:20 np0005603621 happy_keller[409403]:    "0": [
Jan 31 04:15:20 np0005603621 happy_keller[409403]:        {
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "devices": [
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "/dev/loop3"
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            ],
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "lv_name": "ceph_lv0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "lv_size": "7511998464",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "name": "ceph_lv0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "tags": {
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.cluster_name": "ceph",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.crush_device_class": "",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.encrypted": "0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.osd_id": "0",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.type": "block",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:                "ceph.vdo": "0"
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            },
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "type": "block",
Jan 31 04:15:20 np0005603621 happy_keller[409403]:            "vg_name": "ceph_vg0"
Jan 31 04:15:20 np0005603621 happy_keller[409403]:        }
Jan 31 04:15:20 np0005603621 happy_keller[409403]:    ]
Jan 31 04:15:20 np0005603621 happy_keller[409403]: }
Jan 31 04:15:20 np0005603621 systemd[1]: libpod-4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153.scope: Deactivated successfully.
Jan 31 04:15:20 np0005603621 podman[409386]: 2026-01-31 09:15:20.073104328 +0000 UTC m=+0.921225006 container died 4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:15:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-259a2fa4b1877795b42cde98fc2ca67a98103db4aeb373b5dda8c1cdaf63b7f2-merged.mount: Deactivated successfully.
Jan 31 04:15:20 np0005603621 podman[409386]: 2026-01-31 09:15:20.161866289 +0000 UTC m=+1.009986977 container remove 4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:15:20 np0005603621 systemd[1]: libpod-conmon-4217646006af8cb8a859b98dc9b3e9063ba121c598ed8f0b1b0bea03b2e1c153.scope: Deactivated successfully.
Jan 31 04:15:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:15:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302451434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.409 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.415 247403 DEBUG nova.compute.provider_tree [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.456 247403 DEBUG nova.scheduler.client.report [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:15:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.498 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.499 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.577 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.577 247403 DEBUG nova.network.neutron [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.599 247403 INFO nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.625 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:15:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:20.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.667444515 +0000 UTC m=+0.036756291 container create 3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 04:15:20 np0005603621 systemd[1]: Started libpod-conmon-3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40.scope.
Jan 31 04:15:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.732843629 +0000 UTC m=+0.102155425 container init 3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.738752496 +0000 UTC m=+0.108064272 container start 3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:15:20 np0005603621 dreamy_feynman[409607]: 167 167
Jan 31 04:15:20 np0005603621 systemd[1]: libpod-3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40.scope: Deactivated successfully.
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.649267012 +0000 UTC m=+0.018578818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.750367073 +0000 UTC m=+0.119678889 container attach 3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_feynman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.750836927 +0000 UTC m=+0.120148703 container died 3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.759 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.761 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.761 247403 INFO nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Creating image(s)#033[00m
Jan 31 04:15:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ad517508ed17ca8cec02e7116621acf9bc87a614709a07665622ef0b1e7574f6-merged.mount: Deactivated successfully.
Jan 31 04:15:20 np0005603621 podman[409591]: 2026-01-31 09:15:20.795304531 +0000 UTC m=+0.164616307 container remove 3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.795 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:20 np0005603621 systemd[1]: libpod-conmon-3178e949ccbadcdaddbdfc52b8aec2979997741c3083d6c3db39193126d7dd40.scope: Deactivated successfully.
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.825 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.856 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.859 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.912 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.913 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.913 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.914 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:20 np0005603621 podman[409686]: 2026-01-31 09:15:20.919707757 +0000 UTC m=+0.035418319 container create 46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.943 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:20 np0005603621 nova_compute[247399]: 2026-01-31 09:15:20.947 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 bb21d70e-861d-467a-a60d-de4487b9bb2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:20 np0005603621 systemd[1]: Started libpod-conmon-46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263.scope.
Jan 31 04:15:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab36159238a0e5527d397969d49c3252d2e2796dae736f4b3a98106ec72ac5ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab36159238a0e5527d397969d49c3252d2e2796dae736f4b3a98106ec72ac5ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab36159238a0e5527d397969d49c3252d2e2796dae736f4b3a98106ec72ac5ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab36159238a0e5527d397969d49c3252d2e2796dae736f4b3a98106ec72ac5ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:21 np0005603621 podman[409686]: 2026-01-31 09:15:20.903686661 +0000 UTC m=+0.019397253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:15:21 np0005603621 podman[409686]: 2026-01-31 09:15:21.005159014 +0000 UTC m=+0.120869596 container init 46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:15:21 np0005603621 podman[409686]: 2026-01-31 09:15:21.010333387 +0000 UTC m=+0.126043949 container start 46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:15:21 np0005603621 podman[409686]: 2026-01-31 09:15:21.022554983 +0000 UTC m=+0.138265565 container attach 46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.355 247403 DEBUG nova.policy [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'da3c9cc3dae946679d48679d183c0fc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be71ed72a4a048229c95e8ebbcc2d527', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.368 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 bb21d70e-861d-467a-a60d-de4487b9bb2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.437 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] resizing rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:15:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.558 247403 DEBUG nova.objects.instance [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lazy-loading 'migration_context' on Instance uuid bb21d70e-861d-467a-a60d-de4487b9bb2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]: {
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:        "osd_id": 0,
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:        "type": "bluestore"
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]:    }
Jan 31 04:15:21 np0005603621 flamboyant_feynman[409723]: }
Jan 31 04:15:21 np0005603621 systemd[1]: libpod-46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263.scope: Deactivated successfully.
Jan 31 04:15:21 np0005603621 podman[409834]: 2026-01-31 09:15:21.80854251 +0000 UTC m=+0.019366042 container died 46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.822 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.822 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Ensure instance console log exists: /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.823 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.823 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:21 np0005603621 nova_compute[247399]: 2026-01-31 09:15:21.823 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ab36159238a0e5527d397969d49c3252d2e2796dae736f4b3a98106ec72ac5ff-merged.mount: Deactivated successfully.
Jan 31 04:15:21 np0005603621 podman[409834]: 2026-01-31 09:15:21.858580589 +0000 UTC m=+0.069404111 container remove 46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:15:21 np0005603621 systemd[1]: libpod-conmon-46ace4b9550c49d691744a369abbb39bc9e1daa999cb1e3e132a269b80d96263.scope: Deactivated successfully.
Jan 31 04:15:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:15:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:15:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:15:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:15:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e8c98854-8c8d-4161-b404-2b303220de82 does not exist
Jan 31 04:15:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 94c20d43-bbd1-4d29-9962-8399ef778511 does not exist
Jan 31 04:15:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 85018e90-6454-47d8-821a-ee6b6b8d2599 does not exist
Jan 31 04:15:22 np0005603621 nova_compute[247399]: 2026-01-31 09:15:22.426 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:15:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:15:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:22.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:15:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:15:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:23 np0005603621 nova_compute[247399]: 2026-01-31 09:15:23.355 247403 DEBUG nova.network.neutron [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Successfully created port: fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:15:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 141 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 915 KiB/s wr, 14 op/s
Jan 31 04:15:24 np0005603621 nova_compute[247399]: 2026-01-31 09:15:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:24.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:25 np0005603621 nova_compute[247399]: 2026-01-31 09:15:25.395 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.300 247403 DEBUG nova.network.neutron [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Successfully updated port: fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:15:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:26.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.557 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.557 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquired lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.557 247403 DEBUG nova.network.neutron [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:15:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:26.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.763 247403 DEBUG nova.compute.manager [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-changed-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.763 247403 DEBUG nova.compute.manager [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Refreshing instance network info cache due to event network-changed-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:15:26 np0005603621 nova_compute[247399]: 2026-01-31 09:15:26.763 247403 DEBUG oslo_concurrency.lockutils [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:15:27 np0005603621 nova_compute[247399]: 2026-01-31 09:15:27.298 247403 DEBUG nova.network.neutron [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:15:27 np0005603621 nova_compute[247399]: 2026-01-31 09:15:27.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:15:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.111 247403 DEBUG nova.network.neutron [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updating instance_info_cache with network_info: [{"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.176 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Releasing lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.177 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Instance network_info: |[{"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.177 247403 DEBUG oslo_concurrency.lockutils [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.177 247403 DEBUG nova.network.neutron [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Refreshing network info cache for port fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.180 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Start _get_guest_xml network_info=[{"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.183 247403 WARNING nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.188 247403 DEBUG nova.virt.libvirt.host [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.188 247403 DEBUG nova.virt.libvirt.host [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.191 247403 DEBUG nova.virt.libvirt.host [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.192 247403 DEBUG nova.virt.libvirt.host [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.193 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.193 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.194 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.194 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.194 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.194 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.195 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.195 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.195 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.195 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.195 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.196 247403 DEBUG nova.virt.hardware [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.198 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:15:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915057686' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.592 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:28.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.637 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:28 np0005603621 nova_compute[247399]: 2026-01-31 09:15:28.643 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:15:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/260395166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.064 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.066 247403 DEBUG nova.virt.libvirt.vif [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1254948873',display_name='tempest-TestServerBasicOps-server-1254948873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1254948873',id=222,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLQBgL1dCKTXKzMEuZnhDHhRGcyZN6DLn5upjN2FPRbtUMrW2HJvOPULSU3gw4WnbollQP/DiqrL2N0x9TS3Ak6h7/Z4cQcqh+bYz6cnCC71ni8FP/AHN7/tNCmZHXsJ+Q==',key_name='tempest-TestServerBasicOps-2010811091',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be71ed72a4a048229c95e8ebbcc2d527',ramdisk_id='',reservation_id='r-vui362za',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1851245477',owner_user_name='tempest-TestServerBasicOps-1851245477-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:15:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='da3c9cc3dae946679d48679d183c0fc7',uuid=bb21d70e-861d-467a-a60d-de4487b9bb2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.067 247403 DEBUG nova.network.os_vif_util [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Converting VIF {"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.067 247403 DEBUG nova.network.os_vif_util [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.069 247403 DEBUG nova.objects.instance [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lazy-loading 'pci_devices' on Instance uuid bb21d70e-861d-467a-a60d-de4487b9bb2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.087 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <uuid>bb21d70e-861d-467a-a60d-de4487b9bb2a</uuid>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <name>instance-000000de</name>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestServerBasicOps-server-1254948873</nova:name>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:15:28</nova:creationTime>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:user uuid="da3c9cc3dae946679d48679d183c0fc7">tempest-TestServerBasicOps-1851245477-project-member</nova:user>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:project uuid="be71ed72a4a048229c95e8ebbcc2d527">tempest-TestServerBasicOps-1851245477</nova:project>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <nova:port uuid="fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <entry name="serial">bb21d70e-861d-467a-a60d-de4487b9bb2a</entry>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <entry name="uuid">bb21d70e-861d-467a-a60d-de4487b9bb2a</entry>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/bb21d70e-861d-467a-a60d-de4487b9bb2a_disk">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/bb21d70e-861d-467a-a60d-de4487b9bb2a_disk.config">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:0e:d0:1f"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <target dev="tapfb4a7c4b-3f"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/console.log" append="off"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:15:29 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:15:29 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:15:29 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:15:29 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.089 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Preparing to wait for external event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.089 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.089 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.089 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.090 247403 DEBUG nova.virt.libvirt.vif [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1254948873',display_name='tempest-TestServerBasicOps-server-1254948873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1254948873',id=222,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLQBgL1dCKTXKzMEuZnhDHhRGcyZN6DLn5upjN2FPRbtUMrW2HJvOPULSU3gw4WnbollQP/DiqrL2N0x9TS3Ak6h7/Z4cQcqh+bYz6cnCC71ni8FP/AHN7/tNCmZHXsJ+Q==',key_name='tempest-TestServerBasicOps-2010811091',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be71ed72a4a048229c95e8ebbcc2d527',ramdisk_id='',reservation_id='r-vui362za',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1851245477',owner_user_name='tempest-TestServerBasicOps-1851245477-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:15:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='da3c9cc3dae946679d48679d183c0fc7',uuid=bb21d70e-861d-467a-a60d-de4487b9bb2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.090 247403 DEBUG nova.network.os_vif_util [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Converting VIF {"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.091 247403 DEBUG nova.network.os_vif_util [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.091 247403 DEBUG os_vif [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.092 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.092 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.092 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.095 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.095 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb4a7c4b-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.096 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb4a7c4b-3f, col_values=(('external_ids', {'iface-id': 'fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:d0:1f', 'vm-uuid': 'bb21d70e-861d-467a-a60d-de4487b9bb2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 NetworkManager[49013]: <info>  [1769850929.0992] manager: (tapfb4a7c4b-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.100 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.104 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.105 247403 INFO os_vif [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f')#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.149 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.149 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.149 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] No VIF found with MAC fa:16:3e:0e:d0:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.150 247403 INFO nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Using config drive#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.175 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.512 247403 INFO nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Creating config drive at /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/disk.config#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.517 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdgk42mxh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.645 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpdgk42mxh" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.683 247403 DEBUG nova.storage.rbd_utils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] rbd image bb21d70e-861d-467a-a60d-de4487b9bb2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.687 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/disk.config bb21d70e-861d-467a-a60d-de4487b9bb2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.832 247403 DEBUG oslo_concurrency.processutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/disk.config bb21d70e-861d-467a-a60d-de4487b9bb2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.833 247403 INFO nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Deleting local config drive /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a/disk.config because it was imported into RBD.#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.857 247403 DEBUG nova.network.neutron [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updated VIF entry in instance network info cache for port fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.857 247403 DEBUG nova.network.neutron [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updating instance_info_cache with network_info: [{"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:15:29 np0005603621 kernel: tapfb4a7c4b-3f: entered promiscuous mode
Jan 31 04:15:29 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:29Z|00916|binding|INFO|Claiming lport fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb for this chassis.
Jan 31 04:15:29 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:29Z|00917|binding|INFO|fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb: Claiming fa:16:3e:0e:d0:1f 10.100.0.3
Jan 31 04:15:29 np0005603621 NetworkManager[49013]: <info>  [1769850929.8768] manager: (tapfb4a7c4b-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.882 247403 DEBUG oslo_concurrency.lockutils [req-496787cd-fd04-42bb-918f-c4d674c576d3 req-e5a5f861-143d-4260-89dc-af8428f29dca fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.884 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.893 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:d0:1f 10.100.0.3'], port_security=['fa:16:3e:0e:d0:1f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bb21d70e-861d-467a-a60d-de4487b9bb2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-408ee2a7-16b5-490a-949b-0003daf995e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be71ed72a4a048229c95e8ebbcc2d527', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29239e6a-5a4f-4c3c-8430-6c2969b86c4c d3279d3d-3677-43dd-ae1e-8954ed9fcba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e9a10c-f013-4eb4-9da8-19a589526e6d, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.894 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb in datapath 408ee2a7-16b5-490a-949b-0003daf995e7 bound to our chassis#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.895 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 408ee2a7-16b5-490a-949b-0003daf995e7#033[00m
Jan 31 04:15:29 np0005603621 systemd-machined[212769]: New machine qemu-105-instance-000000de.
Jan 31 04:15:29 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:29Z|00918|binding|INFO|Setting lport fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb ovn-installed in OVS
Jan 31 04:15:29 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:29Z|00919|binding|INFO|Setting lport fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb up in Southbound
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.902 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[ce58186c-ddc2-449c-a42d-c37190434868]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.903 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap408ee2a7-11 in ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:15:29 np0005603621 nova_compute[247399]: 2026-01-31 09:15:29.903 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.904 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap408ee2a7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.905 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2426a462-bf6d-4c1a-bdea-7754851f6038]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.905 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9234d149-8571-4fa9-ab9c-4fde0555ca95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 systemd[1]: Started Virtual Machine qemu-105-instance-000000de.
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.913 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6de29f5c-c322-4db6-91b8-a24fd4a50506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 systemd-udevd[410089]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.922 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e872b176-f3ba-4223-9016-6f209514221b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 NetworkManager[49013]: <info>  [1769850929.9266] device (tapfb4a7c4b-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:15:29 np0005603621 NetworkManager[49013]: <info>  [1769850929.9280] device (tapfb4a7c4b-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.948 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[53308cac-17bb-47d3-a287-3dfe81edae2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 systemd-udevd[410093]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:15:29 np0005603621 NetworkManager[49013]: <info>  [1769850929.9544] manager: (tap408ee2a7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.953 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e9ee50-e53d-428f-a470-047aead09599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.976 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[bb358488-ebff-4977-88df-a32522191f98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.980 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[16c809b4-cc7a-426b-9ccc-54386100be61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:29 np0005603621 NetworkManager[49013]: <info>  [1769850929.9936] device (tap408ee2a7-10): carrier: link connected
Jan 31 04:15:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:29.997 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[afb7a2df-d91d-4b33-b2c9-b8681acaa3e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.007 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6cbf7e4b-09d8-40b5-9eb7-e9d2f81b1b4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap408ee2a7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:79:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1023550, 'reachable_time': 36910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410120, 'error': None, 'target': 'ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.015 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e1838c-4b54-4439-a1b3-2f7cb2faba5d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:794a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1023550, 'tstamp': 1023550}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410121, 'error': None, 'target': 'ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.024 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[6512448e-48dc-4748-90a7-aef3dbf0acde]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap408ee2a7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:79:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1023550, 'reachable_time': 36910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 410122, 'error': None, 'target': 'ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.044 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[29fe1f9c-4769-4279-afc1-fe045763ed99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.084 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b5086a96-b15f-4066-93de-dfd8d02472b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.085 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap408ee2a7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.085 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.086 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap408ee2a7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.087 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:30 np0005603621 NetworkManager[49013]: <info>  [1769850930.0881] manager: (tap408ee2a7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Jan 31 04:15:30 np0005603621 kernel: tap408ee2a7-10: entered promiscuous mode
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.090 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap408ee2a7-10, col_values=(('external_ids', {'iface-id': '41213e4c-4f85-4009-b490-051e871e1eda'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:30 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:30Z|00920|binding|INFO|Releasing lport 41213e4c-4f85-4009-b490-051e871e1eda from this chassis (sb_readonly=0)
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.096 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/408ee2a7-16b5-490a-949b-0003daf995e7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/408ee2a7-16b5-490a-949b-0003daf995e7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.096 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.097 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[04995a0b-3059-43e7-b2c5-aaa347d1bb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.097 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-408ee2a7-16b5-490a-949b-0003daf995e7
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/408ee2a7-16b5-490a-949b-0003daf995e7.pid.haproxy
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 408ee2a7-16b5-490a-949b-0003daf995e7
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.098 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7', 'env', 'PROCESS_TAG=haproxy-408ee2a7-16b5-490a-949b-0003daf995e7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/408ee2a7-16b5-490a-949b-0003daf995e7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.119 247403 DEBUG nova.compute.manager [req-f7f2238e-a7b1-428b-b110-e6be7e4f55d1 req-a33a71f6-d141-4a91-b042-fd9171758e4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.120 247403 DEBUG oslo_concurrency.lockutils [req-f7f2238e-a7b1-428b-b110-e6be7e4f55d1 req-a33a71f6-d141-4a91-b042-fd9171758e4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.120 247403 DEBUG oslo_concurrency.lockutils [req-f7f2238e-a7b1-428b-b110-e6be7e4f55d1 req-a33a71f6-d141-4a91-b042-fd9171758e4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.120 247403 DEBUG oslo_concurrency.lockutils [req-f7f2238e-a7b1-428b-b110-e6be7e4f55d1 req-a33a71f6-d141-4a91-b042-fd9171758e4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.120 247403 DEBUG nova.compute.manager [req-f7f2238e-a7b1-428b-b110-e6be7e4f55d1 req-a33a71f6-d141-4a91-b042-fd9171758e4e fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Processing event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.306 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850930.3055758, bb21d70e-861d-467a-a60d-de4487b9bb2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.306 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] VM Started (Lifecycle Event)#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.307 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.311 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.315 247403 INFO nova.virt.libvirt.driver [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Instance spawned successfully.#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.316 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.328 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.332 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.341 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.343 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.343 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.343 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.344 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.345 247403 DEBUG nova.virt.libvirt.driver [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.376 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.377 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850930.3057237, bb21d70e-861d-467a-a60d-de4487b9bb2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.377 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.413 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.417 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769850930.3108695, bb21d70e-861d-467a-a60d-de4487b9bb2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.417 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.424 247403 INFO nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Took 9.66 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.425 247403 DEBUG nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:15:30 np0005603621 podman[410197]: 2026-01-31 09:15:30.432406228 +0000 UTC m=+0.055319227 container create 589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.443 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.447 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:15:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.479 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:15:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:15:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:30.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:15:30 np0005603621 systemd[1]: Started libpod-conmon-589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c.scope.
Jan 31 04:15:30 np0005603621 podman[410197]: 2026-01-31 09:15:30.397319172 +0000 UTC m=+0.020232210 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:15:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.511 247403 INFO nova.compute.manager [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Took 10.84 seconds to build instance.#033[00m
Jan 31 04:15:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f71640eb37efe5da92fa7a0d5996c8b1f1cb300522c74c3b0ed71e13acad9759/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:15:30 np0005603621 podman[410197]: 2026-01-31 09:15:30.523838985 +0000 UTC m=+0.146752003 container init 589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Jan 31 04:15:30 np0005603621 podman[410197]: 2026-01-31 09:15:30.528082948 +0000 UTC m=+0.150995946 container start 589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:15:30 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [NOTICE]   (410217) : New worker (410219) forked
Jan 31 04:15:30 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [NOTICE]   (410217) : Loading success.
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.557 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.557 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:30.557 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:30 np0005603621 nova_compute[247399]: 2026-01-31 09:15:30.556 247403 DEBUG oslo_concurrency.lockutils [None req-767770e9-fc03-49eb-a6fa-14779a40f585 da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:30.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:31 np0005603621 nova_compute[247399]: 2026-01-31 09:15:31.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.247 247403 DEBUG nova.compute.manager [req-131be8be-53d1-4a76-9028-280a80548194 req-c9a67592-e44a-437b-8ff9-18dfa2bf5915 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.247 247403 DEBUG oslo_concurrency.lockutils [req-131be8be-53d1-4a76-9028-280a80548194 req-c9a67592-e44a-437b-8ff9-18dfa2bf5915 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.248 247403 DEBUG oslo_concurrency.lockutils [req-131be8be-53d1-4a76-9028-280a80548194 req-c9a67592-e44a-437b-8ff9-18dfa2bf5915 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.248 247403 DEBUG oslo_concurrency.lockutils [req-131be8be-53d1-4a76-9028-280a80548194 req-c9a67592-e44a-437b-8ff9-18dfa2bf5915 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.248 247403 DEBUG nova.compute.manager [req-131be8be-53d1-4a76-9028-280a80548194 req-c9a67592-e44a-437b-8ff9-18dfa2bf5915 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] No waiting events found dispatching network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.248 247403 WARNING nova.compute.manager [req-131be8be-53d1-4a76-9028-280a80548194 req-c9a67592-e44a-437b-8ff9-18dfa2bf5915 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received unexpected event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb for instance with vm_state active and task_state None.#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.337 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:32 np0005603621 NetworkManager[49013]: <info>  [1769850932.3377] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Jan 31 04:15:32 np0005603621 NetworkManager[49013]: <info>  [1769850932.3382] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Jan 31 04:15:32 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:32Z|00921|binding|INFO|Releasing lport 41213e4c-4f85-4009-b490-051e871e1eda from this chassis (sb_readonly=0)
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.344 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:32 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:32Z|00922|binding|INFO|Releasing lport 41213e4c-4f85-4009-b490-051e871e1eda from this chassis (sb_readonly=0)
Jan 31 04:15:32 np0005603621 nova_compute[247399]: 2026-01-31 09:15:32.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:32.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:32.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 951 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 31 04:15:34 np0005603621 nova_compute[247399]: 2026-01-31 09:15:34.097 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:34 np0005603621 nova_compute[247399]: 2026-01-31 09:15:34.470 247403 DEBUG nova.compute.manager [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-changed-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:34 np0005603621 nova_compute[247399]: 2026-01-31 09:15:34.470 247403 DEBUG nova.compute.manager [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Refreshing instance network info cache due to event network-changed-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:15:34 np0005603621 nova_compute[247399]: 2026-01-31 09:15:34.470 247403 DEBUG oslo_concurrency.lockutils [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:15:34 np0005603621 nova_compute[247399]: 2026-01-31 09:15:34.471 247403 DEBUG oslo_concurrency.lockutils [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:15:34 np0005603621 nova_compute[247399]: 2026-01-31 09:15:34.471 247403 DEBUG nova.network.neutron [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Refreshing network info cache for port fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:15:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:34.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:34.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:35 np0005603621 nova_compute[247399]: 2026-01-31 09:15:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:35 np0005603621 nova_compute[247399]: 2026-01-31 09:15:35.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:15:35 np0005603621 nova_compute[247399]: 2026-01-31 09:15:35.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:15:35 np0005603621 nova_compute[247399]: 2026-01-31 09:15:35.427 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:15:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 915 KiB/s wr, 86 op/s
Jan 31 04:15:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:15:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:36.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:15:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.567 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.720 247403 DEBUG nova.network.neutron [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updated VIF entry in instance network info cache for port fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.721 247403 DEBUG nova.network.neutron [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updating instance_info_cache with network_info: [{"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.747 247403 DEBUG oslo_concurrency.lockutils [req-f5fc1d4d-abd1-4de7-aa13-42b1895e7a57 req-d65a6220-3f1f-4dff-8af8-cae0b2aa1551 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.748 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.748 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:15:37 np0005603621 nova_compute[247399]: 2026-01-31 09:15:37.748 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bb21d70e-861d-467a-a60d-de4487b9bb2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:15:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:38.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:15:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:15:38
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'images', 'volumes', 'vms', '.rgw.root']
Jan 31 04:15:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.100 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.323 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updating instance_info_cache with network_info: [{"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.342 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-bb21d70e-861d-467a-a60d-de4487b9bb2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.342 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.342 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.343 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.343 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.343 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.366 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.367 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.368 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.368 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.368 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 04:15:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:15:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305354783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.776 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.863 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:15:39 np0005603621 nova_compute[247399]: 2026-01-31 09:15:39.864 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.013 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.015 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3875MB free_disk=20.967357635498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.015 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.015 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.136 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance bb21d70e-861d-467a-a60d-de4487b9bb2a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.137 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.137 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.198 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.272 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.273 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.287 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.308 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.341 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:40.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:15:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/366693247' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.750 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.755 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.790 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.838 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:15:40 np0005603621 nova_compute[247399]: 2026-01-31 09:15:40.838 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 31 04:15:41 np0005603621 nova_compute[247399]: 2026-01-31 09:15:41.694 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:42.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:42 np0005603621 nova_compute[247399]: 2026-01-31 09:15:42.569 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:42 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:42Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:d0:1f 10.100.0.3
Jan 31 04:15:42 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:42Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:d0:1f 10.100.0.3
Jan 31 04:15:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:43 np0005603621 nova_compute[247399]: 2026-01-31 09:15:43.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 184 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 88 op/s
Jan 31 04:15:44 np0005603621 nova_compute[247399]: 2026-01-31 09:15:44.101 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:45 np0005603621 nova_compute[247399]: 2026-01-31 09:15:45.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:15:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 193 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 31 04:15:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:46 np0005603621 podman[410335]: 2026-01-31 09:15:46.51520444 +0000 UTC m=+0.073163340 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 04:15:46 np0005603621 podman[410334]: 2026-01-31 09:15:46.515628682 +0000 UTC m=+0.073581692 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 31 04:15:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 31 04:15:47 np0005603621 nova_compute[247399]: 2026-01-31 09:15:47.571 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:48.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:48.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:49 np0005603621 nova_compute[247399]: 2026-01-31 09:15:49.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021646862786152987 of space, bias 1.0, pg target 0.6494058835845896 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:15:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:15:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:50.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:50.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:50.878 159995 DEBUG eventlet.wsgi.server [-] (159995) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:50.879 159995 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: Accept: */*#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: Connection: close#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: Content-Type: text/plain#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: Host: 169.254.169.254#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: User-Agent: curl/7.84.0#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: X-Forwarded-For: 10.100.0.3#015
Jan 31 04:15:50 np0005603621 ovn_metadata_agent[159729]: X-Ovn-Network-Id: 408ee2a7-16b5-490a-949b-0003daf995e7 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 31 04:15:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:52.345 159995 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:52.345 159995 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4667804#033[00m
Jan 31 04:15:52 np0005603621 haproxy-metadata-proxy-408ee2a7-16b5-490a-949b-0003daf995e7[410219]: 10.100.0.3:51876 [31/Jan/2026:09:15:50.877] listener listener/metadata 0/0/0/1468/1468 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:52.465 159995 DEBUG eventlet.wsgi.server [-] (159995) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:52.466 159995 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: Accept: */*#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: Connection: close#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: Content-Length: 100#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: Content-Type: application/x-www-form-urlencoded#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: Host: 169.254.169.254#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: User-Agent: curl/7.84.0#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: X-Forwarded-For: 10.100.0.3#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: X-Ovn-Network-Id: 408ee2a7-16b5-490a-949b-0003daf995e7#015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: #015
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 31 04:15:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:52.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:52 np0005603621 nova_compute[247399]: 2026-01-31 09:15:52.619 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:52.744 159995 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 31 04:15:52 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:52.745 159995 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2791927#033[00m
Jan 31 04:15:52 np0005603621 haproxy-metadata-proxy-408ee2a7-16b5-490a-949b-0003daf995e7[410219]: 10.100.0.3:51886 [31/Jan/2026:09:15:52.464] listener listener/metadata 0/0/0/280/280 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 31 04:15:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.104 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:15:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:54.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.549 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.549 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.550 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.550 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.550 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.551 247403 INFO nova.compute.manager [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Terminating instance#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.552 247403 DEBUG nova.compute.manager [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:15:54 np0005603621 kernel: tapfb4a7c4b-3f (unregistering): left promiscuous mode
Jan 31 04:15:54 np0005603621 NetworkManager[49013]: <info>  [1769850954.6457] device (tapfb4a7c4b-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:15:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:54Z|00923|binding|INFO|Releasing lport fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb from this chassis (sb_readonly=0)
Jan 31 04:15:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:54Z|00924|binding|INFO|Setting lport fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb down in Southbound
Jan 31 04:15:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:15:54Z|00925|binding|INFO|Removing iface tapfb4a7c4b-3f ovn-installed in OVS
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.651 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.652 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.658 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:54.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.662 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:d0:1f 10.100.0.3'], port_security=['fa:16:3e:0e:d0:1f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'bb21d70e-861d-467a-a60d-de4487b9bb2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-408ee2a7-16b5-490a-949b-0003daf995e7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be71ed72a4a048229c95e8ebbcc2d527', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29239e6a-5a4f-4c3c-8430-6c2969b86c4c d3279d3d-3677-43dd-ae1e-8954ed9fcba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e9a10c-f013-4eb4-9da8-19a589526e6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.663 159734 INFO neutron.agent.ovn.metadata.agent [-] Port fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb in datapath 408ee2a7-16b5-490a-949b-0003daf995e7 unbound from our chassis#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.665 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 408ee2a7-16b5-490a-949b-0003daf995e7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.666 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fc139be6-d9c5-4dc0-a105-053c11c5f09b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.667 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7 namespace which is not needed anymore#033[00m
Jan 31 04:15:54 np0005603621 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000de.scope: Deactivated successfully.
Jan 31 04:15:54 np0005603621 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000de.scope: Consumed 12.836s CPU time.
Jan 31 04:15:54 np0005603621 systemd-machined[212769]: Machine qemu-105-instance-000000de terminated.
Jan 31 04:15:54 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [NOTICE]   (410217) : haproxy version is 2.8.14-c23fe91
Jan 31 04:15:54 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [NOTICE]   (410217) : path to executable is /usr/sbin/haproxy
Jan 31 04:15:54 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [WARNING]  (410217) : Exiting Master process...
Jan 31 04:15:54 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [WARNING]  (410217) : Exiting Master process...
Jan 31 04:15:54 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [ALERT]    (410217) : Current worker (410219) exited with code 143 (Terminated)
Jan 31 04:15:54 np0005603621 neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7[410213]: [WARNING]  (410217) : All workers exited. Exiting... (0)
Jan 31 04:15:54 np0005603621 systemd[1]: libpod-589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c.scope: Deactivated successfully.
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.785 247403 INFO nova.virt.libvirt.driver [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Instance destroyed successfully.#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.786 247403 DEBUG nova.objects.instance [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lazy-loading 'resources' on Instance uuid bb21d70e-861d-467a-a60d-de4487b9bb2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:15:54 np0005603621 podman[410408]: 2026-01-31 09:15:54.793585564 +0000 UTC m=+0.044196487 container died 589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.802 247403 DEBUG nova.virt.libvirt.vif [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:15:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1254948873',display_name='tempest-TestServerBasicOps-server-1254948873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1254948873',id=222,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLQBgL1dCKTXKzMEuZnhDHhRGcyZN6DLn5upjN2FPRbtUMrW2HJvOPULSU3gw4WnbollQP/DiqrL2N0x9TS3Ak6h7/Z4cQcqh+bYz6cnCC71ni8FP/AHN7/tNCmZHXsJ+Q==',key_name='tempest-TestServerBasicOps-2010811091',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:15:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be71ed72a4a048229c95e8ebbcc2d527',ramdisk_id='',reservation_id='r-vui362za',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1851245477',owner_user_name='tempest-TestServerBasicOps-1851245477-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:15:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='da3c9cc3dae946679d48679d183c0fc7',uuid=bb21d70e-861d-467a-a60d-de4487b9bb2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": 
"fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.803 247403 DEBUG nova.network.os_vif_util [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Converting VIF {"id": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "address": "fa:16:3e:0e:d0:1f", "network": {"id": "408ee2a7-16b5-490a-949b-0003daf995e7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1228068582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be71ed72a4a048229c95e8ebbcc2d527", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb4a7c4b-3f", "ovs_interfaceid": "fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.804 247403 DEBUG nova.network.os_vif_util [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.805 247403 DEBUG os_vif [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.807 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.807 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb4a7c4b-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.810 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.813 247403 INFO os_vif [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:d0:1f,bridge_name='br-int',has_traffic_filtering=True,id=fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb,network=Network(408ee2a7-16b5-490a-949b-0003daf995e7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb4a7c4b-3f')#033[00m
Jan 31 04:15:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c-userdata-shm.mount: Deactivated successfully.
Jan 31 04:15:54 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f71640eb37efe5da92fa7a0d5996c8b1f1cb300522c74c3b0ed71e13acad9759-merged.mount: Deactivated successfully.
Jan 31 04:15:54 np0005603621 podman[410408]: 2026-01-31 09:15:54.847515295 +0000 UTC m=+0.098126198 container cleanup 589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:15:54 np0005603621 systemd[1]: libpod-conmon-589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c.scope: Deactivated successfully.
Jan 31 04:15:54 np0005603621 podman[410470]: 2026-01-31 09:15:54.908553002 +0000 UTC m=+0.045060603 container remove 589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.912 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[96746b33-b2ea-4c39-b557-f0a8e03eef4c]: (4, ('Sat Jan 31 09:15:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7 (589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c)\n589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c\nSat Jan 31 09:15:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7 (589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c)\n589005cd4e32a0e86124a240e4b81a57c673ba8bad72c320e41849190a40512c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.914 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bef824f5-f5bf-4aa2-b2f4-319e638e0e99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.915 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap408ee2a7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.916 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 kernel: tap408ee2a7-10: left promiscuous mode
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.918 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.922 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb6edc8-d45f-455f-a12b-76c944081c3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.924 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.927 247403 DEBUG nova.compute.manager [req-b201ae41-49c0-4398-9ac4-2d7f09fb308d req-fd351611-dd5a-473b-aa02-14cf4123ed0a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-vif-unplugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.928 247403 DEBUG oslo_concurrency.lockutils [req-b201ae41-49c0-4398-9ac4-2d7f09fb308d req-fd351611-dd5a-473b-aa02-14cf4123ed0a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.929 247403 DEBUG oslo_concurrency.lockutils [req-b201ae41-49c0-4398-9ac4-2d7f09fb308d req-fd351611-dd5a-473b-aa02-14cf4123ed0a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.929 247403 DEBUG oslo_concurrency.lockutils [req-b201ae41-49c0-4398-9ac4-2d7f09fb308d req-fd351611-dd5a-473b-aa02-14cf4123ed0a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.929 247403 DEBUG nova.compute.manager [req-b201ae41-49c0-4398-9ac4-2d7f09fb308d req-fd351611-dd5a-473b-aa02-14cf4123ed0a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] No waiting events found dispatching network-vif-unplugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:15:54 np0005603621 nova_compute[247399]: 2026-01-31 09:15:54.929 247403 DEBUG nova.compute.manager [req-b201ae41-49c0-4398-9ac4-2d7f09fb308d req-fd351611-dd5a-473b-aa02-14cf4123ed0a fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-vif-unplugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.940 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9912c6c1-6e28-40aa-9b08-ed1093938cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.942 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[76478eff-d641-4f76-9968-5f02d02306b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.954 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[44db1829-3876-46a7-b0cc-a25e10e44b18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1023545, 'reachable_time': 18946, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410485, 'error': None, 'target': 'ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.957 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-408ee2a7-16b5-490a-949b-0003daf995e7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:15:54 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:54.957 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3d47af-7109-4b50-8bdd-1ffe32036db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:15:54 np0005603621 systemd[1]: run-netns-ovnmeta\x2d408ee2a7\x2d16b5\x2d490a\x2d949b\x2d0003daf995e7.mount: Deactivated successfully.
Jan 31 04:15:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 961 KiB/s wr, 52 op/s
Jan 31 04:15:55 np0005603621 nova_compute[247399]: 2026-01-31 09:15:55.767 247403 INFO nova.virt.libvirt.driver [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Deleting instance files /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a_del#033[00m
Jan 31 04:15:55 np0005603621 nova_compute[247399]: 2026-01-31 09:15:55.767 247403 INFO nova.virt.libvirt.driver [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Deletion of /var/lib/nova/instances/bb21d70e-861d-467a-a60d-de4487b9bb2a_del complete#033[00m
Jan 31 04:15:55 np0005603621 nova_compute[247399]: 2026-01-31 09:15:55.869 247403 INFO nova.compute.manager [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Took 1.32 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:15:55 np0005603621 nova_compute[247399]: 2026-01-31 09:15:55.870 247403 DEBUG oslo.service.loopingcall [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:15:55 np0005603621 nova_compute[247399]: 2026-01-31 09:15:55.870 247403 DEBUG nova.compute.manager [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:15:55 np0005603621 nova_compute[247399]: 2026-01-31 09:15:55.871 247403 DEBUG nova.network.neutron [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:15:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:56.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:56.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.019 247403 DEBUG nova.compute.manager [req-406fca85-a5de-4586-a708-42ce010d6336 req-2d692123-3335-4f1d-a932-24ae4ba0ef70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.020 247403 DEBUG oslo_concurrency.lockutils [req-406fca85-a5de-4586-a708-42ce010d6336 req-2d692123-3335-4f1d-a932-24ae4ba0ef70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.020 247403 DEBUG oslo_concurrency.lockutils [req-406fca85-a5de-4586-a708-42ce010d6336 req-2d692123-3335-4f1d-a932-24ae4ba0ef70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.020 247403 DEBUG oslo_concurrency.lockutils [req-406fca85-a5de-4586-a708-42ce010d6336 req-2d692123-3335-4f1d-a932-24ae4ba0ef70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.020 247403 DEBUG nova.compute.manager [req-406fca85-a5de-4586-a708-42ce010d6336 req-2d692123-3335-4f1d-a932-24ae4ba0ef70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] No waiting events found dispatching network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.021 247403 WARNING nova.compute.manager [req-406fca85-a5de-4586-a708-42ce010d6336 req-2d692123-3335-4f1d-a932-24ae4ba0ef70 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received unexpected event network-vif-plugged-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb for instance with vm_state active and task_state deleting.#033[00m
Jan 31 04:15:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 38 KiB/s wr, 27 op/s
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.622 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:57.818 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.819 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:15:57 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:15:57.820 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.833 247403 DEBUG nova.network.neutron [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.856 247403 INFO nova.compute.manager [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Took 1.99 seconds to deallocate network for instance.#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.907 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.908 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.920 247403 DEBUG nova.compute.manager [req-48918eaf-35d8-496d-9418-ee390e375d19 req-7949f2ef-544d-40db-a1f7-41e5ab744d67 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Received event network-vif-deleted-fb4a7c4b-3fc5-4bb8-bbf3-cb41da4aa3eb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:15:57 np0005603621 nova_compute[247399]: 2026-01-31 09:15:57.974 247403 DEBUG oslo_concurrency.processutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:15:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:15:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:15:58 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/811724547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:15:58 np0005603621 nova_compute[247399]: 2026-01-31 09:15:58.380 247403 DEBUG oslo_concurrency.processutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:15:58 np0005603621 nova_compute[247399]: 2026-01-31 09:15:58.387 247403 DEBUG nova.compute.provider_tree [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:15:58 np0005603621 nova_compute[247399]: 2026-01-31 09:15:58.403 247403 DEBUG nova.scheduler.client.report [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:15:58 np0005603621 nova_compute[247399]: 2026-01-31 09:15:58.427 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:58 np0005603621 nova_compute[247399]: 2026-01-31 09:15:58.465 247403 INFO nova.scheduler.client.report [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Deleted allocations for instance bb21d70e-861d-467a-a60d-de4487b9bb2a#033[00m
Jan 31 04:15:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:15:58.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:58 np0005603621 nova_compute[247399]: 2026-01-31 09:15:58.562 247403 DEBUG oslo_concurrency.lockutils [None req-c96ad953-8ef9-4362-b2a1-e30612e50a1d da3c9cc3dae946679d48679d183c0fc7 be71ed72a4a048229c95e8ebbcc2d527 - - default default] Lock "bb21d70e-861d-467a-a60d-de4487b9bb2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:15:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:15:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:15:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:15:58.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:15:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 14 KiB/s wr, 30 op/s
Jan 31 04:15:59 np0005603621 nova_compute[247399]: 2026-01-31 09:15:59.809 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:00.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 31 04:16:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:02.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:02 np0005603621 nova_compute[247399]: 2026-01-31 09:16:02.624 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:02.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 12 KiB/s wr, 30 op/s
Jan 31 04:16:04 np0005603621 nova_compute[247399]: 2026-01-31 09:16:04.045 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:04 np0005603621 nova_compute[247399]: 2026-01-31 09:16:04.078 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:04.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:04 np0005603621 nova_compute[247399]: 2026-01-31 09:16:04.812 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Jan 31 04:16:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:06.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:06.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 31 04:16:07 np0005603621 nova_compute[247399]: 2026-01-31 09:16:07.626 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:07 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:07.822 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:08.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:16:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:16:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 852 B/s wr, 6 op/s
Jan 31 04:16:09 np0005603621 nova_compute[247399]: 2026-01-31 09:16:09.783 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769850954.7824204, bb21d70e-861d-467a-a60d-de4487b9bb2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:16:09 np0005603621 nova_compute[247399]: 2026-01-31 09:16:09.783 247403 INFO nova.compute.manager [-] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:16:09 np0005603621 nova_compute[247399]: 2026-01-31 09:16:09.814 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:10 np0005603621 nova_compute[247399]: 2026-01-31 09:16:10.154 247403 DEBUG nova.compute.manager [None req-f6aaa21c-d5bb-4065-8299-542aa4849d4c - - - - - -] [instance: bb21d70e-861d-467a-a60d-de4487b9bb2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:16:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:10.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:10.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:12.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:12 np0005603621 nova_compute[247399]: 2026-01-31 09:16:12.628 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:16:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:14.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:14 np0005603621 nova_compute[247399]: 2026-01-31 09:16:14.816 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:16:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:16.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:17 np0005603621 podman[410572]: 2026-01-31 09:16:17.499264111 +0000 UTC m=+0.050221556 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 31 04:16:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:17 np0005603621 podman[410573]: 2026-01-31 09:16:17.596010785 +0000 UTC m=+0.146362131 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 31 04:16:17 np0005603621 nova_compute[247399]: 2026-01-31 09:16:17.629 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:16:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:19 np0005603621 nova_compute[247399]: 2026-01-31 09:16:19.818 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:16:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.624429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850982624468, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1569, "num_deletes": 251, "total_data_size": 2836151, "memory_usage": 2883744, "flush_reason": "Manual Compaction"}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:16:22 np0005603621 nova_compute[247399]: 2026-01-31 09:16:22.630 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850982661341, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2795093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83686, "largest_seqno": 85254, "table_properties": {"data_size": 2787734, "index_size": 4365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15032, "raw_average_key_size": 20, "raw_value_size": 2773206, "raw_average_value_size": 3707, "num_data_blocks": 192, "num_entries": 748, "num_filter_entries": 748, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850818, "oldest_key_time": 1769850818, "file_creation_time": 1769850982, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 36957 microseconds, and 4476 cpu microseconds.
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.661385) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2795093 bytes OK
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.661406) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.665413) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.665428) EVENT_LOG_v1 {"time_micros": 1769850982665424, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.665445) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2829525, prev total WAL file size 2831451, number of live WAL files 2.
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.666022) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2729KB)], [194(11MB)]
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850982666044, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 15241587, "oldest_snapshot_seqno": -1}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:22.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 11064 keys, 13333782 bytes, temperature: kUnknown
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850982866318, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 13333782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13263639, "index_size": 41358, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 292961, "raw_average_key_size": 26, "raw_value_size": 13071901, "raw_average_value_size": 1181, "num_data_blocks": 1566, "num_entries": 11064, "num_filter_entries": 11064, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769850982, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.866526) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 13333782 bytes
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.868221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.1 rd, 66.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.9 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(10.2) write-amplify(4.8) OK, records in: 11581, records dropped: 517 output_compression: NoCompression
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.868237) EVENT_LOG_v1 {"time_micros": 1769850982868230, "job": 122, "event": "compaction_finished", "compaction_time_micros": 200339, "compaction_time_cpu_micros": 25213, "output_level": 6, "num_output_files": 1, "total_output_size": 13333782, "num_input_records": 11581, "num_output_records": 11064, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850982868490, "job": 122, "event": "table_file_deletion", "file_number": 196}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769850982869640, "job": 122, "event": "table_file_deletion", "file_number": 194}
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.665930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.869693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.869697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.869699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.869701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:16:22.869703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d7ab2b06-a365-40fd-bc4b-12e028478cc7 does not exist
Jan 31 04:16:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 992aaca3-2c4d-413c-8195-36de44765587 does not exist
Jan 31 04:16:23 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 41b1f032-c693-4e28-9b2b-84394c6bec31 does not exist
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:23 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.124672827 +0000 UTC m=+0.035016296 container create a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:16:24 np0005603621 systemd[1]: Started libpod-conmon-a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435.scope.
Jan 31 04:16:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:24 np0005603621 nova_compute[247399]: 2026-01-31 09:16:24.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.20460748 +0000 UTC m=+0.114950969 container init a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.107520876 +0000 UTC m=+0.017864365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.210387372 +0000 UTC m=+0.120730841 container start a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.21509575 +0000 UTC m=+0.125439239 container attach a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:16:24 np0005603621 intelligent_almeida[411024]: 167 167
Jan 31 04:16:24 np0005603621 systemd[1]: libpod-a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435.scope: Deactivated successfully.
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.217196386 +0000 UTC m=+0.127539875 container died a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:16:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-405ebe614c7904b115736afa63713b8d5f4639e09231c3e71dab62e10d0afc00-merged.mount: Deactivated successfully.
Jan 31 04:16:24 np0005603621 podman[411008]: 2026-01-31 09:16:24.249030611 +0000 UTC m=+0.159374070 container remove a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_almeida, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:16:24 np0005603621 systemd[1]: libpod-conmon-a25f5c5b276c42dde0b79e9b0628d9ad2bdb1ce9e96493d6e2e90b578d683435.scope: Deactivated successfully.
Jan 31 04:16:24 np0005603621 podman[411050]: 2026-01-31 09:16:24.358098524 +0000 UTC m=+0.034919153 container create 2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 04:16:24 np0005603621 systemd[1]: Started libpod-conmon-2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914.scope.
Jan 31 04:16:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98db297594e3d8148cd4d241b14a6801a9b7725bb9b97531deef82f0c0ea1269/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98db297594e3d8148cd4d241b14a6801a9b7725bb9b97531deef82f0c0ea1269/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98db297594e3d8148cd4d241b14a6801a9b7725bb9b97531deef82f0c0ea1269/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98db297594e3d8148cd4d241b14a6801a9b7725bb9b97531deef82f0c0ea1269/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98db297594e3d8148cd4d241b14a6801a9b7725bb9b97531deef82f0c0ea1269/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:24 np0005603621 podman[411050]: 2026-01-31 09:16:24.433177653 +0000 UTC m=+0.109998282 container init 2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 04:16:24 np0005603621 podman[411050]: 2026-01-31 09:16:24.341726397 +0000 UTC m=+0.018547056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:16:24 np0005603621 podman[411050]: 2026-01-31 09:16:24.439474632 +0000 UTC m=+0.116295261 container start 2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:16:24 np0005603621 podman[411050]: 2026-01-31 09:16:24.443134078 +0000 UTC m=+0.119954727 container attach 2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:16:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:24.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:24 np0005603621 nova_compute[247399]: 2026-01-31 09:16:24.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:16:25 np0005603621 heuristic_hofstadter[411067]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:16:25 np0005603621 heuristic_hofstadter[411067]: --> relative data size: 1.0
Jan 31 04:16:25 np0005603621 heuristic_hofstadter[411067]: --> All data devices are unavailable
Jan 31 04:16:25 np0005603621 podman[411050]: 2026-01-31 09:16:25.177698601 +0000 UTC m=+0.854519240 container died 2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:16:25 np0005603621 systemd[1]: libpod-2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914.scope: Deactivated successfully.
Jan 31 04:16:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-98db297594e3d8148cd4d241b14a6801a9b7725bb9b97531deef82f0c0ea1269-merged.mount: Deactivated successfully.
Jan 31 04:16:25 np0005603621 podman[411050]: 2026-01-31 09:16:25.278658538 +0000 UTC m=+0.955479167 container remove 2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:16:25 np0005603621 systemd[1]: libpod-conmon-2957ba88dd1a7e9a743e8c54770715622fe9dbedfdedec43aeb93bf52710f914.scope: Deactivated successfully.
Jan 31 04:16:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.724296102 +0000 UTC m=+0.035278143 container create 927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:16:25 np0005603621 systemd[1]: Started libpod-conmon-927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5.scope.
Jan 31 04:16:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.799067862 +0000 UTC m=+0.110049923 container init 927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.804222356 +0000 UTC m=+0.115204397 container start 927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.711111907 +0000 UTC m=+0.022093958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.807704345 +0000 UTC m=+0.118686406 container attach 927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 04:16:25 np0005603621 jovial_hertz[411302]: 167 167
Jan 31 04:16:25 np0005603621 systemd[1]: libpod-927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5.scope: Deactivated successfully.
Jan 31 04:16:25 np0005603621 conmon[411302]: conmon 927cdbc12b7882de22aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5.scope/container/memory.events
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.809559214 +0000 UTC m=+0.120541255 container died 927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 04:16:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-18ab39049bfc158a1865aa0afebbbe7757e1ecdd10fb8a7447fcabe85d1d2766-merged.mount: Deactivated successfully.
Jan 31 04:16:25 np0005603621 podman[411279]: 2026-01-31 09:16:25.841244463 +0000 UTC m=+0.152226514 container remove 927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hertz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:16:25 np0005603621 systemd[1]: libpod-conmon-927cdbc12b7882de22aa391125520b932f75d946996ac00bb666901ef0ebeed5.scope: Deactivated successfully.
Jan 31 04:16:25 np0005603621 podman[411326]: 2026-01-31 09:16:25.956421269 +0000 UTC m=+0.035267384 container create 48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_leavitt, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:16:25 np0005603621 systemd[1]: Started libpod-conmon-48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111.scope.
Jan 31 04:16:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3491ab28cb6ebbf72f9d4f8bc83579824aaa4a9bac336fd209862130ff654635/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3491ab28cb6ebbf72f9d4f8bc83579824aaa4a9bac336fd209862130ff654635/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3491ab28cb6ebbf72f9d4f8bc83579824aaa4a9bac336fd209862130ff654635/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3491ab28cb6ebbf72f9d4f8bc83579824aaa4a9bac336fd209862130ff654635/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:26 np0005603621 podman[411326]: 2026-01-31 09:16:26.025817379 +0000 UTC m=+0.104663514 container init 48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:16:26 np0005603621 podman[411326]: 2026-01-31 09:16:26.030852598 +0000 UTC m=+0.109698713 container start 48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_leavitt, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 04:16:26 np0005603621 podman[411326]: 2026-01-31 09:16:26.033868134 +0000 UTC m=+0.112714269 container attach 48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_leavitt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:16:26 np0005603621 podman[411326]: 2026-01-31 09:16:25.941067635 +0000 UTC m=+0.019913770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:16:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]: {
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:    "0": [
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:        {
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "devices": [
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "/dev/loop3"
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            ],
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "lv_name": "ceph_lv0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "lv_size": "7511998464",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "name": "ceph_lv0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "tags": {
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.cluster_name": "ceph",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.crush_device_class": "",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.encrypted": "0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.osd_id": "0",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.type": "block",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:                "ceph.vdo": "0"
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            },
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "type": "block",
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:            "vg_name": "ceph_vg0"
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:        }
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]:    ]
Jan 31 04:16:26 np0005603621 bold_leavitt[411342]: }
Jan 31 04:16:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:26 np0005603621 systemd[1]: libpod-48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111.scope: Deactivated successfully.
Jan 31 04:16:26 np0005603621 podman[411326]: 2026-01-31 09:16:26.718639735 +0000 UTC m=+0.797485870 container died 48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_leavitt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:16:26 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3491ab28cb6ebbf72f9d4f8bc83579824aaa4a9bac336fd209862130ff654635-merged.mount: Deactivated successfully.
Jan 31 04:16:26 np0005603621 podman[411326]: 2026-01-31 09:16:26.769749349 +0000 UTC m=+0.848595464 container remove 48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:16:26 np0005603621 systemd[1]: libpod-conmon-48cd79fcb824351f28416f83eb336815c789b3b4118c7bf380c53cad7cd1d111.scope: Deactivated successfully.
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.240358692 +0000 UTC m=+0.030812204 container create 72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 04:16:27 np0005603621 systemd[1]: Started libpod-conmon-72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18.scope.
Jan 31 04:16:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.291084142 +0000 UTC m=+0.081537674 container init 72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.296447542 +0000 UTC m=+0.086901044 container start 72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.300018194 +0000 UTC m=+0.090471726 container attach 72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:16:27 np0005603621 competent_mendeleev[411523]: 167 167
Jan 31 04:16:27 np0005603621 systemd[1]: libpod-72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18.scope: Deactivated successfully.
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.301436719 +0000 UTC m=+0.091890251 container died 72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 04:16:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d7c90043930f212976a63903d51345d4f8c2f38edcf806408263beecc00d28b9-merged.mount: Deactivated successfully.
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.22670259 +0000 UTC m=+0.017156122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:16:27 np0005603621 podman[411506]: 2026-01-31 09:16:27.332783619 +0000 UTC m=+0.123237121 container remove 72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mendeleev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:16:27 np0005603621 systemd[1]: libpod-conmon-72d031a98ed7ccaac799d7fbdf70e0a27ba24d20902abcc25474d18fe5e96f18.scope: Deactivated successfully.
Jan 31 04:16:27 np0005603621 podman[411548]: 2026-01-31 09:16:27.449837933 +0000 UTC m=+0.033602002 container create 33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_davinci, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:16:27 np0005603621 systemd[1]: Started libpod-conmon-33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea.scope.
Jan 31 04:16:27 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d634de00df97fd20303e7d64d4f6c633a5d786ba2c16de8787b30b3cceb1e801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d634de00df97fd20303e7d64d4f6c633a5d786ba2c16de8787b30b3cceb1e801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d634de00df97fd20303e7d64d4f6c633a5d786ba2c16de8787b30b3cceb1e801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:27 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d634de00df97fd20303e7d64d4f6c633a5d786ba2c16de8787b30b3cceb1e801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:27 np0005603621 podman[411548]: 2026-01-31 09:16:27.51185474 +0000 UTC m=+0.095618819 container init 33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_davinci, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:16:27 np0005603621 podman[411548]: 2026-01-31 09:16:27.5159923 +0000 UTC m=+0.099756369 container start 33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_davinci, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:16:27 np0005603621 podman[411548]: 2026-01-31 09:16:27.518646264 +0000 UTC m=+0.102410333 container attach 33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_davinci, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 04:16:27 np0005603621 podman[411548]: 2026-01-31 09:16:27.435890313 +0000 UTC m=+0.019654402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:16:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:27 np0005603621 nova_compute[247399]: 2026-01-31 09:16:27.631 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]: {
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:        "osd_id": 0,
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:        "type": "bluestore"
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]:    }
Jan 31 04:16:28 np0005603621 condescending_davinci[411565]: }
Jan 31 04:16:28 np0005603621 systemd[1]: libpod-33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea.scope: Deactivated successfully.
Jan 31 04:16:28 np0005603621 podman[411548]: 2026-01-31 09:16:28.24248452 +0000 UTC m=+0.826248589 container died 33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:16:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d634de00df97fd20303e7d64d4f6c633a5d786ba2c16de8787b30b3cceb1e801-merged.mount: Deactivated successfully.
Jan 31 04:16:28 np0005603621 podman[411548]: 2026-01-31 09:16:28.28747019 +0000 UTC m=+0.871234259 container remove 33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_davinci, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:16:28 np0005603621 systemd[1]: libpod-conmon-33ee02ac1c5285726ed66c1b9b6fa6e1c6057a38138b0a441a3c22a1cc5611ea.scope: Deactivated successfully.
Jan 31 04:16:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:16:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:16:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev be2353f4-f25e-4e7e-9e2a-d49f889ca2d9 does not exist
Jan 31 04:16:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3fafa4a2-b5cd-4be4-98e4-a36b1ccea227 does not exist
Jan 31 04:16:28 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7ef0af7e-ceae-4cec-acd5-270c620fce52 does not exist
Jan 31 04:16:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:28.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:16:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:29 np0005603621 nova_compute[247399]: 2026-01-31 09:16:29.823 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:30.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:30.558 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:30.558 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:30.559 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:31 np0005603621 nova_compute[247399]: 2026-01-31 09:16:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:32.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:32 np0005603621 nova_compute[247399]: 2026-01-31 09:16:32.632 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:34 np0005603621 nova_compute[247399]: 2026-01-31 09:16:34.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:34.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:34 np0005603621 nova_compute[247399]: 2026-01-31 09:16:34.825 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:35 np0005603621 nova_compute[247399]: 2026-01-31 09:16:35.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:35 np0005603621 nova_compute[247399]: 2026-01-31 09:16:35.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:16:35 np0005603621 nova_compute[247399]: 2026-01-31 09:16:35.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:16:35 np0005603621 nova_compute[247399]: 2026-01-31 09:16:35.233 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:16:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:16:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:36.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:36.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.222 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.223 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.634 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:16:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2642791154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.750 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.889 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.891 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4081MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.891 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.892 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.960 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.960 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:16:37 np0005603621 nova_compute[247399]: 2026-01-31 09:16:37.973 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:16:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1561919941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:16:38 np0005603621 nova_compute[247399]: 2026-01-31 09:16:38.392 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:38 np0005603621 nova_compute[247399]: 2026-01-31 09:16:38.397 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:16:38 np0005603621 nova_compute[247399]: 2026-01-31 09:16:38.416 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:16:38 np0005603621 nova_compute[247399]: 2026-01-31 09:16:38.442 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:16:38 np0005603621 nova_compute[247399]: 2026-01-31 09:16:38.442 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:38.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:16:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:38.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:16:38
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta']
Jan 31 04:16:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:16:39 np0005603621 nova_compute[247399]: 2026-01-31 09:16:39.442 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:39 np0005603621 nova_compute[247399]: 2026-01-31 09:16:39.443 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:39 np0005603621 nova_compute[247399]: 2026-01-31 09:16:39.443 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:16:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Jan 31 04:16:39 np0005603621 nova_compute[247399]: 2026-01-31 09:16:39.827 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:40.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:40.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.754 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.754 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.786 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.880 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.880 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.893 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.894 247403 INFO nova.compute.claims [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 31 04:16:40 np0005603621 nova_compute[247399]: 2026-01-31 09:16:40.998 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:16:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835863704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.393 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.400 247403 DEBUG nova.compute.provider_tree [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.417 247403 DEBUG nova.scheduler.client.report [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.440 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.441 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.489 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.489 247403 DEBUG nova.network.neutron [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.507 247403 INFO nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.531 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 31 04:16:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.616 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.617 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.617 247403 INFO nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Creating image(s)#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.645 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.681 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.715 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.719 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.774 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.775 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.776 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.776 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "b1c202daae0a5d5b639e0239462ea0d46fe633d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.802 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:41 np0005603621 nova_compute[247399]: 2026-01-31 09:16:41.807 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.077 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/b1c202daae0a5d5b639e0239462ea0d46fe633d6 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.147 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] resizing rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.216 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.278 247403 DEBUG nova.policy [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4f25568607234a398bc35cbb67eb406f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '390709b3e5174dc4afdc6b04fdae67e3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.286 247403 DEBUG nova.objects.instance [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.303 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.303 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Ensure instance console log exists: /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.304 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.304 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.304 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:42.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:42 np0005603621 nova_compute[247399]: 2026-01-31 09:16:42.635 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:42.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:43 np0005603621 nova_compute[247399]: 2026-01-31 09:16:43.216 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 139 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 724 KiB/s wr, 23 op/s
Jan 31 04:16:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:44.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:44 np0005603621 nova_compute[247399]: 2026-01-31 09:16:44.572 247403 DEBUG nova.network.neutron [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Successfully created port: 6c698385-414a-410d-8bd1-082bba741f94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 31 04:16:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:44 np0005603621 nova_compute[247399]: 2026-01-31 09:16:44.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 160 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.499 247403 DEBUG nova.network.neutron [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Successfully updated port: 6c698385-414a-410d-8bd1-082bba741f94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.520 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.521 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquired lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.521 247403 DEBUG nova.network.neutron [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 31 04:16:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:46.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.628 247403 DEBUG nova.compute.manager [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-changed-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.629 247403 DEBUG nova.compute.manager [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Refreshing instance network info cache due to event network-changed-6c698385-414a-410d-8bd1-082bba741f94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.629 247403 DEBUG oslo_concurrency.lockutils [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:16:46 np0005603621 nova_compute[247399]: 2026-01-31 09:16:46.691 247403 DEBUG nova.network.neutron [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 31 04:16:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.637 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.821 247403 DEBUG nova.network.neutron [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.861 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Releasing lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.861 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Instance network_info: |[{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.862 247403 DEBUG oslo_concurrency.lockutils [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.862 247403 DEBUG nova.network.neutron [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Refreshing network info cache for port 6c698385-414a-410d-8bd1-082bba741f94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.865 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Start _get_guest_xml network_info=[{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'encrypted': False, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'size': 0, 'image_id': '37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.870 247403 WARNING nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.876 247403 DEBUG nova.virt.libvirt.host [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.876 247403 DEBUG nova.virt.libvirt.host [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.880 247403 DEBUG nova.virt.libvirt.host [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.880 247403 DEBUG nova.virt.libvirt.host [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.881 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.882 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-31T07:43:36Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a01eb4f0-fd80-416b-a750-75de320394d8',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-31T07:43:39Z,direct_url=<?>,disk_format='qcow2',id=37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='89e274acfc5c4097be7194f5ef1fabd3',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-31T07:43:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.882 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.883 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.883 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.883 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.884 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.884 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.884 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.885 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.885 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.885 247403 DEBUG nova.virt.hardware [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 31 04:16:47 np0005603621 nova_compute[247399]: 2026-01-31 09:16:47.889 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:16:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3560831507' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.331 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.362 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.366 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:48 np0005603621 podman[411985]: 2026-01-31 09:16:48.500379611 +0000 UTC m=+0.056258597 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Jan 31 04:16:48 np0005603621 podman[411986]: 2026-01-31 09:16:48.536652975 +0000 UTC m=+0.094533155 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 31 04:16:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:48.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:48.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 31 04:16:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320775265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.794 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.795 247403 DEBUG nova.virt.libvirt.vif [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1860833772',display_name='tempest-TestStampPattern-server-1860833772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1860833772',id=223,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD/aBEl4PBIoIshqTDNOjhhhoUeVGicNgguOr3MSdHgT0ltB1LqQrhXegMG9XeJiExk1ZCoew1VKJC1u0bcRchycTTDnBsbTgXLKYBMMmensD0uk0uwm2aHKUBZ2bnKdBA==',key_name='tempest-TestStampPattern-80801695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='390709b3e5174dc4afdc6b04fdae67e3',ramdisk_id='',reservation_id='r-d5sq4vcz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1434051857',owner_user_name='tempest-TestStampPattern-1434051857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:16:41Z,user_data=None,user_id='4f25568607234a398bc35cbb67eb406f',uuid=4e6fd1c3-3988-4a7f-a30d-b599226c25a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.795 247403 DEBUG nova.network.os_vif_util [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Converting VIF {"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.796 247403 DEBUG nova.network.os_vif_util [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.797 247403 DEBUG nova.objects.instance [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.822 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] End _get_guest_xml xml=<domain type="kvm">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <uuid>4e6fd1c3-3988-4a7f-a30d-b599226c25a0</uuid>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <name>instance-000000df</name>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <memory>131072</memory>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <vcpu>1</vcpu>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <metadata>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:name>tempest-TestStampPattern-server-1860833772</nova:name>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:creationTime>2026-01-31 09:16:47</nova:creationTime>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:flavor name="m1.nano">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:memory>128</nova:memory>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:disk>1</nova:disk>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:swap>0</nova:swap>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:ephemeral>0</nova:ephemeral>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:vcpus>1</nova:vcpus>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </nova:flavor>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:owner>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:user uuid="4f25568607234a398bc35cbb67eb406f">tempest-TestStampPattern-1434051857-project-member</nova:user>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:project uuid="390709b3e5174dc4afdc6b04fdae67e3">tempest-TestStampPattern-1434051857</nova:project>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </nova:owner>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:root type="image" uuid="37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <nova:ports>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <nova:port uuid="6c698385-414a-410d-8bd1-082bba741f94">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        </nova:port>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </nova:ports>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </nova:instance>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </metadata>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <sysinfo type="smbios">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <system>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <entry name="manufacturer">RDO</entry>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <entry name="product">OpenStack Compute</entry>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <entry name="serial">4e6fd1c3-3988-4a7f-a30d-b599226c25a0</entry>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <entry name="uuid">4e6fd1c3-3988-4a7f-a30d-b599226c25a0</entry>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <entry name="family">Virtual Machine</entry>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </system>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </sysinfo>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <os>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <boot dev="hd"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <smbios mode="sysinfo"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </os>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <features>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <acpi/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <apic/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <vmcoreinfo/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </features>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <clock offset="utc">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <timer name="pit" tickpolicy="delay"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <timer name="hpet" present="no"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </clock>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <cpu mode="custom" match="exact">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <model>Nehalem</model>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <topology sockets="1" cores="1" threads="1"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </cpu>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  <devices>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <disk type="network" device="disk">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <target dev="vda" bus="virtio"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <disk type="network" device="cdrom">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <driver type="raw" cache="none"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <source protocol="rbd" name="vms/4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk.config">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <host name="192.168.122.100" port="6789"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <host name="192.168.122.102" port="6789"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <host name="192.168.122.101" port="6789"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </source>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <auth username="openstack">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:        <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      </auth>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <target dev="sda" bus="sata"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </disk>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <interface type="ethernet">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <mac address="fa:16:3e:b8:fa:d2"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <driver name="vhost" rx_queue_size="512"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <mtu size="1442"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <target dev="tap6c698385-41"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </interface>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <serial type="pty">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <log file="/var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/console.log" append="off"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </serial>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <video>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <model type="virtio"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </video>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <input type="tablet" bus="usb"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <rng model="virtio">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <backend model="random">/dev/urandom</backend>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </rng>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="pci" model="pcie-root-port"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <controller type="usb" index="0"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    <memballoon model="virtio">
Jan 31 04:16:48 np0005603621 nova_compute[247399]:      <stats period="10"/>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:    </memballoon>
Jan 31 04:16:48 np0005603621 nova_compute[247399]:  </devices>
Jan 31 04:16:48 np0005603621 nova_compute[247399]: </domain>
Jan 31 04:16:48 np0005603621 nova_compute[247399]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.823 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Preparing to wait for external event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.823 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.823 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.824 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.824 247403 DEBUG nova.virt.libvirt.vif [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-31T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1860833772',display_name='tempest-TestStampPattern-server-1860833772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1860833772',id=223,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD/aBEl4PBIoIshqTDNOjhhhoUeVGicNgguOr3MSdHgT0ltB1LqQrhXegMG9XeJiExk1ZCoew1VKJC1u0bcRchycTTDnBsbTgXLKYBMMmensD0uk0uwm2aHKUBZ2bnKdBA==',key_name='tempest-TestStampPattern-80801695',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='390709b3e5174dc4afdc6b04fdae67e3',ramdisk_id='',reservation_id='r-d5sq4vcz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1434051857',owner_user_name='tempest-TestStampPattern-1434051857-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-31T09:16:41Z,user_data=None,user_id='4f25568607234a398bc35cbb67eb406f',uuid=4e6fd1c3-3988-4a7f-a30d-b599226c25a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.825 247403 DEBUG nova.network.os_vif_util [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Converting VIF {"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.825 247403 DEBUG nova.network.os_vif_util [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.825 247403 DEBUG os_vif [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.826 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.826 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.827 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.829 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.830 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c698385-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.830 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c698385-41, col_values=(('external_ids', {'iface-id': '6c698385-414a-410d-8bd1-082bba741f94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:fa:d2', 'vm-uuid': '4e6fd1c3-3988-4a7f-a30d-b599226c25a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:48 np0005603621 NetworkManager[49013]: <info>  [1769851008.8324] manager: (tap6c698385-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.835 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.836 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:48 np0005603621 nova_compute[247399]: 2026-01-31 09:16:48.837 247403 INFO os_vif [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41')#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.151 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.151 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.151 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No VIF found with MAC fa:16:3e:b8:fa:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.152 247403 INFO nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Using config drive#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.180 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.363 247403 DEBUG nova.network.neutron [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updated VIF entry in instance network info cache for port 6c698385-414a-410d-8bd1-082bba741f94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.364 247403 DEBUG nova.network.neutron [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.382 247403 DEBUG oslo_concurrency.lockutils [req-423d2174-5387-443f-848c-d2ae1df4574b req-0bf679bb-b501-4847-920c-45bdc92e49a7 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.530 247403 INFO nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Creating config drive at /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/disk.config#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.534 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpiz5z9vm_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.659 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpiz5z9vm_" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.695 247403 DEBUG nova.storage.rbd_utils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] rbd image 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.699 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/disk.config 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.852 247403 DEBUG oslo_concurrency.processutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/disk.config 4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.853 247403 INFO nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Deleting local config drive /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0/disk.config because it was imported into RBD.#033[00m
Jan 31 04:16:49 np0005603621 kernel: tap6c698385-41: entered promiscuous mode
Jan 31 04:16:49 np0005603621 NetworkManager[49013]: <info>  [1769851009.8938] manager: (tap6c698385-41): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:16:49Z|00926|binding|INFO|Claiming lport 6c698385-414a-410d-8bd1-082bba741f94 for this chassis.
Jan 31 04:16:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:16:49Z|00927|binding|INFO|6c698385-414a-410d-8bd1-082bba741f94: Claiming fa:16:3e:b8:fa:d2 10.100.0.12
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.896 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021639592639121534 of space, bias 1.0, pg target 0.649187779173646 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:16:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:16:49 np0005603621 systemd-udevd[412121]: Network interface NamePolicy= disabled on kernel command line.
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.918 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:16:49Z|00928|binding|INFO|Setting lport 6c698385-414a-410d-8bd1-082bba741f94 ovn-installed in OVS
Jan 31 04:16:49 np0005603621 nova_compute[247399]: 2026-01-31 09:16:49.922 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:49 np0005603621 ovn_controller[149152]: 2026-01-31T09:16:49Z|00929|binding|INFO|Setting lport 6c698385-414a-410d-8bd1-082bba741f94 up in Southbound
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.925 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:fa:d2 10.100.0.12'], port_security=['fa:16:3e:b8:fa:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4e6fd1c3-3988-4a7f-a30d-b599226c25a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '390709b3e5174dc4afdc6b04fdae67e3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '41fc052d-b084-4adc-a493-522d3d569e9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c4a3744-72a7-4358-ad8b-910d4ad4af10, chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6c698385-414a-410d-8bd1-082bba741f94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.927 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6c698385-414a-410d-8bd1-082bba741f94 in datapath 3f3cc872-5825-455b-b8f4-03469e3aacf8 bound to our chassis#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.928 159734 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f3cc872-5825-455b-b8f4-03469e3aacf8#033[00m
Jan 31 04:16:49 np0005603621 NetworkManager[49013]: <info>  [1769851009.9295] device (tap6c698385-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 31 04:16:49 np0005603621 NetworkManager[49013]: <info>  [1769851009.9303] device (tap6c698385-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 31 04:16:49 np0005603621 systemd-machined[212769]: New machine qemu-106-instance-000000df.
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.935 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[cf96ef3b-2c18-4d3f-89d7-9565ac51b082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.936 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f3cc872-51 in ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.937 253234 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f3cc872-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.938 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[1063748e-a87d-4215-892e-6c7f4e9dcfc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.938 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[50b0a1a8-bc1d-4fd0-847c-a87ac3562b9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.948 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[89ea921c-51c9-40a4-8f34-7934c7559043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 systemd[1]: Started Virtual Machine qemu-106-instance-000000df.
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.970 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[9468c772-2441-40a9-9c24-fec2360e3add]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.990 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[af61aca4-2363-4ad8-88bd-d2f3ad27c9fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:49.994 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[3692929e-de4f-4204-838a-a18622d211cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:49 np0005603621 NetworkManager[49013]: <info>  [1769851009.9951] manager: (tap3f3cc872-50): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.021 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[03f2c892-d03d-45c7-9332-52b6d9bcb825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.024 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7027c7-0cc1-47ac-beb8-a968dc4dd1b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 NetworkManager[49013]: <info>  [1769851010.0400] device (tap3f3cc872-50): carrier: link connected
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.044 253297 DEBUG oslo.privsep.daemon [-] privsep: reply[15b5bb56-d489-4f33-97b0-887abdca6954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.057 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7027b8-0fee-4221-bece-90758fb4914c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f3cc872-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:17:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1031555, 'reachable_time': 36112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412157, 'error': None, 'target': 'ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.067 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[e932de99-8898-4ced-a298-3a7d277f4f0f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedf:17ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1031555, 'tstamp': 1031555}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412158, 'error': None, 'target': 'ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.078 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b5107c07-7bc4-4f1e-a730-43a083a7bd0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f3cc872-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:df:17:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 281], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1031555, 'reachable_time': 36112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412159, 'error': None, 'target': 'ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.099 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[a07a7312-5e00-411f-b2ed-6b6beac02b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.141 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d87eba-a377-41cb-99d9-a1b8ca934322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.142 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f3cc872-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.143 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.143 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f3cc872-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.144 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:50 np0005603621 kernel: tap3f3cc872-50: entered promiscuous mode
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.146 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:50 np0005603621 NetworkManager[49013]: <info>  [1769851010.1470] manager: (tap3f3cc872-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.147 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f3cc872-50, col_values=(('external_ids', {'iface-id': '5bebd274-c8f9-4e5f-96fa-6c8eecac7fa3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.147 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:50 np0005603621 ovn_controller[149152]: 2026-01-31T09:16:50Z|00930|binding|INFO|Releasing lport 5bebd274-c8f9-4e5f-96fa-6c8eecac7fa3 from this chassis (sb_readonly=0)
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.148 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.149 159734 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f3cc872-5825-455b-b8f4-03469e3aacf8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f3cc872-5825-455b-b8f4-03469e3aacf8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.150 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7597e0f5-ce95-4bbd-b64e-8eb980ec9af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.150 159734 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: global
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    log         /dev/log local0 debug
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    log-tag     haproxy-metadata-proxy-3f3cc872-5825-455b-b8f4-03469e3aacf8
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    user        root
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    group       root
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    maxconn     1024
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    pidfile     /var/lib/neutron/external/pids/3f3cc872-5825-455b-b8f4-03469e3aacf8.pid.haproxy
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    daemon
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: defaults
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    log global
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    mode http
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    option httplog
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    option dontlognull
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    option http-server-close
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    option forwardfor
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    retries                 3
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    timeout http-request    30s
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    timeout connect         30s
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    timeout client          32s
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    timeout server          32s
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    timeout http-keep-alive 30s
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: listen listener
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    bind 169.254.169.254:80
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    server metadata /var/lib/neutron/metadata_proxy
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]:    http-request add-header X-OVN-Network-ID 3f3cc872-5825-455b-b8f4-03469e3aacf8
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 31 04:16:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:16:50.151 159734 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'env', 'PROCESS_TAG=haproxy-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f3cc872-5825-455b-b8f4-03469e3aacf8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.152 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.177 247403 DEBUG nova.compute.manager [req-8f50daec-210f-4d73-87d5-6723bec72ba4 req-9ea42009-9de5-4d6d-b1cf-dd37627308e0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.177 247403 DEBUG oslo_concurrency.lockutils [req-8f50daec-210f-4d73-87d5-6723bec72ba4 req-9ea42009-9de5-4d6d-b1cf-dd37627308e0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.178 247403 DEBUG oslo_concurrency.lockutils [req-8f50daec-210f-4d73-87d5-6723bec72ba4 req-9ea42009-9de5-4d6d-b1cf-dd37627308e0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.178 247403 DEBUG oslo_concurrency.lockutils [req-8f50daec-210f-4d73-87d5-6723bec72ba4 req-9ea42009-9de5-4d6d-b1cf-dd37627308e0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.178 247403 DEBUG nova.compute.manager [req-8f50daec-210f-4d73-87d5-6723bec72ba4 req-9ea42009-9de5-4d6d-b1cf-dd37627308e0 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Processing event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 31 04:16:50 np0005603621 podman[412192]: 2026-01-31 09:16:50.473381111 +0000 UTC m=+0.048609775 container create 40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 04:16:50 np0005603621 systemd[1]: Started libpod-conmon-40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e.scope.
Jan 31 04:16:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:16:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f6f779e57c7e45aa80399b630606eedf47aa45b1948d592ae34b212c35c70e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 31 04:16:50 np0005603621 podman[412192]: 2026-01-31 09:16:50.544570097 +0000 UTC m=+0.119798791 container init 40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Jan 31 04:16:50 np0005603621 podman[412192]: 2026-01-31 09:16:50.448847527 +0000 UTC m=+0.024076211 image pull 19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 31 04:16:50 np0005603621 podman[412192]: 2026-01-31 09:16:50.548812482 +0000 UTC m=+0.124041156 container start 40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 04:16:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:50.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [NOTICE]   (412212) : New worker (412214) forked
Jan 31 04:16:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [NOTICE]   (412212) : Loading success.
Jan 31 04:16:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:50.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.888 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769851010.887994, 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.888 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] VM Started (Lifecycle Event)#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.890 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.893 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.896 247403 INFO nova.virt.libvirt.driver [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Instance spawned successfully.#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.897 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.911 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.913 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.923 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.923 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.924 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.924 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.925 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.925 247403 DEBUG nova.virt.libvirt.driver [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.933 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.933 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769851010.8881602, 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.933 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] VM Paused (Lifecycle Event)#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.958 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.961 247403 DEBUG nova.virt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Emitting event <LifecycleEvent: 1769851010.8928287, 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.961 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] VM Resumed (Lifecycle Event)#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.987 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:16:50 np0005603621 nova_compute[247399]: 2026-01-31 09:16:50.991 247403 DEBUG nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 31 04:16:51 np0005603621 nova_compute[247399]: 2026-01-31 09:16:51.003 247403 INFO nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Took 9.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 31 04:16:51 np0005603621 nova_compute[247399]: 2026-01-31 09:16:51.004 247403 DEBUG nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:16:51 np0005603621 nova_compute[247399]: 2026-01-31 09:16:51.031 247403 INFO nova.compute.manager [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 31 04:16:51 np0005603621 nova_compute[247399]: 2026-01-31 09:16:51.075 247403 INFO nova.compute.manager [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Took 10.22 seconds to build instance.#033[00m
Jan 31 04:16:51 np0005603621 nova_compute[247399]: 2026-01-31 09:16:51.095 247403 DEBUG oslo_concurrency.lockutils [None req-d91245e4-17f3-4807-93d9-148d7491e520 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.258 247403 DEBUG nova.compute.manager [req-0d721e0b-0ac2-41cd-9db7-28c540be9d26 req-04dffecc-227b-46ea-a1e7-73f2250cc439 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.258 247403 DEBUG oslo_concurrency.lockutils [req-0d721e0b-0ac2-41cd-9db7-28c540be9d26 req-04dffecc-227b-46ea-a1e7-73f2250cc439 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.258 247403 DEBUG oslo_concurrency.lockutils [req-0d721e0b-0ac2-41cd-9db7-28c540be9d26 req-04dffecc-227b-46ea-a1e7-73f2250cc439 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.259 247403 DEBUG oslo_concurrency.lockutils [req-0d721e0b-0ac2-41cd-9db7-28c540be9d26 req-04dffecc-227b-46ea-a1e7-73f2250cc439 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.259 247403 DEBUG nova.compute.manager [req-0d721e0b-0ac2-41cd-9db7-28c540be9d26 req-04dffecc-227b-46ea-a1e7-73f2250cc439 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] No waiting events found dispatching network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.259 247403 WARNING nova.compute.manager [req-0d721e0b-0ac2-41cd-9db7-28c540be9d26 req-04dffecc-227b-46ea-a1e7-73f2250cc439 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received unexpected event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 for instance with vm_state active and task_state None.#033[00m
Jan 31 04:16:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:52.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:52 np0005603621 nova_compute[247399]: 2026-01-31 09:16:52.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 31 04:16:53 np0005603621 nova_compute[247399]: 2026-01-31 09:16:53.832 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:54.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.643 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:54 np0005603621 NetworkManager[49013]: <info>  [1769851014.6441] manager: (patch-provnet-9633882b-fa09-4c13-9ab8-69ba69661845-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Jan 31 04:16:54 np0005603621 NetworkManager[49013]: <info>  [1769851014.6450] manager: (patch-br-int-to-provnet-9633882b-fa09-4c13-9ab8-69ba69661845): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.662 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:54 np0005603621 ovn_controller[149152]: 2026-01-31T09:16:54Z|00931|binding|INFO|Releasing lport 5bebd274-c8f9-4e5f-96fa-6c8eecac7fa3 from this chassis (sb_readonly=0)
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.670 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.969 247403 DEBUG nova.compute.manager [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-changed-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.970 247403 DEBUG nova.compute.manager [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Refreshing instance network info cache due to event network-changed-6c698385-414a-410d-8bd1-082bba741f94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.970 247403 DEBUG oslo_concurrency.lockutils [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.971 247403 DEBUG oslo_concurrency.lockutils [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:16:54 np0005603621 nova_compute[247399]: 2026-01-31 09:16:54.971 247403 DEBUG nova.network.neutron [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Refreshing network info cache for port 6c698385-414a-410d-8bd1-082bba741f94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:16:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.1 MiB/s wr, 74 op/s
Jan 31 04:16:56 np0005603621 nova_compute[247399]: 2026-01-31 09:16:56.438 247403 DEBUG nova.network.neutron [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updated VIF entry in instance network info cache for port 6c698385-414a-410d-8bd1-082bba741f94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:16:56 np0005603621 nova_compute[247399]: 2026-01-31 09:16:56.439 247403 DEBUG nova.network.neutron [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:16:56 np0005603621 nova_compute[247399]: 2026-01-31 09:16:56.524 247403 DEBUG oslo_concurrency.lockutils [req-f266f443-35a2-4d90-a0f9-ffc9b6a5825d req-10c44a7a-ab70-4338-bcf3-4d07144c2262 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:16:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:56.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:57 np0005603621 nova_compute[247399]: 2026-01-31 09:16:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:16:57 np0005603621 nova_compute[247399]: 2026-01-31 09:16:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:16:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 512 KiB/s wr, 76 op/s
Jan 31 04:16:57 np0005603621 nova_compute[247399]: 2026-01-31 09:16:57.640 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:16:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:16:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:16:58.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:16:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:16:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:16:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:16:58.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:16:58 np0005603621 nova_compute[247399]: 2026-01-31 09:16:58.835 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:16:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 04:17:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:00.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:00.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 31 04:17:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:02.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:03 np0005603621 nova_compute[247399]: 2026-01-31 09:17:03.108 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 178 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Jan 31 04:17:03 np0005603621 nova_compute[247399]: 2026-01-31 09:17:03.838 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:04.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:04.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:05 np0005603621 ovn_controller[149152]: 2026-01-31T09:17:05Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:fa:d2 10.100.0.12
Jan 31 04:17:05 np0005603621 ovn_controller[149152]: 2026-01-31T09:17:05Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:fa:d2 10.100.0.12
Jan 31 04:17:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 188 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 943 KiB/s rd, 2.0 MiB/s wr, 64 op/s
Jan 31 04:17:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:06.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 197 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 430 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 31 04:17:07 np0005603621 nova_compute[247399]: 2026-01-31 09:17:07.643 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:17:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:17:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:08 np0005603621 nova_compute[247399]: 2026-01-31 09:17:08.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 04:17:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:10.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:10 np0005603621 nova_compute[247399]: 2026-01-31 09:17:10.972 247403 DEBUG oslo_concurrency.lockutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:10 np0005603621 nova_compute[247399]: 2026-01-31 09:17:10.973 247403 DEBUG oslo_concurrency.lockutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:10 np0005603621 nova_compute[247399]: 2026-01-31 09:17:10.989 247403 DEBUG nova.objects.instance [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lazy-loading 'flavor' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.033 247403 DEBUG oslo_concurrency.lockutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.288 247403 DEBUG oslo_concurrency.lockutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.289 247403 DEBUG oslo_concurrency.lockutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.290 247403 INFO nova.compute.manager [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Attaching volume bb545a69-3527-4e10-b1af-2e7b80d9ad14 to /dev/vdb#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.489 247403 DEBUG os_brick.utils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.493 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.504 254621 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.504 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[3386b50b-3217-47bc-8b44-f9a9650ebf28]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.507 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.518 254621 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.518 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[869fff2c-211b-4017-8fd9-7ce04bf4b6ce]: (4, ('InitiatorName=iqn.1994-05.com.redhat:9d6ad6ddc476', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.520 254621 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.528 254621 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.529 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[46152037-7517-403c-93a5-55f88fcdaf07]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.532 254621 DEBUG oslo.privsep.daemon [-] privsep: reply[52c1525c-f747-4bac-8d3b-a8649ee1656f]: (4, '4e415482-4f51-40cd-acc4-a0d3058a31bb') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.533 247403 DEBUG oslo_concurrency.processutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.567 247403 DEBUG oslo_concurrency.processutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.569 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.570 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.570 247403 DEBUG os_brick.initiator.connectors.lightos [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.570 247403 DEBUG os_brick.utils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:9d6ad6ddc476', 'do_local_attach': False, 'nvme_hostid': '0156c751-d05d-449e-959d-30f482d5b822', 'system uuid': '4e415482-4f51-40cd-acc4-a0d3058a31bb', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:0156c751-d05d-449e-959d-30f482d5b822', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 31 04:17:11 np0005603621 nova_compute[247399]: 2026-01-31 09:17:11.571 247403 DEBUG nova.virt.block_device [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating existing volume attachment record: a1d2d07f-2336-4c5f-8374-d3dd56faf296 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 31 04:17:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.498 247403 DEBUG nova.objects.instance [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lazy-loading 'flavor' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.521 247403 DEBUG nova.virt.libvirt.driver [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Attempting to attach volume bb545a69-3527-4e10-b1af-2e7b80d9ad14 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.523 247403 DEBUG nova.virt.libvirt.guest [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] attach device xml: <disk type="network" device="disk">
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-bb545a69-3527-4e10-b1af-2e7b80d9ad14">
Jan 31 04:17:12 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  </source>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  <auth username="openstack">
Jan 31 04:17:12 np0005603621 nova_compute[247399]:    <secret type="ceph" uuid="2f5ab832-5f2e-5a84-bd93-cf8bab960ee2"/>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  </auth>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 04:17:12 np0005603621 nova_compute[247399]:  <serial>bb545a69-3527-4e10-b1af-2e7b80d9ad14</serial>
Jan 31 04:17:12 np0005603621 nova_compute[247399]: </disk>
Jan 31 04:17:12 np0005603621 nova_compute[247399]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 31 04:17:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.645 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.714 247403 DEBUG nova.virt.libvirt.driver [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.715 247403 DEBUG nova.virt.libvirt.driver [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.715 247403 DEBUG nova.virt.libvirt.driver [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.716 247403 DEBUG nova.virt.libvirt.driver [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No VIF found with MAC fa:16:3e:b8:fa:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 31 04:17:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:12 np0005603621 nova_compute[247399]: 2026-01-31 09:17:12.948 247403 DEBUG oslo_concurrency.lockutils [None req-fe2b8430-1109-42cb-8948-076d68462b3e 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 31 04:17:13 np0005603621 nova_compute[247399]: 2026-01-31 09:17:13.842 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:17:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3706872547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:17:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:17:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3706872547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:17:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:14.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:15 np0005603621 nova_compute[247399]: 2026-01-31 09:17:15.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 200 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 319 KiB/s rd, 1024 KiB/s wr, 51 op/s
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.273 247403 DEBUG oslo_concurrency.lockutils [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.274 247403 DEBUG oslo_concurrency.lockutils [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.288 247403 INFO nova.compute.manager [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Detaching volume bb545a69-3527-4e10-b1af-2e7b80d9ad14#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.407 247403 INFO nova.virt.block_device [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Attempting to driver detach volume bb545a69-3527-4e10-b1af-2e7b80d9ad14 from mountpoint /dev/vdb#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.415 247403 DEBUG nova.virt.libvirt.driver [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Attempting to detach device vdb from instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.416 247403 DEBUG nova.virt.libvirt.guest [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-bb545a69-3527-4e10-b1af-2e7b80d9ad14">
Jan 31 04:17:16 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  </source>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <serial>bb545a69-3527-4e10-b1af-2e7b80d9ad14</serial>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]: </disk>
Jan 31 04:17:16 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.424 247403 INFO nova.virt.libvirt.driver [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Successfully detached device vdb from instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 from the persistent domain config.#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.424 247403 DEBUG nova.virt.libvirt.driver [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.425 247403 DEBUG nova.virt.libvirt.guest [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <source protocol="rbd" name="volumes/volume-bb545a69-3527-4e10-b1af-2e7b80d9ad14">
Jan 31 04:17:16 np0005603621 nova_compute[247399]:    <host name="192.168.122.100" port="6789"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:    <host name="192.168.122.102" port="6789"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:    <host name="192.168.122.101" port="6789"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  </source>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <target dev="vdb" bus="virtio"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <serial>bb545a69-3527-4e10-b1af-2e7b80d9ad14</serial>
Jan 31 04:17:16 np0005603621 nova_compute[247399]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 31 04:17:16 np0005603621 nova_compute[247399]: </disk>
Jan 31 04:17:16 np0005603621 nova_compute[247399]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 31 04:17:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:16.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.619 247403 DEBUG nova.virt.libvirt.driver [None req-d0f11513-b7eb-426b-ae9a-c1d3e28dfec0 - - - - - -] Received event <DeviceRemovedEvent: 1769851036.619051, 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.620 247403 DEBUG nova.virt.libvirt.driver [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.622 247403 INFO nova.virt.libvirt.driver [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Successfully detached device vdb from instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 from the live domain config.#033[00m
Jan 31 04:17:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:16.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.768 247403 DEBUG nova.objects.instance [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lazy-loading 'flavor' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:17:16 np0005603621 nova_compute[247399]: 2026-01-31 09:17:16.805 247403 DEBUG oslo_concurrency.lockutils [None req-b77fa3f3-80a2-404e-b482-7d0ab049877d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 177 KiB/s rd, 322 KiB/s wr, 31 op/s
Jan 31 04:17:17 np0005603621 nova_compute[247399]: 2026-01-31 09:17:17.695 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Jan 31 04:17:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Jan 31 04:17:18 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Jan 31 04:17:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:18.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:18.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:18 np0005603621 nova_compute[247399]: 2026-01-31 09:17:18.845 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:19 np0005603621 nova_compute[247399]: 2026-01-31 09:17:19.448 247403 DEBUG nova.compute.manager [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:17:19 np0005603621 nova_compute[247399]: 2026-01-31 09:17:19.493 247403 INFO nova.compute.manager [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] instance snapshotting#033[00m
Jan 31 04:17:19 np0005603621 podman[412359]: 2026-01-31 09:17:19.502103579 +0000 UTC m=+0.053371306 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:17:19 np0005603621 podman[412360]: 2026-01-31 09:17:19.516364009 +0000 UTC m=+0.070479116 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 31 04:17:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.9 KiB/s rd, 245 KiB/s wr, 12 op/s
Jan 31 04:17:19 np0005603621 nova_compute[247399]: 2026-01-31 09:17:19.732 247403 INFO nova.virt.libvirt.driver [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Beginning live snapshot process#033[00m
Jan 31 04:17:19 np0005603621 nova_compute[247399]: 2026-01-31 09:17:19.921 247403 DEBUG nova.virt.libvirt.imagebackend [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] No parent info for 37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 31 04:17:20 np0005603621 nova_compute[247399]: 2026-01-31 09:17:20.119 247403 DEBUG nova.storage.rbd_utils [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] creating snapshot(3ff1d49ad53b4959b4cbe468366fb64a) on rbd image(4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 04:17:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:20.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:20.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Jan 31 04:17:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Jan 31 04:17:21 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Jan 31 04:17:21 np0005603621 nova_compute[247399]: 2026-01-31 09:17:21.242 247403 DEBUG nova.storage.rbd_utils [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] cloning vms/4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk@3ff1d49ad53b4959b4cbe468366fb64a to images/0d074876-a373-4784-8303-5b3716508074 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 31 04:17:21 np0005603621 nova_compute[247399]: 2026-01-31 09:17:21.387 247403 DEBUG nova.storage.rbd_utils [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] flattening images/0d074876-a373-4784-8303-5b3716508074 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 31 04:17:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 306 KiB/s wr, 15 op/s
Jan 31 04:17:22 np0005603621 nova_compute[247399]: 2026-01-31 09:17:22.256 247403 DEBUG nova.storage.rbd_utils [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] removing snapshot(3ff1d49ad53b4959b4cbe468366fb64a) on rbd image(4e6fd1c3-3988-4a7f-a30d-b599226c25a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 31 04:17:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:22.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:22 np0005603621 nova_compute[247399]: 2026-01-31 09:17:22.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:22.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Jan 31 04:17:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Jan 31 04:17:23 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Jan 31 04:17:23 np0005603621 nova_compute[247399]: 2026-01-31 09:17:23.307 247403 DEBUG nova.storage.rbd_utils [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] creating snapshot(snap) on rbd image(0d074876-a373-4784-8303-5b3716508074) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 31 04:17:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 226 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.8 MiB/s wr, 88 op/s
Jan 31 04:17:23 np0005603621 nova_compute[247399]: 2026-01-31 09:17:23.847 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:24 np0005603621 nova_compute[247399]: 2026-01-31 09:17:24.217 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Jan 31 04:17:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Jan 31 04:17:24 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Jan 31 04:17:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:24.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:24 np0005603621 ovn_controller[149152]: 2026-01-31T09:17:24Z|00932|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 31 04:17:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:24.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 275 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.1 MiB/s wr, 157 op/s
Jan 31 04:17:25 np0005603621 nova_compute[247399]: 2026-01-31 09:17:25.812 247403 INFO nova.virt.libvirt.driver [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Snapshot image upload complete#033[00m
Jan 31 04:17:25 np0005603621 nova_compute[247399]: 2026-01-31 09:17:25.812 247403 INFO nova.compute.manager [None req-1795bc7a-433e-40b4-80be-e0f4b19f3e2d 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Took 6.32 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 31 04:17:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:26.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:26.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.2 MiB/s wr, 168 op/s
Jan 31 04:17:27 np0005603621 nova_compute[247399]: 2026-01-31 09:17:27.699 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Jan 31 04:17:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Jan 31 04:17:28 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Jan 31 04:17:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:28.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:28 np0005603621 nova_compute[247399]: 2026-01-31 09:17:28.849 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:17:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2ce9a151-9a58-49ab-b0da-74a6be0f1b83 does not exist
Jan 31 04:17:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fac6572b-9803-4175-9b0a-4d29b0949135 does not exist
Jan 31 04:17:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c0632053-f8c7-46f4-8bfd-fbbc7b8826b9 does not exist
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:17:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 125 op/s
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:17:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.861890815 +0000 UTC m=+0.036323508 container create 078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:17:29 np0005603621 systemd[1]: Started libpod-conmon-078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3.scope.
Jan 31 04:17:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.937509341 +0000 UTC m=+0.111942054 container init 078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ptolemy, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.846867931 +0000 UTC m=+0.021300654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.944385849 +0000 UTC m=+0.118818542 container start 078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.947375953 +0000 UTC m=+0.121808646 container attach 078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:29 np0005603621 interesting_ptolemy[412889]: 167 167
Jan 31 04:17:29 np0005603621 systemd[1]: libpod-078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3.scope: Deactivated successfully.
Jan 31 04:17:29 np0005603621 conmon[412889]: conmon 078ca47ea94ff4aff345 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3.scope/container/memory.events
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.951457021 +0000 UTC m=+0.125889714 container died 078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ptolemy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:17:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c60ddd57498988743b9949f1175703f597844ae6c888aa70f3e3bc14b61073c6-merged.mount: Deactivated successfully.
Jan 31 04:17:29 np0005603621 podman[412873]: 2026-01-31 09:17:29.987271483 +0000 UTC m=+0.161704176 container remove 078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 04:17:30 np0005603621 systemd[1]: libpod-conmon-078ca47ea94ff4aff345cfdb4902886a3d7b0b1dd5371225f1c5f29b3299e0a3.scope: Deactivated successfully.
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.10981842 +0000 UTC m=+0.037957669 container create 574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 04:17:30 np0005603621 systemd[1]: Started libpod-conmon-574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5.scope.
Jan 31 04:17:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:17:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2cfa55656c93dea26a6884cfcc60978ea314d7fb0a95b59a602038c4024964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2cfa55656c93dea26a6884cfcc60978ea314d7fb0a95b59a602038c4024964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2cfa55656c93dea26a6884cfcc60978ea314d7fb0a95b59a602038c4024964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2cfa55656c93dea26a6884cfcc60978ea314d7fb0a95b59a602038c4024964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d2cfa55656c93dea26a6884cfcc60978ea314d7fb0a95b59a602038c4024964/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.160687326 +0000 UTC m=+0.088826595 container init 574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.166121677 +0000 UTC m=+0.094260926 container start 574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.168853673 +0000 UTC m=+0.096992942 container attach 574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.096434238 +0000 UTC m=+0.024573507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:17:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:17:30.559 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:17:30.560 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:17:30.560 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:30.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:30 np0005603621 elastic_matsumoto[412927]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:17:30 np0005603621 elastic_matsumoto[412927]: --> relative data size: 1.0
Jan 31 04:17:30 np0005603621 elastic_matsumoto[412927]: --> All data devices are unavailable
Jan 31 04:17:30 np0005603621 systemd[1]: libpod-574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5.scope: Deactivated successfully.
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.920865707 +0000 UTC m=+0.849004946 container died 574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9d2cfa55656c93dea26a6884cfcc60978ea314d7fb0a95b59a602038c4024964-merged.mount: Deactivated successfully.
Jan 31 04:17:30 np0005603621 podman[412911]: 2026-01-31 09:17:30.970663879 +0000 UTC m=+0.898803128 container remove 574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_matsumoto, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:30 np0005603621 systemd[1]: libpod-conmon-574ecf9390fa1ca142066676cc5f3533ee7f6e91060ab87edb9c86540de928c5.scope: Deactivated successfully.
Jan 31 04:17:31 np0005603621 nova_compute[247399]: 2026-01-31 09:17:31.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.533888405 +0000 UTC m=+0.036524144 container create 190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:17:31 np0005603621 systemd[1]: Started libpod-conmon-190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd.scope.
Jan 31 04:17:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.605165535 +0000 UTC m=+0.107801294 container init 190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:17:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.8 MiB/s wr, 99 op/s
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.611860716 +0000 UTC m=+0.114496445 container start 190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.519614025 +0000 UTC m=+0.022249794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.615301185 +0000 UTC m=+0.117936974 container attach 190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:17:31 np0005603621 sharp_joliot[413111]: 167 167
Jan 31 04:17:31 np0005603621 systemd[1]: libpod-190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd.scope: Deactivated successfully.
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.616949566 +0000 UTC m=+0.119585305 container died 190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:17:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0ba2b54eb88b78a687f423227517f94f08140e22d8c7aeaaf197fd19700162c2-merged.mount: Deactivated successfully.
Jan 31 04:17:31 np0005603621 podman[413095]: 2026-01-31 09:17:31.649927828 +0000 UTC m=+0.152563567 container remove 190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:17:31 np0005603621 systemd[1]: libpod-conmon-190c9b8b87d4feba7e095c2df68d4124b10632297482948eb833c693a5aa4ecd.scope: Deactivated successfully.
Jan 31 04:17:31 np0005603621 podman[413135]: 2026-01-31 09:17:31.760214468 +0000 UTC m=+0.030702940 container create 78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:17:31 np0005603621 systemd[1]: Started libpod-conmon-78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7.scope.
Jan 31 04:17:31 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:17:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42d77b9170e8051f1b9411c9574ad4c7bb7ecd4b1f57b6e1a92dbabba38b4e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42d77b9170e8051f1b9411c9574ad4c7bb7ecd4b1f57b6e1a92dbabba38b4e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42d77b9170e8051f1b9411c9574ad4c7bb7ecd4b1f57b6e1a92dbabba38b4e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:31 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42d77b9170e8051f1b9411c9574ad4c7bb7ecd4b1f57b6e1a92dbabba38b4e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:31 np0005603621 podman[413135]: 2026-01-31 09:17:31.813322974 +0000 UTC m=+0.083811486 container init 78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:17:31 np0005603621 podman[413135]: 2026-01-31 09:17:31.818921351 +0000 UTC m=+0.089409823 container start 78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:31 np0005603621 podman[413135]: 2026-01-31 09:17:31.822123962 +0000 UTC m=+0.092612494 container attach 78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:17:31 np0005603621 podman[413135]: 2026-01-31 09:17:31.7469576 +0000 UTC m=+0.017446072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]: {
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:    "0": [
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:        {
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "devices": [
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "/dev/loop3"
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            ],
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "lv_name": "ceph_lv0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "lv_size": "7511998464",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "name": "ceph_lv0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "tags": {
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.cluster_name": "ceph",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.crush_device_class": "",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.encrypted": "0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.osd_id": "0",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.type": "block",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:                "ceph.vdo": "0"
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            },
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "type": "block",
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:            "vg_name": "ceph_vg0"
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:        }
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]:    ]
Jan 31 04:17:32 np0005603621 relaxed_shtern[413151]: }
Jan 31 04:17:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:17:32.544 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:17:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:17:32.545 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:17:32 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:17:32.547 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:17:32 np0005603621 systemd[1]: libpod-78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7.scope: Deactivated successfully.
Jan 31 04:17:32 np0005603621 nova_compute[247399]: 2026-01-31 09:17:32.577 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:32 np0005603621 podman[413135]: 2026-01-31 09:17:32.578243176 +0000 UTC m=+0.848731648 container died 78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 04:17:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:32.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:32 np0005603621 nova_compute[247399]: 2026-01-31 09:17:32.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:32.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:32 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:17:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c42d77b9170e8051f1b9411c9574ad4c7bb7ecd4b1f57b6e1a92dbabba38b4e0-merged.mount: Deactivated successfully.
Jan 31 04:17:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:33 np0005603621 podman[413135]: 2026-01-31 09:17:33.14363137 +0000 UTC m=+1.414119842 container remove 78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 04:17:33 np0005603621 systemd[1]: libpod-conmon-78b9a90e6964dfee2b725b856bf32a35eea6faa15bf11f10b6ec83158ad863c7.scope: Deactivated successfully.
Jan 31 04:17:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 448 KiB/s wr, 50 op/s
Jan 31 04:17:33 np0005603621 podman[413315]: 2026-01-31 09:17:33.620329776 +0000 UTC m=+0.021443108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:17:33 np0005603621 nova_compute[247399]: 2026-01-31 09:17:33.851 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:34 np0005603621 podman[413315]: 2026-01-31 09:17:34.112550401 +0000 UTC m=+0.513663713 container create 39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:17:34 np0005603621 systemd[1]: Started libpod-conmon-39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091.scope.
Jan 31 04:17:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:17:34 np0005603621 podman[413315]: 2026-01-31 09:17:34.28077214 +0000 UTC m=+0.681885462 container init 39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:34 np0005603621 podman[413315]: 2026-01-31 09:17:34.287707618 +0000 UTC m=+0.688820930 container start 39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:17:34 np0005603621 vigorous_carver[413331]: 167 167
Jan 31 04:17:34 np0005603621 systemd[1]: libpod-39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091.scope: Deactivated successfully.
Jan 31 04:17:34 np0005603621 podman[413315]: 2026-01-31 09:17:34.291503239 +0000 UTC m=+0.692616571 container attach 39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:17:34 np0005603621 conmon[413331]: conmon 39da86c315f32964510d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091.scope/container/memory.events
Jan 31 04:17:34 np0005603621 podman[413315]: 2026-01-31 09:17:34.29218833 +0000 UTC m=+0.693301642 container died 39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 04:17:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-12b8f5efddbcfff66f52cb0d3f13a6ba912932e27554995ff32f7e8b7b836695-merged.mount: Deactivated successfully.
Jan 31 04:17:34 np0005603621 podman[413315]: 2026-01-31 09:17:34.326511134 +0000 UTC m=+0.727624436 container remove 39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_carver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 04:17:34 np0005603621 systemd[1]: libpod-conmon-39da86c315f32964510d94a7d9c1b92a58d574e6f1e821fb808b440dae2f7091.scope: Deactivated successfully.
Jan 31 04:17:34 np0005603621 podman[413355]: 2026-01-31 09:17:34.441614336 +0000 UTC m=+0.033400075 container create 4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:17:34 np0005603621 systemd[1]: Started libpod-conmon-4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6.scope.
Jan 31 04:17:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:17:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c804cfa47c75b64dc5cc3b6c701f9e0e16ab5f61edec13f8d1e81682e233d60f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c804cfa47c75b64dc5cc3b6c701f9e0e16ab5f61edec13f8d1e81682e233d60f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c804cfa47c75b64dc5cc3b6c701f9e0e16ab5f61edec13f8d1e81682e233d60f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c804cfa47c75b64dc5cc3b6c701f9e0e16ab5f61edec13f8d1e81682e233d60f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:17:34 np0005603621 podman[413355]: 2026-01-31 09:17:34.512788973 +0000 UTC m=+0.104574732 container init 4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 04:17:34 np0005603621 podman[413355]: 2026-01-31 09:17:34.517662146 +0000 UTC m=+0.109447885 container start 4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:17:34 np0005603621 podman[413355]: 2026-01-31 09:17:34.522719887 +0000 UTC m=+0.114505656 container attach 4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:17:34 np0005603621 podman[413355]: 2026-01-31 09:17:34.426786859 +0000 UTC m=+0.018572618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:17:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:34.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:34.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:35 np0005603621 nova_compute[247399]: 2026-01-31 09:17:35.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]: {
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:        "osd_id": 0,
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:        "type": "bluestore"
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]:    }
Jan 31 04:17:35 np0005603621 gracious_hodgkin[413372]: }
Jan 31 04:17:35 np0005603621 systemd[1]: libpod-4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6.scope: Deactivated successfully.
Jan 31 04:17:35 np0005603621 podman[413355]: 2026-01-31 09:17:35.272674376 +0000 UTC m=+0.864460135 container died 4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:17:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c804cfa47c75b64dc5cc3b6c701f9e0e16ab5f61edec13f8d1e81682e233d60f-merged.mount: Deactivated successfully.
Jan 31 04:17:35 np0005603621 podman[413355]: 2026-01-31 09:17:35.312620376 +0000 UTC m=+0.904406115 container remove 4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:17:35 np0005603621 systemd[1]: libpod-conmon-4d7a30ac62b7ed68f171edf3fe157ace23a242d72a1d238ec238b73ee4168ea6.scope: Deactivated successfully.
Jan 31 04:17:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:17:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:17:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:17:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:17:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0e49c914-82a7-4c97-a62a-b4c66b27efb6 does not exist
Jan 31 04:17:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e1d25263-9fdb-4d8d-9df8-9d2bc2ffcfda does not exist
Jan 31 04:17:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1d88771f-bf30-4b22-a2c2-f4fec6a27414 does not exist
Jan 31 04:17:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 418 KiB/s wr, 65 op/s
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:17:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:17:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.422 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.423 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.423 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:17:36 np0005603621 nova_compute[247399]: 2026-01-31 09:17:36.423 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:17:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:36.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:36.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 2.2 KiB/s wr, 51 op/s
Jan 31 04:17:37 np0005603621 nova_compute[247399]: 2026-01-31 09:17:37.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.166 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.180 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.180 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.181 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.199 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.200 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.200 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.200 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:17:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:38.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:17:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647535902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.670 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:17:38
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 31 04:17:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.743 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.743 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:17:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.854 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.867 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.868 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3818MB free_disk=20.942646026611328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.868 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.869 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.935 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.936 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.936 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:17:38 np0005603621 nova_compute[247399]: 2026-01-31 09:17:38.978 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:17:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:17:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/266167338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.410 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.414 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.437 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.484 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.485 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.503 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:39 np0005603621 nova_compute[247399]: 2026-01-31 09:17:39.504 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:17:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.9 KiB/s wr, 45 op/s
Jan 31 04:17:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:40.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:41 np0005603621 nova_compute[247399]: 2026-01-31 09:17:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 938 B/s wr, 27 op/s
Jan 31 04:17:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:42.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:42 np0005603621 nova_compute[247399]: 2026-01-31 09:17:42.702 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:43 np0005603621 nova_compute[247399]: 2026-01-31 09:17:43.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 475 KiB/s rd, 13 KiB/s wr, 50 op/s
Jan 31 04:17:43 np0005603621 nova_compute[247399]: 2026-01-31 09:17:43.856 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:44 np0005603621 nova_compute[247399]: 2026-01-31 09:17:44.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:44.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:44.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 88 op/s
Jan 31 04:17:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:46.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 04:17:47 np0005603621 nova_compute[247399]: 2026-01-31 09:17:47.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:48.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:48.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:48 np0005603621 nova_compute[247399]: 2026-01-31 09:17:48.859 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:49 np0005603621 nova_compute[247399]: 2026-01-31 09:17:49.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021812258631118556 of space, bias 1.0, pg target 0.6543677589335567 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0022579259142936907 of space, bias 1.0, pg target 0.6773777742881072 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004069646554531919 of space, bias 1.0, pg target 1.2208939663595757 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00032606609436069235 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:17:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 31 04:17:50 np0005603621 podman[413557]: 2026-01-31 09:17:50.509923181 +0000 UTC m=+0.063583568 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:17:50 np0005603621 podman[413558]: 2026-01-31 09:17:50.557836313 +0000 UTC m=+0.104940644 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 04:17:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:50.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 04:17:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:52.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:52 np0005603621 nova_compute[247399]: 2026-01-31 09:17:52.742 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 31 04:17:53 np0005603621 nova_compute[247399]: 2026-01-31 09:17:53.861 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 289 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 214 KiB/s wr, 75 op/s
Jan 31 04:17:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:56.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 500 KiB/s wr, 58 op/s
Jan 31 04:17:57 np0005603621 nova_compute[247399]: 2026-01-31 09:17:57.776 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:17:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:17:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:17:58.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:17:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:17:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:17:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:17:58.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:17:58 np0005603621 nova_compute[247399]: 2026-01-31 09:17:58.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:17:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 510 KiB/s wr, 55 op/s
Jan 31 04:18:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:00.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 295 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 509 KiB/s wr, 54 op/s
Jan 31 04:18:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:02.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:02 np0005603621 nova_compute[247399]: 2026-01-31 09:18:02.778 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:02.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 56 op/s
Jan 31 04:18:04 np0005603621 nova_compute[247399]: 2026-01-31 09:18:04.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:04.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:04.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 56 op/s
Jan 31 04:18:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:06.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:06.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 299 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 566 KiB/s rd, 334 KiB/s wr, 36 op/s
Jan 31 04:18:07 np0005603621 nova_compute[247399]: 2026-01-31 09:18:07.812 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:18:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:18:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:08.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:08.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:09 np0005603621 nova_compute[247399]: 2026-01-31 09:18:09.018 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 220 KiB/s rd, 236 KiB/s wr, 5 op/s
Jan 31 04:18:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:10.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 218 KiB/s rd, 227 KiB/s wr, 4 op/s
Jan 31 04:18:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:12.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:12 np0005603621 nova_compute[247399]: 2026-01-31 09:18:12.840 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 219 KiB/s rd, 235 KiB/s wr, 7 op/s
Jan 31 04:18:14 np0005603621 nova_compute[247399]: 2026-01-31 09:18:14.021 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 6 op/s
Jan 31 04:18:16 np0005603621 ovn_controller[149152]: 2026-01-31T09:18:16Z|00933|memory_trim|INFO|Detected inactivity (last active 30027 ms ago): trimming memory
Jan 31 04:18:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Jan 31 04:18:17 np0005603621 nova_compute[247399]: 2026-01-31 09:18:17.890 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:18.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:19 np0005603621 nova_compute[247399]: 2026-01-31 09:18:19.059 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 186 KiB/s rd, 199 KiB/s wr, 5 op/s
Jan 31 04:18:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:20.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:20.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:21 np0005603621 podman[413668]: 2026-01-31 09:18:21.528059744 +0000 UTC m=+0.086450640 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 31 04:18:21 np0005603621 podman[413667]: 2026-01-31 09:18:21.529102627 +0000 UTC m=+0.088570846 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Jan 31 04:18:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 KiB/s rd, 9.5 KiB/s wr, 3 op/s
Jan 31 04:18:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:22.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:22.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:22 np0005603621 nova_compute[247399]: 2026-01-31 09:18:22.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 302 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 197 KiB/s rd, 9.6 KiB/s wr, 8 op/s
Jan 31 04:18:24 np0005603621 nova_compute[247399]: 2026-01-31 09:18:24.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:24.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 304 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 326 KiB/s rd, 174 KiB/s wr, 13 op/s
Jan 31 04:18:26 np0005603621 nova_compute[247399]: 2026-01-31 09:18:26.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:26.439 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:18:26 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:26.439 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:18:26 np0005603621 nova_compute[247399]: 2026-01-31 09:18:26.440 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:26.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 304 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 174 KiB/s wr, 16 op/s
Jan 31 04:18:27 np0005603621 nova_compute[247399]: 2026-01-31 09:18:27.894 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:28.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:28.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:29 np0005603621 nova_compute[247399]: 2026-01-31 09:18:29.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:29 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:29.441 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:18:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 293 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 391 KiB/s rd, 174 KiB/s wr, 20 op/s
Jan 31 04:18:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:30.560 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:30.560 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:30.560 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:30.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 293 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 174 KiB/s wr, 20 op/s
Jan 31 04:18:32 np0005603621 nova_compute[247399]: 2026-01-31 09:18:32.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:32.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:32 np0005603621 nova_compute[247399]: 2026-01-31 09:18:32.895 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1689991938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 283 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 405 KiB/s rd, 174 KiB/s wr, 39 op/s
Jan 31 04:18:34 np0005603621 nova_compute[247399]: 2026-01-31 09:18:34.066 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:34.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:34.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:35 np0005603621 nova_compute[247399]: 2026-01-31 09:18:35.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 222 KiB/s rd, 175 KiB/s wr, 51 op/s
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:18:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cafc7e38-c0f4-476c-bc32-4a36ffa5d6b0 does not exist
Jan 31 04:18:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7f91dc56-7f95-429d-ae25-ca59984b606e does not exist
Jan 31 04:18:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8aaf1d57-04d7-433b-824d-55d388fcf8cb does not exist
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:18:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:36.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:36.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:18:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:18:36 np0005603621 podman[414038]: 2026-01-31 09:18:36.822413212 +0000 UTC m=+0.017266946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:18:37 np0005603621 podman[414038]: 2026-01-31 09:18:37.004401066 +0000 UTC m=+0.199254770 container create af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elbakyan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:18:37 np0005603621 systemd[1]: Started libpod-conmon-af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f.scope.
Jan 31 04:18:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Jan 31 04:18:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:18:37 np0005603621 podman[414038]: 2026-01-31 09:18:37.552975649 +0000 UTC m=+0.747829383 container init af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elbakyan, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:18:37 np0005603621 podman[414038]: 2026-01-31 09:18:37.558806923 +0000 UTC m=+0.753660637 container start af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elbakyan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:18:37 np0005603621 xenodochial_elbakyan[414054]: 167 167
Jan 31 04:18:37 np0005603621 systemd[1]: libpod-af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f.scope: Deactivated successfully.
Jan 31 04:18:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Jan 31 04:18:37 np0005603621 podman[414038]: 2026-01-31 09:18:37.569990506 +0000 UTC m=+0.764844230 container attach af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:18:37 np0005603621 podman[414038]: 2026-01-31 09:18:37.570995598 +0000 UTC m=+0.765849342 container died af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:18:37 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Jan 31 04:18:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Jan 31 04:18:37 np0005603621 nova_compute[247399]: 2026-01-31 09:18:37.927 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-51458b1359eb9d75a36918ee2edf76daf455b2afc368d81192164f893e6b6c7e-merged.mount: Deactivated successfully.
Jan 31 04:18:38 np0005603621 nova_compute[247399]: 2026-01-31 09:18:38.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:38 np0005603621 nova_compute[247399]: 2026-01-31 09:18:38.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:18:38 np0005603621 nova_compute[247399]: 2026-01-31 09:18:38.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:18:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:38.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:38 np0005603621 podman[414038]: 2026-01-31 09:18:38.714644383 +0000 UTC m=+1.909498097 container remove af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:18:38
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms']
Jan 31 04:18:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:18:38 np0005603621 systemd[1]: libpod-conmon-af31b981a2a061ac67f15960300b477a08505db48eb8aac21218e0de32d7be8f.scope: Deactivated successfully.
Jan 31 04:18:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:38.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:38 np0005603621 podman[414079]: 2026-01-31 09:18:38.822139006 +0000 UTC m=+0.020179948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:18:38 np0005603621 podman[414079]: 2026-01-31 09:18:38.981179495 +0000 UTC m=+0.179220417 container create f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:18:39 np0005603621 nova_compute[247399]: 2026-01-31 09:18:39.102 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:39 np0005603621 systemd[1]: Started libpod-conmon-f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369.scope.
Jan 31 04:18:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:18:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c0f6c5764fe26c5158dce5cdcef66fa4a1364725f48f739b1fcf69bdceab82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c0f6c5764fe26c5158dce5cdcef66fa4a1364725f48f739b1fcf69bdceab82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c0f6c5764fe26c5158dce5cdcef66fa4a1364725f48f739b1fcf69bdceab82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c0f6c5764fe26c5158dce5cdcef66fa4a1364725f48f739b1fcf69bdceab82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:39 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53c0f6c5764fe26c5158dce5cdcef66fa4a1364725f48f739b1fcf69bdceab82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:39 np0005603621 podman[414079]: 2026-01-31 09:18:39.202930184 +0000 UTC m=+0.400971126 container init f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:18:39 np0005603621 podman[414079]: 2026-01-31 09:18:39.208094187 +0000 UTC m=+0.406135109 container start f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:18:39 np0005603621 podman[414079]: 2026-01-31 09:18:39.250875867 +0000 UTC m=+0.448916789 container attach f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pasteur, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:18:39 np0005603621 nova_compute[247399]: 2026-01-31 09:18:39.580 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:18:39 np0005603621 nova_compute[247399]: 2026-01-31 09:18:39.580 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquired lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:18:39 np0005603621 nova_compute[247399]: 2026-01-31 09:18:39.581 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 31 04:18:39 np0005603621 nova_compute[247399]: 2026-01-31 09:18:39.581 247403 DEBUG nova.objects.instance [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:18:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 2.5 KiB/s wr, 61 op/s
Jan 31 04:18:39 np0005603621 silly_pasteur[414095]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:18:39 np0005603621 silly_pasteur[414095]: --> relative data size: 1.0
Jan 31 04:18:39 np0005603621 silly_pasteur[414095]: --> All data devices are unavailable
Jan 31 04:18:39 np0005603621 systemd[1]: libpod-f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369.scope: Deactivated successfully.
Jan 31 04:18:39 np0005603621 podman[414079]: 2026-01-31 09:18:39.929625489 +0000 UTC m=+1.127666421 container died f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pasteur, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:18:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-53c0f6c5764fe26c5158dce5cdcef66fa4a1364725f48f739b1fcf69bdceab82-merged.mount: Deactivated successfully.
Jan 31 04:18:40 np0005603621 podman[414079]: 2026-01-31 09:18:40.377123352 +0000 UTC m=+1.575164274 container remove f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_pasteur, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:18:40 np0005603621 systemd[1]: libpod-conmon-f889283bccf7809b3fae9d411f0974114c23e5b5baafecf9d5496328ec183369.scope: Deactivated successfully.
Jan 31 04:18:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:40.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:40.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:40 np0005603621 podman[414262]: 2026-01-31 09:18:40.832716072 +0000 UTC m=+0.016847153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:18:40 np0005603621 podman[414262]: 2026-01-31 09:18:40.994523189 +0000 UTC m=+0.178654250 container create 31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldwasser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:18:41 np0005603621 systemd[1]: Started libpod-conmon-31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a.scope.
Jan 31 04:18:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:18:41 np0005603621 podman[414262]: 2026-01-31 09:18:41.368373757 +0000 UTC m=+0.552504838 container init 31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldwasser, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 04:18:41 np0005603621 podman[414262]: 2026-01-31 09:18:41.374314805 +0000 UTC m=+0.558445856 container start 31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldwasser, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:18:41 np0005603621 systemd[1]: libpod-31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a.scope: Deactivated successfully.
Jan 31 04:18:41 np0005603621 distracted_goldwasser[414279]: 167 167
Jan 31 04:18:41 np0005603621 conmon[414279]: conmon 31d050a57dc4acfba154 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a.scope/container/memory.events
Jan 31 04:18:41 np0005603621 podman[414262]: 2026-01-31 09:18:41.542521783 +0000 UTC m=+0.726652844 container attach 31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 04:18:41 np0005603621 podman[414262]: 2026-01-31 09:18:41.542945977 +0000 UTC m=+0.727077028 container died 31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:18:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 2.5 KiB/s wr, 61 op/s
Jan 31 04:18:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9bb276638af044faa23316d768600fddd5645bda2803cb9159c17dfefe05a96b-merged.mount: Deactivated successfully.
Jan 31 04:18:42 np0005603621 podman[414262]: 2026-01-31 09:18:42.215201464 +0000 UTC m=+1.399332525 container remove 31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:18:42 np0005603621 systemd[1]: libpod-conmon-31d050a57dc4acfba15474eec2e2ad35fe7327d13781680958cc4ecbc377284a.scope: Deactivated successfully.
Jan 31 04:18:42 np0005603621 podman[414304]: 2026-01-31 09:18:42.382014219 +0000 UTC m=+0.081990848 container create c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:18:42 np0005603621 podman[414304]: 2026-01-31 09:18:42.320515978 +0000 UTC m=+0.020492607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:18:42 np0005603621 systemd[1]: Started libpod-conmon-c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855.scope.
Jan 31 04:18:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:18:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6dcf2d875c9a13c4443f399e6ce27f5881b1fc31c6b5e9d3161428341acb597/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6dcf2d875c9a13c4443f399e6ce27f5881b1fc31c6b5e9d3161428341acb597/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6dcf2d875c9a13c4443f399e6ce27f5881b1fc31c6b5e9d3161428341acb597/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6dcf2d875c9a13c4443f399e6ce27f5881b1fc31c6b5e9d3161428341acb597/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:42.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:42 np0005603621 podman[414304]: 2026-01-31 09:18:42.714348248 +0000 UTC m=+0.414324887 container init c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:18:42 np0005603621 podman[414304]: 2026-01-31 09:18:42.719516441 +0000 UTC m=+0.419493070 container start c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:18:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:42.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:42 np0005603621 nova_compute[247399]: 2026-01-31 09:18:42.929 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:43 np0005603621 podman[414304]: 2026-01-31 09:18:43.02113701 +0000 UTC m=+0.721113639 container attach c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:18:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]: {
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:    "0": [
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:        {
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "devices": [
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "/dev/loop3"
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            ],
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "lv_name": "ceph_lv0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "lv_size": "7511998464",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "name": "ceph_lv0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "tags": {
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.cluster_name": "ceph",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.crush_device_class": "",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.encrypted": "0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.osd_id": "0",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.type": "block",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:                "ceph.vdo": "0"
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            },
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "type": "block",
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:            "vg_name": "ceph_vg0"
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:        }
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]:    ]
Jan 31 04:18:43 np0005603621 nervous_lewin[414321]: }
Jan 31 04:18:43 np0005603621 systemd[1]: libpod-c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855.scope: Deactivated successfully.
Jan 31 04:18:43 np0005603621 podman[414304]: 2026-01-31 09:18:43.428599891 +0000 UTC m=+1.128576520 container died c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:18:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d6dcf2d875c9a13c4443f399e6ce27f5881b1fc31c6b5e9d3161428341acb597-merged.mount: Deactivated successfully.
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.548 247403 DEBUG nova.network.neutron [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:18:43 np0005603621 podman[414304]: 2026-01-31 09:18:43.552524741 +0000 UTC m=+1.252501370 container remove c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 31 04:18:43 np0005603621 systemd[1]: libpod-conmon-c736ecc94484957b4f20bfa7c51ae0885ecfd7c588401d1e336069cd8f38e855.scope: Deactivated successfully.
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.579 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Releasing lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.579 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.579 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.580 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.580 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.580 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.624 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.625 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.625 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.625 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:18:43 np0005603621 nova_compute[247399]: 2026-01-31 09:18:43.625 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:18:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 281 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.2 KiB/s wr, 38 op/s
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.025083816 +0000 UTC m=+0.033960092 container create 2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:18:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2291502575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.059 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:18:44 np0005603621 systemd[1]: Started libpod-conmon-2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04.scope.
Jan 31 04:18:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.099582358 +0000 UTC m=+0.108458664 container init 2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.104 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.009730922 +0000 UTC m=+0.018607228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.106379832 +0000 UTC m=+0.115256118 container start 2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:18:44 np0005603621 loving_ritchie[414520]: 167 167
Jan 31 04:18:44 np0005603621 systemd[1]: libpod-2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04.scope: Deactivated successfully.
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.110813222 +0000 UTC m=+0.119689588 container attach 2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.111153213 +0000 UTC m=+0.120029499 container died 2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:18:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-81fb2b120cde66c84e496ac5fb0433229b228fec197b17dc7b6b190b2be574a7-merged.mount: Deactivated successfully.
Jan 31 04:18:44 np0005603621 podman[414502]: 2026-01-31 09:18:44.148534452 +0000 UTC m=+0.157410748 container remove 2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:18:44 np0005603621 systemd[1]: libpod-conmon-2fd04ab0648ce0fe136330b4ce6e623d3fda9f2b611f8ec64e7ebae0916cef04.scope: Deactivated successfully.
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.162 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.163 247403 DEBUG nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 31 04:18:44 np0005603621 podman[414544]: 2026-01-31 09:18:44.282834651 +0000 UTC m=+0.039804617 container create 2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.302 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.303 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3808MB free_disk=20.94265365600586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.303 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.303 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:44 np0005603621 systemd[1]: Started libpod-conmon-2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6.scope.
Jan 31 04:18:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:18:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d5ecc3034609ea3458115802da911a8b155533beab8d3a4d3b75fd565015b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d5ecc3034609ea3458115802da911a8b155533beab8d3a4d3b75fd565015b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d5ecc3034609ea3458115802da911a8b155533beab8d3a4d3b75fd565015b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8d5ecc3034609ea3458115802da911a8b155533beab8d3a4d3b75fd565015b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:18:44 np0005603621 podman[414544]: 2026-01-31 09:18:44.264440301 +0000 UTC m=+0.021410277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:18:44 np0005603621 podman[414544]: 2026-01-31 09:18:44.363402184 +0000 UTC m=+0.120372160 container init 2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:18:44 np0005603621 podman[414544]: 2026-01-31 09:18:44.368360001 +0000 UTC m=+0.125329957 container start 2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rosalind, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 04:18:44 np0005603621 podman[414544]: 2026-01-31 09:18:44.371970575 +0000 UTC m=+0.128940531 container attach 2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rosalind, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.390 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.391 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.391 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.459 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:18:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353150672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.855 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.859 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:18:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.877 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.879 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:18:44 np0005603621 nova_compute[247399]: 2026-01-31 09:18:44.879 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]: {
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:        "osd_id": 0,
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:        "type": "bluestore"
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]:    }
Jan 31 04:18:45 np0005603621 jolly_rosalind[414562]: }
Jan 31 04:18:45 np0005603621 systemd[1]: libpod-2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6.scope: Deactivated successfully.
Jan 31 04:18:45 np0005603621 podman[414544]: 2026-01-31 09:18:45.156463074 +0000 UTC m=+0.913433020 container died 2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:18:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d8d5ecc3034609ea3458115802da911a8b155533beab8d3a4d3b75fd565015b5-merged.mount: Deactivated successfully.
Jan 31 04:18:45 np0005603621 podman[414544]: 2026-01-31 09:18:45.217870862 +0000 UTC m=+0.974840818 container remove 2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:18:45 np0005603621 systemd[1]: libpod-conmon-2ad27fecc0302cb9a2218ad76d0bad82fc25a94ffcec7272d2fe51e43adf6eb6.scope: Deactivated successfully.
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:18:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f53cb3a4-da3e-41b1-9533-8396e688f819 does not exist
Jan 31 04:18:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e970a06b-f79d-4b73-ab4d-54b9a279dda3 does not exist
Jan 31 04:18:45 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 40516b69-6df8-446b-8706-1f0cce23966d does not exist
Jan 31 04:18:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 232 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:18:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:18:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Jan 31 04:18:46 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Jan 31 04:18:46 np0005603621 nova_compute[247399]: 2026-01-31 09:18:46.498 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:46 np0005603621 nova_compute[247399]: 2026-01-31 09:18:46.498 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:18:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 2.0 KiB/s wr, 36 op/s
Jan 31 04:18:47 np0005603621 nova_compute[247399]: 2026-01-31 09:18:47.931 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Jan 31 04:18:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Jan 31 04:18:48 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Jan 31 04:18:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:48.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:48.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:49 np0005603621 nova_compute[247399]: 2026-01-31 09:18:49.106 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 32 op/s
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002173955716080402 of space, bias 1.0, pg target 0.6521867148241206 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0022573806532663315 of space, bias 1.0, pg target 0.6772141959798994 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:18:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.265 247403 DEBUG nova.compute.manager [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-changed-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.265 247403 DEBUG nova.compute.manager [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Refreshing instance network info cache due to event network-changed-6c698385-414a-410d-8bd1-082bba741f94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.266 247403 DEBUG oslo_concurrency.lockutils [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.266 247403 DEBUG oslo_concurrency.lockutils [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquired lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.266 247403 DEBUG nova.network.neutron [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Refreshing network info cache for port 6c698385-414a-410d-8bd1-082bba741f94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.433 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.434 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.434 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.434 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.434 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.435 247403 INFO nova.compute.manager [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Terminating instance#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.436 247403 DEBUG nova.compute.manager [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 31 04:18:50 np0005603621 kernel: tap6c698385-41 (unregistering): left promiscuous mode
Jan 31 04:18:50 np0005603621 NetworkManager[49013]: <info>  [1769851130.4843] device (tap6c698385-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 31 04:18:50 np0005603621 ovn_controller[149152]: 2026-01-31T09:18:50Z|00934|binding|INFO|Releasing lport 6c698385-414a-410d-8bd1-082bba741f94 from this chassis (sb_readonly=0)
Jan 31 04:18:50 np0005603621 ovn_controller[149152]: 2026-01-31T09:18:50Z|00935|binding|INFO|Setting lport 6c698385-414a-410d-8bd1-082bba741f94 down in Southbound
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.489 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 ovn_controller[149152]: 2026-01-31T09:18:50Z|00936|binding|INFO|Removing iface tap6c698385-41 ovn-installed in OVS
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.491 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.498 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.498 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:fa:d2 10.100.0.12'], port_security=['fa:16:3e:b8:fa:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4e6fd1c3-3988-4a7f-a30d-b599226c25a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '390709b3e5174dc4afdc6b04fdae67e3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '41fc052d-b084-4adc-a493-522d3d569e9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c4a3744-72a7-4358-ad8b-910d4ad4af10, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>], logical_port=6c698385-414a-410d-8bd1-082bba741f94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe29f010640>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.500 159734 INFO neutron.agent.ovn.metadata.agent [-] Port 6c698385-414a-410d-8bd1-082bba741f94 in datapath 3f3cc872-5825-455b-b8f4-03469e3aacf8 unbound from our chassis#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.501 159734 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f3cc872-5825-455b-b8f4-03469e3aacf8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.502 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[0d484a59-861f-43ac-9799-e9b37bfb78cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.502 159734 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8 namespace which is not needed anymore#033[00m
Jan 31 04:18:50 np0005603621 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000df.scope: Deactivated successfully.
Jan 31 04:18:50 np0005603621 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000df.scope: Consumed 17.094s CPU time.
Jan 31 04:18:50 np0005603621 systemd-machined[212769]: Machine qemu-106-instance-000000df terminated.
Jan 31 04:18:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [NOTICE]   (412212) : haproxy version is 2.8.14-c23fe91
Jan 31 04:18:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [NOTICE]   (412212) : path to executable is /usr/sbin/haproxy
Jan 31 04:18:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [WARNING]  (412212) : Exiting Master process...
Jan 31 04:18:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [WARNING]  (412212) : Exiting Master process...
Jan 31 04:18:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [ALERT]    (412212) : Current worker (412214) exited with code 143 (Terminated)
Jan 31 04:18:50 np0005603621 neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8[412208]: [WARNING]  (412212) : All workers exited. Exiting... (0)
Jan 31 04:18:50 np0005603621 systemd[1]: libpod-40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e.scope: Deactivated successfully.
Jan 31 04:18:50 np0005603621 podman[414742]: 2026-01-31 09:18:50.621023422 +0000 UTC m=+0.042875013 container died 40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Jan 31 04:18:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e-userdata-shm.mount: Deactivated successfully.
Jan 31 04:18:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4f6f779e57c7e45aa80399b630606eedf47aa45b1948d592ae34b212c35c70e6-merged.mount: Deactivated successfully.
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.651 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.654 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.664 247403 INFO nova.virt.libvirt.driver [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Instance destroyed successfully.#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.664 247403 DEBUG nova.objects.instance [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lazy-loading 'resources' on Instance uuid 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 31 04:18:50 np0005603621 podman[414742]: 2026-01-31 09:18:50.665080243 +0000 UTC m=+0.086931834 container cleanup 40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 31 04:18:50 np0005603621 systemd[1]: libpod-conmon-40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e.scope: Deactivated successfully.
Jan 31 04:18:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:50.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.684 247403 DEBUG nova.virt.libvirt.vif [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-31T09:16:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1860833772',display_name='tempest-TestStampPattern-server-1860833772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1860833772',id=223,image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD/aBEl4PBIoIshqTDNOjhhhoUeVGicNgguOr3MSdHgT0ltB1LqQrhXegMG9XeJiExk1ZCoew1VKJC1u0bcRchycTTDnBsbTgXLKYBMMmensD0uk0uwm2aHKUBZ2bnKdBA==',key_name='tempest-TestStampPattern-80801695',keypairs=<?>,launch_index=0,launched_at=2026-01-31T09:16:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='390709b3e5174dc4afdc6b04fdae67e3',ramdisk_id='',reservation_id='r-d5sq4vcz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='37c0ea6b-d0de-4997-aa0a-7a7de3dd2f16',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1434051857',owner_user_name='tempest-TestStampPattern-1434051857-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-31T09:17:25Z,user_data=None,user_id='4f25568607234a398bc35cbb67eb406f',uuid=4e6fd1c3-3988-4a7f-a30d-b599226c25a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.685 247403 DEBUG nova.network.os_vif_util [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Converting VIF {"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.686 247403 DEBUG nova.network.os_vif_util [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.686 247403 DEBUG os_vif [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.688 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.688 247403 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c698385-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.690 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.691 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.693 247403 INFO os_vif [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:fa:d2,bridge_name='br-int',has_traffic_filtering=True,id=6c698385-414a-410d-8bd1-082bba741f94,network=Network(3f3cc872-5825-455b-b8f4-03469e3aacf8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c698385-41')#033[00m
Jan 31 04:18:50 np0005603621 podman[414782]: 2026-01-31 09:18:50.729873998 +0000 UTC m=+0.046100376 container remove 40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.733 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[7796ed99-f3b7-4aef-9379-08d2ce7adc62]: (4, ('Sat Jan 31 09:18:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8 (40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e)\n40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e\nSat Jan 31 09:18:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8 (40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e)\n40616c1ffbd14ad5e8e5b56a4fc5bb7598eb5515631ce9af83f37b5105953a3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.735 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc80e5b-e8f8-40e0-a8bf-2bdb0f9c4973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.736 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f3cc872-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.738 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 kernel: tap3f3cc872-50: left promiscuous mode
Jan 31 04:18:50 np0005603621 nova_compute[247399]: 2026-01-31 09:18:50.742 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.745 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[655dbb14-43ca-468d-9b99-b582d3a70b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.759 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[b1671a6a-b9d3-4a84-a157-46faa35aa5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.760 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[8879e10e-a4e7-400a-a408-f5420ee15234]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.769 253234 DEBUG oslo.privsep.daemon [-] privsep: reply[5d243d26-9a06-4e62-9a91-b8770e8bd920]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1031550, 'reachable_time': 16021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414815, 'error': None, 'target': 'ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 systemd[1]: run-netns-ovnmeta\x2d3f3cc872\x2d5825\x2d455b\x2db8f4\x2d03469e3aacf8.mount: Deactivated successfully.
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.773 160056 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f3cc872-5825-455b-b8f4-03469e3aacf8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 31 04:18:50 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:18:50.773 160056 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcb6a17-ed5e-4d36-a7a3-9c62c5f56ede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 31 04:18:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:18:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:50.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:18:51 np0005603621 nova_compute[247399]: 2026-01-31 09:18:51.385 247403 INFO nova.virt.libvirt.driver [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Deleting instance files /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0_del#033[00m
Jan 31 04:18:51 np0005603621 nova_compute[247399]: 2026-01-31 09:18:51.386 247403 INFO nova.virt.libvirt.driver [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Deletion of /var/lib/nova/instances/4e6fd1c3-3988-4a7f-a30d-b599226c25a0_del complete#033[00m
Jan 31 04:18:51 np0005603621 nova_compute[247399]: 2026-01-31 09:18:51.463 247403 INFO nova.compute.manager [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Jan 31 04:18:51 np0005603621 nova_compute[247399]: 2026-01-31 09:18:51.464 247403 DEBUG oslo.service.loopingcall [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 31 04:18:51 np0005603621 nova_compute[247399]: 2026-01-31 09:18:51.464 247403 DEBUG nova.compute.manager [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 31 04:18:51 np0005603621 nova_compute[247399]: 2026-01-31 09:18:51.464 247403 DEBUG nova.network.neutron [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 31 04:18:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 202 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 31 op/s
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.030 247403 DEBUG nova.compute.manager [req-48193555-91c1-4042-8260-482c13995e0b req-c0728185-1d7d-4142-b315-de8b6b172774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-vif-unplugged-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.030 247403 DEBUG oslo_concurrency.lockutils [req-48193555-91c1-4042-8260-482c13995e0b req-c0728185-1d7d-4142-b315-de8b6b172774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.031 247403 DEBUG oslo_concurrency.lockutils [req-48193555-91c1-4042-8260-482c13995e0b req-c0728185-1d7d-4142-b315-de8b6b172774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.031 247403 DEBUG oslo_concurrency.lockutils [req-48193555-91c1-4042-8260-482c13995e0b req-c0728185-1d7d-4142-b315-de8b6b172774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.031 247403 DEBUG nova.compute.manager [req-48193555-91c1-4042-8260-482c13995e0b req-c0728185-1d7d-4142-b315-de8b6b172774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] No waiting events found dispatching network-vif-unplugged-6c698385-414a-410d-8bd1-082bba741f94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.032 247403 DEBUG nova.compute.manager [req-48193555-91c1-4042-8260-482c13995e0b req-c0728185-1d7d-4142-b315-de8b6b172774 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-vif-unplugged-6c698385-414a-410d-8bd1-082bba741f94 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 31 04:18:52 np0005603621 podman[414818]: 2026-01-31 09:18:52.490884447 +0000 UTC m=+0.048005156 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 31 04:18:52 np0005603621 podman[414819]: 2026-01-31 09:18:52.51252187 +0000 UTC m=+0.067855683 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.520 247403 DEBUG nova.network.neutron [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.551 247403 INFO nova.compute.manager [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Took 1.09 seconds to deallocate network for instance.#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.603 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.603 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.616 247403 DEBUG nova.compute.manager [req-b47b8e94-79c6-4c08-a342-cfc04ff0eaa7 req-2a43472b-8f70-40d0-975c-68e7fc8085cd fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-vif-deleted-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.662 247403 DEBUG oslo_concurrency.processutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:18:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:52.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:52 np0005603621 nova_compute[247399]: 2026-01-31 09:18:52.932 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:18:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3438237872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.064 247403 DEBUG oslo_concurrency.processutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.070 247403 DEBUG nova.compute.provider_tree [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.106 247403 DEBUG nova.scheduler.client.report [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:18:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.159 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.203 247403 INFO nova.scheduler.client.report [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Deleted allocations for instance 4e6fd1c3-3988-4a7f-a30d-b599226c25a0#033[00m
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.294 247403 DEBUG oslo_concurrency.lockutils [None req-e21fa234-ee9d-4df6-8cf8-a943fe4bbfa1 4f25568607234a398bc35cbb67eb406f 390709b3e5174dc4afdc6b04fdae67e3 - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.551 247403 DEBUG nova.network.neutron [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updated VIF entry in instance network info cache for port 6c698385-414a-410d-8bd1-082bba741f94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.552 247403 DEBUG nova.network.neutron [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Updating instance_info_cache with network_info: [{"id": "6c698385-414a-410d-8bd1-082bba741f94", "address": "fa:16:3e:b8:fa:d2", "network": {"id": "3f3cc872-5825-455b-b8f4-03469e3aacf8", "bridge": "br-int", "label": "tempest-TestStampPattern-915378736-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390709b3e5174dc4afdc6b04fdae67e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c698385-41", "ovs_interfaceid": "6c698385-414a-410d-8bd1-082bba741f94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 31 04:18:53 np0005603621 nova_compute[247399]: 2026-01-31 09:18:53.619 247403 DEBUG oslo_concurrency.lockutils [req-5aaaad34-4edc-4a31-bb60-5f827cbd1955 req-b9a441ed-dda4-425d-a888-a52602a50258 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Releasing lock "refresh_cache-4e6fd1c3-3988-4a7f-a30d-b599226c25a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 31 04:18:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 155 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 2.1 KiB/s wr, 52 op/s
Jan 31 04:18:53 np0005603621 ceph-mon[74394]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Jan 31 04:18:54 np0005603621 nova_compute[247399]: 2026-01-31 09:18:54.129 247403 DEBUG nova.compute.manager [req-01f6bac3-ced7-4364-82e9-a6f2a09ea1a2 req-c10bf4d0-8263-4666-965f-f37cf2aab467 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 31 04:18:54 np0005603621 nova_compute[247399]: 2026-01-31 09:18:54.130 247403 DEBUG oslo_concurrency.lockutils [req-01f6bac3-ced7-4364-82e9-a6f2a09ea1a2 req-c10bf4d0-8263-4666-965f-f37cf2aab467 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Acquiring lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:18:54 np0005603621 nova_compute[247399]: 2026-01-31 09:18:54.130 247403 DEBUG oslo_concurrency.lockutils [req-01f6bac3-ced7-4364-82e9-a6f2a09ea1a2 req-c10bf4d0-8263-4666-965f-f37cf2aab467 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:18:54 np0005603621 nova_compute[247399]: 2026-01-31 09:18:54.130 247403 DEBUG oslo_concurrency.lockutils [req-01f6bac3-ced7-4364-82e9-a6f2a09ea1a2 req-c10bf4d0-8263-4666-965f-f37cf2aab467 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] Lock "4e6fd1c3-3988-4a7f-a30d-b599226c25a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:18:54 np0005603621 nova_compute[247399]: 2026-01-31 09:18:54.130 247403 DEBUG nova.compute.manager [req-01f6bac3-ced7-4364-82e9-a6f2a09ea1a2 req-c10bf4d0-8263-4666-965f-f37cf2aab467 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] No waiting events found dispatching network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 31 04:18:54 np0005603621 nova_compute[247399]: 2026-01-31 09:18:54.130 247403 WARNING nova.compute.manager [req-01f6bac3-ced7-4364-82e9-a6f2a09ea1a2 req-c10bf4d0-8263-4666-965f-f37cf2aab467 fe07f3865a614b72984f087353419503 cb417a6cb57d4693a4fbcf628ac13cdc - - default default] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Received unexpected event network-vif-plugged-6c698385-414a-410d-8bd1-082bba741f94 for instance with vm_state deleted and task_state None.#033[00m
Jan 31 04:18:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:54.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:54.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 122 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 43 op/s
Jan 31 04:18:55 np0005603621 nova_compute[247399]: 2026-01-31 09:18:55.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:56.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:18:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:18:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 1.6 KiB/s wr, 50 op/s
Jan 31 04:18:57 np0005603621 nova_compute[247399]: 2026-01-31 09:18:57.933 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:18:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:18:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:18:58.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:18:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:18:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:18:58.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:18:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 2.1 KiB/s wr, 51 op/s
Jan 31 04:19:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:00.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:00 np0005603621 nova_compute[247399]: 2026-01-31 09:19:00.695 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:00 np0005603621 nova_compute[247399]: 2026-01-31 09:19:00.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:00.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:00 np0005603621 nova_compute[247399]: 2026-01-31 09:19:00.908 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 36 KiB/s rd, 2.1 KiB/s wr, 51 op/s
Jan 31 04:19:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:02.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:02 np0005603621 nova_compute[247399]: 2026-01-31 09:19:02.968 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.3 KiB/s wr, 21 op/s
Jan 31 04:19:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:04.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 KiB/s wr, 18 op/s
Jan 31 04:19:05 np0005603621 nova_compute[247399]: 2026-01-31 09:19:05.664 247403 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769851130.662411, 4e6fd1c3-3988-4a7f-a30d-b599226c25a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 31 04:19:05 np0005603621 nova_compute[247399]: 2026-01-31 09:19:05.664 247403 INFO nova.compute.manager [-] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] VM Stopped (Lifecycle Event)#033[00m
Jan 31 04:19:05 np0005603621 nova_compute[247399]: 2026-01-31 09:19:05.690 247403 DEBUG nova.compute.manager [None req-14016cef-c895-4963-92ad-61562da2555c - - - - - -] [instance: 4e6fd1c3-3988-4a7f-a30d-b599226c25a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 31 04:19:05 np0005603621 nova_compute[247399]: 2026-01-31 09:19:05.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:06.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:06.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 31 04:19:07 np0005603621 nova_compute[247399]: 2026-01-31 09:19:07.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:19:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:19:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:08.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:08.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 KiB/s rd, 597 B/s wr, 4 op/s
Jan 31 04:19:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:10.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:10 np0005603621 nova_compute[247399]: 2026-01-31 09:19:10.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:10.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:12.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:12.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:12 np0005603621 nova_compute[247399]: 2026-01-31 09:19:12.972 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:14.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:14.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:15 np0005603621 nova_compute[247399]: 2026-01-31 09:19:15.703 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:16.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:17 np0005603621 nova_compute[247399]: 2026-01-31 09:19:17.972 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:18.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:18.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:20.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:20 np0005603621 nova_compute[247399]: 2026-01-31 09:19:20.706 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:22.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:22 np0005603621 nova_compute[247399]: 2026-01-31 09:19:22.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:23 np0005603621 podman[414947]: 2026-01-31 09:19:23.494613661 +0000 UTC m=+0.052544089 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:19:23 np0005603621 podman[414948]: 2026-01-31 09:19:23.530904827 +0000 UTC m=+0.081718781 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:19:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:24.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:25 np0005603621 nova_compute[247399]: 2026-01-31 09:19:25.709 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 31 04:19:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:26.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 31 04:19:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:26.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:27 np0005603621 nova_compute[247399]: 2026-01-31 09:19:27.974 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:28 np0005603621 nova_compute[247399]: 2026-01-31 09:19:28.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:28.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:19:30.560 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:19:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:19:30.561 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:19:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:19:30.561 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:19:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:30 np0005603621 nova_compute[247399]: 2026-01-31 09:19:30.712 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:30.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:32.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:32 np0005603621 nova_compute[247399]: 2026-01-31 09:19:32.976 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:34 np0005603621 nova_compute[247399]: 2026-01-31 09:19:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:34.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:34.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:35 np0005603621 nova_compute[247399]: 2026-01-31 09:19:35.715 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:36.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:37 np0005603621 nova_compute[247399]: 2026-01-31 09:19:37.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:37 np0005603621 nova_compute[247399]: 2026-01-31 09:19:37.978 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:38 np0005603621 nova_compute[247399]: 2026-01-31 09:19:38.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:38 np0005603621 nova_compute[247399]: 2026-01-31 09:19:38.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:19:38 np0005603621 nova_compute[247399]: 2026-01-31 09:19:38.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:19:38 np0005603621 nova_compute[247399]: 2026-01-31 09:19:38.215 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:19:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:38.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:19:38
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'backups', 'images', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'cephfs.cephfs.meta']
Jan 31 04:19:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:19:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:38.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:39 np0005603621 nova_compute[247399]: 2026-01-31 09:19:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:39 np0005603621 nova_compute[247399]: 2026-01-31 09:19:39.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:19:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:40 np0005603621 ovn_controller[149152]: 2026-01-31T09:19:40Z|00937|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.232 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.233 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.233 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:19:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:19:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751420123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.641 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.718 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:40.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.802 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.804 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4079MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.805 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.904 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.905 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:19:40 np0005603621 nova_compute[247399]: 2026-01-31 09:19:40.920 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:19:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:40.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:19:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3210062684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:19:41 np0005603621 nova_compute[247399]: 2026-01-31 09:19:41.345 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:19:41 np0005603621 nova_compute[247399]: 2026-01-31 09:19:41.349 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:19:41 np0005603621 nova_compute[247399]: 2026-01-31 09:19:41.374 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:19:41 np0005603621 nova_compute[247399]: 2026-01-31 09:19:41.403 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:19:41 np0005603621 nova_compute[247399]: 2026-01-31 09:19:41.403 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:19:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:42.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:42 np0005603621 nova_compute[247399]: 2026-01-31 09:19:42.981 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:44 np0005603621 nova_compute[247399]: 2026-01-31 09:19:44.403 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:19:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:19:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:44.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:45 np0005603621 nova_compute[247399]: 2026-01-31 09:19:45.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:45 np0005603621 nova_compute[247399]: 2026-01-31 09:19:45.721 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:46 np0005603621 nova_compute[247399]: 2026-01-31 09:19:46.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:46.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:19:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:19:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:46.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5c98c233-c6ca-431b-afeb-a066facfe7f6 does not exist
Jan 31 04:19:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev db8433ba-b277-43ff-a357-14d464b4d57e does not exist
Jan 31 04:19:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0a7057da-5381-4b97-90d1-95828c6fd16a does not exist
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:19:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:19:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:48 np0005603621 nova_compute[247399]: 2026-01-31 09:19:48.020 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.026732562 +0000 UTC m=+0.036184953 container create 659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:19:48 np0005603621 systemd[1]: Started libpod-conmon-659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022.scope.
Jan 31 04:19:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:19:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:19:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.00890399 +0000 UTC m=+0.018356401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.106335875 +0000 UTC m=+0.115788286 container init 659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.112115808 +0000 UTC m=+0.121568199 container start 659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.114621836 +0000 UTC m=+0.124074228 container attach 659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:19:48 np0005603621 blissful_cray[415438]: 167 167
Jan 31 04:19:48 np0005603621 systemd[1]: libpod-659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022.scope: Deactivated successfully.
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.117478927 +0000 UTC m=+0.126931328 container died 659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:19:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d04edb47730f8772482a6366eba421c21e992e619667a297022c1eda8bc86318-merged.mount: Deactivated successfully.
Jan 31 04:19:48 np0005603621 podman[415422]: 2026-01-31 09:19:48.151365476 +0000 UTC m=+0.160817867 container remove 659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:19:48 np0005603621 systemd[1]: libpod-conmon-659c2c5dc5b1bba6d790139e8996813c7caca4a6088aad0fe2103679c63f2022.scope: Deactivated successfully.
Jan 31 04:19:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:48 np0005603621 podman[415462]: 2026-01-31 09:19:48.270693412 +0000 UTC m=+0.039448336 container create 9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kirch, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:19:48 np0005603621 systemd[1]: Started libpod-conmon-9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036.scope.
Jan 31 04:19:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca475f4a0a03c87f0a2aa7eedfd016a4c8ec2a3a4da41c0477825ef96820cdab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca475f4a0a03c87f0a2aa7eedfd016a4c8ec2a3a4da41c0477825ef96820cdab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca475f4a0a03c87f0a2aa7eedfd016a4c8ec2a3a4da41c0477825ef96820cdab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca475f4a0a03c87f0a2aa7eedfd016a4c8ec2a3a4da41c0477825ef96820cdab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca475f4a0a03c87f0a2aa7eedfd016a4c8ec2a3a4da41c0477825ef96820cdab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:48 np0005603621 podman[415462]: 2026-01-31 09:19:48.252144537 +0000 UTC m=+0.020899471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:19:48 np0005603621 podman[415462]: 2026-01-31 09:19:48.357996938 +0000 UTC m=+0.126751882 container init 9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:19:48 np0005603621 podman[415462]: 2026-01-31 09:19:48.370226764 +0000 UTC m=+0.138981678 container start 9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kirch, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:19:48 np0005603621 podman[415462]: 2026-01-31 09:19:48.373651241 +0000 UTC m=+0.142406255 container attach 9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:19:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 19K writes, 86K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1506 writes, 6279 keys, 1506 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s#012Interval WAL: 1507 writes, 1507 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.0      3.89              0.27        61    0.064       0      0       0.0       0.0#012  L6      1/0   12.72 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     55.2     47.4     13.28              1.38        60    0.221    479K    32K       0.0       0.0#012 Sum      1/0   12.72 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.4     42.7     43.4     17.17              1.65       121    0.142    479K    32K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   8.4     79.6     79.4      0.67              0.13         8    0.084     45K   2029       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     55.2     47.4     13.28              1.38        60    0.221    479K    32K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.0      3.89              0.27        60    0.065       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.114, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.73 GB write, 0.10 MB/s write, 0.72 GB read, 0.10 MB/s read, 17.2 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 80.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000812 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4616,76.96 MB,25.3165%) FilterBlock(122,1.30 MB,0.427261%) IndexBlock(122,2.11 MB,0.695485%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 04:19:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:48.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:49 np0005603621 upbeat_kirch[415479]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:19:49 np0005603621 upbeat_kirch[415479]: --> relative data size: 1.0
Jan 31 04:19:49 np0005603621 upbeat_kirch[415479]: --> All data devices are unavailable
Jan 31 04:19:49 np0005603621 systemd[1]: libpod-9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036.scope: Deactivated successfully.
Jan 31 04:19:49 np0005603621 podman[415494]: 2026-01-31 09:19:49.171480282 +0000 UTC m=+0.025019961 container died 9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 04:19:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ca475f4a0a03c87f0a2aa7eedfd016a4c8ec2a3a4da41c0477825ef96820cdab-merged.mount: Deactivated successfully.
Jan 31 04:19:49 np0005603621 podman[415494]: 2026-01-31 09:19:49.206516598 +0000 UTC m=+0.060056277 container remove 9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kirch, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:19:49 np0005603621 systemd[1]: libpod-conmon-9957d9c2380e3c164f67de98a975039425e7c8da7165c84538b05d1e9763f036.scope: Deactivated successfully.
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.73316698 +0000 UTC m=+0.035926965 container create 174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:19:49 np0005603621 systemd[1]: Started libpod-conmon-174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157.scope.
Jan 31 04:19:49 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.805779032 +0000 UTC m=+0.108539047 container init 174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.813789544 +0000 UTC m=+0.116549529 container start 174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.719158888 +0000 UTC m=+0.021918893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.816647154 +0000 UTC m=+0.119407159 container attach 174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:19:49 np0005603621 systemd[1]: libpod-174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157.scope: Deactivated successfully.
Jan 31 04:19:49 np0005603621 serene_shtern[415664]: 167 167
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.81778272 +0000 UTC m=+0.120542705 container died 174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 31 04:19:49 np0005603621 conmon[415664]: conmon 174c54acc70b881be8c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157.scope/container/memory.events
Jan 31 04:19:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0cfa9de55506493bf9658ccc15a35c07440e28c2229fc27a5a180f36d21bc5bb-merged.mount: Deactivated successfully.
Jan 31 04:19:49 np0005603621 podman[415648]: 2026-01-31 09:19:49.851412121 +0000 UTC m=+0.154172106 container remove 174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_shtern, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:19:49 np0005603621 systemd[1]: libpod-conmon-174c54acc70b881be8c0b2f0b8087a044822c393f13d1fa2ef5617c715d54157.scope: Deactivated successfully.
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:19:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:19:49 np0005603621 podman[415688]: 2026-01-31 09:19:49.976090577 +0000 UTC m=+0.048440701 container create a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bardeen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:19:50 np0005603621 systemd[1]: Started libpod-conmon-a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6.scope.
Jan 31 04:19:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd14fb568680d27a2df6bfd32344676d0176c377e7fde8304e2f1f5569dbc4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd14fb568680d27a2df6bfd32344676d0176c377e7fde8304e2f1f5569dbc4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd14fb568680d27a2df6bfd32344676d0176c377e7fde8304e2f1f5569dbc4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:50 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fd14fb568680d27a2df6bfd32344676d0176c377e7fde8304e2f1f5569dbc4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:50 np0005603621 podman[415688]: 2026-01-31 09:19:50.04083644 +0000 UTC m=+0.113186644 container init a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:19:50 np0005603621 podman[415688]: 2026-01-31 09:19:50.04783358 +0000 UTC m=+0.120183704 container start a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:19:50 np0005603621 podman[415688]: 2026-01-31 09:19:50.051325981 +0000 UTC m=+0.123676155 container attach a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 31 04:19:50 np0005603621 podman[415688]: 2026-01-31 09:19:49.9593988 +0000 UTC m=+0.031748934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.600504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851190600547, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2163, "num_deletes": 254, "total_data_size": 3871009, "memory_usage": 3936688, "flush_reason": "Manual Compaction"}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851190647069, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3784825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85255, "largest_seqno": 87417, "table_properties": {"data_size": 3774968, "index_size": 6286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20537, "raw_average_key_size": 20, "raw_value_size": 3755142, "raw_average_value_size": 3777, "num_data_blocks": 274, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769850982, "oldest_key_time": 1769850982, "file_creation_time": 1769851190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 46606 microseconds, and 5095 cpu microseconds.
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.647110) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3784825 bytes OK
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.647128) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.650467) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.650481) EVENT_LOG_v1 {"time_micros": 1769851190650477, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.650505) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3862130, prev total WAL file size 3862130, number of live WAL files 2.
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.651406) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3696KB)], [197(12MB)]
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851190651474, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 17118607, "oldest_snapshot_seqno": -1}
Jan 31 04:19:50 np0005603621 nova_compute[247399]: 2026-01-31 09:19:50.724 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]: {
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:    "0": [
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:        {
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "devices": [
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "/dev/loop3"
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            ],
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "lv_name": "ceph_lv0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "lv_size": "7511998464",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "name": "ceph_lv0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "tags": {
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.cluster_name": "ceph",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.crush_device_class": "",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.encrypted": "0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.osd_id": "0",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.type": "block",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:                "ceph.vdo": "0"
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            },
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "type": "block",
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:            "vg_name": "ceph_vg0"
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:        }
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]:    ]
Jan 31 04:19:50 np0005603621 youthful_bardeen[415705]: }
Jan 31 04:19:50 np0005603621 systemd[1]: libpod-a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6.scope: Deactivated successfully.
Jan 31 04:19:50 np0005603621 podman[415688]: 2026-01-31 09:19:50.786236456 +0000 UTC m=+0.858586570 container died a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bardeen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11528 keys, 15134390 bytes, temperature: kUnknown
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851190841921, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 15134390, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15059542, "index_size": 44911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 303465, "raw_average_key_size": 26, "raw_value_size": 14858127, "raw_average_value_size": 1288, "num_data_blocks": 1712, "num_entries": 11528, "num_filter_entries": 11528, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851190, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.842201) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 15134390 bytes
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.845814) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.9 rd, 79.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 12.7 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(8.5) write-amplify(4.0) OK, records in: 12058, records dropped: 530 output_compression: NoCompression
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.845830) EVENT_LOG_v1 {"time_micros": 1769851190845822, "job": 124, "event": "compaction_finished", "compaction_time_micros": 190521, "compaction_time_cpu_micros": 43217, "output_level": 6, "num_output_files": 1, "total_output_size": 15134390, "num_input_records": 12058, "num_output_records": 11528, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851190846280, "job": 124, "event": "table_file_deletion", "file_number": 199}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851190847356, "job": 124, "event": "table_file_deletion", "file_number": 197}
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.651237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.847424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.847428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.847430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.847432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:50 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:50.847434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:50 np0005603621 systemd[1]: var-lib-containers-storage-overlay-1fd14fb568680d27a2df6bfd32344676d0176c377e7fde8304e2f1f5569dbc4b-merged.mount: Deactivated successfully.
Jan 31 04:19:50 np0005603621 podman[415688]: 2026-01-31 09:19:50.88142347 +0000 UTC m=+0.953773584 container remove a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:19:50 np0005603621 systemd[1]: libpod-conmon-a57dee191b46a08f124033185fbace3936355070a8bacef653c23007fde902b6.scope: Deactivated successfully.
Jan 31 04:19:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:50.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.324674279 +0000 UTC m=+0.032219887 container create ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:19:51 np0005603621 systemd[1]: Started libpod-conmon-ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174.scope.
Jan 31 04:19:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.383888298 +0000 UTC m=+0.091433936 container init ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.388822974 +0000 UTC m=+0.096368602 container start ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.39186688 +0000 UTC m=+0.099412518 container attach ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:19:51 np0005603621 eloquent_archimedes[415881]: 167 167
Jan 31 04:19:51 np0005603621 systemd[1]: libpod-ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174.scope: Deactivated successfully.
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.39503505 +0000 UTC m=+0.102580678 container died ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.310950656 +0000 UTC m=+0.018496314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:19:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ec7c994820c43563d9e2626e8a48d031d02261946a31bc0c3b596e9a3341527e-merged.mount: Deactivated successfully.
Jan 31 04:19:51 np0005603621 podman[415865]: 2026-01-31 09:19:51.42862841 +0000 UTC m=+0.136174038 container remove ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_archimedes, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 04:19:51 np0005603621 systemd[1]: libpod-conmon-ad49e7f4a135b75a5c24651b977695106d4e515ebba1a044e7198c2f8badd174.scope: Deactivated successfully.
Jan 31 04:19:51 np0005603621 podman[415904]: 2026-01-31 09:19:51.555836245 +0000 UTC m=+0.041397838 container create 1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:19:51 np0005603621 systemd[1]: Started libpod-conmon-1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85.scope.
Jan 31 04:19:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:19:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a581227a928f763f62b63d76c56d3a15109b5917bb4fd0263f8895b15c85a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a581227a928f763f62b63d76c56d3a15109b5917bb4fd0263f8895b15c85a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a581227a928f763f62b63d76c56d3a15109b5917bb4fd0263f8895b15c85a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44a581227a928f763f62b63d76c56d3a15109b5917bb4fd0263f8895b15c85a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:19:51 np0005603621 podman[415904]: 2026-01-31 09:19:51.539995935 +0000 UTC m=+0.025557568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:19:51 np0005603621 podman[415904]: 2026-01-31 09:19:51.643344227 +0000 UTC m=+0.128905920 container init 1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mendel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Jan 31 04:19:51 np0005603621 podman[415904]: 2026-01-31 09:19:51.649273654 +0000 UTC m=+0.134835247 container start 1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mendel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:19:51 np0005603621 podman[415904]: 2026-01-31 09:19:51.652415713 +0000 UTC m=+0.137977406 container attach 1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mendel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:19:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:52 np0005603621 nova_compute[247399]: 2026-01-31 09:19:52.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]: {
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:        "osd_id": 0,
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:        "type": "bluestore"
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]:    }
Jan 31 04:19:52 np0005603621 beautiful_mendel[415921]: }
Jan 31 04:19:52 np0005603621 systemd[1]: libpod-1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85.scope: Deactivated successfully.
Jan 31 04:19:52 np0005603621 podman[415943]: 2026-01-31 09:19:52.43651117 +0000 UTC m=+0.019340711 container died 1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:19:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-44a581227a928f763f62b63d76c56d3a15109b5917bb4fd0263f8895b15c85a6-merged.mount: Deactivated successfully.
Jan 31 04:19:52 np0005603621 podman[415943]: 2026-01-31 09:19:52.48750819 +0000 UTC m=+0.070337701 container remove 1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mendel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:19:52 np0005603621 systemd[1]: libpod-conmon-1d25582f5f5eb4033cf7f224f36d2eb67920dfb5a8ffbccf24f20d3477f0ff85.scope: Deactivated successfully.
Jan 31 04:19:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:19:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:52 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:19:52 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1f397cbb-a40b-4dbb-a52b-4d7fa1298ed4 does not exist
Jan 31 04:19:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3abff81e-f391-49a7-a52e-088493b9e743 does not exist
Jan 31 04:19:52 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c53a891d-455e-4a84-a2a5-3ee6b11519f6 does not exist
Jan 31 04:19:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:52 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:19:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:52.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:53 np0005603621 nova_compute[247399]: 2026-01-31 09:19:53.021 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:54 np0005603621 podman[416009]: 2026-01-31 09:19:54.528546427 +0000 UTC m=+0.073708817 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:19:54 np0005603621 podman[416010]: 2026-01-31 09:19:54.551380228 +0000 UTC m=+0.096536908 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:19:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:54.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:55 np0005603621 nova_compute[247399]: 2026-01-31 09:19:55.726 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:56.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:56.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:19:58 np0005603621 nova_compute[247399]: 2026-01-31 09:19:58.068 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.175756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851198175792, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 321, "num_deletes": 250, "total_data_size": 188529, "memory_usage": 195616, "flush_reason": "Manual Compaction"}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851198178510, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 187260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87418, "largest_seqno": 87738, "table_properties": {"data_size": 185088, "index_size": 335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5833, "raw_average_key_size": 20, "raw_value_size": 180822, "raw_average_value_size": 634, "num_data_blocks": 14, "num_entries": 285, "num_filter_entries": 285, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851191, "oldest_key_time": 1769851191, "file_creation_time": 1769851198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 2792 microseconds, and 875 cpu microseconds.
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.178545) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 187260 bytes OK
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.178564) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.180117) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.180131) EVENT_LOG_v1 {"time_micros": 1769851198180127, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.180146) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 186284, prev total WAL file size 186284, number of live WAL files 2.
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.180451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323630' seq:72057594037927935, type:22 .. '6D6772737461740033353131' seq:0, type:0; will stop at (end)
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(182KB)], [200(14MB)]
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851198180497, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 15321650, "oldest_snapshot_seqno": -1}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 11302 keys, 11485809 bytes, temperature: kUnknown
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851198246306, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 11485809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11417248, "index_size": 39200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28293, "raw_key_size": 298945, "raw_average_key_size": 26, "raw_value_size": 11224465, "raw_average_value_size": 993, "num_data_blocks": 1473, "num_entries": 11302, "num_filter_entries": 11302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.246559) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11485809 bytes
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.248088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.5 rd, 174.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 14.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(143.2) write-amplify(61.3) OK, records in: 11813, records dropped: 511 output_compression: NoCompression
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.248106) EVENT_LOG_v1 {"time_micros": 1769851198248097, "job": 126, "event": "compaction_finished", "compaction_time_micros": 65899, "compaction_time_cpu_micros": 29803, "output_level": 6, "num_output_files": 1, "total_output_size": 11485809, "num_input_records": 11813, "num_output_records": 11302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851198248233, "job": 126, "event": "table_file_deletion", "file_number": 202}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851198249822, "job": 126, "event": "table_file_deletion", "file_number": 200}
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.180371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.249892) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.249897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.249899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.249900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:58 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:19:58.249901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:19:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:19:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:19:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:19:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:19:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:19:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 04:20:00 np0005603621 nova_compute[247399]: 2026-01-31 09:20:00.729 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:00.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 04:20:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:00.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:02.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:02.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:03 np0005603621 nova_compute[247399]: 2026-01-31 09:20:03.105 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:04.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:04.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:05 np0005603621 nova_compute[247399]: 2026-01-31 09:20:05.732 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:06.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:06.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:07 np0005603621 systemd-logind[818]: New session 75 of user zuul.
Jan 31 04:20:07 np0005603621 systemd[1]: Started Session 75 of User zuul.
Jan 31 04:20:08 np0005603621 nova_compute[247399]: 2026-01-31 09:20:08.107 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:20:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:20:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:08.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:10 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40596 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:10 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43793 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:10 np0005603621 nova_compute[247399]: 2026-01-31 09:20:10.736 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:10 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40602 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:10.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:11 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 04:20:11 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821797323' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 04:20:11 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50356 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:12 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50365 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:12 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50371 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:13 np0005603621 nova_compute[247399]: 2026-01-31 09:20:13.154 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:13 np0005603621 ovs-vsctl[416397]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 04:20:14 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 04:20:14 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 04:20:14 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 04:20:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:14.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:14 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: cache status {prefix=cache status} (starting...)
Jan 31 04:20:14 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:14.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:15 np0005603621 lvm[416735]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:20:15 np0005603621 lvm[416735]: VG ceph_vg0 finished
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: client ls {prefix=client ls} (starting...)
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:15 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40632 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:15 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40644 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:20:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612682872' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:20:15 np0005603621 nova_compute[247399]: 2026-01-31 09:20:15.738 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 04:20:15 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/633941157' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128121525' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 04:20:16 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40662 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:16 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:16 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:16.422+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: ops {prefix=ops} (starting...)
Jan 31 04:20:16 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/104182198' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 04:20:16 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982425063' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 04:20:16 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43823 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:16.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4085731652' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40692 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: session ls {prefix=session ls} (starting...)
Jan 31 04:20:17 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:20:17 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43835 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: status {prefix=status} (starting...)
Jan 31 04:20:17 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40704 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562155673' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421258392' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:20:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2463386093' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50446 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43853 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:18 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:18.119+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:18 np0005603621 nova_compute[247399]: 2026-01-31 09:20:18.155 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633317527' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191131499' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50461 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:18.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:18 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40755 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:18 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:20:18 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:18.781+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 04:20:18 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122746255' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 04:20:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:18.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50500 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:19.157+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:20:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/31649977' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 04:20:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3919573024' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43892 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40797 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43904 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 04:20:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524070456' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50539 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:19 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40821 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132649378' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50554 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40845 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e17e86000 session 0x558e180e74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390627328 unmapped: 29876224 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 heartbeat osd_stat(store_statfs(0x1a4285000/0x0/0x1bfc00000, data 0x8309311/0x8503000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e17648c00 session 0x558e175b72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e184ee400 session 0x558e175a7e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390643712 unmapped: 29859840 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e184ff800 session 0x558e175f25a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e17e7dc00 session 0x558e1780be00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e1c8c9c00 session 0x558e175a7c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e15e4fc00 session 0x558e186ac1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390651904 unmapped: 29851648 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e175cd800 session 0x558e1601f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e184ee400 session 0x558e1848c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e15e4fc00 session 0x558e180e7e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e184ff800 session 0x558e1769c960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4843552 data_alloc: 251658240 data_used: 51707904
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390660096 unmapped: 29843456 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e1c8c9c00 session 0x558e17b0af00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e17e86000 session 0x558e15dadc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390660096 unmapped: 29843456 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 heartbeat osd_stat(store_statfs(0x1a4095000/0x0/0x1bfc00000, data 0x851f363/0x8719000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390463488 unmapped: 30040064 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 heartbeat osd_stat(store_statfs(0x1a4095000/0x0/0x1bfc00000, data 0x851f363/0x8719000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390463488 unmapped: 30040064 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e17e86000 session 0x558e170bb4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e15e4fc00 session 0x558e156a2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e1c8c9c00 session 0x558e181d01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 ms_handle_reset con 0x558e184ff800 session 0x558e1809ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390471680 unmapped: 30031872 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e184b3c00 session 0x558e17b090e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e184ee400 session 0x558e1780b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e184b3c00 session 0x558e175f01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 heartbeat osd_stat(store_statfs(0x1a42ab000/0x0/0x1bfc00000, data 0x8309301/0x8502000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e17e86000 session 0x558e18494960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4786274 data_alloc: 251658240 data_used: 48238592
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387137536 unmapped: 33366016 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e1c8c9c00 session 0x558e180b3680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e184ff800 session 0x558e18495a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b0af00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387170304 unmapped: 33333248 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.762634277s of 10.292329788s, submitted: 190
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 heartbeat osd_stat(store_statfs(0x1a419f000/0x0/0x1bfc00000, data 0x80050f0/0x81fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387186688 unmapped: 33316864 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 369 handle_osd_map epochs [370,370], i have 370, src has [1,370]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387219456 unmapped: 33284096 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 heartbeat osd_stat(store_statfs(0x1a419b000/0x0/0x1bfc00000, data 0x8006db9/0x8201000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,1,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387579904 unmapped: 32923648 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4822428 data_alloc: 251658240 data_used: 48283648
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390053888 unmapped: 30449664 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 ms_handle_reset con 0x558e184ee000 session 0x558e1601f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 heartbeat osd_stat(store_statfs(0x1a2c62000/0x0/0x1bfc00000, data 0x8398db9/0x8593000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389357568 unmapped: 31145984 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389365760 unmapped: 31137792 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 heartbeat osd_stat(store_statfs(0x1a2c4e000/0x0/0x1bfc00000, data 0x83b5db9/0x85b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 ms_handle_reset con 0x558e1c8c8000 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389365760 unmapped: 31137792 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389423104 unmapped: 31080448 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 ms_handle_reset con 0x558e15207c00 session 0x558e178214a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 heartbeat osd_stat(store_statfs(0x1a2c4c000/0x0/0x1bfc00000, data 0x83b7db9/0x85b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,2,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4822134 data_alloc: 251658240 data_used: 48349184
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 ms_handle_reset con 0x558e184ee000 session 0x558e17570d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389464064 unmapped: 31039488 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 370 handle_osd_map epochs [371,371], i have 371, src has [1,371]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389595136 unmapped: 30908416 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 371 ms_handle_reset con 0x558e17649c00 session 0x558e175b6960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.107489109s of 10.046010971s, submitted: 228
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 371 ms_handle_reset con 0x558e15e4fc00 session 0x558e184952c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 ms_handle_reset con 0x558e184ff800 session 0x558e1845cb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389611520 unmapped: 30892032 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 ms_handle_reset con 0x558e17fc2c00 session 0x558e176cc5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 ms_handle_reset con 0x558e18088c00 session 0x558e180545a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389627904 unmapped: 30875648 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 ms_handle_reset con 0x558e17649c00 session 0x558e1601d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a2c43000/0x0/0x1bfc00000, data 0x83bb5dd/0x85b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 383385600 unmapped: 37117952 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 ms_handle_reset con 0x558e15e4fc00 session 0x558e15136000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628244 data_alloc: 251658240 data_used: 40026112
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 384417792 unmapped: 36085760 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 384417792 unmapped: 36085760 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a3f0c000/0x0/0x1bfc00000, data 0x70f65aa/0x72f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 384417792 unmapped: 36085760 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 383328256 unmapped: 37175296 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 ms_handle_reset con 0x558e17ca0000 session 0x558e17b03860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 383328256 unmapped: 37175296 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18f0e000 session 0x558e15692000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4498200 data_alloc: 251658240 data_used: 36659200
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 39878656 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1c8c8000 session 0x558e18054b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 39878656 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380633088 unmapped: 39870464 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17fd0c00 session 0x558e1848c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.617487907s of 11.148512840s, submitted: 103
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e182a3800 session 0x558e175a6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a4814000/0x0/0x1bfc00000, data 0x64430c6/0x663e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [0,0,1,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375586816 unmapped: 44916736 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18216000 session 0x558e1845c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375586816 unmapped: 44916736 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17ca0000 session 0x558e1601c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e15e4fc00 session 0x558e170ba3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4316990 data_alloc: 234881024 data_used: 29028352
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373268480 unmapped: 47235072 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e15dacd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17fd0c00 session 0x558e1809b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a5945000/0x0/0x1bfc00000, data 0x55f9021/0x57f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373325824 unmapped: 47177728 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373325824 unmapped: 47177728 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17e7c800 session 0x558e175a74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371572736 unmapped: 48930816 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18216000 session 0x558e181d0b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371572736 unmapped: 48930816 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a68d4000/0x0/0x1bfc00000, data 0x4733021/0x492a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4168928 data_alloc: 234881024 data_used: 26730496
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371572736 unmapped: 48930816 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371572736 unmapped: 48930816 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a68d4000/0x0/0x1bfc00000, data 0x4733021/0x492a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371556352 unmapped: 48947200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e17b09a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.535951614s of 10.751105309s, submitted: 167
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e17570000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371556352 unmapped: 48947200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18216000 session 0x558e184950e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369516544 unmapped: 50987008 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e18494f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4072706 data_alloc: 234881024 data_used: 23703552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368467968 unmapped: 52035584 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17e7c800 session 0x558e1848c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368467968 unmapped: 52035584 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368476160 unmapped: 52027392 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e15e4fc00 session 0x558e16915e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e15692000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a7300000/0x0/0x1bfc00000, data 0x3d09f9f/0x3efe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4061967 data_alloc: 234881024 data_used: 23564288
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a7302000/0x0/0x1bfc00000, data 0x3d09f2e/0x3efc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18216000 session 0x558e15ea1e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.035250664s of 10.386402130s, submitted: 59
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e175b70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4060390 data_alloc: 234881024 data_used: 23564288
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a7304000/0x0/0x1bfc00000, data 0x3d09ebc/0x3efa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1627f000 session 0x558e175a72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17e7c400 session 0x558e16914b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 52019200 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b01860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 51994624 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 51994624 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4058840 data_alloc: 234881024 data_used: 23560192
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 51994624 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a7304000/0x0/0x1bfc00000, data 0x3d09e99/0x3ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a7304000/0x0/0x1bfc00000, data 0x3d09e99/0x3ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 51994624 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a7304000/0x0/0x1bfc00000, data 0x3d09e99/0x3ef9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368517120 unmapped: 51986432 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e175a6780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368517120 unmapped: 51986432 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.017597198s of 10.246155739s, submitted: 39
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368517120 unmapped: 51986432 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776414 data_alloc: 218103808 data_used: 15556608
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360554496 unmapped: 59949056 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e16914d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360611840 unmapped: 59891712 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18216000 session 0x558e152afe00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee6000/0x0/0x1bfc00000, data 0x2129e89/0x2318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 59899904 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 59899904 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 59899904 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee6000/0x0/0x1bfc00000, data 0x2129e89/0x2318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772832 data_alloc: 218103808 data_used: 15556608
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e15e4fc00 session 0x558e1809b680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e18055680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee6000/0x0/0x1bfc00000, data 0x2129e89/0x2318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3772832 data_alloc: 218103808 data_used: 15556608
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18fca400 session 0x558e1766f0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.782698631s of 12.160035133s, submitted: 110
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e175701e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e17b0b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17fd0c00 session 0x558e1706a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360587264 unmapped: 59916288 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee5000/0x0/0x1bfc00000, data 0x2129e99/0x2319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360505344 unmapped: 59998208 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e17b09680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3776454 data_alloc: 218103808 data_used: 15581184
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 60366848 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 60366848 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee5000/0x0/0x1bfc00000, data 0x2129e99/0x2319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 60366848 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e180e72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 60350464 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 60317696 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3779532 data_alloc: 218103808 data_used: 15581184
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 60227584 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee4000/0x0/0x1bfc00000, data 0x2129ea9/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee4000/0x0/0x1bfc00000, data 0x2129ea9/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.674554825s of 10.034374237s, submitted: 250
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362414080 unmapped: 58089472 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 361398272 unmapped: 59105280 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 361406464 unmapped: 59097088 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8ee4000/0x0/0x1bfc00000, data 0x2129ea9/0x231a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 361414656 unmapped: 59088896 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783951 data_alloc: 218103808 data_used: 15790080
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 361873408 unmapped: 58630144 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1c8c8000 session 0x558e175b6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18088c00 session 0x558e175b7c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e1601d0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e186ac780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e17570d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e182a3800 session 0x558e151372c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8a20000/0x0/0x1bfc00000, data 0x25e7ea9/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362692608 unmapped: 57810944 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362692608 unmapped: 57810944 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1c8c8000 session 0x558e180545a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 363741184 unmapped: 56762368 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e180b32c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 58023936 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8a0e000/0x0/0x1bfc00000, data 0x2601e89/0x27f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e182a3800 session 0x558e186ad680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821075 data_alloc: 218103808 data_used: 15605760
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 58023936 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 58023936 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8a0e000/0x0/0x1bfc00000, data 0x2601e89/0x27f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 58023936 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 58023936 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362487808 unmapped: 58015744 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821075 data_alloc: 218103808 data_used: 15605760
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362487808 unmapped: 58015744 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362487808 unmapped: 58015744 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8a0e000/0x0/0x1bfc00000, data 0x2601e89/0x27f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.237250328s of 16.315782547s, submitted: 186
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e17b054a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 57712640 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 57712640 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362790912 unmapped: 57712640 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857883 data_alloc: 234881024 data_used: 20369408
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 57704448 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a89ea000/0x0/0x1bfc00000, data 0x2625e89/0x2814000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 57704448 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a89ea000/0x0/0x1bfc00000, data 0x2625e89/0x2814000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 57704448 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 57704448 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362799104 unmapped: 57704448 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3857883 data_alloc: 234881024 data_used: 20369408
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 57696256 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 57696256 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a89ea000/0x0/0x1bfc00000, data 0x2625e89/0x2814000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 57696256 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 57696256 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 362807296 unmapped: 57696256 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.971329689s of 12.985011101s, submitted: 2
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3868847 data_alloc: 234881024 data_used: 20389888
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 55910400 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 53968896 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8594000/0x0/0x1bfc00000, data 0x2a7be89/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x149ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 51863552 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a73cf000/0x0/0x1bfc00000, data 0x2aa0e89/0x2c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368640000 unmapped: 51863552 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368713728 unmapped: 51789824 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914025 data_alloc: 234881024 data_used: 21196800
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a73cd000/0x0/0x1bfc00000, data 0x2aa2e89/0x2c91000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911413 data_alloc: 234881024 data_used: 21196800
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.838107109s of 11.095834732s, submitted: 48
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 51781632 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a73cb000/0x0/0x1bfc00000, data 0x2aa4e89/0x2c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a73cb000/0x0/0x1bfc00000, data 0x2aa4e89/0x2c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e1845c000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1c8c8000 session 0x558e17b041e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3911373 data_alloc: 234881024 data_used: 21209088
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a73cb000/0x0/0x1bfc00000, data 0x2aa4e89/0x2c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a73cb000/0x0/0x1bfc00000, data 0x2aa4e89/0x2c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368730112 unmapped: 51773440 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 51765248 heap: 420503552 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966885 data_alloc: 234881024 data_used: 21209088
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 54910976 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1c8c8000 session 0x558e15ea1860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 54910976 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 54910976 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a6c99000/0x0/0x1bfc00000, data 0x31d6e89/0x33c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 54910976 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 54910976 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a6c99000/0x0/0x1bfc00000, data 0x31d6e89/0x33c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966885 data_alloc: 234881024 data_used: 21209088
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368754688 unmapped: 54902784 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368754688 unmapped: 54902784 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368754688 unmapped: 54902784 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368754688 unmapped: 54902784 heap: 423657472 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.519433975s of 17.600500107s, submitted: 15
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a6c99000/0x0/0x1bfc00000, data 0x31d6e89/0x33c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1853d400 session 0x558e15dada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368992256 unmapped: 58343424 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4020359 data_alloc: 234881024 data_used: 21209088
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368992256 unmapped: 58343424 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368992256 unmapped: 58343424 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 58335232 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 58335232 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a65ab000/0x0/0x1bfc00000, data 0x38c4e89/0x3ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e175f30e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 58335232 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e175ccc00 session 0x558e175f34a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4020359 data_alloc: 234881024 data_used: 21209088
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 58335232 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17fd1400 session 0x558e175f2780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369000448 unmapped: 58335232 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e175ccc00 session 0x558e180e6b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368934912 unmapped: 58400768 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 368820224 unmapped: 58515456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a65aa000/0x0/0x1bfc00000, data 0x38c4eac/0x3ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.225531578s of 10.371967316s, submitted: 25
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 58187776 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4069064 data_alloc: 234881024 data_used: 27926528
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 58187776 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e176cd0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 58187776 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1c8c8000 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 58187776 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e18542800 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e1f399800 session 0x558e185ada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 57884672 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a6586000/0x0/0x1bfc00000, data 0x38e8eac/0x3ad8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 369451008 unmapped: 57884672 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a6586000/0x0/0x1bfc00000, data 0x38e8eac/0x3ad8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4105100 data_alloc: 234881024 data_used: 32837632
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371392512 unmapped: 55943168 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371523584 unmapped: 55812096 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a6586000/0x0/0x1bfc00000, data 0x38e8eac/0x3ad8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371523584 unmapped: 55812096 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371531776 unmapped: 55803904 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e17649c00 session 0x558e177fe780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e182a3800 session 0x558e176cdc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.976714134s of 10.014178276s, submitted: 8
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373792768 unmapped: 53542912 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4052980 data_alloc: 234881024 data_used: 29245440
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373481472 unmapped: 53854208 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 ms_handle_reset con 0x558e185fd400 session 0x558e18494960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372678656 unmapped: 54657024 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372678656 unmapped: 54657024 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a697f000/0x0/0x1bfc00000, data 0x34edebc/0x36de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372678656 unmapped: 54657024 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e186ac000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372678656 unmapped: 54657024 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a650b000/0x0/0x1bfc00000, data 0x395fb15/0x3b51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4088284 data_alloc: 234881024 data_used: 29188096
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372686848 unmapped: 54648832 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18634400 session 0x558e17b0a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5dd2000/0x0/0x1bfc00000, data 0x408bb77/0x427e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373956608 unmapped: 53379072 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17649c00 session 0x558e175b7680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a599b000/0x0/0x1bfc00000, data 0x40b6be9/0x42ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e1809be00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373202944 unmapped: 54132736 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a599b000/0x0/0x1bfc00000, data 0x40b6be9/0x42ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e185fd400 session 0x558e17b030e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373219328 unmapped: 54116352 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a599b000/0x0/0x1bfc00000, data 0x40b6be9/0x42ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373219328 unmapped: 54116352 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.602177620s of 10.664442062s, submitted: 161
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e180e6960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4158386 data_alloc: 234881024 data_used: 29356032
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373219328 unmapped: 54116352 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373219328 unmapped: 54116352 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a599a000/0x0/0x1bfc00000, data 0x40b6bf9/0x42ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373227520 unmapped: 54108160 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373227520 unmapped: 54108160 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373227520 unmapped: 54108160 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4153474 data_alloc: 234881024 data_used: 29360128
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373227520 unmapped: 54108160 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a599f000/0x0/0x1bfc00000, data 0x40b9bf9/0x42af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1ac47c00 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 373227520 unmapped: 54108160 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17649c00 session 0x558e180b32c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376963072 unmapped: 50372608 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1853d400 session 0x558e180e70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e15692960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18542800 session 0x558e175b7c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e175ccc00 session 0x558e186ac1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370835456 unmapped: 56500224 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e175ccc00 session 0x558e1601ed20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 56492032 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a680a000/0x0/0x1bfc00000, data 0x324ebf9/0x3444000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.262131691s of 10.303797722s, submitted: 14
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17649c00 session 0x558e175701e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4005442 data_alloc: 234881024 data_used: 26648576
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e175f01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370851840 unmapped: 56483840 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a6809000/0x0/0x1bfc00000, data 0x324ec5b/0x3445000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1853d400 session 0x558e17b00f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370860032 unmapped: 56475648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370860032 unmapped: 56475648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370860032 unmapped: 56475648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370860032 unmapped: 56475648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18542800 session 0x558e1809a960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e175f14a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e175ccc00 session 0x558e1601c5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4007645 data_alloc: 234881024 data_used: 26648576
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370860032 unmapped: 56475648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17649c00 session 0x558e1809a960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e1766fa40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a6637000/0x0/0x1bfc00000, data 0x3421bf9/0x3617000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1853d400 session 0x558e175701e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370884608 unmapped: 56451072 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370884608 unmapped: 56451072 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e175ccc00 session 0x558e15692960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370819072 unmapped: 56516608 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e181d1680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17649c00 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e185fd400 session 0x558e1769c960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e16914000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370827264 unmapped: 56508416 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e175f30e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4051648 data_alloc: 234881024 data_used: 26669056
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370860032 unmapped: 56475648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a645c000/0x0/0x1bfc00000, data 0x35fbc09/0x37f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e1845c000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.106956482s of 13.407667160s, submitted: 72
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e175ccc00 session 0x558e17b0cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17649c00 session 0x558e186ad4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e186ac3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5ada000/0x0/0x1bfc00000, data 0x3f7cc6b/0x4174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4129421 data_alloc: 234881024 data_used: 26669056
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5ada000/0x0/0x1bfc00000, data 0x3f7cc6b/0x4174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5ada000/0x0/0x1bfc00000, data 0x3f7cc6b/0x4174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4130221 data_alloc: 234881024 data_used: 26689536
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5ad8000/0x0/0x1bfc00000, data 0x3f7cc6b/0x4174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4132989 data_alloc: 234881024 data_used: 27406336
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 56467456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.143774986s of 13.292873383s, submitted: 37
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372015104 unmapped: 55320576 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372023296 unmapped: 55312384 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18505800 session 0x558e1780bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5ada000/0x0/0x1bfc00000, data 0x3f7cc6b/0x4174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371892224 unmapped: 55443456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e180b2960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4148352 data_alloc: 234881024 data_used: 29097984
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 371900416 unmapped: 55435264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e17b09860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e175b70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e152ae960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e17b09680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e176cc3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18505800 session 0x558e170bb0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x42b5cb7/0x44af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372916224 unmapped: 54419456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e186adc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4203339 data_alloc: 234881024 data_used: 31293440
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x42b5cf0/0x44af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a579c000/0x0/0x1bfc00000, data 0x42b5cf0/0x44af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e170bb4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e170bba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 372924416 unmapped: 54411264 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18505800 session 0x558e1809b2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.211764336s of 13.329321861s, submitted: 52
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a5507000/0x0/0x1bfc00000, data 0x454dcf0/0x4747000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [0,0,0,0,0,11])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4265152 data_alloc: 234881024 data_used: 31309824
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 379125760 unmapped: 48209920 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e1809af00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376168448 unmapped: 51167232 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18500c00 session 0x558e16914f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 381108224 unmapped: 46227456 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 386310144 unmapped: 41025536 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385646592 unmapped: 41689088 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4417431 data_alloc: 251658240 data_used: 44187648
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a495b000/0x0/0x1bfc00000, data 0x50f3cf0/0x52ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385646592 unmapped: 41689088 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385646592 unmapped: 41689088 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385646592 unmapped: 41689088 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385654784 unmapped: 41680896 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385654784 unmapped: 41680896 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4940000/0x0/0x1bfc00000, data 0x5114cf0/0x530e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4415947 data_alloc: 251658240 data_used: 44187648
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385662976 unmapped: 41672704 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385662976 unmapped: 41672704 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 385662976 unmapped: 41672704 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.318671227s of 12.747736931s, submitted: 171
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4940000/0x0/0x1bfc00000, data 0x5114cf0/0x530e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 386719744 unmapped: 40615936 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 389021696 unmapped: 38313984 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4497087 data_alloc: 251658240 data_used: 44478464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393388032 unmapped: 33947648 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393691136 unmapped: 33644544 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18505800 session 0x558e1848cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e1766e3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394477568 unmapped: 32858112 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e1809b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a37fb000/0x0/0x1bfc00000, data 0x6259cf0/0x6453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e1848cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17e87800 session 0x558e16914f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e186adc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e17b09680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a35e5000/0x0/0x1bfc00000, data 0x646cd00/0x6667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18505800 session 0x558e175b70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e180b2960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395124736 unmapped: 32210944 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396197888 unmapped: 31137792 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a33c8000/0x0/0x1bfc00000, data 0x668af00/0x6886000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4606472 data_alloc: 251658240 data_used: 46247936
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 30908416 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1ac46c00 session 0x558e177fe3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 30908416 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e177fe5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396427264 unmapped: 30908416 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.816657066s of 10.445709229s, submitted: 264
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394878976 unmapped: 32456704 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e17b08d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3fa4000/0x0/0x1bfc00000, data 0x5aaeedd/0x5ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394895360 unmapped: 32440320 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4460559 data_alloc: 251658240 data_used: 38920192
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e175b63c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394936320 unmapped: 32399360 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e1809a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394936320 unmapped: 32399360 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394936320 unmapped: 32399360 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e1769de00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e18314780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394936320 unmapped: 32399360 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394936320 unmapped: 32399360 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4464383 data_alloc: 251658240 data_used: 39456768
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3f53000/0x0/0x1bfc00000, data 0x5b01e7b/0x5cfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394936320 unmapped: 32399360 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e1766e1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394944512 unmapped: 32391168 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394944512 unmapped: 32391168 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394944512 unmapped: 32391168 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.326648712s of 11.137385368s, submitted: 75
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394944512 unmapped: 32391168 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4457471 data_alloc: 251658240 data_used: 40603648
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394952704 unmapped: 32382976 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4152000/0x0/0x1bfc00000, data 0x5905c09/0x5afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394952704 unmapped: 32382976 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394952704 unmapped: 32382976 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394952704 unmapped: 32382976 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4152000/0x0/0x1bfc00000, data 0x5905c09/0x5afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394952704 unmapped: 32382976 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4152000/0x0/0x1bfc00000, data 0x5905c09/0x5afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4459231 data_alloc: 251658240 data_used: 40636416
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395665408 unmapped: 31670272 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397778944 unmapped: 29556736 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18505800 session 0x558e165af4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e175f3860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394346496 unmapped: 32989184 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e1766e3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 32980992 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 32980992 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4403423 data_alloc: 251658240 data_used: 37068800
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 32980992 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x5275ba7/0x546b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 32980992 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 32980992 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.006259918s of 13.492305756s, submitted: 94
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a415e000/0x0/0x1bfc00000, data 0x5275ba7/0x546b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394223616 unmapped: 33112064 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394223616 unmapped: 33112064 heap: 427335680 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e181d05a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4428377 data_alloc: 251658240 data_used: 37076992
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394321920 unmapped: 34635776 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394321920 unmapped: 34635776 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394321920 unmapped: 34635776 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394321920 unmapped: 34635776 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a44c2000/0x0/0x1bfc00000, data 0x5596ba7/0x578c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4428377 data_alloc: 251658240 data_used: 37076992
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e156a3c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18508400 session 0x558e1780b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a44c2000/0x0/0x1bfc00000, data 0x5596ba7/0x578c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e151370e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.335809708s of 10.403303146s, submitted: 12
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e175b74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4431460 data_alloc: 251658240 data_used: 37085184
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a44c1000/0x0/0x1bfc00000, data 0x5596bb7/0x578d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a44c1000/0x0/0x1bfc00000, data 0x5596bb7/0x578d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a44c1000/0x0/0x1bfc00000, data 0x5596bb7/0x578d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4449124 data_alloc: 251658240 data_used: 39211008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394330112 unmapped: 34627584 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394338304 unmapped: 34619392 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a44c1000/0x0/0x1bfc00000, data 0x5596bb7/0x578d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394338304 unmapped: 34619392 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4449124 data_alloc: 251658240 data_used: 39211008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394346496 unmapped: 34611200 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 34603008 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.381400108s of 14.059766769s, submitted: 20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394616832 unmapped: 34340864 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3b44000/0x0/0x1bfc00000, data 0x5f13bb7/0x610a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395780096 unmapped: 33177600 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3b3d000/0x0/0x1bfc00000, data 0x5f19bb7/0x6110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e181d03c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395812864 unmapped: 33144832 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4538157 data_alloc: 251658240 data_used: 40943616
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395812864 unmapped: 33144832 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e180b32c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395821056 unmapped: 33136640 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1763e000 session 0x558e175f01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395821056 unmapped: 33136640 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395821056 unmapped: 33136640 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3b3e000/0x0/0x1bfc00000, data 0x5f19bb7/0x6110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e1766fe00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e170bb2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3b3e000/0x0/0x1bfc00000, data 0x5f19bb7/0x6110000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e15137680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4424425 data_alloc: 251658240 data_used: 37859328
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e17b005a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4424745 data_alloc: 251658240 data_used: 37867520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4424745 data_alloc: 251658240 data_used: 37867520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395845632 unmapped: 33112064 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395853824 unmapped: 33103872 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.846694946s of 20.415245056s, submitted: 115
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395853824 unmapped: 33103872 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395853824 unmapped: 33103872 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395853824 unmapped: 33103872 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4425273 data_alloc: 251658240 data_used: 37867520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a45a5000/0x0/0x1bfc00000, data 0x54b3ba7/0x56a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4425273 data_alloc: 251658240 data_used: 37867520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395862016 unmapped: 33095680 heap: 428957696 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18501800 session 0x558e175f03c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e186ac5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e15136780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e17b005a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e15137680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18508400 session 0x558e1809ba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.533448219s of 10.228190422s, submitted: 16
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e183152c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395870208 unmapped: 35258368 heap: 431128576 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e180b32c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 36716544 heap: 431128576 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 36716544 heap: 431128576 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4863000/0x0/0x1bfc00000, data 0x4c02b97/0x4df7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4334280 data_alloc: 234881024 data_used: 34078720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 36716544 heap: 431128576 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4863000/0x0/0x1bfc00000, data 0x4c02b97/0x4df7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 36716544 heap: 431128576 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e175b63c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e184fe400 session 0x558e177fe5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4863000/0x0/0x1bfc00000, data 0x4c02b97/0x4df7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e177fe3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394412032 unmapped: 36716544 heap: 431128576 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394346496 unmapped: 40984576 heap: 435331072 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e180b2960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e17b09680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e16914f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8000 session 0x558e1848cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e1809b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394354688 unmapped: 40976384 heap: 435331072 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e16914d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a49d3000/0x0/0x1bfc00000, data 0x5085ba7/0x527b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e1766fe00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4369666 data_alloc: 234881024 data_used: 34078720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394371072 unmapped: 40960000 heap: 435331072 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e1769d4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394371072 unmapped: 40960000 heap: 435331072 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18509800 session 0x558e1601cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b0d4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a49d3000/0x0/0x1bfc00000, data 0x5085ba7/0x527b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e170eac00 session 0x558e15136000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e186ac000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e1601cb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e155d3c00 session 0x558e17b0c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e1601c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394559488 unmapped: 49168384 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.788321495s of 10.283638000s, submitted: 68
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e17b01a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e17b04d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394559488 unmapped: 49168384 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8c00 session 0x558e185acf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18542400 session 0x558e17b0cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394567680 unmapped: 49160192 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e15dadc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a4050000/0x0/0x1bfc00000, data 0x5a08ba7/0x5bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e1601f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8c00 session 0x558e1766e5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17aef000 session 0x558e175f0960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18f0f400 session 0x558e1645ba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17aef000 session 0x558e177fef00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e17821860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e17b0a3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8c00 session 0x558e180e6b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e176cd860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e17b01a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4495957 data_alloc: 234881024 data_used: 36139008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b0c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17aef000 session 0x558e17b0d4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394584064 unmapped: 49143808 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394584064 unmapped: 49143808 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1807b800 session 0x558e1769c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395837440 unmapped: 47890432 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e18542000 session 0x558e170bb4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395837440 unmapped: 47890432 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b01c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3c87000/0x0/0x1bfc00000, data 0x5dd2b97/0x5fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395837440 unmapped: 47890432 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1807b000 session 0x558e17b0a780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17aef000 session 0x558e17b012c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4532400 data_alloc: 251658240 data_used: 40865792
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395837440 unmapped: 47890432 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395837440 unmapped: 47890432 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396615680 unmapped: 47112192 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.217719078s of 10.555859566s, submitted: 33
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a39ae000/0x0/0x1bfc00000, data 0x60a9bca/0x62a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [0,0,0,4])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397197312 unmapped: 46530560 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397516800 unmapped: 46211072 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1807b800 session 0x558e15137c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e193c7800 session 0x558e1780a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4654296 data_alloc: 251658240 data_used: 47378432
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397746176 unmapped: 45981696 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395780096 unmapped: 47947776 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e15e4fc00 session 0x558e1809a960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393715712 unmapped: 50012160 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a3f45000/0x0/0x1bfc00000, data 0x5b12b97/0x5d07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395198464 unmapped: 48529408 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395198464 unmapped: 48529408 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e182a3800 session 0x558e1766fe00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1c8c8c00 session 0x558e181d0f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4485880 data_alloc: 234881024 data_used: 35270656
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 52084736 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e1807b000 session 0x558e17b0cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 ms_handle_reset con 0x558e17aef000 session 0x558e17b02000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391462912 unmapped: 52264960 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 375 ms_handle_reset con 0x558e15e4fc00 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a43a9000/0x0/0x1bfc00000, data 0x56b0b97/0x58a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391462912 unmapped: 52264960 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 375 ms_handle_reset con 0x558e1807b000 session 0x558e181d14a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391462912 unmapped: 52264960 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.299959183s of 10.435162544s, submitted: 139
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e182a3800 session 0x558e180b32c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e18506800 session 0x558e176cda40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e170eac00 session 0x558e18055860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a43a5000/0x0/0x1bfc00000, data 0x56b27f0/0x58a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e15e4fc00 session 0x558e1809ba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 52240384 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e1807b000 session 0x558e175f2780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4381068 data_alloc: 234881024 data_used: 32100352
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391487488 unmapped: 52240384 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e182a3800 session 0x558e17b0c5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e1807a000 session 0x558e176cc780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e17733800 session 0x558e175b6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392347648 unmapped: 51380224 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b0c5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a4c6d000/0x0/0x1bfc00000, data 0x4dea48d/0x4fe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 51355648 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 51355648 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 51355648 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2463402042' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a4c64000/0x0/0x1bfc00000, data 0x4df448d/0x4fea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4399596 data_alloc: 234881024 data_used: 31989760
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392372224 unmapped: 51355648 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e185fd400 session 0x558e17b0da40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392388608 unmapped: 51339264 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 ms_handle_reset con 0x558e17733800 session 0x558e1780a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392388608 unmapped: 51339264 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a4c64000/0x0/0x1bfc00000, data 0x4df446a/0x4fe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392388608 unmapped: 51339264 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.236588478s of 10.469281197s, submitted: 88
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 51322880 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4401330 data_alloc: 234881024 data_used: 31997952
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392404992 unmapped: 51322880 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a4c5e000/0x0/0x1bfc00000, data 0x4df8fa9/0x4fef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e157ac800 session 0x558e175701e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 51298304 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e175ccc00 session 0x558e177fe780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e1807a000 session 0x558e17b01c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e157ac800 session 0x558e1766e5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a5263000/0x0/0x1bfc00000, data 0x47f4fa9/0x49eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4323790 data_alloc: 234881024 data_used: 28024832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a5263000/0x0/0x1bfc00000, data 0x47f4fa9/0x49eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390217728 unmapped: 53510144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b052c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.474965096s of 10.683644295s, submitted: 41
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4322910 data_alloc: 234881024 data_used: 28020736
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390225920 unmapped: 53501952 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e17733800 session 0x558e175f3860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 63225856 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a625a000/0x0/0x1bfc00000, data 0x37fef37/0x39f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 63225856 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a625a000/0x0/0x1bfc00000, data 0x37fef37/0x39f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e185fd400 session 0x558e178210e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380502016 unmapped: 63225856 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e185fd400 session 0x558e1766eb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380510208 unmapped: 63217664 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e157ac800 session 0x558e1645a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050958 data_alloc: 234881024 data_used: 19619840
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 380526592 unmapped: 63201280 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 ms_handle_reset con 0x558e15e4fc00 session 0x558e17820960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a6959000/0x0/0x1bfc00000, data 0x3100f37/0x32f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 65667072 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 65667072 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a74bf000/0x0/0x1bfc00000, data 0x259af37/0x278f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 65667072 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 65667072 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3947320 data_alloc: 218103808 data_used: 18141184
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.170330048s of 10.729709625s, submitted: 69
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 378060800 unmapped: 65667072 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e17733800 session 0x558e1645b860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a792c000/0x0/0x1bfc00000, data 0x212cbd4/0x2321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e1807a000 session 0x558e18055680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e1807a000 session 0x558e186ad860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e157ac800 session 0x558e156a3c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375750656 unmapped: 67977216 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e15e4fc00 session 0x558e186ad680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e17733800 session 0x558e1645a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e185fd400 session 0x558e1601d0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e157ac800 session 0x558e1766f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e15e4fc00 session 0x558e1809b680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 ms_handle_reset con 0x558e17733800 session 0x558e1601fa40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3953827 data_alloc: 218103808 data_used: 15458304
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a73a1000/0x0/0x1bfc00000, data 0x26b6c46/0x28ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375767040 unmapped: 67960832 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375619584 unmapped: 68108288 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e1807a000 session 0x558e1645bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3958682 data_alloc: 218103808 data_used: 15466496
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375619584 unmapped: 68108288 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.215773582s of 10.462486267s, submitted: 96
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375619584 unmapped: 68108288 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375619584 unmapped: 68108288 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a739e000/0x0/0x1bfc00000, data 0x26b8785/0x28b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375619584 unmapped: 68108288 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375619584 unmapped: 68108288 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3958226 data_alloc: 218103808 data_used: 15470592
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375627776 unmapped: 68100096 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a739e000/0x0/0x1bfc00000, data 0x26b8785/0x28b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3999346 data_alloc: 234881024 data_used: 21278720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a739e000/0x0/0x1bfc00000, data 0x26b8785/0x28b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.829099655s of 10.058479309s, submitted: 2
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e1807b000 session 0x558e178205a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e182a3800 session 0x558e1766e1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e157ac800 session 0x558e186acd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4026386 data_alloc: 234881024 data_used: 21278720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 68075520 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4026474 data_alloc: 234881024 data_used: 21278720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375660544 unmapped: 68067328 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.606068611s of 10.698170662s, submitted: 7
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375660544 unmapped: 68067328 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e15e4fc00 session 0x558e175f0f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e17733800 session 0x558e17b041e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375660544 unmapped: 68067328 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375660544 unmapped: 68067328 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375668736 unmapped: 68059136 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4054530 data_alloc: 234881024 data_used: 25251840
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6fd4000/0x0/0x1bfc00000, data 0x2a82785/0x2c7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4054530 data_alloc: 234881024 data_used: 25251840
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376037376 unmapped: 67690496 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e1807b800 session 0x558e186ada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e157ac800 session 0x558e186adc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e15e4fc00 session 0x558e15136000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e17733800 session 0x558e1601f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.093165398s of 10.491745949s, submitted: 4
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e18506800 session 0x558e17b09860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e1c8c8c00 session 0x558e151363c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e182a3800 session 0x558e1601e000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e157ac800 session 0x558e181d0d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e15e4fc00 session 0x558e1809ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e17733800 session 0x558e177ff680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e18506800 session 0x558e15693a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375308288 unmapped: 68419584 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e1558b000 session 0x558e15136960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375332864 unmapped: 68395008 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a7140000/0x0/0x1bfc00000, data 0x2915785/0x2b0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 375332864 unmapped: 68395008 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4012763 data_alloc: 234881024 data_used: 19439616
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376324096 unmapped: 67403776 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376807424 unmapped: 66920448 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376741888 unmapped: 66985984 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376741888 unmapped: 66985984 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376741888 unmapped: 66985984 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6af6000/0x0/0x1bfc00000, data 0x2b50785/0x2d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4040785 data_alloc: 234881024 data_used: 19439616
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376741888 unmapped: 66985984 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e15e4fc00 session 0x558e186ada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e17733800 session 0x558e17b041e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376741888 unmapped: 66985984 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 heartbeat osd_stat(store_statfs(0x1a6af6000/0x0/0x1bfc00000, data 0x2b50785/0x2d48000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376741888 unmapped: 66985984 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e156bd400 session 0x558e1848d680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.218441963s of 11.076782227s, submitted: 109
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 ms_handle_reset con 0x558e15206800 session 0x558e186ac780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 376766464 unmapped: 66961408 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e182a3800 session 0x558e186acd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4116851 data_alloc: 234881024 data_used: 20955136
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a62ec000/0x0/0x1bfc00000, data 0x3355434/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4134679 data_alloc: 234881024 data_used: 23478272
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a62eb000/0x0/0x1bfc00000, data 0x3356434/0x3552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 377896960 unmapped: 65830912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e156bd400 session 0x558e17b052c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.917867661s of 10.088785172s, submitted: 32
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e15e4fc00 session 0x558e177fe780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 381009920 unmapped: 62717952 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 381009920 unmapped: 62717952 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4155324 data_alloc: 234881024 data_used: 30826496
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 382738432 unmapped: 60989440 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 383877120 unmapped: 59850752 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a5c28000/0x0/0x1bfc00000, data 0x3a1945d/0x3c16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e17733800 session 0x558e175b6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 384303104 unmapped: 59424768 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a5bff000/0x0/0x1bfc00000, data 0x3a4245d/0x3c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 384778240 unmapped: 58949632 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e18503800 session 0x558e1809be00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 384647168 unmapped: 59080704 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e156bd400 session 0x558e1845dc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4239132 data_alloc: 234881024 data_used: 31727616
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 386793472 unmapped: 56934400 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e15e4fc00 session 0x558e18055e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e17733800 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e182a3800 session 0x558e17b054a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e191fc800 session 0x558e17b03e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e156bd400 session 0x558e17820780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387014656 unmapped: 56713216 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e17733800 session 0x558e175b7c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e182a3800 session 0x558e170bb0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a583f000/0x0/0x1bfc00000, data 0x3e01496/0x3ffe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 56614912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 56614912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a5793000/0x0/0x1bfc00000, data 0x3eae496/0x40ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387112960 unmapped: 56614912 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e15e4fc00 session 0x558e175b70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4267416 data_alloc: 234881024 data_used: 32960512
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 56606720 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 56606720 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.698225975s of 14.036934853s, submitted: 157
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 56606720 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 56606720 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a5772000/0x0/0x1bfc00000, data 0x3ecf496/0x40cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387121152 unmapped: 56606720 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e17aeec00 session 0x558e1706b860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4272564 data_alloc: 234881024 data_used: 32972800
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a5757000/0x0/0x1bfc00000, data 0x3ee74a6/0x40e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 56582144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a5757000/0x0/0x1bfc00000, data 0x3ee74a6/0x40e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 56582144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a574a000/0x0/0x1bfc00000, data 0x3ef64a6/0x40f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 56582144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e15206800 session 0x558e1809ad20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387145728 unmapped: 56582144 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e1558b000 session 0x558e175f2780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 387153920 unmapped: 56573952 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4276776 data_alloc: 234881024 data_used: 33533952
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 388407296 unmapped: 55320576 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 ms_handle_reset con 0x558e17733800 session 0x558e175f2780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 381 ms_handle_reset con 0x558e182a3800 session 0x558e1809ad20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 381 ms_handle_reset con 0x558e1807ac00 session 0x558e180b32c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390430720 unmapped: 53297152 heap: 443727872 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 381 ms_handle_reset con 0x558e1f399800 session 0x558e170bb0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 381 heartbeat osd_stat(store_statfs(0x1a51f5000/0x0/0x1bfc00000, data 0x44490ff/0x4648000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 381 ms_handle_reset con 0x558e15206800 session 0x558e175b7c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.467229843s of 10.006500244s, submitted: 125
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 382 ms_handle_reset con 0x558e1558b000 session 0x558e180b34a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400695296 unmapped: 50667520 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 383 ms_handle_reset con 0x558e17733800 session 0x558e17b02960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400703488 unmapped: 50659328 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400703488 unmapped: 50659328 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a424e000/0x0/0x1bfc00000, data 0x53eaae5/0x55ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4498789 data_alloc: 251658240 data_used: 44150784
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400711680 unmapped: 50651136 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 ms_handle_reset con 0x558e182a3800 session 0x558e181d1680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 ms_handle_reset con 0x558e1807ac00 session 0x558e1780a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 ms_handle_reset con 0x558e15206800 session 0x558e186ac780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a424e000/0x0/0x1bfc00000, data 0x53eaae5/0x55ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 ms_handle_reset con 0x558e1558b000 session 0x558e15136960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401489920 unmapped: 49872896 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a3e89000/0x0/0x1bfc00000, data 0x57a87bc/0x59ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401580032 unmapped: 49782784 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401694720 unmapped: 49668096 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401694720 unmapped: 49668096 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4544491 data_alloc: 251658240 data_used: 44388352
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401694720 unmapped: 49668096 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a3e4f000/0x0/0x1bfc00000, data 0x57e72fb/0x59ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 49659904 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 49659904 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 49659904 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401702912 unmapped: 49659904 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.308389664s of 12.645462036s, submitted: 105
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a3e4f000/0x0/0x1bfc00000, data 0x57e72fb/0x59ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e17733800 session 0x558e17b025a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e156bd400 session 0x558e17b0c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4373864 data_alloc: 234881024 data_used: 29761536
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a3e4f000/0x0/0x1bfc00000, data 0x57e72fb/0x59ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395419648 unmapped: 55943168 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e156bd400 session 0x558e1601ef00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4329210 data_alloc: 234881024 data_used: 28971008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50a3000/0x0/0x1bfc00000, data 0x4597279/0x479b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50a3000/0x0/0x1bfc00000, data 0x4597279/0x479b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4329430 data_alloc: 234881024 data_used: 28971008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a50a3000/0x0/0x1bfc00000, data 0x4597279/0x479b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1807a000 session 0x558e18055860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 55934976 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.159786224s of 13.253636360s, submitted: 33
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e15206800 session 0x558e15137860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390758400 unmapped: 60604416 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390758400 unmapped: 60604416 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4252872 data_alloc: 234881024 data_used: 24997888
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390758400 unmapped: 60604416 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390758400 unmapped: 60604416 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a56ad000/0x0/0x1bfc00000, data 0x3f8d279/0x4191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390758400 unmapped: 60604416 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1558b000 session 0x558e180e70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e17733800 session 0x558e180b2f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e17733800 session 0x558e175b74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390766592 unmapped: 60596224 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e15206800 session 0x558e15ea0960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e156bd400 session 0x558e1845cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1558b000 session 0x558e170bb2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1807a000 session 0x558e1706b2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1807a000 session 0x558e1848c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e15206800 session 0x558e14a9bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1558b000 session 0x558e1780ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405389312 unmapped: 45973504 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e156bd400 session 0x558e1848d680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e17733800 session 0x558e17b030e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4397237 data_alloc: 234881024 data_used: 30777344
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e17733800 session 0x558e17b054a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 398860288 unmapped: 52502528 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 ms_handle_reset con 0x558e1558b000 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 386 ms_handle_reset con 0x558e15206800 session 0x558e186ac000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 386 ms_handle_reset con 0x558e156bd400 session 0x558e17b03e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 52494336 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 386 ms_handle_reset con 0x558e1807a000 session 0x558e180e6b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 398868480 unmapped: 52494336 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a381c000/0x0/0x1bfc00000, data 0x4c79f44/0x4e81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.144133568s of 10.520051003s, submitted: 71
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396713984 unmapped: 54648832 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 387 ms_handle_reset con 0x558e156bd400 session 0x558e1769de00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393216000 unmapped: 58146816 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4287008 data_alloc: 234881024 data_used: 23904256
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a472b000/0x0/0x1bfc00000, data 0x3d6ab49/0x3f71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393216000 unmapped: 58146816 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 387 ms_handle_reset con 0x558e17fd0800 session 0x558e1601f680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 387 ms_handle_reset con 0x558e193c6c00 session 0x558e17b0b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 387 ms_handle_reset con 0x558e15e4fc00 session 0x558e180b3860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393216000 unmapped: 58146816 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393216000 unmapped: 58146816 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393224192 unmapped: 58138624 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4727000/0x0/0x1bfc00000, data 0x3d6eb49/0x3f75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393166848 unmapped: 58195968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4290486 data_alloc: 234881024 data_used: 24068096
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393166848 unmapped: 58195968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393166848 unmapped: 58195968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4725000/0x0/0x1bfc00000, data 0x3d706c0/0x3f78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393166848 unmapped: 58195968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.034457207s of 10.173830032s, submitted: 66
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4725000/0x0/0x1bfc00000, data 0x3d706c0/0x3f78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4298398 data_alloc: 234881024 data_used: 25341952
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4725000/0x0/0x1bfc00000, data 0x3d706c0/0x3f78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393175040 unmapped: 58187776 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4306726 data_alloc: 234881024 data_used: 25665536
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4724000/0x0/0x1bfc00000, data 0x3d706c0/0x3f78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.978119850s of 11.002757072s, submitted: 7
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a4724000/0x0/0x1bfc00000, data 0x3d706c0/0x3f78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4312707 data_alloc: 234881024 data_used: 26341376
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15206800 session 0x558e175f2d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e1558b000 session 0x558e17b01c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e17aef400 session 0x558e17b0cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393191424 unmapped: 58171392 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 61267968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15206800 session 0x558e180543c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 61267968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 61267968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a5068000/0x0/0x1bfc00000, data 0x30976a3/0x329d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4161502 data_alloc: 234881024 data_used: 18583552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 61267968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390094848 unmapped: 61267968 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e156bd400 session 0x558e17b0d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4187208 data_alloc: 234881024 data_used: 18583552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a515a000/0x0/0x1bfc00000, data 0x333e6a3/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a515a000/0x0/0x1bfc00000, data 0x333e6a3/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4187208 data_alloc: 234881024 data_used: 18583552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390610944 unmapped: 60751872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.827938080s of 18.811788559s, submitted: 35
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15e4fc00 session 0x558e17b04960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a515a000/0x0/0x1bfc00000, data 0x333e6a3/0x3544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390619136 unmapped: 60743680 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15206800 session 0x558e175b6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 60727296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4200538 data_alloc: 234881024 data_used: 20135936
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 60727296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e156bd400 session 0x558e175f0b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e17aef400 session 0x558e1809a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 390635520 unmapped: 60727296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e193c6c00 session 0x558e1601c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e17fd0800 session 0x558e1601f860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e1f398400 session 0x558e180545a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a5159000/0x0/0x1bfc00000, data 0x333e6c6/0x3545000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4222090 data_alloc: 234881024 data_used: 29859840
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a5151000/0x0/0x1bfc00000, data 0x33436c6/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a5151000/0x0/0x1bfc00000, data 0x33436c6/0x354a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1755f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 391544832 unmapped: 59817984 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.113670349s of 12.181924820s, submitted: 19
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4266276 data_alloc: 234881024 data_used: 29851648
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397074432 unmapped: 54288384 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397172736 unmapped: 54190080 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a354a000/0x0/0x1bfc00000, data 0x3dad6c6/0x3fb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397221888 unmapped: 54140928 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15e4e000 session 0x558e178210e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397656064 unmapped: 53706752 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e184f1000 session 0x558e17b0a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 45203456 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e18089800 session 0x558e1780a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a2b28000/0x0/0x1bfc00000, data 0x47cf6c6/0x49d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4392756 data_alloc: 234881024 data_used: 31600640
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397770752 unmapped: 53592064 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397770752 unmapped: 53592064 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397770752 unmapped: 53592064 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397778944 unmapped: 53583872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e17733800 session 0x558e181d05a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397778944 unmapped: 53583872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a2b28000/0x0/0x1bfc00000, data 0x47cf6c6/0x49d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a2b04000/0x0/0x1bfc00000, data 0x47f36c6/0x49fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4393800 data_alloc: 234881024 data_used: 31608832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397778944 unmapped: 53583872 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a2b04000/0x0/0x1bfc00000, data 0x47f36c6/0x49fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15e4e000 session 0x558e15ea1860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.291234970s of 11.182560921s, submitted: 119
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a2b04000/0x0/0x1bfc00000, data 0x47f36c6/0x49fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397787136 unmapped: 53575680 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e184f1000 session 0x558e165aef00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e18089800 session 0x558e169141e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397795328 unmapped: 53567488 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e1f399800 session 0x558e175f1c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e1807ac00 session 0x558e177ffa40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397795328 unmapped: 53567488 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397803520 unmapped: 53559296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e15e4e000 session 0x558e170bba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4301232 data_alloc: 234881024 data_used: 28155904
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397803520 unmapped: 53559296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397803520 unmapped: 53559296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 heartbeat osd_stat(store_statfs(0x1a325e000/0x0/0x1bfc00000, data 0x409b5f1/0x429f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397803520 unmapped: 53559296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e1807ac00 session 0x558e16915a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397803520 unmapped: 53559296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 ms_handle_reset con 0x558e18089800 session 0x558e175f0f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397803520 unmapped: 53559296 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 389 ms_handle_reset con 0x558e184f1000 session 0x558e186ac780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4231324 data_alloc: 234881024 data_used: 24358912
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397860864 unmapped: 53501952 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397860864 unmapped: 53501952 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.682888031s of 10.254242897s, submitted: 91
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 389 ms_handle_reset con 0x558e1f398400 session 0x558e184945a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 389 ms_handle_reset con 0x558e15e4e000 session 0x558e1769d0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 389 heartbeat osd_stat(store_statfs(0x1a3a5b000/0x0/0x1bfc00000, data 0x389d268/0x3aa1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397860864 unmapped: 53501952 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 389 ms_handle_reset con 0x558e1807ac00 session 0x558e1769c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 ms_handle_reset con 0x558e1f399800 session 0x558e175b7860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397877248 unmapped: 53485568 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a3a54000/0x0/0x1bfc00000, data 0x38a1f10/0x3aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 ms_handle_reset con 0x558e18089800 session 0x558e15693860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 ms_handle_reset con 0x558e184f1000 session 0x558e185ada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397893632 unmapped: 53469184 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4244135 data_alloc: 234881024 data_used: 24371200
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397893632 unmapped: 53469184 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397893632 unmapped: 53469184 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.3 total, 600.0 interval
Cumulative writes: 54K writes, 207K keys, 54K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
Cumulative WAL: 54K writes, 19K syncs, 2.79 writes per sync, written: 0.20 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6949 writes, 23K keys, 6949 commit groups, 1.0 writes per commit group, ingest: 23.77 MB, 0.04 MB/s
Interval WAL: 6949 writes, 2806 syncs, 2.48 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000266 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558e13cb6430#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 0.000266 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.3 total, 4800.2 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400130048 unmapped: 51232768 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 ms_handle_reset con 0x558e18089800 session 0x558e17b0a780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 ms_handle_reset con 0x558e184f1000 session 0x558e170bb4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400130048 unmapped: 51232768 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a3a50000/0x0/0x1bfc00000, data 0x38a3a70/0x3aad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400138240 unmapped: 51224576 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 ms_handle_reset con 0x558e1f399800 session 0x558e170bab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4352096 data_alloc: 234881024 data_used: 31563776
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 51200000 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 ms_handle_reset con 0x558e1f398400 session 0x558e18315680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 51200000 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a3544000/0x0/0x1bfc00000, data 0x3dadae2/0x3fb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 51200000 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400162816 unmapped: 51200000 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.204914093s of 12.203176498s, submitted: 62
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 ms_handle_reset con 0x558e18500800 session 0x558e17b0c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a3545000/0x0/0x1bfc00000, data 0x3dadae2/0x3fb9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400203776 unmapped: 51159040 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e18089800 session 0x558e1601c5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4357018 data_alloc: 234881024 data_used: 31584256
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400203776 unmapped: 51159040 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a3536000/0x0/0x1bfc00000, data 0x3dba73b/0x3fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400211968 unmapped: 51150848 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e14cb3800 session 0x558e14a9ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a3536000/0x0/0x1bfc00000, data 0x3dba73b/0x3fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 49725440 heap: 451362816 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e184f1000 session 0x558e175b65a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e1f398400 session 0x558e18055860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e1f399800 session 0x558e17b09e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401932288 unmapped: 53108736 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e14cb3800 session 0x558e1780a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e18089800 session 0x558e1848c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a285a000/0x0/0x1bfc00000, data 0x4a8f73b/0x4c9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401940480 unmapped: 53100544 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4466182 data_alloc: 234881024 data_used: 31621120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401309696 unmapped: 53731328 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a284f000/0x0/0x1bfc00000, data 0x4aa273b/0x4caf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.030104637s of 11.092157364s, submitted: 82
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4465058 data_alloc: 234881024 data_used: 31625216
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a2845000/0x0/0x1bfc00000, data 0x4aab73b/0x4cb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4465374 data_alloc: 234881024 data_used: 31625216
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a2843000/0x0/0x1bfc00000, data 0x4aae73b/0x4cbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e15e4e000 session 0x558e169152c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e1807ac00 session 0x558e1769d680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a2843000/0x0/0x1bfc00000, data 0x4aae73b/0x4cbb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401334272 unmapped: 53706752 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.139742851s of 10.544818878s, submitted: 5
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4468456 data_alloc: 234881024 data_used: 31625216
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401342464 unmapped: 53698560 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e18fcb000 session 0x558e186ada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e1f398400 session 0x558e15136780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e14cb3800 session 0x558e17b0b2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 49422336 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a2842000/0x0/0x1bfc00000, data 0x4aae74b/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 49414144 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a272d000/0x0/0x1bfc00000, data 0x4bc2774/0x4dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 49405952 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a272d000/0x0/0x1bfc00000, data 0x4bc2774/0x4dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e15e4e000 session 0x558e15137680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 49405952 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a272d000/0x0/0x1bfc00000, data 0x4bc27ad/0x4dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e1807ac00 session 0x558e1809a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4542217 data_alloc: 251658240 data_used: 35885056
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a272d000/0x0/0x1bfc00000, data 0x4bc27ad/0x4dd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 49405952 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 49405952 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 49405952 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a272e000/0x0/0x1bfc00000, data 0x4bc279f/0x4dd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 49397760 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e18089800 session 0x558e17b0c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 49397760 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e14cb3800 session 0x558e17b043c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4541354 data_alloc: 251658240 data_used: 35885056
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 49397760 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.764206886s of 11.187674522s, submitted: 45
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 49389568 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407109632 unmapped: 47931392 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1dfd000/0x0/0x1bfc00000, data 0x54f37de/0x5701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 47726592 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 44343296 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4650584 data_alloc: 251658240 data_used: 41381888
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 44343296 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1dfd000/0x0/0x1bfc00000, data 0x54f37de/0x5701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 44343296 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1dfd000/0x0/0x1bfc00000, data 0x54f37de/0x5701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 44343296 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410697728 unmapped: 44343296 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e15e4e000 session 0x558e1769cd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 44441600 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e1f398400 session 0x558e1601e1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1dfd000/0x0/0x1bfc00000, data 0x54f37de/0x5701000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4648676 data_alloc: 251658240 data_used: 41385984
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 44441600 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 44441600 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 44441600 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410599424 unmapped: 44441600 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.845231056s of 13.115181923s, submitted: 51
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410607616 unmapped: 44433408 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e18507400 session 0x558e17b0cb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e17648800 session 0x558e175f0960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647531 data_alloc: 251658240 data_used: 41443328
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410624000 unmapped: 44417024 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1dfe000/0x0/0x1bfc00000, data 0x54f37ce/0x5700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410624000 unmapped: 44417024 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410624000 unmapped: 44417024 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1dfe000/0x0/0x1bfc00000, data 0x54f37ce/0x5700000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410624000 unmapped: 44417024 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413155328 unmapped: 41885696 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a1ab4000/0x0/0x1bfc00000, data 0x583d7ce/0x5a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4677125 data_alloc: 251658240 data_used: 41480192
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 411090944 unmapped: 43950080 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 ms_handle_reset con 0x558e14cb3800 session 0x558e1645bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 411148288 unmapped: 43892736 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e15e4e000 session 0x558e175f2d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412303360 unmapped: 42737664 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 42729472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.791529179s of 10.099063873s, submitted: 91
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412311552 unmapped: 42729472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e18507400 session 0x558e1645ad20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e18fcb800 session 0x558e17b0d0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e1f398400 session 0x558e1769d0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4626733 data_alloc: 251658240 data_used: 36487168
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e14cb3800 session 0x558e175f2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1d5d000/0x0/0x1bfc00000, data 0x5594409/0x57a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407609344 unmapped: 47431680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407609344 unmapped: 47431680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e15e4e000 session 0x558e176cc5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407609344 unmapped: 47431680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 ms_handle_reset con 0x558e18507400 session 0x558e1645a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407609344 unmapped: 47431680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 407617536 unmapped: 47423488 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1d59000/0x0/0x1bfc00000, data 0x5595f58/0x57a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e17fc2800 session 0x558e186acf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e1bf3e400 session 0x558e1766f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4625744 data_alloc: 251658240 data_used: 37576704
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405807104 unmapped: 49233920 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1d5a000/0x0/0x1bfc00000, data 0x5595f58/0x57a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1d54000/0x0/0x1bfc00000, data 0x559bf58/0x57aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4626744 data_alloc: 251658240 data_used: 37605376
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.658314705s of 11.066893578s, submitted: 58
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e14cb3800 session 0x558e176cd860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e1807ac00 session 0x558e176ccf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15e4e000 session 0x558e17b0a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1d53000/0x0/0x1bfc00000, data 0x559bf68/0x57ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4573802 data_alloc: 251658240 data_used: 37265408
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405823488 unmapped: 49217536 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a2403000/0x0/0x1bfc00000, data 0x4eebf06/0x50fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 49127424 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e17fc2800 session 0x558e177fe780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 49127424 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 49127424 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e18507400 session 0x558e178214a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 49127424 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a2403000/0x0/0x1bfc00000, data 0x4eebef6/0x50f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,0,0,0,0,3,10,3])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4598865 data_alloc: 251658240 data_used: 37642240
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 409452544 unmapped: 45588480 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.609168053s of 10.070733070s, submitted: 55
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e14cb3800 session 0x558e17661680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15e4e000 session 0x558e156a2000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406315008 unmapped: 48726016 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406315008 unmapped: 48726016 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406315008 unmapped: 48726016 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406323200 unmapped: 48717824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1f3d000/0x0/0x1bfc00000, data 0x53aef68/0x55be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4627243 data_alloc: 251658240 data_used: 37638144
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406323200 unmapped: 48717824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e1558b000 session 0x558e1769da40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15e4fc00 session 0x558e1601dc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e184f1000 session 0x558e175b74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406323200 unmapped: 48717824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1f40000/0x0/0x1bfc00000, data 0x53aef68/0x55be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x186ff9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406331392 unmapped: 48709632 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e18fcb800 session 0x558e1601d0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e17732800 session 0x558e183150e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e14cb3800 session 0x558e152af4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406421504 unmapped: 48619520 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e1558b000 session 0x558e1780bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406478848 unmapped: 48562176 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4486172 data_alloc: 234881024 data_used: 34152448
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.391663551s of 10.001594543s, submitted: 332
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406536192 unmapped: 48504832 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406544384 unmapped: 48496640 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a293e000/0x0/0x1bfc00000, data 0x45a2f35/0x47b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 406544384 unmapped: 48496640 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15e4fc00 session 0x558e177ffa40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 54001664 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 54001664 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4305426 data_alloc: 234881024 data_used: 23605248
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 54001664 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3755000/0x0/0x1bfc00000, data 0x378bf35/0x3999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 54001664 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401039360 unmapped: 54001664 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 53993472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3755000/0x0/0x1bfc00000, data 0x378bf35/0x3999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401047552 unmapped: 53993472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4308718 data_alloc: 234881024 data_used: 23732224
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3755000/0x0/0x1bfc00000, data 0x378bf35/0x3999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3755000/0x0/0x1bfc00000, data 0x378bf35/0x3999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3755000/0x0/0x1bfc00000, data 0x378bf35/0x3999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3755000/0x0/0x1bfc00000, data 0x378bf35/0x3999000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x18b0f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.956764221s of 14.579607964s, submitted: 69
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4315399 data_alloc: 234881024 data_used: 24043520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e1558b000 session 0x558e17b0bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401072128 unmapped: 53968896 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a4790000/0x0/0x1bfc00000, data 0x3790f35/0x399e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a4790000/0x0/0x1bfc00000, data 0x3790f35/0x399e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4344399 data_alloc: 234881024 data_used: 27627520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a478f000/0x0/0x1bfc00000, data 0x3791f35/0x399f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4344639 data_alloc: 234881024 data_used: 27627520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a478f000/0x0/0x1bfc00000, data 0x3791f35/0x399f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399966208 unmapped: 55074816 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a478f000/0x0/0x1bfc00000, data 0x3791f35/0x399f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399974400 unmapped: 55066624 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.191344261s of 11.850981712s, submitted: 11
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403210240 unmapped: 51830784 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 52109312 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3fe9000/0x0/0x1bfc00000, data 0x3f37f35/0x4145000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402931712 unmapped: 52109312 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4414111 data_alloc: 234881024 data_used: 28241920
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403841024 unmapped: 51200000 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e14cb3800 session 0x558e181d01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 51462144 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403578880 unmapped: 51462144 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404627456 unmapped: 50413568 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404848640 unmapped: 50192384 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3e55000/0x0/0x1bfc00000, data 0x40cbf35/0x42d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4413307 data_alloc: 234881024 data_used: 28295168
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404897792 unmapped: 50143232 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404897792 unmapped: 50143232 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e17fc2800 session 0x558e18494f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404897792 unmapped: 50143232 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.586421013s of 11.006023407s, submitted: 119
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15e4e000 session 0x558e175b6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404905984 unmapped: 50135040 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404905984 unmapped: 50135040 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4412079 data_alloc: 234881024 data_used: 28307456
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a3e56000/0x0/0x1bfc00000, data 0x40cbed3/0x42d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404905984 unmapped: 50135040 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399138816 unmapped: 55902208 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15273c00 session 0x558e186acd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e14cb3800 session 0x558e17b0a3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e1558b000 session 0x558e17b03e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 ms_handle_reset con 0x558e15e4e000 session 0x558e175b7680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399138816 unmapped: 55902208 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e17fc2800 session 0x558e175b63c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e191fd800 session 0x558e1645b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e1807ac00 session 0x558e152afc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e1bf3e400 session 0x558e185ac000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e14cb3800 session 0x558e1780b860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399155200 unmapped: 55885824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e1558b000 session 0x558e1645b2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 ms_handle_reset con 0x558e15e4e000 session 0x558e156a34a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399155200 unmapped: 55885824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a4c96000/0x0/0x1bfc00000, data 0x3288b3c/0x3497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4244800 data_alloc: 234881024 data_used: 19116032
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399155200 unmapped: 55885824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 heartbeat osd_stat(store_statfs(0x1a4c96000/0x0/0x1bfc00000, data 0x3288b3c/0x3497000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399155200 unmapped: 55885824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 395 handle_osd_map epochs [396,396], i have 396, src has [1,396]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 396 ms_handle_reset con 0x558e14cb3800 session 0x558e17b02960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399155200 unmapped: 55885824 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399163392 unmapped: 55877632 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.041492462s of 11.409887314s, submitted: 54
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399163392 unmapped: 55877632 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4260711 data_alloc: 234881024 data_used: 19120128
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399163392 unmapped: 55877632 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399163392 unmapped: 55877632 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a4c91000/0x0/0x1bfc00000, data 0x3443790/0x349d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399114240 unmapped: 55926784 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 397 ms_handle_reset con 0x558e1558b000 session 0x558e180e6b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399114240 unmapped: 55926784 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399114240 unmapped: 55926784 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4262754 data_alloc: 234881024 data_used: 19124224
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399122432 unmapped: 55918592 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 397 ms_handle_reset con 0x558e1807ac00 session 0x558e18495a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 55746560 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a4c6b000/0x0/0x1bfc00000, data 0x346942d/0x34c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399294464 unmapped: 55746560 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4c6b000/0x0/0x1bfc00000, data 0x346942d/0x34c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 55721984 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 55721984 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4c67000/0x0/0x1bfc00000, data 0x346af6c/0x34c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4c67000/0x0/0x1bfc00000, data 0x346af6c/0x34c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4293445 data_alloc: 234881024 data_used: 22315008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 55721984 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 55721984 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 55721984 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4c67000/0x0/0x1bfc00000, data 0x346af6c/0x34c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399319040 unmapped: 55721984 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.863586426s of 15.354431152s, submitted: 42
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 55713792 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4317447 data_alloc: 234881024 data_used: 22310912
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 54665216 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 54665216 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1bf3e400 session 0x558e1766ed20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e17fc2800 session 0x558e185ac960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400375808 unmapped: 54665216 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4c64000/0x0/0x1bfc00000, data 0x3775f6c/0x34ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1807ac00 session 0x558e178210e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1558b000 session 0x558e175714a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e14cb3800 session 0x558e1601c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4271035 data_alloc: 234881024 data_used: 19124224
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4fa5000/0x0/0x1bfc00000, data 0x3433fce/0x3189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4271035 data_alloc: 234881024 data_used: 19124224
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395927552 unmapped: 59113472 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.609726906s of 11.277618408s, submitted: 45
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1bf3e400 session 0x558e176cc780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4fa5000/0x0/0x1bfc00000, data 0x3433fce/0x3189000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,2])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e18634000 session 0x558e175705a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1f399400 session 0x558e175f0b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 55640064 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399400960 unmapped: 55640064 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399417344 unmapped: 55623680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399417344 unmapped: 55623680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a4fa4000/0x0/0x1bfc00000, data 0x3433ff1/0x318a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4287232 data_alloc: 234881024 data_used: 24543232
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399417344 unmapped: 55623680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399417344 unmapped: 55623680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399417344 unmapped: 55623680 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e14cb3800 session 0x558e17570b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1558b000 session 0x558e17571a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399572992 unmapped: 55468032 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1807ac00 session 0x558e175f3860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399572992 unmapped: 55468032 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a426a000/0x0/0x1bfc00000, data 0x416d053/0x3ec4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4393581 data_alloc: 234881024 data_used: 24551424
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399572992 unmapped: 55468032 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399572992 unmapped: 55468032 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.363395691s of 11.725243568s, submitted: 49
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399605760 unmapped: 55435264 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 ms_handle_reset con 0x558e1bf3e400 session 0x558e186acb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403841024 unmapped: 51200000 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 51191808 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a383b000/0x0/0x1bfc00000, data 0x4b99cac/0x48f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 400 ms_handle_reset con 0x558e1bf3e400 session 0x558e177fe3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4505540 data_alloc: 234881024 data_used: 27582464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 51159040 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 52969472 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 401 ms_handle_reset con 0x558e1807ac00 session 0x558e175f0000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e1558b000 session 0x558e169141e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 62472192 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e18506000 session 0x558e15dada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e14cb3800 session 0x558e1780af00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 62464000 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400982016 unmapped: 62455808 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a2d0c000/0x0/0x1bfc00000, data 0x56c18fb/0x5420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4608576 data_alloc: 234881024 data_used: 29917184
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399712256 unmapped: 63725568 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a2d0c000/0x0/0x1bfc00000, data 0x56c18fb/0x5420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e1807ac00 session 0x558e175a72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397410304 unmapped: 66027520 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e1bf3e400 session 0x558e1769c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397713408 unmapped: 65724416 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.380999565s of 10.683680534s, submitted: 62
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e1769c960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18632c00 session 0x558e17b0bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397713408 unmapped: 65724416 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a221e000/0x0/0x1bfc00000, data 0x61af58c/0x5f0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e1769d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e175a6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397713408 unmapped: 65724416 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1558b000 session 0x558e175a6780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a221e000/0x0/0x1bfc00000, data 0x61af58c/0x5f0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4627232 data_alloc: 234881024 data_used: 24453120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e18494f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x5a8f58c/0x57ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4627232 data_alloc: 234881024 data_used: 24453120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396263424 unmapped: 67174400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396263424 unmapped: 67174400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e181d01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1807ac00 session 0x558e1780bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1807ac00 session 0x558e175b74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e183150e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396263424 unmapped: 67174400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1558b000 session 0x558e156a2000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e17b03680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e17b001e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x5a8f58c/0x57ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e176cd860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e175a6780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674058 data_alloc: 234881024 data_used: 24453120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a2390000/0x0/0x1bfc00000, data 0x603d59c/0x5d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a2390000/0x0/0x1bfc00000, data 0x603d59c/0x5d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1558b000 session 0x558e17b0bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1807ac00 session 0x558e1769c960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674058 data_alloc: 234881024 data_used: 24453120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e1769c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.228857040s of 18.564140320s, submitted: 26
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e1780af00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18634000 session 0x558e1766f860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e156a3c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395018240 unmapped: 68419584 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a238f000/0x0/0x1bfc00000, data 0x603d5ac/0x5d9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395018240 unmapped: 68419584 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18632c00 session 0x558e17b014a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718203 data_alloc: 234881024 data_used: 30314496
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395026432 unmapped: 68411392 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394797056 unmapped: 68640768 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e17e7cc00 session 0x558e1601d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e17e7cc00 session 0x558e180e7e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a238e000/0x0/0x1bfc00000, data 0x603d5ac/0x5d9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397090816 unmapped: 66347008 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e17661680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397090816 unmapped: 66347008 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397090816 unmapped: 66347008 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e18506000 session 0x558e1706b860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a2390000/0x0/0x1bfc00000, data 0x5d3253a/0x5a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a2698000/0x0/0x1bfc00000, data 0x5d341e7/0x5a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4738301 data_alloc: 251658240 data_used: 38334464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397123584 unmapped: 66314240 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e17732800 session 0x558e17b0d4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e18fcb800 session 0x558e178203c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e15556c00 session 0x558e17820780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 66297856 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 66297856 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.787518501s of 11.968016624s, submitted: 61
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 62447616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e17732800 session 0x558e180541e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401440768 unmapped: 61997056 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e17e7cc00 session 0x558e152af680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4645895 data_alloc: 234881024 data_used: 33972224
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a1e15000/0x0/0x1bfc00000, data 0x60fc1d7/0x6319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399474688 unmapped: 63963136 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399474688 unmapped: 63963136 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401874944 unmapped: 61562880 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 405 heartbeat osd_stat(store_statfs(0x1a216d000/0x0/0x1bfc00000, data 0x6470cb4/0x5fc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 61087744 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e18506000 session 0x558e17570000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 61046784 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4804117 data_alloc: 234881024 data_used: 34136064
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 61046784 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 61046784 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 61038592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 61038592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.950796127s of 10.631396294s, submitted: 212
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a20d1000/0x0/0x1bfc00000, data 0x64fe943/0x604e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e18fcb800 session 0x558e1766f2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 59662336 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e15556c00 session 0x558e1601c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e17732800 session 0x558e176cc5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e17e7cc00 session 0x558e176cda40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e18506000 session 0x558e15ea0d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4815156 data_alloc: 234881024 data_used: 34136064
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a1f67000/0x0/0x1bfc00000, data 0x6676943/0x61c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 59654144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e1bf3e400 session 0x558e175a70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a1f67000/0x0/0x1bfc00000, data 0x6676943/0x61c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403791872 unmapped: 59645952 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e15556c00 session 0x558e18495c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403791872 unmapped: 59645952 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 59629568 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 407 ms_handle_reset con 0x558e17732800 session 0x558e175a74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a1f65000/0x0/0x1bfc00000, data 0x667845f/0x61c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 407 ms_handle_reset con 0x558e17e7cc00 session 0x558e1809b4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4812011 data_alloc: 234881024 data_used: 34336768
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 407 ms_handle_reset con 0x558e1558b400 session 0x558e177ff680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a1f65000/0x0/0x1bfc00000, data 0x667845f/0x61c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403832832 unmapped: 59604992 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 408 ms_handle_reset con 0x558e180ce400 session 0x558e1645ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 408 ms_handle_reset con 0x558e15556c00 session 0x558e15137e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.398662567s of 10.007455826s, submitted: 102
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e170eb800 session 0x558e178203c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e18635400 session 0x558e1766eb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a4019000/0x0/0x1bfc00000, data 0x3e29240/0x4046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 64110592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e17732800 session 0x558e170ba3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e1558b400 session 0x558e180b2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e15556c00 session 0x558e17b01860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360264 data_alloc: 234881024 data_used: 23638016
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399335424 unmapped: 64102400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a44ef000/0x0/0x1bfc00000, data 0x3a2188a/0x3c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e170eb800 session 0x558e15ea0780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399409152 unmapped: 64028672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e17732800 session 0x558e152afc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4439666 data_alloc: 234881024 data_used: 23601152
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a446c000/0x0/0x1bfc00000, data 0x3aa48c3/0x3cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 64004096 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.361456871s of 10.004527092s, submitted: 161
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403292160 unmapped: 60145664 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524226 data_alloc: 234881024 data_used: 23879680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403292160 unmapped: 60145664 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18635400 session 0x558e177fe3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32d9000/0x0/0x1bfc00000, data 0x4c3643a/0x4e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32d9000/0x0/0x1bfc00000, data 0x4c3643a/0x4e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4585704 data_alloc: 234881024 data_used: 23875584
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7cc00 session 0x558e175b7680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e1766e1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32d9000/0x0/0x1bfc00000, data 0x4c3643a/0x4e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403103744 unmapped: 60334080 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e170eb800 session 0x558e186ad860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.963036537s of 10.694633484s, submitted: 58
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403111936 unmapped: 60325888 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e18494000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7cc00 session 0x558e1645a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4588309 data_alloc: 234881024 data_used: 23883776
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32b1000/0x0/0x1bfc00000, data 0x4c5d45d/0x4e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403111936 unmapped: 60325888 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e17b01e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404037632 unmapped: 59400192 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e1766e3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e1809ba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404529152 unmapped: 58908672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404529152 unmapped: 58908672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32af000/0x0/0x1bfc00000, data 0x4c5f45d/0x4e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408477696 unmapped: 54960128 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709181 data_alloc: 251658240 data_used: 40263680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408477696 unmapped: 54960128 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e15ea0d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408477696 unmapped: 54960128 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32af000/0x0/0x1bfc00000, data 0x4c5f45d/0x4e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e17b030e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604751 data_alloc: 251658240 data_used: 37675008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3e13000/0x0/0x1bfc00000, data 0x40fb45d/0x431b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.932298660s of 12.038512230s, submitted: 32
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 409591808 unmapped: 53846016 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 411590656 unmapped: 51847168 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 411680768 unmapped: 51757056 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a349d000/0x0/0x1bfc00000, data 0x4a7045d/0x4c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 50176000 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770651 data_alloc: 251658240 data_used: 39198720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2be3000/0x0/0x1bfc00000, data 0x531d45d/0x553d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2be3000/0x0/0x1bfc00000, data 0x531d45d/0x553d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4773601 data_alloc: 251658240 data_used: 39391232
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x532045d/0x5540000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767745 data_alloc: 251658240 data_used: 39415808
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18635400 session 0x558e17b01680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.476836205s of 14.158192635s, submitted: 166
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18504000 session 0x558e1845c000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e1809b680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 49037312 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 49037312 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2c12000/0x0/0x1bfc00000, data 0x52fc43a/0x551b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 49037312 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759909 data_alloc: 251658240 data_used: 39301120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2c12000/0x0/0x1bfc00000, data 0x52fc43a/0x551b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2c11000/0x0/0x1bfc00000, data 0x52fd43a/0x551c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e170eb800 session 0x558e1601dc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e18494f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414425088 unmapped: 49012736 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4584221 data_alloc: 234881024 data_used: 30871552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414425088 unmapped: 49012736 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414425088 unmapped: 49012736 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a360f000/0x0/0x1bfc00000, data 0x42c443a/0x44e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4584221 data_alloc: 234881024 data_used: 30871552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: mgrc ms_handle_reset ms_handle_reset con 0x558e18542c00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3835187053
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3835187053,v1:192.168.122.100:6801/3835187053]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: mgrc handle_mgr_configure stats_period=5
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a360f000/0x0/0x1bfc00000, data 0x42c443a/0x44e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a360f000/0x0/0x1bfc00000, data 0x42c443a/0x44e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4584221 data_alloc: 234881024 data_used: 30871552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e181d0f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 48857088 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7dc00 session 0x558e17b092c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e175b7c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 48848896 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18504000 session 0x558e176ccf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.935745239s of 22.065441132s, submitted: 24
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e177fe5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414597120 unmapped: 48840704 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1558b000 session 0x558e15dada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1807ac00 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e1848c3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e175f3860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e17571c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7cc00 session 0x558e18054b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e1601dc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1807ac00 session 0x558e1845c000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e17b01680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1807a800 session 0x558e17b030e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1c8c8800 session 0x558e15ea0d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414441472 unmapped: 48996352 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e17b0d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a383a000/0x0/0x1bfc00000, data 0x42c444a/0x44e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414457856 unmapped: 48979968 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4491086 data_alloc: 234881024 data_used: 26501120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4491086 data_alloc: 234881024 data_used: 26501120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 47644672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.544863701s of 12.705204010s, submitted: 25
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4532618 data_alloc: 234881024 data_used: 32337920
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 47603712 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4092000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4092000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4537202 data_alloc: 234881024 data_used: 33103872
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4086000/0x0/0x1bfc00000, data 0x3a7943a/0x3c98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4624536 data_alloc: 234881024 data_used: 33644544
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416890880 unmapped: 46546944 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.343315125s of 10.646893501s, submitted: 73
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a34d7000/0x0/0x1bfc00000, data 0x462043a/0x483f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640986 data_alloc: 234881024 data_used: 33513472
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 410 handle_osd_map epochs [411,411], i have 411, src has [1,411]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e18494000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a34de000/0x0/0x1bfc00000, data 0x462143a/0x4840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18501400 session 0x558e17b0c5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e15ea0780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649804 data_alloc: 234881024 data_used: 34615296
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e70000 session 0x558e1766eb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e1809b4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e18495c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.740987301s of 10.010063171s, submitted: 57
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e1601e000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18501400 session 0x558e17b00f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a33b5000/0x0/0x1bfc00000, data 0x4747157/0x4969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a33b5000/0x0/0x1bfc00000, data 0x4747157/0x4969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e170eb400 session 0x558e185ac5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4662115 data_alloc: 234881024 data_used: 34615296
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e1780b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 43065344 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e165ae960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2a22000/0x0/0x1bfc00000, data 0x50da157/0x52fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e15692d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18501400 session 0x558e1769de00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 53583872 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 53583872 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 53583872 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418258944 unmapped: 53575680 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18f0e400 session 0x558e1645b860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742155 data_alloc: 251658240 data_used: 35667968
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2a1e000/0x0/0x1bfc00000, data 0x50dd167/0x5300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e169141e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.706860542s of 10.017317772s, submitted: 15
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418267136 unmapped: 53567488 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e186ac3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e17b0a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 53420032 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 53420032 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422379520 unmapped: 49455104 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422879232 unmapped: 48955392 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a29f9000/0x0/0x1bfc00000, data 0x5170167/0x5325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4824333 data_alloc: 251658240 data_used: 45694976
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 48128000 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4899649 data_alloc: 251658240 data_used: 46161920
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423772160 unmapped: 48062464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a208e000/0x0/0x1bfc00000, data 0x5adb167/0x5c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423772160 unmapped: 48062464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.691255569s of 11.111913681s, submitted: 56
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423780352 unmapped: 48054272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a208a000/0x0/0x1bfc00000, data 0x5ade167/0x5c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423780352 unmapped: 48054272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 47505408 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4947467 data_alloc: 251658240 data_used: 46370816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428957696 unmapped: 42876928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428965888 unmapped: 42868736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428998656 unmapped: 42835968 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429015040 unmapped: 42819584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429015040 unmapped: 42819584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5002446 data_alloc: 251658240 data_used: 46809088
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429015040 unmapped: 42819584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992478 data_alloc: 251658240 data_used: 46952448
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 46006272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 46006272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e7c400 session 0x558e17b04b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.185502052s of 15.360220909s, submitted: 50
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 45998080 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e155d3800 session 0x558e183150e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17ca0800 session 0x558e17b0cd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425844736 unmapped: 45989888 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e155d3800 session 0x558e15dacd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2075000/0x0/0x1bfc00000, data 0x5af4167/0x5ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 45981696 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4901853 data_alloc: 251658240 data_used: 45346816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2076000/0x0/0x1bfc00000, data 0x5af4157/0x5ca8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 45981696 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e15136b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 45948928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 45948928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 45948928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e87c00 session 0x558e1848da40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4939429 data_alloc: 251658240 data_used: 45342720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e17b0b2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d7b000/0x0/0x1bfc00000, data 0x4df0147/0x4fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d7b000/0x0/0x1bfc00000, data 0x4df0147/0x4fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764191 data_alloc: 251658240 data_used: 39112704
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.408885002s of 13.657489777s, submitted: 65
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 45932544 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e17b09e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d4f000/0x0/0x1bfc00000, data 0x4e14147/0x4fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 45932544 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 45932544 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e180cf400 session 0x558e17b001e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d4f000/0x0/0x1bfc00000, data 0x4e14147/0x4fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e87c00 session 0x558e177ff680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4805083 data_alloc: 251658240 data_used: 44142592
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e175b7680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e191fdc00 session 0x558e180e72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e70c00 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426967040 unmapped: 44867584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 411 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e17e87c00 session 0x558e17b041e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4763347 data_alloc: 251658240 data_used: 44044288
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x48c5d30/0x4ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e15556c00 session 0x558e1766e780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.172450066s of 10.508795738s, submitted: 49
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e18504000 session 0x558e17b045a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x48c5d30/0x4ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e1807a800 session 0x558e186ad860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430637056 unmapped: 41197568 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430964736 unmapped: 40869888 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4848018 data_alloc: 251658240 data_used: 45154304
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430972928 unmapped: 40861696 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e180cf400 session 0x558e175f01e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a29a6000/0x0/0x1bfc00000, data 0x5158d20/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [0,0,1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e15556c00 session 0x558e175f34a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599958 data_alloc: 234881024 data_used: 32751616
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e1848cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e170bb4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420364288 unmapped: 51470336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.573250771s of 10.048701286s, submitted: 158
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17e87c00 session 0x558e1809be00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a3cd6000/0x0/0x1bfc00000, data 0x3e287fd/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4717000/0x0/0x1bfc00000, data 0x30c07fd/0x32e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4717000/0x0/0x1bfc00000, data 0x30c07fd/0x32e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4455360 data_alloc: 234881024 data_used: 26558464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e186acd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e170bb2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e1780ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e1601e000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292332 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e170bb0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e170ba1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292332 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e180e70e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.756290436s of 14.266667366s, submitted: 15
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e175f14a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412164096 unmapped: 59670528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316473 data_alloc: 234881024 data_used: 19558400
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316473 data_alloc: 234881024 data_used: 19558400
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.683190346s of 11.701642036s, submitted: 6
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 59490304 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412753920 unmapped: 59080704 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329939 data_alloc: 234881024 data_used: 19558400
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5501000/0x0/0x1bfc00000, data 0x25fd7fd/0x281d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 59547648 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 59547648 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x26937fd/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 59547648 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e180b3680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17e87c00 session 0x558e1848d680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e1807a800 session 0x558e17571680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e1807a800 session 0x558e1706ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e156a2d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52cc000/0x0/0x1bfc00000, data 0x28327fd/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358266 data_alloc: 234881024 data_used: 19656704
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52cc000/0x0/0x1bfc00000, data 0x28327fd/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412327936 unmapped: 59506688 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.042664528s of 12.211009979s, submitted: 45
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412467200 unmapped: 59367424 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357298 data_alloc: 234881024 data_used: 19656704
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412467200 unmapped: 59367424 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412475392 unmapped: 59359232 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412475392 unmapped: 59359232 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52ab000/0x0/0x1bfc00000, data 0x28537fd/0x2a73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412475392 unmapped: 59359232 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e175a6000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e17571860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17e87c00 session 0x558e17b045a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296288 data_alloc: 218103808 data_used: 17399808
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5873000/0x0/0x1bfc00000, data 0x228b7fd/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296288 data_alloc: 218103808 data_used: 17399808
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5873000/0x0/0x1bfc00000, data 0x228b7fd/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5873000/0x0/0x1bfc00000, data 0x228b7fd/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.162653923s of 12.348175049s, submitted: 25
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 58638336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4370226 data_alloc: 234881024 data_used: 17797120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4370226 data_alloc: 234881024 data_used: 17797120
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.642455101s of 12.881525040s, submitted: 62
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e1848cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281318 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e177ff680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 59449344 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 59449344 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 59432960 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e1645ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17b054a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e1807a800 session 0x558e156a2000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 59432960 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e1848d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.243667603s of 24.796049118s, submitted: 26
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e1601d4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4328887 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a541b000/0x0/0x1bfc00000, data 0x26e37fd/0x2903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17b01c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e175a72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e1601d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e1706b2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412655616 unmapped: 62849024 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417073 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e17b0a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e186acb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17820b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e1780a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e1645ad20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a48ba000/0x0/0x1bfc00000, data 0x324380d/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e17b001e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e15136780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a48ba000/0x0/0x1bfc00000, data 0x324380d/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417073 data_alloc: 218103808 data_used: 16318464
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a48ba000/0x0/0x1bfc00000, data 0x324380d/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.153007507s of 14.410977364s, submitted: 25
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e17b041e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e18054b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 62685184 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 62685184 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422911 data_alloc: 218103808 data_used: 16326656
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412631040 unmapped: 62873600 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540191 data_alloc: 234881024 data_used: 31350784
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540191 data_alloc: 234881024 data_used: 31350784
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.293484688s of 12.319557190s, submitted: 9
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 417054720 unmapped: 58449920 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422313984 unmapped: 53190656 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2f08000/0x0/0x1bfc00000, data 0x3a53853/0x3c76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421281792 unmapped: 54222848 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 54124544 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a28a4000/0x0/0x1bfc00000, data 0x40b0853/0x42d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421601280 unmapped: 53903360 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2888000/0x0/0x1bfc00000, data 0x40c4853/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4663159 data_alloc: 234881024 data_used: 32256000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2888000/0x0/0x1bfc00000, data 0x40c4853/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2888000/0x0/0x1bfc00000, data 0x40c4853/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4654851 data_alloc: 234881024 data_used: 32256000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2895000/0x0/0x1bfc00000, data 0x40c6853/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.770472527s of 13.373343468s, submitted: 145
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421625856 unmapped: 53878784 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4658235 data_alloc: 234881024 data_used: 32243712
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421625856 unmapped: 53878784 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a288f000/0x0/0x1bfc00000, data 0x40ca4ac/0x42ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421625856 unmapped: 53878784 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a288f000/0x0/0x1bfc00000, data 0x40ca4ac/0x42ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 414 ms_handle_reset con 0x558e18504000 session 0x558e1766f860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423665664 unmapped: 63881216 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 415 ms_handle_reset con 0x558e191fdc00 session 0x558e16915a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423673856 unmapped: 63873024 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a19e8000/0x0/0x1bfc00000, data 0x4f70159/0x5195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423698432 unmapped: 63848448 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4760629 data_alloc: 234881024 data_used: 35393536
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423698432 unmapped: 63848448 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a19e4000/0x0/0x1bfc00000, data 0x4f71dce/0x5198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 63840256 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184fec00 session 0x558e1601cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e15e4f800 session 0x558e180b34a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e15272000 session 0x558e175f1c20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184fec00 session 0x558e15ea0780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e18504000 session 0x558e175714a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e191fdc00 session 0x558e1706b860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184f2800 session 0x558e1601cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e15272000 session 0x558e1766f860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184fec00 session 0x558e18054b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4790537 data_alloc: 234881024 data_used: 35393536
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a1610000/0x0/0x1bfc00000, data 0x5346dde/0x556e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.951803207s of 12.451093674s, submitted: 51
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423886848 unmapped: 63660032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a160c000/0x0/0x1bfc00000, data 0x534891d/0x5571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423886848 unmapped: 63660032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a160c000/0x0/0x1bfc00000, data 0x534891d/0x5571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423886848 unmapped: 63660032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4794535 data_alloc: 234881024 data_used: 35401728
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a160c000/0x0/0x1bfc00000, data 0x534891d/0x5571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e18504000 session 0x558e17b04000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 63512576 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 63512576 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 63504384 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 63504384 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e18f0fc00 session 0x558e1769c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424050688 unmapped: 63496192 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4826520 data_alloc: 251658240 data_used: 36204544
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424099840 unmapped: 63447040 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4879000 data_alloc: 251658240 data_used: 43540480
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.388605118s of 16.425638199s, submitted: 16
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 61751296 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428785664 unmapped: 58761216 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 58720256 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4936172 data_alloc: 251658240 data_used: 44589056
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428851200 unmapped: 58695680 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428851200 unmapped: 58695680 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x19fdc7000/0x0/0x1bfc00000, data 0x59ee91d/0x5c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956704 data_alloc: 251658240 data_used: 48934912
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e191fdc00 session 0x558e15136780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e17aee400 session 0x558e180b23c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.634708405s of 10.053680420s, submitted: 54
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 56385536 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x19fdc2000/0x0/0x1bfc00000, data 0x59f391d/0x5c1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e15272000 session 0x558e186ac3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4850348 data_alloc: 251658240 data_used: 43851776
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a041f000/0x0/0x1bfc00000, data 0x4fe690d/0x520e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e15556c00 session 0x558e15ea0d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e184fec00 session 0x558e175a6780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4684822 data_alloc: 251658240 data_used: 35962880
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a167a000/0x0/0x1bfc00000, data 0x413d8da/0x4363000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.669324875s of 11.818543434s, submitted: 33
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4685082 data_alloc: 251658240 data_used: 35962880
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a1679000/0x0/0x1bfc00000, data 0x413e8da/0x4364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427704320 unmapped: 59842560 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.3 total, 600.0 interval#012Cumulative writes: 59K writes, 227K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.76 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5202 writes, 20K keys, 5202 commit groups, 1.0 writes per commit group, ingest: 20.24 MB, 0.03 MB/s#012Interval WAL: 5202 writes, 2078 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427704320 unmapped: 59842560 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a1679000/0x0/0x1bfc00000, data 0x413e8da/0x4364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4689562 data_alloc: 251658240 data_used: 37027840
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e18504000 session 0x558e156a25a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427704320 unmapped: 59842560 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 418 ms_handle_reset con 0x558e15556c00 session 0x558e156a2b40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 418 ms_handle_reset con 0x558e15272000 session 0x558e177ff2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a1674000/0x0/0x1bfc00000, data 0x41405f7/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [1,2])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 418 ms_handle_reset con 0x558e17aee400 session 0x558e177fef00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429170688 unmapped: 58376192 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 418 heartbeat osd_stat(store_statfs(0x19fcaf000/0x0/0x1bfc00000, data 0x5b05595/0x5d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429170688 unmapped: 58376192 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 419 ms_handle_reset con 0x558e184fec00 session 0x558e1645b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429178880 unmapped: 58368000 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e18f0fc00 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.062422752s of 10.363031387s, submitted: 101
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e15272000 session 0x558e175f1a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429203456 unmapped: 58343424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e15556c00 session 0x558e1809b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e17aee400 session 0x558e175f2d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4895946 data_alloc: 251658240 data_used: 40210432
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 heartbeat osd_stat(store_statfs(0x19fcaa000/0x0/0x1bfc00000, data 0x5b08f19/0x5d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429203456 unmapped: 58343424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e184fec00 session 0x558e170ba3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429228032 unmapped: 58318848 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e193c6000 session 0x558e1706ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e191fd400 session 0x558e1601d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429228032 unmapped: 58318848 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e15272000 session 0x558e165af4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a16e2000/0x0/0x1bfc00000, data 0x40d1b1e/0x42fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712034 data_alloc: 251658240 data_used: 39739392
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a16de000/0x0/0x1bfc00000, data 0x40d3679/0x42ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e17aee400 session 0x558e176ccf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e184fec00 session 0x558e17b0d4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e156bc400 session 0x558e17570000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e15272000 session 0x558e180e7860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e17aee400 session 0x558e180e7e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e184fec00 session 0x558e151372c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e191fd400 session 0x558e18495a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e156bc800 session 0x558e1809ba40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e15556c00 session 0x558e17b0bc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e15272000 session 0x558e17b0c5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 63897600 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a16da000/0x0/0x1bfc00000, data 0x40d5326/0x4302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 63897600 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e18501400 session 0x558e17b00f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e17732800 session 0x558e17570d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4589346 data_alloc: 234881024 data_used: 24395776
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.267575264s of 10.683115959s, submitted: 138
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e17aee400 session 0x558e180e7860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a1f92000/0x0/0x1bfc00000, data 0x381c336/0x3a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3045000/0x0/0x1bfc00000, data 0x2768e6e/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417822 data_alloc: 218103808 data_used: 16379904
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 424 ms_handle_reset con 0x558e15272000 session 0x558e170ba3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3045000/0x0/0x1bfc00000, data 0x2768e6e/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Jan 31 04:20:20 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40866 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1809b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e177ff4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e1645b0e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420782080 unmapped: 66764800 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457687 data_alloc: 234881024 data_used: 19607552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457687 data_alloc: 234881024 data_used: 19607552
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.997537613s of 19.108545303s, submitted: 49
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 63299584 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530919 data_alloc: 234881024 data_used: 19644416
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2617000/0x0/0x1bfc00000, data 0x31969bd/0x33c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2617000/0x0/0x1bfc00000, data 0x31969bd/0x33c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424239104 unmapped: 63307776 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 63299584 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 63283200 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4549795 data_alloc: 234881024 data_used: 20557824
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4549795 data_alloc: 234881024 data_used: 20557824
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4549795 data_alloc: 234881024 data_used: 20557824
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.564064026s of 16.081941605s, submitted: 84
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x37719bd/0x39a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e185fc400 session 0x558e1601f680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e1601d860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1769c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e175a7680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e1601e3c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x37719bd/0x39a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593282 data_alloc: 234881024 data_used: 20557824
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184eec00 session 0x558e17b01680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e175f21e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1845cf00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e152aed20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x37719bd/0x39a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4595120 data_alloc: 234881024 data_used: 20557824
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x37719cd/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x37719cd/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4634388 data_alloc: 234881024 data_used: 26116096
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x37719cd/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.197906494s of 17.336410522s, submitted: 21
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x37729cd/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4634872 data_alloc: 234881024 data_used: 26116096
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x37729cd/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426369024 unmapped: 61177856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426369024 unmapped: 61177856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426369024 unmapped: 61177856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4697934 data_alloc: 234881024 data_used: 26128384
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b8000/0x0/0x1bfc00000, data 0x3ff49cd/0x4226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426377216 unmapped: 61169664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426377216 unmapped: 61169664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b8000/0x0/0x1bfc00000, data 0x3ff49cd/0x4226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426377216 unmapped: 61169664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.118430138s of 10.285791397s, submitted: 43
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b8000/0x0/0x1bfc00000, data 0x3ff49cd/0x4226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4702580 data_alloc: 234881024 data_used: 26468352
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b2000/0x0/0x1bfc00000, data 0x3ffa9cd/0x422c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e16915a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18089000 session 0x558e17571680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b2000/0x0/0x1bfc00000, data 0x3ffa9cd/0x422c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [1])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555081 data_alloc: 234881024 data_used: 20561920
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32239bd/0x3454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e17b04960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.955152512s of 11.067607880s, submitted: 30
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32239bd/0x3454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421126144 unmapped: 66420736 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555081 data_alloc: 234881024 data_used: 20561920
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184fec00 session 0x558e175a6780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421158912 unmapped: 66387968 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e191fd400 session 0x558e175f0960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421216256 unmapped: 66330624 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e18315680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 66256896 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421314560 unmapped: 66232320 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383870 data_alloc: 218103808 data_used: 15134720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383870 data_alloc: 218103808 data_used: 15134720
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421330944 unmapped: 66215936 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421355520 unmapped: 66191360 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 66134016 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 66134016 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 66134016 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 66125824 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421429248 unmapped: 66117632 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421462016 unmapped: 66084864 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421462016 unmapped: 66084864 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421462016 unmapped: 66084864 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e17b0dc20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e17b0c780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e1809a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1809ad20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 108.594985962s of 109.833038330s, submitted: 382
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184fec00 session 0x558e1809b4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e191fd400 session 0x558e1780a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e17570f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1848da40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e18494000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0f000/0x0/0x1bfc00000, data 0x268f9ad/0x28bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422270 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0f000/0x0/0x1bfc00000, data 0x268f9ad/0x28bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422270 data_alloc: 218103808 data_used: 15138816
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0f000/0x0/0x1bfc00000, data 0x268f9ad/0x28bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184fec00 session 0x558e18495860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0e000/0x0/0x1bfc00000, data 0x268f9d0/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462992 data_alloc: 234881024 data_used: 20385792
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0e000/0x0/0x1bfc00000, data 0x268f9d0/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462992 data_alloc: 234881024 data_used: 20385792
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0e000/0x0/0x1bfc00000, data 0x268f9d0/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.986009598s of 20.119245529s, submitted: 13
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422076416 unmapped: 65470464 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503560 data_alloc: 234881024 data_used: 20520960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503560 data_alloc: 234881024 data_used: 20520960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422092800 unmapped: 65454080 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422092800 unmapped: 65454080 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422092800 unmapped: 65454080 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422100992 unmapped: 65445888 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503560 data_alloc: 234881024 data_used: 20520960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422100992 unmapped: 65445888 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e185ac960
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.309407234s of 13.431710243s, submitted: 27
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17fd0800 session 0x558e1706a1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422100992 unmapped: 65445888 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e175a6000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 68796416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.220516205s of 42.306221008s, submitted: 33
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e15dad2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395882 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 68771840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 68771840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f9ac/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 68771840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 68763648 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e169152c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e181d0d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f9ac/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449739 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449739 data_alloc: 218103808 data_used: 15073280
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6f000/0x0/0x1bfc00000, data 0x282ea0e/0x2a5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.493515968s of 12.869886398s, submitted: 31
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e180545a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x282ea31/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4500510 data_alloc: 234881024 data_used: 21995520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x282ea31/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4500510 data_alloc: 234881024 data_used: 21995520
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x282ea31/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.652256012s of 11.668172836s, submitted: 4
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 65101824 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2100000/0x0/0x1bfc00000, data 0x329ca31/0x34ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 65101824 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579302 data_alloc: 234881024 data_used: 22003712
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2100000/0x0/0x1bfc00000, data 0x329ca31/0x34ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422707200 unmapped: 64839680 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2088000/0x0/0x1bfc00000, data 0x3314a31/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207d000/0x0/0x1bfc00000, data 0x331fa31/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4590494 data_alloc: 234881024 data_used: 23314432
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207d000/0x0/0x1bfc00000, data 0x331fa31/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.939406395s of 11.124873161s, submitted: 85
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207d000/0x0/0x1bfc00000, data 0x331fa31/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4590786 data_alloc: 234881024 data_used: 23318528
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422789120 unmapped: 64757760 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422789120 unmapped: 64757760 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17fd0800 session 0x558e176cde00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207c000/0x0/0x1bfc00000, data 0x3320a31/0x3552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422797312 unmapped: 64749568 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422805504 unmapped: 64741376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4591130 data_alloc: 234881024 data_used: 23334912
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 426 ms_handle_reset con 0x558e18501400 session 0x558e175a74a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422813696 unmapped: 64733184 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422813696 unmapped: 64733184 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422821888 unmapped: 64724992 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 427 ms_handle_reset con 0x558e184fec00 session 0x558e15dacd20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2078000/0x0/0x1bfc00000, data 0x332268a/0x3555000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 427 ms_handle_reset con 0x558e17fd0400 session 0x558e152ae000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 59637760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 59637760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.998736382s of 10.688847542s, submitted: 23
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 428 ms_handle_reset con 0x558e15272000 session 0x558e156a2f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4690667 data_alloc: 234881024 data_used: 33574912
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a180b000/0x0/0x1bfc00000, data 0x3b8e2e3/0x3dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423329792 unmapped: 68067328 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e17fd0800 session 0x558e175712c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 68042752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e184fec00 session 0x558e15ea1860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e18501400 session 0x558e17b0a5a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e14cb3400 session 0x558e181d1680
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 68042752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e14cb3400 session 0x558e1766ed20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 68042752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423362560 unmapped: 68034560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698255 data_alloc: 234881024 data_used: 33574912
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e1780ab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e184fec00 session 0x558e17b08d20
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17fd0800 session 0x558e1766eb40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e18501400 session 0x558e175b6f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e14cb3400 session 0x558e1601ef00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e180e72c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4700791 data_alloc: 234881024 data_used: 33579008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b93808/0x3dcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17fd0800 session 0x558e152af4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b93808/0x3dcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4700791 data_alloc: 234881024 data_used: 33579008
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e184fec00 session 0x558e1601c1e0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e18501400 session 0x558e165aef00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.898597717s of 15.987887383s, submitted: 28
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e14cb3400 session 0x558e176cc780
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4702905 data_alloc: 234881024 data_used: 33599488
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4702905 data_alloc: 234881024 data_used: 33599488
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.751194000s of 13.756506920s, submitted: 2
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735891 data_alloc: 234881024 data_used: 33701888
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742611 data_alloc: 234881024 data_used: 34357248
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4743763 data_alloc: 234881024 data_used: 34758656
Jan 31 04:20:20 np0005603621 nova_compute[247399]: 2026-01-31 09:20:20.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4743763 data_alloc: 234881024 data_used: 34758656
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.341286659s of 15.488368034s, submitted: 11
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735603 data_alloc: 234881024 data_used: 34758656
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735603 data_alloc: 234881024 data_used: 34758656
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.959305763s of 11.974246025s, submitted: 4
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e184fec00 session 0x558e1809af00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4741043 data_alloc: 234881024 data_used: 35516416
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423575552 unmapped: 67821568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423575552 unmapped: 67821568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e170bab40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17fd0800 session 0x558e152af4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717131 data_alloc: 234881024 data_used: 35495936
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b937b6/0x3dcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17e7d000 session 0x558e175b7e00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4716406 data_alloc: 234881024 data_used: 35495936
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e14cb3400 session 0x558e175a7a40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.583814621s of 13.691693306s, submitted: 31
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423591936 unmapped: 67805184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e15dada40
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423591936 unmapped: 67805184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423591936 unmapped: 67805184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423600128 unmapped: 67796992 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 430 handle_osd_map epochs [431,431], i have 431, src has [1,431]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4719530 data_alloc: 234881024 data_used: 35500032
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a17ff000/0x0/0x1bfc00000, data 0x3b953f1/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 67772416 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 67747840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4642462 data_alloc: 234881024 data_used: 33583104
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 431 ms_handle_reset con 0x558e17e7d000 session 0x558e17b0a000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 67747840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.550810814s of 10.401992798s, submitted: 24
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 431 ms_handle_reset con 0x558e17fd0800 session 0x558e175b63c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 67747840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a206a000/0x0/0x1bfc00000, data 0x332b3f1/0x3564000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423665664 unmapped: 67731456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 432 ms_handle_reset con 0x558e184fec00 session 0x558e165af4a0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423665664 unmapped: 67731456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2066000/0x0/0x1bfc00000, data 0x332d0ba/0x3567000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423690240 unmapped: 67706880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649610 data_alloc: 234881024 data_used: 33591296
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423690240 unmapped: 67706880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2063000/0x0/0x1bfc00000, data 0x332ec15/0x356a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 67690496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 433 ms_handle_reset con 0x558e15556c00 session 0x558e186ad2c0
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 433 ms_handle_reset con 0x558e14cb3400 session 0x558e156a2f00
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4439599 data_alloc: 218103808 data_used: 15101952
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a31a3000/0x0/0x1bfc00000, data 0x218db90/0x23c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.949651718s of 10.408130646s, submitted: 76
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 ms_handle_reset con 0x558e15272000 session 0x558e175a6000
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 ms_handle_reset con 0x558e17e7d000 session 0x558e18495860
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414146560 unmapped: 77250560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414146560 unmapped: 77250560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414146560 unmapped: 77250560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414154752 unmapped: 77242368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414154752 unmapped: 77242368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414154752 unmapped: 77242368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 77225984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 77225984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 77225984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 77217792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 77217792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 77217792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 77201408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 77201408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 77193216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 77193216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 77193216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 77201408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'config diff' '{prefix=config diff}'
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'config show' '{prefix=config show}'
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413917184 unmapped: 77479936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 77692928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:20:20 np0005603621 ceph-osd[84880]: do_command 'log dump' '{prefix=log dump}'
Jan 31 04:20:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:20:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:20:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:20.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810660195' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40884 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43943 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:20:21 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:21.115+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/332300819' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270719008' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40902 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50620 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:21.575+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:20:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990379865' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40932 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:21 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43964 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:22 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40962 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 04:20:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657446220' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 04:20:22 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43979 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:22 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50674 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:22 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.40983 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:22 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43991 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:22.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:22 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50695 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:22.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50716 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043888937' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44009 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:23 np0005603621 nova_compute[247399]: 2026-01-31 09:20:23.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41010 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:23.317+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50731 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657502994' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44024 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4090892592' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 04:20:23 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602182841' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50752 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:23 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44039 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965016907' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4138859260' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155901326' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50776 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44051 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2167023638' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/909377280' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44063 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50794 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:24.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428933019' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 04:20:24 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169182829' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 04:20:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:24.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50818 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:25 np0005603621 systemd[1]: Starting Hostname Service...
Jan 31 04:20:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44081 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:25 np0005603621 podman[418436]: 2026-01-31 09:20:25.141447438 +0000 UTC m=+0.067186132 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:25 np0005603621 systemd[1]: Started Hostname Service.
Jan 31 04:20:25 np0005603621 podman[418447]: 2026-01-31 09:20:25.167230452 +0000 UTC m=+0.090987743 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649796783' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1346043595' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 31 04:20:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50836 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701644008' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 31 04:20:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:25 np0005603621 nova_compute[247399]: 2026-01-31 09:20:25.743 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 04:20:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381181227' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 04:20:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44111 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:25 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:25.995+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:25 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 04:20:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093359325' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 04:20:26 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.50860 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:26 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:20:26.197+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:26 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:20:26 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 04:20:26 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198138034' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 04:20:26 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41154 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:26 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41163 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:26.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:26 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41169 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:27 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41178 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:27 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41193 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 04:20:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523358902' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 04:20:27 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41214 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:28 np0005603621 nova_compute[247399]: 2026-01-31 09:20:28.157 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:28 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41238 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 04:20:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2662270197' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 04:20:28 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41247 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:20:28 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320335021' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:20:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:28.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:28 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41268 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:28.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3890659276' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1368862571' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51016 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51022 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44222 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44237 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51040 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:29 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51055 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44246 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:30 np0005603621 nova_compute[247399]: 2026-01-31 09:20:30.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 04:20:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/148996315' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 04:20:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51079 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44273 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:20:30.561 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:20:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:20:30.561 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:20:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:20:30.562 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:20:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41364 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:30 np0005603621 nova_compute[247399]: 2026-01-31 09:20:30.745 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:30.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51088 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44282 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 04:20:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2871700870' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 04:20:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:30.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51106 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44294 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 31 04:20:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791659971' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41403 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51127 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 31 04:20:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704659532' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44315 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51148 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:32 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44321 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:32 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2571431683' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:20:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:20:32 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51181 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:32.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 31 04:20:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735743224' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 31 04:20:33 np0005603621 nova_compute[247399]: 2026-01-31 09:20:33.159 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 31 04:20:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126330221' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 04:20:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:33 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51244 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:33 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41496 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:34 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44402 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 31 04:20:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2645219692' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 31 04:20:34 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41511 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:34.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:35.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41523 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:35 np0005603621 nova_compute[247399]: 2026-01-31 09:20:35.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Jan 31 04:20:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101320494' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 31 04:20:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Jan 31 04:20:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508009472' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 31 04:20:35 np0005603621 nova_compute[247399]: 2026-01-31 09:20:35.748 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51292 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41550 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41559 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44444 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Jan 31 04:20:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977335151' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 31 04:20:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:36.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:36 np0005603621 ovs-appctl[420880]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 04:20:36 np0005603621 ovs-appctl[420885]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 04:20:36 np0005603621 ovs-appctl[420907]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 31 04:20:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51319 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Jan 31 04:20:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282072337' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 31 04:20:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 31 04:20:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/944697519' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 31 04:20:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41583 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44468 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51337 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41595 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:38 np0005603621 nova_compute[247399]: 2026-01-31 09:20:38.160 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51358 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:38 np0005603621 nova_compute[247399]: 2026-01-31 09:20:38.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064614384' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44489 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183616214' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:20:38
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'vms', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups']
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:20:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:38.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44495 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Jan 31 04:20:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470403169' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 31 04:20:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:39.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:39 np0005603621 nova_compute[247399]: 2026-01-31 09:20:39.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:39 np0005603621 nova_compute[247399]: 2026-01-31 09:20:39.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41637 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51379 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3835187053
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51385 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:20:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1455451851' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:20:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44519 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Jan 31 04:20:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2521574908' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 31 04:20:40 np0005603621 nova_compute[247399]: 2026-01-31 09:20:40.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:40 np0005603621 nova_compute[247399]: 2026-01-31 09:20:40.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:20:40 np0005603621 nova_compute[247399]: 2026-01-31 09:20:40.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:20:40 np0005603621 nova_compute[247399]: 2026-01-31 09:20:40.240 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44525 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Jan 31 04:20:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1472950096' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 31 04:20:40 np0005603621 nova_compute[247399]: 2026-01-31 09:20:40.751 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:40 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51406 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Jan 31 04:20:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/861143815' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 31 04:20:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:41.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51412 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.226 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.227 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.227 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.227 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.227 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:20:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41685 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44546 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882059032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.653 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:20:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Jan 31 04:20:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178863489' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 31 04:20:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44552 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.779 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.780 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3874MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.781 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:20:41 np0005603621 nova_compute[247399]: 2026-01-31 09:20:41.781 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:20:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Jan 31 04:20:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2823426172' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 31 04:20:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41730 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.417 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.417 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.621 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.693 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.693 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.722 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:20:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Jan 31 04:20:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762023155' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 31 04:20:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51451 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.762 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:20:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:42.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:42 np0005603621 nova_compute[247399]: 2026-01-31 09:20:42.841 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:20:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:43.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:43 np0005603621 nova_compute[247399]: 2026-01-31 09:20:43.162 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41748 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073861376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:20:43 np0005603621 nova_compute[247399]: 2026-01-31 09:20:43.261 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:20:43 np0005603621 nova_compute[247399]: 2026-01-31 09:20:43.266 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.293328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851243293399, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 1004, "num_deletes": 255, "total_data_size": 1076709, "memory_usage": 1103704, "flush_reason": "Manual Compaction"}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851243299388, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 1065671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87739, "largest_seqno": 88742, "table_properties": {"data_size": 1060077, "index_size": 2604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16513, "raw_average_key_size": 22, "raw_value_size": 1047493, "raw_average_value_size": 1409, "num_data_blocks": 113, "num_entries": 743, "num_filter_entries": 743, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851198, "oldest_key_time": 1769851198, "file_creation_time": 1769851243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 6081 microseconds, and 2672 cpu microseconds.
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.299427) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 1065671 bytes OK
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.299439) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.301408) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.301426) EVENT_LOG_v1 {"time_micros": 1769851243301421, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.301448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 1070950, prev total WAL file size 1070950, number of live WAL files 2.
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.301865) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373636' seq:72057594037927935, type:22 .. '6C6F676D0034303137' seq:0, type:0; will stop at (end)
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(1040KB)], [203(10MB)]
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851243301904, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 12551480, "oldest_snapshot_seqno": -1}
Jan 31 04:20:43 np0005603621 nova_compute[247399]: 2026-01-31 09:20:43.307 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:20:43 np0005603621 nova_compute[247399]: 2026-01-31 09:20:43.308 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:20:43 np0005603621 nova_compute[247399]: 2026-01-31 09:20:43.308 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 11526 keys, 12426848 bytes, temperature: kUnknown
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851243372851, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 12426848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12355599, "index_size": 41312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 306497, "raw_average_key_size": 26, "raw_value_size": 12157738, "raw_average_value_size": 1054, "num_data_blocks": 1557, "num_entries": 11526, "num_filter_entries": 11526, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851243, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.373051) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12426848 bytes
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.374875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.8 rd, 175.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(23.4) write-amplify(11.7) OK, records in: 12045, records dropped: 519 output_compression: NoCompression
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.374899) EVENT_LOG_v1 {"time_micros": 1769851243374888, "job": 128, "event": "compaction_finished", "compaction_time_micros": 71008, "compaction_time_cpu_micros": 21902, "output_level": 6, "num_output_files": 1, "total_output_size": 12426848, "num_input_records": 12045, "num_output_records": 11526, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851243375105, "job": 128, "event": "table_file_deletion", "file_number": 205}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851243376289, "job": 128, "event": "table_file_deletion", "file_number": 203}
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.301729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.376419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.376429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.376434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.376438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:20:43.376443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:20:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41766 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44585 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Jan 31 04:20:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644252732' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 31 04:20:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41802 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:44.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41808 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51505 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:20:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032183971' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:20:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:45 np0005603621 nova_compute[247399]: 2026-01-31 09:20:45.753 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Jan 31 04:20:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187039742' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 31 04:20:46 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41838 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:46 np0005603621 systemd[1]: Starting Time & Date Service...
Jan 31 04:20:46 np0005603621 systemd[1]: Started Time & Date Service.
Jan 31 04:20:46 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 04:20:46 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.41850 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:46.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:46 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44630 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:46 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51532 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:47 np0005603621 nova_compute[247399]: 2026-01-31 09:20:47.309 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:47 np0005603621 nova_compute[247399]: 2026-01-31 09:20:47.310 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:47 np0005603621 nova_compute[247399]: 2026-01-31 09:20:47.310 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:20:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Jan 31 04:20:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3792815610' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 31 04:20:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:47 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51547 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:48 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51553 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:48 np0005603621 nova_compute[247399]: 2026-01-31 09:20:48.164 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:48 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44657 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:48.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:48 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44675 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:20:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:49.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51571 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51574 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51580 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44699 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51604 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:50 np0005603621 nova_compute[247399]: 2026-01-31 09:20:50.757 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:50.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44708 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:20:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:20:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:51.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:51 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.51610 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:51 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44729 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:52 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.44735 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:20:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:52.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:53.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:53 np0005603621 nova_compute[247399]: 2026-01-31 09:20:53.165 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:53 np0005603621 podman[423214]: 2026-01-31 09:20:53.488654368 +0000 UTC m=+0.068187582 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:53 np0005603621 podman[423214]: 2026-01-31 09:20:53.644369253 +0000 UTC m=+0.223902477 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:54 np0005603621 podman[423367]: 2026-01-31 09:20:54.129887856 +0000 UTC m=+0.047216071 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:20:54 np0005603621 podman[423367]: 2026-01-31 09:20:54.140981416 +0000 UTC m=+0.058309631 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:20:54 np0005603621 podman[423431]: 2026-01-31 09:20:54.319191462 +0000 UTC m=+0.047103268 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, name=keepalived, release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 04:20:54 np0005603621 podman[423431]: 2026-01-31 09:20:54.327768972 +0000 UTC m=+0.055680778 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git)
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:20:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:20:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:55.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:55 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fe1bb2ac-05ac-4965-9040-8ab149e9bf02 does not exist
Jan 31 04:20:55 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ffdfdb1f-41e3-4360-94b0-057b840cb943 does not exist
Jan 31 04:20:55 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 19649555-9410-4117-860a-4ac29fb31d4b does not exist
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:20:55 np0005603621 podman[423619]: 2026-01-31 09:20:55.261004436 +0000 UTC m=+0.073324205 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:55 np0005603621 podman[423618]: 2026-01-31 09:20:55.261349377 +0000 UTC m=+0.074627626 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.572940761 +0000 UTC m=+0.036657408 container create 4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:20:55 np0005603621 systemd[1]: Started libpod-conmon-4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c.scope.
Jan 31 04:20:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.644487009 +0000 UTC m=+0.108203666 container init 4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.651518171 +0000 UTC m=+0.115234818 container start 4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.55737771 +0000 UTC m=+0.021094367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.65464485 +0000 UTC m=+0.118361517 container attach 4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:20:55 np0005603621 blissful_brattain[423792]: 167 167
Jan 31 04:20:55 np0005603621 systemd[1]: libpod-4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c.scope: Deactivated successfully.
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.657660605 +0000 UTC m=+0.121377272 container died 4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:55 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0f03c9e880d0b21f0c46bceb0bfda97c616301fc18954bfe7d3dde272956cfa4-merged.mount: Deactivated successfully.
Jan 31 04:20:55 np0005603621 podman[423775]: 2026-01-31 09:20:55.68727452 +0000 UTC m=+0.150991167 container remove 4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:20:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:55 np0005603621 systemd[1]: libpod-conmon-4834b66680b856c47010fcd16a7b4c6f4dcb6899e246f50e78bd62cfc784f10c.scope: Deactivated successfully.
Jan 31 04:20:55 np0005603621 nova_compute[247399]: 2026-01-31 09:20:55.760 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:55 np0005603621 podman[423818]: 2026-01-31 09:20:55.813160103 +0000 UTC m=+0.041591114 container create 8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ritchie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:20:55 np0005603621 systemd[1]: Started libpod-conmon-8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45.scope.
Jan 31 04:20:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ecb4d5a1af66fe8ec7f94f6dd7428336d17d5a306105f2316b9751226e9819/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ecb4d5a1af66fe8ec7f94f6dd7428336d17d5a306105f2316b9751226e9819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ecb4d5a1af66fe8ec7f94f6dd7428336d17d5a306105f2316b9751226e9819/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ecb4d5a1af66fe8ec7f94f6dd7428336d17d5a306105f2316b9751226e9819/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:55 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62ecb4d5a1af66fe8ec7f94f6dd7428336d17d5a306105f2316b9751226e9819/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:55 np0005603621 podman[423818]: 2026-01-31 09:20:55.796359903 +0000 UTC m=+0.024790944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:20:55 np0005603621 podman[423818]: 2026-01-31 09:20:55.905124815 +0000 UTC m=+0.133555866 container init 8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:20:55 np0005603621 podman[423818]: 2026-01-31 09:20:55.910423592 +0000 UTC m=+0.138854633 container start 8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 04:20:55 np0005603621 podman[423818]: 2026-01-31 09:20:55.913324624 +0000 UTC m=+0.141755645 container attach 8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:56 np0005603621 admiring_ritchie[423834]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:20:56 np0005603621 admiring_ritchie[423834]: --> relative data size: 1.0
Jan 31 04:20:56 np0005603621 admiring_ritchie[423834]: --> All data devices are unavailable
Jan 31 04:20:56 np0005603621 systemd[1]: libpod-8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45.scope: Deactivated successfully.
Jan 31 04:20:56 np0005603621 podman[423818]: 2026-01-31 09:20:56.640198395 +0000 UTC m=+0.868629456 container died 8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:20:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-62ecb4d5a1af66fe8ec7f94f6dd7428336d17d5a306105f2316b9751226e9819-merged.mount: Deactivated successfully.
Jan 31 04:20:56 np0005603621 podman[423818]: 2026-01-31 09:20:56.698187695 +0000 UTC m=+0.926618726 container remove 8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:20:56 np0005603621 systemd[1]: libpod-conmon-8352ce41b49ce8fce90dbee40e1f1202bf639ebd3d8d140e2c882c740d20bd45.scope: Deactivated successfully.
Jan 31 04:20:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:56.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:57.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.180297112 +0000 UTC m=+0.030898737 container create 47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 04:20:57 np0005603621 systemd[1]: Started libpod-conmon-47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a.scope.
Jan 31 04:20:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.263439075 +0000 UTC m=+0.114040720 container init 47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.167062183 +0000 UTC m=+0.017663828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.268838086 +0000 UTC m=+0.119439711 container start 47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:20:57 np0005603621 infallible_morse[424020]: 167 167
Jan 31 04:20:57 np0005603621 systemd[1]: libpod-47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a.scope: Deactivated successfully.
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.274955539 +0000 UTC m=+0.125557164 container attach 47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.275173096 +0000 UTC m=+0.125774731 container died 47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:20:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2dd9d3ab88d4080aecd894c4956888a4746293e5c3afcb7ade9c3cddedfe38cb-merged.mount: Deactivated successfully.
Jan 31 04:20:57 np0005603621 podman[424003]: 2026-01-31 09:20:57.328284072 +0000 UTC m=+0.178885697 container remove 47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_morse, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:57 np0005603621 systemd[1]: libpod-conmon-47a4f14cede225d6ef33d58bf2b1058b42c02a58a63f134eccc337c97c01113a.scope: Deactivated successfully.
Jan 31 04:20:57 np0005603621 podman[424046]: 2026-01-31 09:20:57.457260533 +0000 UTC m=+0.041435199 container create b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:20:57 np0005603621 systemd[1]: Started libpod-conmon-b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86.scope.
Jan 31 04:20:57 np0005603621 podman[424046]: 2026-01-31 09:20:57.439768841 +0000 UTC m=+0.023943527 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:20:57 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07da9a92a75b2ff200e20ae6fd1945f6e1c329ea4b4a9a3711ab66882c856e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07da9a92a75b2ff200e20ae6fd1945f6e1c329ea4b4a9a3711ab66882c856e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07da9a92a75b2ff200e20ae6fd1945f6e1c329ea4b4a9a3711ab66882c856e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:57 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c07da9a92a75b2ff200e20ae6fd1945f6e1c329ea4b4a9a3711ab66882c856e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:57 np0005603621 podman[424046]: 2026-01-31 09:20:57.555017268 +0000 UTC m=+0.139191964 container init b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:20:57 np0005603621 podman[424046]: 2026-01-31 09:20:57.560760729 +0000 UTC m=+0.144935395 container start b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:20:57 np0005603621 podman[424046]: 2026-01-31 09:20:57.563754584 +0000 UTC m=+0.147929250 container attach b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 31 04:20:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:58 np0005603621 nova_compute[247399]: 2026-01-31 09:20:58.170 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:20:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]: {
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:    "0": [
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:        {
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "devices": [
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "/dev/loop3"
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            ],
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "lv_name": "ceph_lv0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "lv_size": "7511998464",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "name": "ceph_lv0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "tags": {
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.cluster_name": "ceph",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.crush_device_class": "",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.encrypted": "0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.osd_id": "0",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.type": "block",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:                "ceph.vdo": "0"
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            },
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "type": "block",
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:            "vg_name": "ceph_vg0"
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:        }
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]:    ]
Jan 31 04:20:58 np0005603621 nervous_chatterjee[424063]: }
Jan 31 04:20:58 np0005603621 systemd[1]: libpod-b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86.scope: Deactivated successfully.
Jan 31 04:20:58 np0005603621 podman[424046]: 2026-01-31 09:20:58.244024344 +0000 UTC m=+0.828199010 container died b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-c07da9a92a75b2ff200e20ae6fd1945f6e1c329ea4b4a9a3711ab66882c856e1-merged.mount: Deactivated successfully.
Jan 31 04:20:58 np0005603621 podman[424046]: 2026-01-31 09:20:58.292457572 +0000 UTC m=+0.876632248 container remove b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:58 np0005603621 systemd[1]: libpod-conmon-b06c07dfbc3a195759d0df628bcc113dec61784e121a132f45854b77c8924a86.scope: Deactivated successfully.
Jan 31 04:20:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:20:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.820269651 +0000 UTC m=+0.042163362 container create b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:20:58 np0005603621 systemd[1]: Started libpod-conmon-b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f.scope.
Jan 31 04:20:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.891721795 +0000 UTC m=+0.113615526 container init b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.896820527 +0000 UTC m=+0.118714248 container start b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_euler, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.803698328 +0000 UTC m=+0.025592079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:20:58 np0005603621 charming_euler[424244]: 167 167
Jan 31 04:20:58 np0005603621 systemd[1]: libpod-b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f.scope: Deactivated successfully.
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.903728225 +0000 UTC m=+0.125621976 container attach b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_euler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.904021044 +0000 UTC m=+0.125914765 container died b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 04:20:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4204eb53cfff8ad80725007bc7b10f6bcea799f7f956e83f2a811810f8917caa-merged.mount: Deactivated successfully.
Jan 31 04:20:58 np0005603621 podman[424227]: 2026-01-31 09:20:58.938812553 +0000 UTC m=+0.160706254 container remove b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_euler, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:20:58 np0005603621 systemd[1]: libpod-conmon-b6de4dc42abed420a87816df2d0324e5a529e06f1f5f55e694517ac23081e36f.scope: Deactivated successfully.
Jan 31 04:20:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:20:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:20:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:20:59.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.07022422 +0000 UTC m=+0.044365151 container create 96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:20:59 np0005603621 systemd[1]: Started libpod-conmon-96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc.scope.
Jan 31 04:20:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb4ec51b05d891dc07dd10d48683b2e3bc1774fa2757b55e7918a517b6a802/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb4ec51b05d891dc07dd10d48683b2e3bc1774fa2757b55e7918a517b6a802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb4ec51b05d891dc07dd10d48683b2e3bc1774fa2757b55e7918a517b6a802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eabb4ec51b05d891dc07dd10d48683b2e3bc1774fa2757b55e7918a517b6a802/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.124672408 +0000 UTC m=+0.098813339 container init 96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.13044819 +0000 UTC m=+0.104589141 container start 96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.134006933 +0000 UTC m=+0.108147894 container attach 96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.054656818 +0000 UTC m=+0.028797769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:20:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]: {
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:        "osd_id": 0,
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:        "type": "bluestore"
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]:    }
Jan 31 04:20:59 np0005603621 elegant_goodall[424285]: }
Jan 31 04:20:59 np0005603621 systemd[1]: libpod-96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc.scope: Deactivated successfully.
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.847827081 +0000 UTC m=+0.821968012 container died 96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_goodall, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:20:59 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eabb4ec51b05d891dc07dd10d48683b2e3bc1774fa2757b55e7918a517b6a802-merged.mount: Deactivated successfully.
Jan 31 04:20:59 np0005603621 podman[424268]: 2026-01-31 09:20:59.882025041 +0000 UTC m=+0.856165972 container remove 96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:20:59 np0005603621 systemd[1]: libpod-conmon-96e581ad39514627104a2887b3e38b16544cc05abae8a35d53f4a0fcbf505ffc.scope: Deactivated successfully.
Jan 31 04:20:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:20:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:59 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:20:59 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:20:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 62125455-ec26-45da-8002-c99e16d51f5c does not exist
Jan 31 04:20:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0bebf3e5-9e05-436e-bf26-856c6aef8eff does not exist
Jan 31 04:20:59 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1b59de61-6fbd-4056-8378-80624e35d55b does not exist
Jan 31 04:21:00 np0005603621 nova_compute[247399]: 2026-01-31 09:21:00.764 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:00.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:01.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:01 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:21:01 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:21:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:02.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:03.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:03 np0005603621 nova_compute[247399]: 2026-01-31 09:21:03.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:04.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:05.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:05 np0005603621 nova_compute[247399]: 2026-01-31 09:21:05.766 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:06.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:07.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:08 np0005603621 nova_compute[247399]: 2026-01-31 09:21:08.195 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:21:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:21:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:09.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:10 np0005603621 nova_compute[247399]: 2026-01-31 09:21:10.768 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:10.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:11.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:12.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:13.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:13 np0005603621 nova_compute[247399]: 2026-01-31 09:21:13.196 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1293956767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:21:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:21:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1293956767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:21:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:14.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 04:21:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:15 np0005603621 nova_compute[247399]: 2026-01-31 09:21:15.770 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:16 np0005603621 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 31 04:21:16 np0005603621 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 31 04:21:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:16.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:17.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:18 np0005603621 nova_compute[247399]: 2026-01-31 09:21:18.199 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:18.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:19.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:20 np0005603621 nova_compute[247399]: 2026-01-31 09:21:20.774 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:21.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:23.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:23 np0005603621 nova_compute[247399]: 2026-01-31 09:21:23.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:25.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:25 np0005603621 podman[424436]: 2026-01-31 09:21:25.357119475 +0000 UTC m=+0.067245504 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Jan 31 04:21:25 np0005603621 podman[424437]: 2026-01-31 09:21:25.357205108 +0000 UTC m=+0.067187282 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:21:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:25 np0005603621 nova_compute[247399]: 2026-01-31 09:21:25.775 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:26.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:27.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:28 np0005603621 nova_compute[247399]: 2026-01-31 09:21:28.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:28.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:29.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:30 np0005603621 nova_compute[247399]: 2026-01-31 09:21:30.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:21:30.561 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:21:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:21:30.562 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:21:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:21:30.562 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:21:30 np0005603621 nova_compute[247399]: 2026-01-31 09:21:30.779 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:30.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:31.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:32 np0005603621 systemd-logind[818]: Session 75 logged out. Waiting for processes to exit.
Jan 31 04:21:32 np0005603621 systemd[1]: session-75.scope: Deactivated successfully.
Jan 31 04:21:32 np0005603621 systemd[1]: session-75.scope: Consumed 2min 33.646s CPU time, 1.0G memory peak, read 349.5M from disk, written 408.4M to disk.
Jan 31 04:21:32 np0005603621 systemd-logind[818]: Removed session 75.
Jan 31 04:21:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:32.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:32 np0005603621 systemd-logind[818]: New session 76 of user zuul.
Jan 31 04:21:32 np0005603621 systemd[1]: Started Session 76 of User zuul.
Jan 31 04:21:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:33.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:33 np0005603621 systemd[1]: session-76.scope: Deactivated successfully.
Jan 31 04:21:33 np0005603621 systemd-logind[818]: Session 76 logged out. Waiting for processes to exit.
Jan 31 04:21:33 np0005603621 systemd-logind[818]: Removed session 76.
Jan 31 04:21:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:33 np0005603621 nova_compute[247399]: 2026-01-31 09:21:33.202 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:33 np0005603621 systemd-logind[818]: New session 77 of user zuul.
Jan 31 04:21:33 np0005603621 systemd[1]: Started Session 77 of User zuul.
Jan 31 04:21:33 np0005603621 systemd-logind[818]: Session 77 logged out. Waiting for processes to exit.
Jan 31 04:21:33 np0005603621 systemd[1]: session-77.scope: Deactivated successfully.
Jan 31 04:21:33 np0005603621 systemd-logind[818]: Removed session 77.
Jan 31 04:21:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:34.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:35.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:35 np0005603621 nova_compute[247399]: 2026-01-31 09:21:35.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:35 np0005603621 nova_compute[247399]: 2026-01-31 09:21:35.781 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:36.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:37.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:38 np0005603621 nova_compute[247399]: 2026-01-31 09:21:38.203 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:21:38
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', 'images', '.mgr', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 31 04:21:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:21:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:38.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:39.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:39 np0005603621 nova_compute[247399]: 2026-01-31 09:21:39.195 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:21:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:40 np0005603621 nova_compute[247399]: 2026-01-31 09:21:40.783 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:40.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:41.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.224 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.224 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.225 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.225 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:21:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:21:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546912337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.645 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:21:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.767 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.768 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3987MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.768 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.769 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.861 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.861 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:21:41 np0005603621 nova_compute[247399]: 2026-01-31 09:21:41.884 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:21:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:21:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742514804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:21:42 np0005603621 nova_compute[247399]: 2026-01-31 09:21:42.298 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:21:42 np0005603621 nova_compute[247399]: 2026-01-31 09:21:42.303 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:21:42 np0005603621 nova_compute[247399]: 2026-01-31 09:21:42.326 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:21:42 np0005603621 nova_compute[247399]: 2026-01-31 09:21:42.328 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:21:42 np0005603621 nova_compute[247399]: 2026-01-31 09:21:42.328 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:21:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:42.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:43.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:43 np0005603621 nova_compute[247399]: 2026-01-31 09:21:43.251 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:43 np0005603621 nova_compute[247399]: 2026-01-31 09:21:43.328 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:43 np0005603621 nova_compute[247399]: 2026-01-31 09:21:43.329 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:21:43 np0005603621 nova_compute[247399]: 2026-01-31 09:21:43.329 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:21:43 np0005603621 nova_compute[247399]: 2026-01-31 09:21:43.355 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:21:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:44.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:45.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:45 np0005603621 nova_compute[247399]: 2026-01-31 09:21:45.786 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:46 np0005603621 nova_compute[247399]: 2026-01-31 09:21:46.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:46.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:47.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:47 np0005603621 nova_compute[247399]: 2026-01-31 09:21:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:47 np0005603621 nova_compute[247399]: 2026-01-31 09:21:47.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:48 np0005603621 nova_compute[247399]: 2026-01-31 09:21:48.253 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:48.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:49.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:21:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:21:50 np0005603621 nova_compute[247399]: 2026-01-31 09:21:50.789 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:50.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:51.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:52.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:53.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:53 np0005603621 nova_compute[247399]: 2026-01-31 09:21:53.287 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:54.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:21:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:55.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:21:55 np0005603621 nova_compute[247399]: 2026-01-31 09:21:55.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:55 np0005603621 nova_compute[247399]: 2026-01-31 09:21:55.214 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:55 np0005603621 nova_compute[247399]: 2026-01-31 09:21:55.214 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:21:55 np0005603621 nova_compute[247399]: 2026-01-31 09:21:55.242 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:21:55 np0005603621 podman[424699]: 2026-01-31 09:21:55.551542654 +0000 UTC m=+0.102551489 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:21:55 np0005603621 podman[424700]: 2026-01-31 09:21:55.561453386 +0000 UTC m=+0.112323256 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:21:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:55 np0005603621 nova_compute[247399]: 2026-01-31 09:21:55.791 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:56.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:57.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:57 np0005603621 nova_compute[247399]: 2026-01-31 09:21:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:21:57 np0005603621 nova_compute[247399]: 2026-01-31 09:21:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:21:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:21:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:21:58 np0005603621 nova_compute[247399]: 2026-01-31 09:21:58.289 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:21:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:21:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:21:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:21:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:21:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:21:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:00 np0005603621 nova_compute[247399]: 2026-01-31 09:22:00.792 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:00.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:22:01 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 99b04e4d-da54-45d1-9845-092a117f3c07 does not exist
Jan 31 04:22:01 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 98731447-d315-42ad-909c-2ec16ba3f0a7 does not exist
Jan 31 04:22:01 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 67febc3c-b793-4057-bac3-2bbd296fe0ed does not exist
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:22:01 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:22:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:01.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:22:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.3 total, 600.0 interval#012Cumulative writes: 62K writes, 234K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 62K writes, 22K syncs, 2.75 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2220 writes, 6987 keys, 2220 commit groups, 1.0 writes per commit group, ingest: 5.66 MB, 0.01 MB/s#012Interval WAL: 2220 writes, 963 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.588725624 +0000 UTC m=+0.085826900 container create ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.522099621 +0000 UTC m=+0.019200937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:22:01 np0005603621 systemd[1]: Started libpod-conmon-ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0.scope.
Jan 31 04:22:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.717408945 +0000 UTC m=+0.214510251 container init ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:22:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.725717797 +0000 UTC m=+0.222819083 container start ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:22:01 np0005603621 wonderful_engelbart[425036]: 167 167
Jan 31 04:22:01 np0005603621 systemd[1]: libpod-ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0.scope: Deactivated successfully.
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.802410608 +0000 UTC m=+0.299511894 container attach ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.80340959 +0000 UTC m=+0.300510876 container died ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:22:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9dd46908e8e303fecadae59563db1e9462d87d2cba1a1c1002ff82676a7886ef-merged.mount: Deactivated successfully.
Jan 31 04:22:01 np0005603621 podman[425019]: 2026-01-31 09:22:01.993692146 +0000 UTC m=+0.490793472 container remove ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_engelbart, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:22:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:22:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:22:02 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:22:02 np0005603621 systemd[1]: libpod-conmon-ea6b97791e7c3e3c6d93a9dee317bbaa5e62e5fdbc90d1075f0b39b203a8a7a0.scope: Deactivated successfully.
Jan 31 04:22:02 np0005603621 podman[425063]: 2026-01-31 09:22:02.169654259 +0000 UTC m=+0.100637587 container create b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:22:02 np0005603621 podman[425063]: 2026-01-31 09:22:02.088689874 +0000 UTC m=+0.019673222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:22:02 np0005603621 systemd[1]: Started libpod-conmon-b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7.scope.
Jan 31 04:22:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:22:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a06b5fe27fd17db0d30a5d3c7e86be61b5835c1ea6bee21cbc5829a039bc75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a06b5fe27fd17db0d30a5d3c7e86be61b5835c1ea6bee21cbc5829a039bc75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a06b5fe27fd17db0d30a5d3c7e86be61b5835c1ea6bee21cbc5829a039bc75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a06b5fe27fd17db0d30a5d3c7e86be61b5835c1ea6bee21cbc5829a039bc75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a06b5fe27fd17db0d30a5d3c7e86be61b5835c1ea6bee21cbc5829a039bc75/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:02 np0005603621 podman[425063]: 2026-01-31 09:22:02.422463487 +0000 UTC m=+0.353446835 container init b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:22:02 np0005603621 podman[425063]: 2026-01-31 09:22:02.427510967 +0000 UTC m=+0.358494295 container start b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:22:02 np0005603621 podman[425063]: 2026-01-31 09:22:02.488806742 +0000 UTC m=+0.419790070 container attach b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 04:22:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:02.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000033s ======
Jan 31 04:22:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:03.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000033s
Jan 31 04:22:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:03 np0005603621 nova_compute[247399]: 2026-01-31 09:22:03.291 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:03 np0005603621 flamboyant_albattani[425080]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:22:03 np0005603621 flamboyant_albattani[425080]: --> relative data size: 1.0
Jan 31 04:22:03 np0005603621 flamboyant_albattani[425080]: --> All data devices are unavailable
Jan 31 04:22:03 np0005603621 systemd[1]: libpod-b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7.scope: Deactivated successfully.
Jan 31 04:22:03 np0005603621 podman[425063]: 2026-01-31 09:22:03.350141616 +0000 UTC m=+1.281124944 container died b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_albattani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:22:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-82a06b5fe27fd17db0d30a5d3c7e86be61b5835c1ea6bee21cbc5829a039bc75-merged.mount: Deactivated successfully.
Jan 31 04:22:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:04 np0005603621 podman[425063]: 2026-01-31 09:22:04.074154216 +0000 UTC m=+2.005137544 container remove b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_albattani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:22:04 np0005603621 systemd[1]: libpod-conmon-b7f1148d4cb054c9bcc807d23cb5ac1ef81c2c23855bcfd2826be34fbfd19da7.scope: Deactivated successfully.
Jan 31 04:22:04 np0005603621 podman[425251]: 2026-01-31 09:22:04.580617292 +0000 UTC m=+0.090792237 container create 1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:22:04 np0005603621 podman[425251]: 2026-01-31 09:22:04.508299278 +0000 UTC m=+0.018474243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:22:04 np0005603621 systemd[1]: Started libpod-conmon-1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7.scope.
Jan 31 04:22:04 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:22:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:04.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:04 np0005603621 podman[425251]: 2026-01-31 09:22:04.882498689 +0000 UTC m=+0.392673664 container init 1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 31 04:22:04 np0005603621 podman[425251]: 2026-01-31 09:22:04.887825697 +0000 UTC m=+0.398000642 container start 1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:22:04 np0005603621 reverent_bhaskara[425267]: 167 167
Jan 31 04:22:04 np0005603621 systemd[1]: libpod-1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7.scope: Deactivated successfully.
Jan 31 04:22:04 np0005603621 podman[425251]: 2026-01-31 09:22:04.900830598 +0000 UTC m=+0.411005543 container attach 1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:22:04 np0005603621 podman[425251]: 2026-01-31 09:22:04.901494099 +0000 UTC m=+0.411669044 container died 1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:22:04 np0005603621 systemd[1]: var-lib-containers-storage-overlay-be72f302d425e475e40dea4904ecce8416a5cf9bca81f7bdd6bd365c8843ed50-merged.mount: Deactivated successfully.
Jan 31 04:22:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:05.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:05 np0005603621 podman[425251]: 2026-01-31 09:22:05.418522836 +0000 UTC m=+0.928697781 container remove 1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:22:05 np0005603621 systemd[1]: libpod-conmon-1d6323f21e9ddbd5ef214a2571a2c102813282eb97d6cb43de9ae51a8f0564f7.scope: Deactivated successfully.
Jan 31 04:22:05 np0005603621 podman[425293]: 2026-01-31 09:22:05.526799994 +0000 UTC m=+0.024542526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:22:05 np0005603621 podman[425293]: 2026-01-31 09:22:05.702799749 +0000 UTC m=+0.200542281 container create 28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:22:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:05 np0005603621 nova_compute[247399]: 2026-01-31 09:22:05.793 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:06 np0005603621 systemd[1]: Started libpod-conmon-28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9.scope.
Jan 31 04:22:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:22:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1d2ee7e14214a7a45c66b8cc44f1d52c5de1e2f222bacc08d6b6288c3026d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1d2ee7e14214a7a45c66b8cc44f1d52c5de1e2f222bacc08d6b6288c3026d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1d2ee7e14214a7a45c66b8cc44f1d52c5de1e2f222bacc08d6b6288c3026d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:06 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c1d2ee7e14214a7a45c66b8cc44f1d52c5de1e2f222bacc08d6b6288c3026d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:06 np0005603621 podman[425293]: 2026-01-31 09:22:06.1971168 +0000 UTC m=+0.694859352 container init 28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 04:22:06 np0005603621 podman[425293]: 2026-01-31 09:22:06.203335176 +0000 UTC m=+0.701077698 container start 28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:22:06 np0005603621 podman[425293]: 2026-01-31 09:22:06.339270906 +0000 UTC m=+0.837013438 container attach 28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 04:22:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:06.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]: {
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:    "0": [
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:        {
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "devices": [
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "/dev/loop3"
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            ],
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "lv_name": "ceph_lv0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "lv_size": "7511998464",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "name": "ceph_lv0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "tags": {
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.cluster_name": "ceph",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.crush_device_class": "",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.encrypted": "0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.osd_id": "0",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.type": "block",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:                "ceph.vdo": "0"
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            },
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "type": "block",
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:            "vg_name": "ceph_vg0"
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:        }
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]:    ]
Jan 31 04:22:06 np0005603621 naughty_mirzakhani[425310]: }
Jan 31 04:22:06 np0005603621 systemd[1]: libpod-28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9.scope: Deactivated successfully.
Jan 31 04:22:06 np0005603621 podman[425293]: 2026-01-31 09:22:06.930425764 +0000 UTC m=+1.428168296 container died 28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:22:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:07.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4c1d2ee7e14214a7a45c66b8cc44f1d52c5de1e2f222bacc08d6b6288c3026d8-merged.mount: Deactivated successfully.
Jan 31 04:22:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:08 np0005603621 podman[425293]: 2026-01-31 09:22:08.15871058 +0000 UTC m=+2.656453152 container remove 28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:22:08 np0005603621 systemd[1]: libpod-conmon-28992992f2f84156709a41df156d2cc62a7a2fd874c78d56f6a063af7fec3af9.scope: Deactivated successfully.
Jan 31 04:22:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:08 np0005603621 nova_compute[247399]: 2026-01-31 09:22:08.291 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:22:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:22:08 np0005603621 podman[425525]: 2026-01-31 09:22:08.578370545 +0000 UTC m=+0.018017150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:22:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:08.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:08 np0005603621 podman[425525]: 2026-01-31 09:22:08.96350056 +0000 UTC m=+0.403147185 container create 74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 31 04:22:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:09.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:09 np0005603621 systemd[1]: Started libpod-conmon-74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065.scope.
Jan 31 04:22:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:22:09 np0005603621 podman[425525]: 2026-01-31 09:22:09.292979649 +0000 UTC m=+0.732626284 container init 74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:22:09 np0005603621 podman[425525]: 2026-01-31 09:22:09.298259265 +0000 UTC m=+0.737905850 container start 74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:22:09 np0005603621 flamboyant_feynman[425541]: 167 167
Jan 31 04:22:09 np0005603621 systemd[1]: libpod-74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065.scope: Deactivated successfully.
Jan 31 04:22:09 np0005603621 podman[425525]: 2026-01-31 09:22:09.383973631 +0000 UTC m=+0.823620216 container attach 74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:22:09 np0005603621 podman[425525]: 2026-01-31 09:22:09.38457397 +0000 UTC m=+0.824220555 container died 74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 31 04:22:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-79510f519e8f6858cc32e0335e90c09a43c9fdebea0c0443c1eb78463594282d-merged.mount: Deactivated successfully.
Jan 31 04:22:09 np0005603621 podman[425525]: 2026-01-31 09:22:09.710546638 +0000 UTC m=+1.150193233 container remove 74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:22:09 np0005603621 systemd[1]: libpod-conmon-74d01d7beea603a200c4624ad058c4c603dfe5529ed1eef5cba8c1905e2a9065.scope: Deactivated successfully.
Jan 31 04:22:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:09 np0005603621 podman[425567]: 2026-01-31 09:22:09.853412027 +0000 UTC m=+0.063018680 container create 1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:22:09 np0005603621 systemd[1]: Started libpod-conmon-1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc.scope.
Jan 31 04:22:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:22:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6779f64c55131798faf087499b72fae9dfbc1f3781384271c34854c3ec68005b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6779f64c55131798faf087499b72fae9dfbc1f3781384271c34854c3ec68005b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6779f64c55131798faf087499b72fae9dfbc1f3781384271c34854c3ec68005b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6779f64c55131798faf087499b72fae9dfbc1f3781384271c34854c3ec68005b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:22:09 np0005603621 podman[425567]: 2026-01-31 09:22:09.814711496 +0000 UTC m=+0.024318169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:22:09 np0005603621 podman[425567]: 2026-01-31 09:22:09.918245193 +0000 UTC m=+0.127851866 container init 1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:22:09 np0005603621 podman[425567]: 2026-01-31 09:22:09.922824717 +0000 UTC m=+0.132431370 container start 1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:22:09 np0005603621 podman[425567]: 2026-01-31 09:22:09.925725579 +0000 UTC m=+0.135332262 container attach 1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]: {
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:        "osd_id": 0,
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:        "type": "bluestore"
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]:    }
Jan 31 04:22:10 np0005603621 affectionate_kirch[425583]: }
Jan 31 04:22:10 np0005603621 systemd[1]: libpod-1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc.scope: Deactivated successfully.
Jan 31 04:22:10 np0005603621 podman[425567]: 2026-01-31 09:22:10.710010542 +0000 UTC m=+0.919617195 container died 1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:22:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6779f64c55131798faf087499b72fae9dfbc1f3781384271c34854c3ec68005b-merged.mount: Deactivated successfully.
Jan 31 04:22:10 np0005603621 podman[425567]: 2026-01-31 09:22:10.75653582 +0000 UTC m=+0.966142473 container remove 1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:22:10 np0005603621 systemd[1]: libpod-conmon-1cc3c96c1dd670902a3425d3e569b1aaab736db6617051f5fad8f893aad38ecc.scope: Deactivated successfully.
Jan 31 04:22:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:22:10 np0005603621 nova_compute[247399]: 2026-01-31 09:22:10.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:22:10 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:22:10 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:22:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 57f3a92f-9225-4468-81c9-c1f434aa92b4 does not exist
Jan 31 04:22:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fac628ce-89a5-4fae-b1aa-7a0581a3bf06 does not exist
Jan 31 04:22:10 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3b49fe08-0543-408b-b2d4-704f025cefea does not exist
Jan 31 04:22:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:10.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:22:10 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:22:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:11.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 04:22:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:12.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:13.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:13 np0005603621 nova_compute[247399]: 2026-01-31 09:22:13.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:15 np0005603621 nova_compute[247399]: 2026-01-31 09:22:15.798 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:16 np0005603621 nova_compute[247399]: 2026-01-31 09:22:16.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:16.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:17.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:18 np0005603621 nova_compute[247399]: 2026-01-31 09:22:18.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:19.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:20 np0005603621 nova_compute[247399]: 2026-01-31 09:22:20.801 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:20.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:21.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:23.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:23 np0005603621 nova_compute[247399]: 2026-01-31 09:22:23.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:22:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:25.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:22:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:25 np0005603621 nova_compute[247399]: 2026-01-31 09:22:25.803 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:26 np0005603621 podman[425678]: 2026-01-31 09:22:26.518186437 +0000 UTC m=+0.070397812 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:22:26 np0005603621 podman[425679]: 2026-01-31 09:22:26.520665796 +0000 UTC m=+0.073372957 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:22:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:26.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:28 np0005603621 nova_compute[247399]: 2026-01-31 09:22:28.390 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:29.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:22:30.563 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:22:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:22:30.563 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:22:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:22:30.564 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:22:30 np0005603621 nova_compute[247399]: 2026-01-31 09:22:30.806 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:30.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:31.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:31 np0005603621 nova_compute[247399]: 2026-01-31 09:22:31.218 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:32.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:33.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:33 np0005603621 nova_compute[247399]: 2026-01-31 09:22:33.390 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:35.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:35 np0005603621 nova_compute[247399]: 2026-01-31 09:22:35.808 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:36 np0005603621 nova_compute[247399]: 2026-01-31 09:22:36.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:37.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:38 np0005603621 nova_compute[247399]: 2026-01-31 09:22:38.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:22:38
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'volumes']
Jan 31 04:22:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:22:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:38.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:39 np0005603621 nova_compute[247399]: 2026-01-31 09:22:39.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:39.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:22:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:40 np0005603621 nova_compute[247399]: 2026-01-31 09:22:40.811 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:40.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:41.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.234 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.235 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.236 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:22:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:22:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672592038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.679 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:22:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.806 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.808 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.808 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.809 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.924 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.925 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:22:41 np0005603621 nova_compute[247399]: 2026-01-31 09:22:41.947 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:22:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:22:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1434053112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:22:42 np0005603621 nova_compute[247399]: 2026-01-31 09:22:42.377 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:22:42 np0005603621 nova_compute[247399]: 2026-01-31 09:22:42.382 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:22:42 np0005603621 nova_compute[247399]: 2026-01-31 09:22:42.423 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:22:42 np0005603621 nova_compute[247399]: 2026-01-31 09:22:42.425 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:22:42 np0005603621 nova_compute[247399]: 2026-01-31 09:22:42.425 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:22:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:42.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:43.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:43 np0005603621 nova_compute[247399]: 2026-01-31 09:22:43.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:44 np0005603621 nova_compute[247399]: 2026-01-31 09:22:44.181 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:44 np0005603621 nova_compute[247399]: 2026-01-31 09:22:44.182 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:22:44 np0005603621 nova_compute[247399]: 2026-01-31 09:22:44.183 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:22:44 np0005603621 nova_compute[247399]: 2026-01-31 09:22:44.205 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:22:44 np0005603621 nova_compute[247399]: 2026-01-31 09:22:44.206 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:44.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:45.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:45 np0005603621 nova_compute[247399]: 2026-01-31 09:22:45.813 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:47.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:48 np0005603621 nova_compute[247399]: 2026-01-31 09:22:48.217 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:48 np0005603621 nova_compute[247399]: 2026-01-31 09:22:48.218 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:48 np0005603621 nova_compute[247399]: 2026-01-31 09:22:48.218 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:22:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:48 np0005603621 nova_compute[247399]: 2026-01-31 09:22:48.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:48.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:49.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:22:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:22:50 np0005603621 nova_compute[247399]: 2026-01-31 09:22:50.816 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:52.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:53.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:53 np0005603621 nova_compute[247399]: 2026-01-31 09:22:53.478 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:54.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:55.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:55 np0005603621 nova_compute[247399]: 2026-01-31 09:22:55.819 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:56.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:57.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:57 np0005603621 podman[425883]: 2026-01-31 09:22:57.515855224 +0000 UTC m=+0.065534780 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:22:57 np0005603621 podman[425882]: 2026-01-31 09:22:57.521391174 +0000 UTC m=+0.071985889 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:22:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:22:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:22:58 np0005603621 nova_compute[247399]: 2026-01-31 09:22:58.480 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:22:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:22:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:22:58.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:22:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:22:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:22:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:22:59.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:22:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:00 np0005603621 nova_compute[247399]: 2026-01-31 09:23:00.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:00.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:01.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:02.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:03.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:03 np0005603621 nova_compute[247399]: 2026-01-31 09:23:03.512 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:04.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:05.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:05 np0005603621 nova_compute[247399]: 2026-01-31 09:23:05.856 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:06.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:07.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:08 np0005603621 nova_compute[247399]: 2026-01-31 09:23:08.514 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:23:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:23:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:08.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:09.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:10 np0005603621 nova_compute[247399]: 2026-01-31 09:23:10.859 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:10.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:11.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:23:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:12.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 57d61c0f-e21a-4598-af3f-d3387abfbea5 does not exist
Jan 31 04:23:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 73b8b338-4ade-4962-bedb-1db85111ed29 does not exist
Jan 31 04:23:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 277005ac-b3a3-4972-993a-3949f9d8394b does not exist
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:23:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:13.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:13 np0005603621 nova_compute[247399]: 2026-01-31 09:23:13.513 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.539937119 +0000 UTC m=+0.044640036 container create cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:23:13 np0005603621 systemd[1]: Started libpod-conmon-cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef.scope.
Jan 31 04:23:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.513145834 +0000 UTC m=+0.017848781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.610969528 +0000 UTC m=+0.115672465 container init cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_davinci, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.61717931 +0000 UTC m=+0.121882227 container start cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:23:13 np0005603621 beautiful_davinci[426393]: 167 167
Jan 31 04:23:13 np0005603621 systemd[1]: libpod-cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef.scope: Deactivated successfully.
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.621881294 +0000 UTC m=+0.126584211 container attach cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.62434059 +0000 UTC m=+0.129043507 container died cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_davinci, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:23:13 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d00b3e34c4ebb85a4a8d11ae8cc6f0bca34c564d98ad1e45286040b72d484df4-merged.mount: Deactivated successfully.
Jan 31 04:23:13 np0005603621 podman[426379]: 2026-01-31 09:23:13.674212087 +0000 UTC m=+0.178915004 container remove cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:23:13 np0005603621 systemd[1]: libpod-conmon-cb0a29665e5ea5842a01cbcfa25128e73974ff2c9afbb364309ac8be6cee0aef.scope: Deactivated successfully.
Jan 31 04:23:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:13 np0005603621 podman[426415]: 2026-01-31 09:23:13.785967391 +0000 UTC m=+0.034981039 container create b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:23:13 np0005603621 systemd[1]: Started libpod-conmon-b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6.scope.
Jan 31 04:23:13 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757c155d480d797623e17502e0d7ab6bec27f2c939f48cf5c90dd2b9076ff73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757c155d480d797623e17502e0d7ab6bec27f2c939f48cf5c90dd2b9076ff73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757c155d480d797623e17502e0d7ab6bec27f2c939f48cf5c90dd2b9076ff73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757c155d480d797623e17502e0d7ab6bec27f2c939f48cf5c90dd2b9076ff73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:13 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757c155d480d797623e17502e0d7ab6bec27f2c939f48cf5c90dd2b9076ff73/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:13 np0005603621 podman[426415]: 2026-01-31 09:23:13.849526279 +0000 UTC m=+0.098539957 container init b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:23:13 np0005603621 podman[426415]: 2026-01-31 09:23:13.854619676 +0000 UTC m=+0.103633324 container start b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:23:13 np0005603621 podman[426415]: 2026-01-31 09:23:13.858319931 +0000 UTC m=+0.107333579 container attach b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:23:13 np0005603621 podman[426415]: 2026-01-31 09:23:13.773461845 +0000 UTC m=+0.022475513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:23:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:23:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:23:14 np0005603621 bold_wu[426431]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:23:14 np0005603621 bold_wu[426431]: --> relative data size: 1.0
Jan 31 04:23:14 np0005603621 bold_wu[426431]: --> All data devices are unavailable
Jan 31 04:23:14 np0005603621 systemd[1]: libpod-b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6.scope: Deactivated successfully.
Jan 31 04:23:14 np0005603621 podman[426415]: 2026-01-31 09:23:14.587905602 +0000 UTC m=+0.836919250 container died b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:23:14 np0005603621 systemd[1]: var-lib-containers-storage-overlay-6757c155d480d797623e17502e0d7ab6bec27f2c939f48cf5c90dd2b9076ff73-merged.mount: Deactivated successfully.
Jan 31 04:23:14 np0005603621 podman[426415]: 2026-01-31 09:23:14.643797724 +0000 UTC m=+0.892811392 container remove b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 31 04:23:14 np0005603621 systemd[1]: libpod-conmon-b3e549a3ab732c8ab0a757f6ac04bf466593e1ea0ada3c1075031fcd2723b7d6.scope: Deactivated successfully.
Jan 31 04:23:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:14.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.099811076 +0000 UTC m=+0.033237335 container create 61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:23:15 np0005603621 systemd[1]: Started libpod-conmon-61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db.scope.
Jan 31 04:23:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.158329229 +0000 UTC m=+0.091755488 container init 61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feistel, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.163784437 +0000 UTC m=+0.097210686 container start 61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:23:15 np0005603621 gallant_feistel[426621]: 167 167
Jan 31 04:23:15 np0005603621 systemd[1]: libpod-61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db.scope: Deactivated successfully.
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.167515793 +0000 UTC m=+0.100942062 container attach 61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feistel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.168478423 +0000 UTC m=+0.101904682 container died 61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.085506055 +0000 UTC m=+0.018932314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:23:15 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bf7ab87ef6c6fa2a9e86178c9751d40f8a0692217f0b2933d5c7f2109730ebe5-merged.mount: Deactivated successfully.
Jan 31 04:23:15 np0005603621 podman[426605]: 2026-01-31 09:23:15.201752998 +0000 UTC m=+0.135179257 container remove 61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_feistel, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:23:15 np0005603621 systemd[1]: libpod-conmon-61cb4b9f2fb8de04155fea8ffb7c1765e4b300163eaa01b4c3d248986de520db.scope: Deactivated successfully.
Jan 31 04:23:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:15.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:15 np0005603621 podman[426645]: 2026-01-31 09:23:15.311232182 +0000 UTC m=+0.034214596 container create 900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:23:15 np0005603621 systemd[1]: Started libpod-conmon-900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245.scope.
Jan 31 04:23:15 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:23:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3630a3175a3dd0165eba3e3302a284e46b9ab7dabb2e1a475a63acfa627ffe4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3630a3175a3dd0165eba3e3302a284e46b9ab7dabb2e1a475a63acfa627ffe4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3630a3175a3dd0165eba3e3302a284e46b9ab7dabb2e1a475a63acfa627ffe4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:15 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3630a3175a3dd0165eba3e3302a284e46b9ab7dabb2e1a475a63acfa627ffe4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:15 np0005603621 podman[426645]: 2026-01-31 09:23:15.376194803 +0000 UTC m=+0.099177237 container init 900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:23:15 np0005603621 podman[426645]: 2026-01-31 09:23:15.381429065 +0000 UTC m=+0.104411479 container start 900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:23:15 np0005603621 podman[426645]: 2026-01-31 09:23:15.383899451 +0000 UTC m=+0.106881865 container attach 900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:23:15 np0005603621 podman[426645]: 2026-01-31 09:23:15.296879059 +0000 UTC m=+0.019861493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:23:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:15 np0005603621 nova_compute[247399]: 2026-01-31 09:23:15.861 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:16 np0005603621 silly_babbage[426661]: {
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:    "0": [
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:        {
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "devices": [
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "/dev/loop3"
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            ],
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "lv_name": "ceph_lv0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "lv_size": "7511998464",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "name": "ceph_lv0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "tags": {
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.cluster_name": "ceph",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.crush_device_class": "",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.encrypted": "0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.osd_id": "0",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.type": "block",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:                "ceph.vdo": "0"
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            },
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "type": "block",
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:            "vg_name": "ceph_vg0"
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:        }
Jan 31 04:23:16 np0005603621 silly_babbage[426661]:    ]
Jan 31 04:23:16 np0005603621 silly_babbage[426661]: }
Jan 31 04:23:16 np0005603621 systemd[1]: libpod-900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245.scope: Deactivated successfully.
Jan 31 04:23:16 np0005603621 podman[426645]: 2026-01-31 09:23:16.111038967 +0000 UTC m=+0.834021421 container died 900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:23:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3630a3175a3dd0165eba3e3302a284e46b9ab7dabb2e1a475a63acfa627ffe4a-merged.mount: Deactivated successfully.
Jan 31 04:23:16 np0005603621 podman[426645]: 2026-01-31 09:23:16.157823349 +0000 UTC m=+0.880805763 container remove 900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_babbage, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:23:16 np0005603621 systemd[1]: libpod-conmon-900c7c0122fbaee229bc3164b48636994a202472a94d27e4ffe474577817d245.scope: Deactivated successfully.
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.620100384 +0000 UTC m=+0.037640951 container create 490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 31 04:23:16 np0005603621 systemd[1]: Started libpod-conmon-490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e.scope.
Jan 31 04:23:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.67549852 +0000 UTC m=+0.093039107 container init 490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.681159395 +0000 UTC m=+0.098699962 container start 490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_albattani, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.683976912 +0000 UTC m=+0.101517499 container attach 490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 04:23:16 np0005603621 fervent_albattani[426843]: 167 167
Jan 31 04:23:16 np0005603621 systemd[1]: libpod-490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e.scope: Deactivated successfully.
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.684717845 +0000 UTC m=+0.102258412 container died 490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_albattani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.606907187 +0000 UTC m=+0.024447784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:23:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eae13134cb910d4ffb1a2671d3669b2cf85cc66d51edf9aad1136efbf0cb67cf-merged.mount: Deactivated successfully.
Jan 31 04:23:16 np0005603621 podman[426825]: 2026-01-31 09:23:16.716490294 +0000 UTC m=+0.134030861 container remove 490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:23:16 np0005603621 systemd[1]: libpod-conmon-490f335917be703370aafd0eb413382e6dcbe463a66b7fe1bb153c8521d5dd3e.scope: Deactivated successfully.
Jan 31 04:23:16 np0005603621 podman[426868]: 2026-01-31 09:23:16.821321524 +0000 UTC m=+0.032949996 container create f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:23:16 np0005603621 systemd[1]: Started libpod-conmon-f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4.scope.
Jan 31 04:23:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:23:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb5cb409edd3f1fed482cfe69a50f5b7453d7f7c23e6fb839b4b71374cc41b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb5cb409edd3f1fed482cfe69a50f5b7453d7f7c23e6fb839b4b71374cc41b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb5cb409edd3f1fed482cfe69a50f5b7453d7f7c23e6fb839b4b71374cc41b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5eb5cb409edd3f1fed482cfe69a50f5b7453d7f7c23e6fb839b4b71374cc41b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:23:16 np0005603621 podman[426868]: 2026-01-31 09:23:16.886013208 +0000 UTC m=+0.097641710 container init f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendeleev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:23:16 np0005603621 podman[426868]: 2026-01-31 09:23:16.890722023 +0000 UTC m=+0.102350505 container start f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:23:16 np0005603621 podman[426868]: 2026-01-31 09:23:16.89353902 +0000 UTC m=+0.105167522 container attach f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendeleev, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:23:16 np0005603621 podman[426868]: 2026-01-31 09:23:16.807779347 +0000 UTC m=+0.019407869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:23:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:16.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.146037) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851397146127, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1638, "num_deletes": 251, "total_data_size": 2876281, "memory_usage": 2920880, "flush_reason": "Manual Compaction"}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851397158766, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 2812756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88743, "largest_seqno": 90380, "table_properties": {"data_size": 2804975, "index_size": 4658, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16803, "raw_average_key_size": 20, "raw_value_size": 2789238, "raw_average_value_size": 3426, "num_data_blocks": 204, "num_entries": 814, "num_filter_entries": 814, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851244, "oldest_key_time": 1769851244, "file_creation_time": 1769851397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 12768 microseconds, and 6124 cpu microseconds.
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.158821) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 2812756 bytes OK
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.158847) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.162305) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.162332) EVENT_LOG_v1 {"time_micros": 1769851397162324, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.162363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 2869184, prev total WAL file size 2869184, number of live WAL files 2.
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.163793) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(2746KB)], [206(11MB)]
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851397163851, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 15239604, "oldest_snapshot_seqno": -1}
Jan 31 04:23:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:17.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 11821 keys, 13219670 bytes, temperature: kUnknown
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851397336001, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 13219670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13145817, "index_size": 43212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29573, "raw_key_size": 313622, "raw_average_key_size": 26, "raw_value_size": 12942201, "raw_average_value_size": 1094, "num_data_blocks": 1633, "num_entries": 11821, "num_filter_entries": 11821, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.336199) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 13219670 bytes
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.337306) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.5 rd, 76.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.9 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(10.1) write-amplify(4.7) OK, records in: 12340, records dropped: 519 output_compression: NoCompression
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.337321) EVENT_LOG_v1 {"time_micros": 1769851397337313, "job": 130, "event": "compaction_finished", "compaction_time_micros": 172202, "compaction_time_cpu_micros": 25927, "output_level": 6, "num_output_files": 1, "total_output_size": 13219670, "num_input_records": 12340, "num_output_records": 11821, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851397337590, "job": 130, "event": "table_file_deletion", "file_number": 208}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851397338345, "job": 130, "event": "table_file_deletion", "file_number": 206}
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.163666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.338395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.338400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.338402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.338403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:23:17.338405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]: {
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:        "osd_id": 0,
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:        "type": "bluestore"
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]:    }
Jan 31 04:23:17 np0005603621 stoic_mendeleev[426884]: }
Jan 31 04:23:17 np0005603621 systemd[1]: libpod-f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4.scope: Deactivated successfully.
Jan 31 04:23:17 np0005603621 podman[426868]: 2026-01-31 09:23:17.647513203 +0000 UTC m=+0.859141695 container died f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:23:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a5eb5cb409edd3f1fed482cfe69a50f5b7453d7f7c23e6fb839b4b71374cc41b-merged.mount: Deactivated successfully.
Jan 31 04:23:17 np0005603621 podman[426868]: 2026-01-31 09:23:17.696220714 +0000 UTC m=+0.907849206 container remove f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 04:23:17 np0005603621 systemd[1]: libpod-conmon-f7ed6310c4b87be001bf7d1e15aaa57fc95706d65eb43322c375549ad5f7a8b4.scope: Deactivated successfully.
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:23:17 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e6d0d4ed-5020-49cf-a978-bb8d29933a4e does not exist
Jan 31 04:23:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e85ba798-068f-472e-9dce-243500481d9e does not exist
Jan 31 04:23:17 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev a0835bfa-a6fa-4c41-8e00-8e3e948f5d67 does not exist
Jan 31 04:23:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:18 np0005603621 nova_compute[247399]: 2026-01-31 09:23:18.516 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:18 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:23:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:18.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:19.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:20 np0005603621 nova_compute[247399]: 2026-01-31 09:23:20.863 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:20.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:21.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:22.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:23:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:23.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:23:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:23 np0005603621 nova_compute[247399]: 2026-01-31 09:23:23.518 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:25.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:25 np0005603621 nova_compute[247399]: 2026-01-31 09:23:25.866 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:26.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:27.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:28 np0005603621 podman[426998]: 2026-01-31 09:23:28.111075783 +0000 UTC m=+0.049675992 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 04:23:28 np0005603621 podman[426999]: 2026-01-31 09:23:28.156356288 +0000 UTC m=+0.094093880 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:23:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:28 np0005603621 nova_compute[247399]: 2026-01-31 09:23:28.519 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:28.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:29.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 04:23:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:23:30.565 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:23:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:23:30.565 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:23:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:23:30.565 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:23:30 np0005603621 nova_compute[247399]: 2026-01-31 09:23:30.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:30.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:31 np0005603621 nova_compute[247399]: 2026-01-31 09:23:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:31.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 31 04:23:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:32.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:33.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:33 np0005603621 nova_compute[247399]: 2026-01-31 09:23:33.522 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Jan 31 04:23:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:34.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:35.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 0 B/s wr, 67 op/s
Jan 31 04:23:35 np0005603621 nova_compute[247399]: 2026-01-31 09:23:35.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:36 np0005603621 nova_compute[247399]: 2026-01-31 09:23:36.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:36.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:37.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 114 KiB/s rd, 0 B/s wr, 189 op/s
Jan 31 04:23:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:38 np0005603621 nova_compute[247399]: 2026-01-31 09:23:38.523 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:23:38
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'images']
Jan 31 04:23:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:23:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 31 04:23:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:23:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:39.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 131 KiB/s rd, 0 B/s wr, 218 op/s
Jan 31 04:23:40 np0005603621 nova_compute[247399]: 2026-01-31 09:23:40.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:40 np0005603621 nova_compute[247399]: 2026-01-31 09:23:40.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:40.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:41 np0005603621 nova_compute[247399]: 2026-01-31 09:23:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:41 np0005603621 nova_compute[247399]: 2026-01-31 09:23:41.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:23:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:23:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:41.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:23:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 0 B/s wr, 210 op/s
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.230 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.230 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.230 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.231 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.231 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:23:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:23:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215367608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.687 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.817 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.818 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4025MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.818 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.819 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.897 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.898 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:23:42 np0005603621 nova_compute[247399]: 2026-01-31 09:23:42.936 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:23:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:43.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:23:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905494864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:23:43 np0005603621 nova_compute[247399]: 2026-01-31 09:23:43.345 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:23:43 np0005603621 nova_compute[247399]: 2026-01-31 09:23:43.350 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:23:43 np0005603621 nova_compute[247399]: 2026-01-31 09:23:43.375 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:23:43 np0005603621 nova_compute[247399]: 2026-01-31 09:23:43.376 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:23:43 np0005603621 nova_compute[247399]: 2026-01-31 09:23:43.376 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:23:43 np0005603621 nova_compute[247399]: 2026-01-31 09:23:43.525 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 0 B/s wr, 210 op/s
Jan 31 04:23:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:44.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 31 04:23:45 np0005603621 nova_compute[247399]: 2026-01-31 09:23:45.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:46 np0005603621 nova_compute[247399]: 2026-01-31 09:23:46.376 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:46 np0005603621 nova_compute[247399]: 2026-01-31 09:23:46.377 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:23:46 np0005603621 nova_compute[247399]: 2026-01-31 09:23:46.377 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:23:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:46.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 150 op/s
Jan 31 04:23:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:48 np0005603621 nova_compute[247399]: 2026-01-31 09:23:48.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:48.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:49.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Jan 31 04:23:49 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:23:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:23:50 np0005603621 nova_compute[247399]: 2026-01-31 09:23:50.876 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:50.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:51.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:53 np0005603621 nova_compute[247399]: 2026-01-31 09:23:53.550 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:54.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:55.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:55 np0005603621 nova_compute[247399]: 2026-01-31 09:23:55.879 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:56 np0005603621 nova_compute[247399]: 2026-01-31 09:23:56.619 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:23:56 np0005603621 nova_compute[247399]: 2026-01-31 09:23:56.619 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:56 np0005603621 nova_compute[247399]: 2026-01-31 09:23:56.620 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:56 np0005603621 nova_compute[247399]: 2026-01-31 09:23:56.620 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:23:56 np0005603621 nova_compute[247399]: 2026-01-31 09:23:56.877 247403 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 2.45 sec#033[00m
Jan 31 04:23:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:56.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:57.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:23:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:23:58 np0005603621 podman[427180]: 2026-01-31 09:23:58.489301559 +0000 UTC m=+0.051101585 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:23:58 np0005603621 podman[427181]: 2026-01-31 09:23:58.518137977 +0000 UTC m=+0.077237050 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:23:58 np0005603621 nova_compute[247399]: 2026-01-31 09:23:58.552 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:23:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:23:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:23:58.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:23:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:23:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:23:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:23:59.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:23:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:00 np0005603621 nova_compute[247399]: 2026-01-31 09:24:00.882 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:00.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:01.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:24:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:02.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:24:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:03.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:03 np0005603621 nova_compute[247399]: 2026-01-31 09:24:03.600 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:04.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:05.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:05 np0005603621 nova_compute[247399]: 2026-01-31 09:24:05.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:06.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:07.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:24:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:24:08 np0005603621 nova_compute[247399]: 2026-01-31 09:24:08.602 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:08.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:09.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:10 np0005603621 nova_compute[247399]: 2026-01-31 09:24:10.437 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:10 np0005603621 nova_compute[247399]: 2026-01-31 09:24:10.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:12.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:13.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:13 np0005603621 nova_compute[247399]: 2026-01-31 09:24:13.644 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:14.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:24:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:15.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:24:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:15 np0005603621 nova_compute[247399]: 2026-01-31 09:24:15.903 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:16.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:17.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:18 np0005603621 nova_compute[247399]: 2026-01-31 09:24:18.644 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:18.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:24:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:19 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:24:19 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.088513341 +0000 UTC m=+0.033966798 container create 67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:24:19 np0005603621 systemd[1]: Started libpod-conmon-67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58.scope.
Jan 31 04:24:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:19 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.072805757 +0000 UTC m=+0.018259224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.170971742 +0000 UTC m=+0.116425219 container init 67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.176156331 +0000 UTC m=+0.121609788 container start 67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.179143153 +0000 UTC m=+0.124596610 container attach 67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:24:19 np0005603621 elated_gould[427570]: 167 167
Jan 31 04:24:19 np0005603621 systemd[1]: libpod-67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58.scope: Deactivated successfully.
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.180282479 +0000 UTC m=+0.125735936 container died 67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:24:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-523760e88c3cf40431d7ca995cd8800a85ddf4b0e536ef17e410550721571753-merged.mount: Deactivated successfully.
Jan 31 04:24:19 np0005603621 podman[427556]: 2026-01-31 09:24:19.21570115 +0000 UTC m=+0.161154617 container remove 67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:24:19 np0005603621 systemd[1]: libpod-conmon-67eb5e9ce46042b7de828f9be03ea4503d1f53e7a83c6695cc1050019f3afd58.scope: Deactivated successfully.
Jan 31 04:24:19 np0005603621 podman[427594]: 2026-01-31 09:24:19.326207986 +0000 UTC m=+0.037938211 container create f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:24:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:19.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:19 np0005603621 systemd[1]: Started libpod-conmon-f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66.scope.
Jan 31 04:24:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f29e96c15a9d1a28155256fe6f5a71283971470e0d37aa593f46469134bd55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f29e96c15a9d1a28155256fe6f5a71283971470e0d37aa593f46469134bd55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f29e96c15a9d1a28155256fe6f5a71283971470e0d37aa593f46469134bd55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:19 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f29e96c15a9d1a28155256fe6f5a71283971470e0d37aa593f46469134bd55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:19 np0005603621 podman[427594]: 2026-01-31 09:24:19.309137499 +0000 UTC m=+0.020867774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:19 np0005603621 podman[427594]: 2026-01-31 09:24:19.418614003 +0000 UTC m=+0.130344238 container init f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:24:19 np0005603621 podman[427594]: 2026-01-31 09:24:19.423867255 +0000 UTC m=+0.135597480 container start f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 04:24:19 np0005603621 podman[427594]: 2026-01-31 09:24:19.43508145 +0000 UTC m=+0.146811685 container attach f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 31 04:24:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:20 np0005603621 blissful_wing[427610]: [
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:    {
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "available": false,
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "ceph_device": false,
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "lsm_data": {},
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "lvs": [],
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "path": "/dev/sr0",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "rejected_reasons": [
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "Has a FileSystem",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "Insufficient space (<5GB)"
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        ],
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        "sys_api": {
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "actuators": null,
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "device_nodes": "sr0",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "devname": "sr0",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "human_readable_size": "482.00 KB",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "id_bus": "ata",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "model": "QEMU DVD-ROM",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "nr_requests": "2",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "parent": "/dev/sr0",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "partitions": {},
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "path": "/dev/sr0",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "removable": "1",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "rev": "2.5+",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "ro": "0",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "rotational": "1",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "sas_address": "",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "sas_device_handle": "",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "scheduler_mode": "mq-deadline",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "sectors": 0,
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "sectorsize": "2048",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "size": 493568.0,
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "support_discard": "2048",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "type": "disk",
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:            "vendor": "QEMU"
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:        }
Jan 31 04:24:20 np0005603621 blissful_wing[427610]:    }
Jan 31 04:24:20 np0005603621 blissful_wing[427610]: ]
Jan 31 04:24:20 np0005603621 systemd[1]: libpod-f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66.scope: Deactivated successfully.
Jan 31 04:24:20 np0005603621 systemd[1]: libpod-f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66.scope: Consumed 1.125s CPU time.
Jan 31 04:24:20 np0005603621 podman[427594]: 2026-01-31 09:24:20.547486708 +0000 UTC m=+1.259216933 container died f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:24:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-63f29e96c15a9d1a28155256fe6f5a71283971470e0d37aa593f46469134bd55-merged.mount: Deactivated successfully.
Jan 31 04:24:20 np0005603621 podman[427594]: 2026-01-31 09:24:20.599203392 +0000 UTC m=+1.310933617 container remove f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 04:24:20 np0005603621 systemd[1]: libpod-conmon-f766305ef748ae76020b1de6191c68de16b7b8e75c3aaaaf2e39413080fc9b66.scope: Deactivated successfully.
Jan 31 04:24:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:24:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:24:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:20 np0005603621 nova_compute[247399]: 2026-01-31 09:24:20.905 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:20.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:24:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:21.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4d9a521d-c2b1-42a5-ac9f-e7639d79f793 does not exist
Jan 31 04:24:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 45c36b5d-01f7-4997-beb0-d44a4e72977a does not exist
Jan 31 04:24:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3008b02e-98ce-4769-9316-3767e3db0c68 does not exist
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:24:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:24:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.824363125 +0000 UTC m=+0.033520334 container create b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 04:24:21 np0005603621 systemd[1]: Started libpod-conmon-b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4.scope.
Jan 31 04:24:21 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.885159139 +0000 UTC m=+0.094316368 container init b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.890270186 +0000 UTC m=+0.099427395 container start b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:24:21 np0005603621 jolly_vaughan[429099]: 167 167
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.893115684 +0000 UTC m=+0.102272913 container attach b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:24:21 np0005603621 systemd[1]: libpod-b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4.scope: Deactivated successfully.
Jan 31 04:24:21 np0005603621 conmon[429099]: conmon b4c61fc272d2016621b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4.scope/container/memory.events
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.894833937 +0000 UTC m=+0.103991146 container died b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.810525619 +0000 UTC m=+0.019682858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:21 np0005603621 systemd[1]: var-lib-containers-storage-overlay-eac5b3e47bc34060c93509f9e07f9dfa78c0e94ed31246e52ccbecee4c1caab6-merged.mount: Deactivated successfully.
Jan 31 04:24:21 np0005603621 podman[429083]: 2026-01-31 09:24:21.920054974 +0000 UTC m=+0.129212193 container remove b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:24:21 np0005603621 systemd[1]: libpod-conmon-b4c61fc272d2016621b9cd1a8756e8d95e08146ff5aa9fb5e63daac68dc985f4.scope: Deactivated successfully.
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.042993322 +0000 UTC m=+0.041027705 container create 647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 31 04:24:22 np0005603621 systemd[1]: Started libpod-conmon-647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47.scope.
Jan 31 04:24:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9234d104267920738405689978f995e752b6d8a7d42d0d3aa4c408a03e77ca12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9234d104267920738405689978f995e752b6d8a7d42d0d3aa4c408a03e77ca12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9234d104267920738405689978f995e752b6d8a7d42d0d3aa4c408a03e77ca12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9234d104267920738405689978f995e752b6d8a7d42d0d3aa4c408a03e77ca12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9234d104267920738405689978f995e752b6d8a7d42d0d3aa4c408a03e77ca12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.119317474 +0000 UTC m=+0.117351877 container init 647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.027312679 +0000 UTC m=+0.025347082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.128052854 +0000 UTC m=+0.126087247 container start 647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_newton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.131617424 +0000 UTC m=+0.129651847 container attach 647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_newton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:24:22 np0005603621 charming_newton[429140]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:24:22 np0005603621 charming_newton[429140]: --> relative data size: 1.0
Jan 31 04:24:22 np0005603621 charming_newton[429140]: --> All data devices are unavailable
Jan 31 04:24:22 np0005603621 systemd[1]: libpod-647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47.scope: Deactivated successfully.
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.821958116 +0000 UTC m=+0.819992499 container died 647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_newton, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:24:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9234d104267920738405689978f995e752b6d8a7d42d0d3aa4c408a03e77ca12-merged.mount: Deactivated successfully.
Jan 31 04:24:22 np0005603621 podman[429123]: 2026-01-31 09:24:22.884464772 +0000 UTC m=+0.882499155 container remove 647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:24:22 np0005603621 systemd[1]: libpod-conmon-647cee43ccf6e196eeb4aef99cdb2570fbb5149ef3836a52e110c1c1e1bbdb47.scope: Deactivated successfully.
Jan 31 04:24:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:22.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.316915298 +0000 UTC m=+0.032103880 container create 8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bartik, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:24:23 np0005603621 systemd[1]: Started libpod-conmon-8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe.scope.
Jan 31 04:24:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:24:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:23.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:24:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.375498173 +0000 UTC m=+0.090686745 container init 8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.381491918 +0000 UTC m=+0.096680500 container start 8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:24:23 np0005603621 naughty_bartik[429323]: 167 167
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.384700977 +0000 UTC m=+0.099889559 container attach 8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:24:23 np0005603621 systemd[1]: libpod-8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe.scope: Deactivated successfully.
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.385354927 +0000 UTC m=+0.100543509 container died 8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bartik, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.302501493 +0000 UTC m=+0.017690105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-69f0501b5b7fc9e6e1e463d211586e1f6c05915c3449c379cb9627dda8cc0849-merged.mount: Deactivated successfully.
Jan 31 04:24:23 np0005603621 podman[429307]: 2026-01-31 09:24:23.413845685 +0000 UTC m=+0.129034267 container remove 8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:24:23 np0005603621 systemd[1]: libpod-conmon-8e219eb535bfbbbc38358ad0a978b31a1ce06c0567bdf44df60c09125d038bbe.scope: Deactivated successfully.
Jan 31 04:24:23 np0005603621 podman[429347]: 2026-01-31 09:24:23.548642058 +0000 UTC m=+0.033915555 container create 6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:24:23 np0005603621 systemd[1]: Started libpod-conmon-6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471.scope.
Jan 31 04:24:23 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612f0430ee5b42db7db0de7c6267e656903de191cd877a91ff3e58e52c4f38c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612f0430ee5b42db7db0de7c6267e656903de191cd877a91ff3e58e52c4f38c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612f0430ee5b42db7db0de7c6267e656903de191cd877a91ff3e58e52c4f38c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:23 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/612f0430ee5b42db7db0de7c6267e656903de191cd877a91ff3e58e52c4f38c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:23 np0005603621 podman[429347]: 2026-01-31 09:24:23.621778102 +0000 UTC m=+0.107051619 container init 6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 31 04:24:23 np0005603621 podman[429347]: 2026-01-31 09:24:23.626170178 +0000 UTC m=+0.111443695 container start 6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_maxwell, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:24:23 np0005603621 podman[429347]: 2026-01-31 09:24:23.629449469 +0000 UTC m=+0.114722976 container attach 6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:24:23 np0005603621 podman[429347]: 2026-01-31 09:24:23.534257586 +0000 UTC m=+0.019531103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:23 np0005603621 nova_compute[247399]: 2026-01-31 09:24:23.648 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]: {
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:    "0": [
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:        {
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "devices": [
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "/dev/loop3"
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            ],
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "lv_name": "ceph_lv0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "lv_size": "7511998464",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "name": "ceph_lv0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "tags": {
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.cluster_name": "ceph",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.crush_device_class": "",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.encrypted": "0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.osd_id": "0",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.type": "block",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:                "ceph.vdo": "0"
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            },
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "type": "block",
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:            "vg_name": "ceph_vg0"
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:        }
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]:    ]
Jan 31 04:24:24 np0005603621 boring_maxwell[429365]: }
Jan 31 04:24:24 np0005603621 systemd[1]: libpod-6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471.scope: Deactivated successfully.
Jan 31 04:24:24 np0005603621 podman[429347]: 2026-01-31 09:24:24.377688375 +0000 UTC m=+0.862961872 container died 6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_maxwell, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:24:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-612f0430ee5b42db7db0de7c6267e656903de191cd877a91ff3e58e52c4f38c7-merged.mount: Deactivated successfully.
Jan 31 04:24:24 np0005603621 podman[429347]: 2026-01-31 09:24:24.42037501 +0000 UTC m=+0.905648507 container remove 6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 31 04:24:24 np0005603621 systemd[1]: libpod-conmon-6dac819521249c41ef053b579d5b42838e22bae49583dfbd374a6f8eb111d471.scope: Deactivated successfully.
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.867799398 +0000 UTC m=+0.032611646 container create 529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 04:24:24 np0005603621 systemd[1]: Started libpod-conmon-529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068.scope.
Jan 31 04:24:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.916946493 +0000 UTC m=+0.081758741 container init 529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.92174541 +0000 UTC m=+0.086557658 container start 529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.924539946 +0000 UTC m=+0.089352224 container attach 529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:24:24 np0005603621 nice_meninsky[429547]: 167 167
Jan 31 04:24:24 np0005603621 systemd[1]: libpod-529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068.scope: Deactivated successfully.
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.925241828 +0000 UTC m=+0.090054076 container died 529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:24:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7c0c5029cf4913710f7bf883ab1defe5c60385adeb2e1baad09b1295b66069a1-merged.mount: Deactivated successfully.
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.853514037 +0000 UTC m=+0.018326305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:24 np0005603621 podman[429531]: 2026-01-31 09:24:24.955360086 +0000 UTC m=+0.120172334 container remove 529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_meninsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 31 04:24:24 np0005603621 systemd[1]: libpod-conmon-529b2ad96780402208fd68e8ceb62ad2cad99e44a83263ed569bc2ee8d5a8068.scope: Deactivated successfully.
Jan 31 04:24:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:24.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.057482282 +0000 UTC m=+0.033227914 container create f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:24:25 np0005603621 systemd[1]: Started libpod-conmon-f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865.scope.
Jan 31 04:24:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:24:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da485f136a032a18742f5933822c1b1d0ac0fa65867bf57c0a17285e48c268f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da485f136a032a18742f5933822c1b1d0ac0fa65867bf57c0a17285e48c268f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da485f136a032a18742f5933822c1b1d0ac0fa65867bf57c0a17285e48c268f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:25 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da485f136a032a18742f5933822c1b1d0ac0fa65867bf57c0a17285e48c268f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.128399468 +0000 UTC m=+0.104145110 container init f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.132849856 +0000 UTC m=+0.108595488 container start f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.135792296 +0000 UTC m=+0.111537948 container attach f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.042636156 +0000 UTC m=+0.018381808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:24:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:25.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]: {
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:        "osd_id": 0,
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:        "type": "bluestore"
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]:    }
Jan 31 04:24:25 np0005603621 infallible_feynman[429588]: }
Jan 31 04:24:25 np0005603621 systemd[1]: libpod-f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865.scope: Deactivated successfully.
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.878778291 +0000 UTC m=+0.854523923 container died f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:24:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7da485f136a032a18742f5933822c1b1d0ac0fa65867bf57c0a17285e48c268f-merged.mount: Deactivated successfully.
Jan 31 04:24:25 np0005603621 nova_compute[247399]: 2026-01-31 09:24:25.907 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:25 np0005603621 podman[429571]: 2026-01-31 09:24:25.927032778 +0000 UTC m=+0.902778410 container remove f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:24:25 np0005603621 systemd[1]: libpod-conmon-f3a747567ebdea5941e05bc8af8d3031c98e1d11a928154a0b56a03030991865.scope: Deactivated successfully.
Jan 31 04:24:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:24:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:24:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 32417b29-2596-48e8-aed2-50e3f9b6f114 does not exist
Jan 31 04:24:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b49df04d-744d-4ff0-9694-22a57de8cc4e does not exist
Jan 31 04:24:25 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1ff3b241-f7bb-454b-9865-925e33b995cd does not exist
Jan 31 04:24:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:26 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:24:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:27.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:28 np0005603621 nova_compute[247399]: 2026-01-31 09:24:28.649 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:28.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:29 np0005603621 podman[429724]: 2026-01-31 09:24:29.509545702 +0000 UTC m=+0.069415569 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Jan 31 04:24:29 np0005603621 podman[429725]: 2026-01-31 09:24:29.535475042 +0000 UTC m=+0.094697949 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller)
Jan 31 04:24:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:24:30.566 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:24:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:24:30.566 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:24:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:24:30.566 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:24:30 np0005603621 nova_compute[247399]: 2026-01-31 09:24:30.909 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:30.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:31 np0005603621 nova_compute[247399]: 2026-01-31 09:24:31.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:31.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:32.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:33.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:33 np0005603621 nova_compute[247399]: 2026-01-31 09:24:33.651 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:34.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:24:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:35.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:24:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:35 np0005603621 nova_compute[247399]: 2026-01-31 09:24:35.911 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:36 np0005603621 nova_compute[247399]: 2026-01-31 09:24:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:36.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:37.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:24:38 np0005603621 nova_compute[247399]: 2026-01-31 09:24:38.653 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:24:38
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups']
Jan 31 04:24:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:24:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:38.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:24:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:40 np0005603621 nova_compute[247399]: 2026-01-31 09:24:40.914 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.220 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.220 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.220 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.221 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.221 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:24:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:24:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1956430872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.610 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.726 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.728 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4021MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.728 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.729 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.820 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.820 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:24:42 np0005603621 nova_compute[247399]: 2026-01-31 09:24:42.839 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:24:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:24:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694175777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:24:43 np0005603621 nova_compute[247399]: 2026-01-31 09:24:43.239 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:24:43 np0005603621 nova_compute[247399]: 2026-01-31 09:24:43.243 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:24:43 np0005603621 nova_compute[247399]: 2026-01-31 09:24:43.257 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:24:43 np0005603621 nova_compute[247399]: 2026-01-31 09:24:43.259 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:24:43 np0005603621 nova_compute[247399]: 2026-01-31 09:24:43.259 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:24:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:43.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:43 np0005603621 nova_compute[247399]: 2026-01-31 09:24:43.698 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:44.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:45 np0005603621 nova_compute[247399]: 2026-01-31 09:24:45.955 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:47.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:48 np0005603621 nova_compute[247399]: 2026-01-31 09:24:48.260 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:48 np0005603621 nova_compute[247399]: 2026-01-31 09:24:48.260 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:24:48 np0005603621 nova_compute[247399]: 2026-01-31 09:24:48.260 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:24:48 np0005603621 nova_compute[247399]: 2026-01-31 09:24:48.283 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:24:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:48 np0005603621 nova_compute[247399]: 2026-01-31 09:24:48.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:49 np0005603621 nova_compute[247399]: 2026-01-31 09:24:49.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:49 np0005603621 nova_compute[247399]: 2026-01-31 09:24:49.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:49.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:24:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:24:50 np0005603621 nova_compute[247399]: 2026-01-31 09:24:50.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:24:50 np0005603621 nova_compute[247399]: 2026-01-31 09:24:50.992 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:51.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:24:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:53.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:24:53 np0005603621 nova_compute[247399]: 2026-01-31 09:24:53.740 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:55 np0005603621 nova_compute[247399]: 2026-01-31 09:24:55.993 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:57.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:57.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:24:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:24:58 np0005603621 nova_compute[247399]: 2026-01-31 09:24:58.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:24:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:24:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:24:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:24:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:24:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:24:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:00 np0005603621 podman[429882]: 2026-01-31 09:25:00.529821404 +0000 UTC m=+0.081346257 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 31 04:25:00 np0005603621 podman[429883]: 2026-01-31 09:25:00.536665695 +0000 UTC m=+0.086556118 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:25:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:01.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:01 np0005603621 nova_compute[247399]: 2026-01-31 09:25:01.038 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:03.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:03 np0005603621 nova_compute[247399]: 2026-01-31 09:25:03.795 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:05.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:05.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:06 np0005603621 nova_compute[247399]: 2026-01-31 09:25:06.060 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:07.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:07.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:25:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:25:08 np0005603621 nova_compute[247399]: 2026-01-31 09:25:08.795 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:09.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:09.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:11 np0005603621 nova_compute[247399]: 2026-01-31 09:25:11.092 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:11.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:13.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:13 np0005603621 nova_compute[247399]: 2026-01-31 09:25:13.796 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:25:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:15.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:25:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:16 np0005603621 nova_compute[247399]: 2026-01-31 09:25:16.094 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:18 np0005603621 nova_compute[247399]: 2026-01-31 09:25:18.800 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:21.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:21 np0005603621 nova_compute[247399]: 2026-01-31 09:25:21.096 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:21.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:25:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:25:23 np0005603621 nova_compute[247399]: 2026-01-31 09:25:23.801 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:25.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:26 np0005603621 nova_compute[247399]: 2026-01-31 09:25:26.141 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:25:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7c824372-c7b5-45fb-8954-a7030ffd0047 does not exist
Jan 31 04:25:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6617e9b5-ec1b-404a-b389-12e484ec7228 does not exist
Jan 31 04:25:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c2065d22-2a14-4bf1-9372-f689c793e746 does not exist
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:25:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:25:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:27 np0005603621 podman[430261]: 2026-01-31 09:25:27.952013963 +0000 UTC m=+0.093744051 container create 64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:25:27 np0005603621 podman[430261]: 2026-01-31 09:25:27.875294108 +0000 UTC m=+0.017024186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:25:28 np0005603621 systemd[1]: Started libpod-conmon-64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a.scope.
Jan 31 04:25:28 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:25:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:25:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:25:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:25:28 np0005603621 podman[430261]: 2026-01-31 09:25:28.123346813 +0000 UTC m=+0.265076901 container init 64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 04:25:28 np0005603621 podman[430261]: 2026-01-31 09:25:28.129695487 +0000 UTC m=+0.271425545 container start 64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:25:28 np0005603621 boring_raman[430278]: 167 167
Jan 31 04:25:28 np0005603621 systemd[1]: libpod-64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a.scope: Deactivated successfully.
Jan 31 04:25:28 np0005603621 podman[430261]: 2026-01-31 09:25:28.222810447 +0000 UTC m=+0.364540525 container attach 64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:25:28 np0005603621 podman[430261]: 2026-01-31 09:25:28.223126627 +0000 UTC m=+0.364856685 container died 64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 31 04:25:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:28 np0005603621 systemd[1]: var-lib-containers-storage-overlay-37e2f48238921cbdc5bc88078bb6e5ed1c5362a844debb7c13dc7dad2a03e14a-merged.mount: Deactivated successfully.
Jan 31 04:25:28 np0005603621 podman[430261]: 2026-01-31 09:25:28.666480909 +0000 UTC m=+0.808210967 container remove 64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_raman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:25:28 np0005603621 systemd[1]: libpod-conmon-64bbb5829cc4fa723f5201868e16ca857ba25e2778c870215a7493a13471447a.scope: Deactivated successfully.
Jan 31 04:25:28 np0005603621 nova_compute[247399]: 2026-01-31 09:25:28.802 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:28 np0005603621 podman[430323]: 2026-01-31 09:25:28.883345021 +0000 UTC m=+0.118719040 container create b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:25:28 np0005603621 podman[430323]: 2026-01-31 09:25:28.804748949 +0000 UTC m=+0.040122998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:25:28 np0005603621 systemd[1]: Started libpod-conmon-b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef.scope.
Jan 31 04:25:29 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:25:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38512e739af53da600956656a3ee33fddd73aa384dcf692ad7c7239aa428bea3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38512e739af53da600956656a3ee33fddd73aa384dcf692ad7c7239aa428bea3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38512e739af53da600956656a3ee33fddd73aa384dcf692ad7c7239aa428bea3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38512e739af53da600956656a3ee33fddd73aa384dcf692ad7c7239aa428bea3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:29 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38512e739af53da600956656a3ee33fddd73aa384dcf692ad7c7239aa428bea3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:29.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:29 np0005603621 podman[430323]: 2026-01-31 09:25:29.065081281 +0000 UTC m=+0.300455340 container init b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:25:29 np0005603621 podman[430323]: 2026-01-31 09:25:29.071165079 +0000 UTC m=+0.306539108 container start b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:25:29 np0005603621 podman[430323]: 2026-01-31 09:25:29.091301779 +0000 UTC m=+0.326675858 container attach b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:25:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:29.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:29 np0005603621 great_wescoff[430372]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:25:29 np0005603621 great_wescoff[430372]: --> relative data size: 1.0
Jan 31 04:25:29 np0005603621 great_wescoff[430372]: --> All data devices are unavailable
Jan 31 04:25:29 np0005603621 systemd[1]: libpod-b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef.scope: Deactivated successfully.
Jan 31 04:25:29 np0005603621 podman[430323]: 2026-01-31 09:25:29.876398851 +0000 UTC m=+1.111772880 container died b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:25:29 np0005603621 systemd[1]: var-lib-containers-storage-overlay-38512e739af53da600956656a3ee33fddd73aa384dcf692ad7c7239aa428bea3-merged.mount: Deactivated successfully.
Jan 31 04:25:29 np0005603621 podman[430323]: 2026-01-31 09:25:29.917895011 +0000 UTC m=+1.153269040 container remove b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:25:29 np0005603621 systemd[1]: libpod-conmon-b95a8b279d2a458c43d4c5e5b482ed70d1715f3ec0881d9c5557e36ae2a88bef.scope: Deactivated successfully.
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.384242731 +0000 UTC m=+0.031031918 container create 19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:25:30 np0005603621 systemd[1]: Started libpod-conmon-19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be.scope.
Jan 31 04:25:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.440825374 +0000 UTC m=+0.087614581 container init 19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.445508618 +0000 UTC m=+0.092297805 container start 19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ptolemy, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:25:30 np0005603621 elated_ptolemy[430558]: 167 167
Jan 31 04:25:30 np0005603621 systemd[1]: libpod-19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be.scope: Deactivated successfully.
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.449672447 +0000 UTC m=+0.096461634 container attach 19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.450076799 +0000 UTC m=+0.096865986 container died 19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ptolemy, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.371377465 +0000 UTC m=+0.018166672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:25:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-854cd6f4e3a02bd857312c1ebce983d802d1ee263251a31fc05ef52c187c5dca-merged.mount: Deactivated successfully.
Jan 31 04:25:30 np0005603621 podman[430541]: 2026-01-31 09:25:30.485652056 +0000 UTC m=+0.132441243 container remove 19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:25:30 np0005603621 systemd[1]: libpod-conmon-19c0944dd0ef3184965659f8abb26a4ab95f3b7bdebb2701e67a96f94d36d5be.scope: Deactivated successfully.
Jan 31 04:25:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:25:30.568 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:25:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:25:30.570 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:25:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:25:30.570 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:25:30 np0005603621 podman[430583]: 2026-01-31 09:25:30.596083398 +0000 UTC m=+0.024088563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:25:30 np0005603621 podman[430583]: 2026-01-31 09:25:30.701490117 +0000 UTC m=+0.129495292 container create 686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:25:30 np0005603621 systemd[1]: Started libpod-conmon-686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd.scope.
Jan 31 04:25:30 np0005603621 podman[430597]: 2026-01-31 09:25:30.926434348 +0000 UTC m=+0.184832797 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:25:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb71c0fa9745a1acb872f66e5fec282c91048944358b36dc97c2884bef45c5bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb71c0fa9745a1acb872f66e5fec282c91048944358b36dc97c2884bef45c5bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb71c0fa9745a1acb872f66e5fec282c91048944358b36dc97c2884bef45c5bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb71c0fa9745a1acb872f66e5fec282c91048944358b36dc97c2884bef45c5bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:31.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:31 np0005603621 podman[430583]: 2026-01-31 09:25:31.090507184 +0000 UTC m=+0.518512339 container init 686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 04:25:31 np0005603621 podman[430598]: 2026-01-31 09:25:31.097729826 +0000 UTC m=+0.356204947 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:25:31 np0005603621 podman[430583]: 2026-01-31 09:25:31.097803409 +0000 UTC m=+0.525808544 container start 686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:25:31 np0005603621 podman[430583]: 2026-01-31 09:25:31.141427582 +0000 UTC m=+0.569432717 container attach 686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:25:31 np0005603621 nova_compute[247399]: 2026-01-31 09:25:31.143 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:31.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]: {
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:    "0": [
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:        {
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "devices": [
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "/dev/loop3"
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            ],
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "lv_name": "ceph_lv0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "lv_size": "7511998464",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "name": "ceph_lv0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "tags": {
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.cluster_name": "ceph",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.crush_device_class": "",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.encrypted": "0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.osd_id": "0",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.type": "block",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:                "ceph.vdo": "0"
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            },
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "type": "block",
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:            "vg_name": "ceph_vg0"
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:        }
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]:    ]
Jan 31 04:25:31 np0005603621 peaceful_beaver[430628]: }
Jan 31 04:25:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:31 np0005603621 systemd[1]: libpod-686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd.scope: Deactivated successfully.
Jan 31 04:25:31 np0005603621 podman[430583]: 2026-01-31 09:25:31.811302765 +0000 UTC m=+1.239307900 container died 686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:25:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-fb71c0fa9745a1acb872f66e5fec282c91048944358b36dc97c2884bef45c5bb-merged.mount: Deactivated successfully.
Jan 31 04:25:32 np0005603621 podman[430583]: 2026-01-31 09:25:32.000793183 +0000 UTC m=+1.428798318 container remove 686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:25:32 np0005603621 systemd[1]: libpod-conmon-686d3b372ff178b1a437b3f505cf070d3b4e8781e9b36f422ce7d64d8e8b26cd.scope: Deactivated successfully.
Jan 31 04:25:32 np0005603621 nova_compute[247399]: 2026-01-31 09:25:32.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:32 np0005603621 podman[430804]: 2026-01-31 09:25:32.535365317 +0000 UTC m=+0.045768992 container create 6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:25:32 np0005603621 systemd[1]: Started libpod-conmon-6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c.scope.
Jan 31 04:25:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:25:32 np0005603621 podman[430804]: 2026-01-31 09:25:32.517108244 +0000 UTC m=+0.027511949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:25:32 np0005603621 podman[430804]: 2026-01-31 09:25:32.755476619 +0000 UTC m=+0.265880384 container init 6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_pasteur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 31 04:25:32 np0005603621 podman[430804]: 2026-01-31 09:25:32.761241367 +0000 UTC m=+0.271645042 container start 6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 31 04:25:32 np0005603621 systemd[1]: libpod-6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c.scope: Deactivated successfully.
Jan 31 04:25:32 np0005603621 romantic_pasteur[430820]: 167 167
Jan 31 04:25:32 np0005603621 podman[430804]: 2026-01-31 09:25:32.878855671 +0000 UTC m=+0.389259376 container attach 6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 04:25:32 np0005603621 podman[430804]: 2026-01-31 09:25:32.879438109 +0000 UTC m=+0.389841784 container died 6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 04:25:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-0723cc7b4375923df0e22cd8fc9e80354db7dac58adbb0a315bf57b4e48f5151-merged.mount: Deactivated successfully.
Jan 31 04:25:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:33.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:33 np0005603621 podman[430804]: 2026-01-31 09:25:33.2000887 +0000 UTC m=+0.710492375 container remove 6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_pasteur, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 31 04:25:33 np0005603621 systemd[1]: libpod-conmon-6c2a3591a36f6da63ccb6d577730ee5354a9aae0a1258783ae440eaafa2c1d1c.scope: Deactivated successfully.
Jan 31 04:25:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:33 np0005603621 podman[430846]: 2026-01-31 09:25:33.379726925 +0000 UTC m=+0.090210330 container create b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:25:33 np0005603621 podman[430846]: 2026-01-31 09:25:33.3107277 +0000 UTC m=+0.021211125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:25:33 np0005603621 systemd[1]: Started libpod-conmon-b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b.scope.
Jan 31 04:25:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:25:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d66f0715bb7efc27977a2629a1bf4c05eeb2366074dc43b887793c5a351448f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:33.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d66f0715bb7efc27977a2629a1bf4c05eeb2366074dc43b887793c5a351448f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d66f0715bb7efc27977a2629a1bf4c05eeb2366074dc43b887793c5a351448f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:33 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d66f0715bb7efc27977a2629a1bf4c05eeb2366074dc43b887793c5a351448f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:25:33 np0005603621 podman[430846]: 2026-01-31 09:25:33.520723821 +0000 UTC m=+0.231207236 container init b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ganguly, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:25:33 np0005603621 podman[430846]: 2026-01-31 09:25:33.526184939 +0000 UTC m=+0.236668344 container start b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:25:33 np0005603621 podman[430846]: 2026-01-31 09:25:33.635681623 +0000 UTC m=+0.346165038 container attach b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:25:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:33 np0005603621 nova_compute[247399]: 2026-01-31 09:25:33.844 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]: {
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:        "osd_id": 0,
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:        "type": "bluestore"
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]:    }
Jan 31 04:25:34 np0005603621 priceless_ganguly[430863]: }
Jan 31 04:25:34 np0005603621 systemd[1]: libpod-b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b.scope: Deactivated successfully.
Jan 31 04:25:34 np0005603621 conmon[430863]: conmon b483cce41c12d018bdbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b.scope/container/memory.events
Jan 31 04:25:34 np0005603621 podman[430885]: 2026-01-31 09:25:34.364041757 +0000 UTC m=+0.023818725 container died b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 04:25:34 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8d66f0715bb7efc27977a2629a1bf4c05eeb2366074dc43b887793c5a351448f-merged.mount: Deactivated successfully.
Jan 31 04:25:34 np0005603621 podman[430885]: 2026-01-31 09:25:34.573576964 +0000 UTC m=+0.233353932 container remove b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:25:34 np0005603621 systemd[1]: libpod-conmon-b483cce41c12d018bdbdcd2abe50aabd6084475c30929f20eee0e14e53ead71b.scope: Deactivated successfully.
Jan 31 04:25:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:25:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:25:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:25:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:25:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 37621905-ef89-48f0-9c41-01d8ad664de9 does not exist
Jan 31 04:25:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d1d70dfc-850b-481b-8aee-f4f8bb1d95ab does not exist
Jan 31 04:25:34 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev f75b02c4-36fe-4b81-962e-c0d68a47f422 does not exist
Jan 31 04:25:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:25:35 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:25:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:35.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:36 np0005603621 nova_compute[247399]: 2026-01-31 09:25:36.191 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:37.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:37 np0005603621 nova_compute[247399]: 2026-01-31 09:25:37.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:25:38
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 31 04:25:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:25:38 np0005603621 nova_compute[247399]: 2026-01-31 09:25:38.845 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:39.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:25:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:39.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:41.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:41 np0005603621 nova_compute[247399]: 2026-01-31 09:25:41.193 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:41.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:42 np0005603621 nova_compute[247399]: 2026-01-31 09:25:42.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:42 np0005603621 nova_compute[247399]: 2026-01-31 09:25:42.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:25:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:43.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:43 np0005603621 nova_compute[247399]: 2026-01-31 09:25:43.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:43.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:43 np0005603621 nova_compute[247399]: 2026-01-31 09:25:43.847 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.253 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.253 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.253 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.253 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.253 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:25:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:25:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136762209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.641 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.798 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.800 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.800 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:25:44 np0005603621 nova_compute[247399]: 2026-01-31 09:25:44.800 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:25:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:45.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.442 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.443 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:25:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:45.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.676 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:25:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.894 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.895 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.924 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:25:45 np0005603621 nova_compute[247399]: 2026-01-31 09:25:45.984 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.070 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.194 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:25:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740045420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.520 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.525 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.579 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.581 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:25:46 np0005603621 nova_compute[247399]: 2026-01-31 09:25:46.581 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:25:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:47.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:47.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:48 np0005603621 nova_compute[247399]: 2026-01-31 09:25:48.885 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:49.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:49.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:25:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:25:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.197 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:51.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.582 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.583 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.583 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.640 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.641 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.641 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:51 np0005603621 nova_compute[247399]: 2026-01-31 09:25:51.642 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:25:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:52 np0005603621 nova_compute[247399]: 2026-01-31 09:25:52.510 247403 DEBUG oslo_concurrency.processutils [None req-3fbad158-704d-4546-a900-43e27b0e5be8 94836483675641d9846c5768c3b91eed 89e274acfc5c4097be7194f5ef1fabd3 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:25:52 np0005603621 nova_compute[247399]: 2026-01-31 09:25:52.533 247403 DEBUG oslo_concurrency.processutils [None req-3fbad158-704d-4546-a900-43e27b0e5be8 94836483675641d9846c5768c3b91eed 89e274acfc5c4097be7194f5ef1fabd3 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:25:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:53.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:53.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:53 np0005603621 nova_compute[247399]: 2026-01-31 09:25:53.887 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:55.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:56 np0005603621 nova_compute[247399]: 2026-01-31 09:25:56.199 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:57.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:25:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:57.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:25:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:25:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:25:58 np0005603621 nova_compute[247399]: 2026-01-31 09:25:58.930 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:25:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:25:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:25:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:25:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:25:59.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:25:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:00 np0005603621 nova_compute[247399]: 2026-01-31 09:26:00.254 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:00 np0005603621 nova_compute[247399]: 2026-01-31 09:26:00.511 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:26:00.511 159734 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '96:be:2a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '82:75:fe:8f:a4:91'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 31 04:26:00 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:26:00.513 159734 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 31 04:26:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:01.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:01 np0005603621 nova_compute[247399]: 2026-01-31 09:26:01.200 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:01 np0005603621 podman[431058]: 2026-01-31 09:26:01.494683423 +0000 UTC m=+0.049262378 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Jan 31 04:26:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:01.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:01 np0005603621 podman[431059]: 2026-01-31 09:26:01.528764724 +0000 UTC m=+0.078369646 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller)
Jan 31 04:26:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:03.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:03.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:04 np0005603621 nova_compute[247399]: 2026-01-31 09:26:04.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:05.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:05.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:06 np0005603621 nova_compute[247399]: 2026-01-31 09:26:06.232 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:07.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:07.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:08 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:26:08.514 159734 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=59a8b96c-18d5-4426-968c-99837b56953c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 31 04:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:26:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:26:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:09.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:09 np0005603621 nova_compute[247399]: 2026-01-31 09:26:09.325 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:09.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:11.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:11 np0005603621 nova_compute[247399]: 2026-01-31 09:26:11.233 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:13.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:14 np0005603621 nova_compute[247399]: 2026-01-31 09:26:14.327 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:26:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432075883' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:26:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:26:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432075883' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:26:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:15.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:16 np0005603621 nova_compute[247399]: 2026-01-31 09:26:16.235 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:17.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:19.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:19 np0005603621 nova_compute[247399]: 2026-01-31 09:26:19.329 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:19.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:21.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:21 np0005603621 nova_compute[247399]: 2026-01-31 09:26:21.237 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:23.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:23.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:24 np0005603621 nova_compute[247399]: 2026-01-31 09:26:24.331 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:25.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:26 np0005603621 nova_compute[247399]: 2026-01-31 09:26:26.240 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:27.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:29 np0005603621 nova_compute[247399]: 2026-01-31 09:26:29.332 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:26:30.570 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:26:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:26:30.571 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:26:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:26:30.571 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:26:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:31.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:31 np0005603621 nova_compute[247399]: 2026-01-31 09:26:31.242 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:31.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4202: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:32 np0005603621 podman[431223]: 2026-01-31 09:26:32.483499735 +0000 UTC m=+0.044516682 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:26:32 np0005603621 podman[431224]: 2026-01-31 09:26:32.508221167 +0000 UTC m=+0.066751318 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:26:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:33.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:33 np0005603621 nova_compute[247399]: 2026-01-31 09:26:33.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:33.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:34 np0005603621 nova_compute[247399]: 2026-01-31 09:26:34.334 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:35.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:26:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:35.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 nova_compute[247399]: 2026-01-31 09:26:36.281 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d3f680ea-f8fc-413b-97f5-cf7894f125cc does not exist
Jan 31 04:26:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7aeb33d7-543b-42c8-bb54-cfd44158204d does not exist
Jan 31 04:26:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 126fadf6-24c0-45ad-a786-ae1fe34757e8 does not exist
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:26:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:26:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:37.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:37 np0005603621 podman[431663]: 2026-01-31 09:26:37.366530735 +0000 UTC m=+0.018948125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:26:37 np0005603621 podman[431663]: 2026-01-31 09:26:37.48384315 +0000 UTC m=+0.136260500 container create 605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:26:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:37 np0005603621 systemd[1]: Started libpod-conmon-605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357.scope.
Jan 31 04:26:37 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:26:37 np0005603621 podman[431663]: 2026-01-31 09:26:37.696522223 +0000 UTC m=+0.348939593 container init 605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_yalow, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:26:37 np0005603621 podman[431663]: 2026-01-31 09:26:37.703873681 +0000 UTC m=+0.356291031 container start 605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_yalow, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:26:37 np0005603621 wonderful_yalow[431680]: 167 167
Jan 31 04:26:37 np0005603621 systemd[1]: libpod-605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357.scope: Deactivated successfully.
Jan 31 04:26:37 np0005603621 podman[431663]: 2026-01-31 09:26:37.819030488 +0000 UTC m=+0.471447838 container attach 605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_yalow, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:26:37 np0005603621 podman[431663]: 2026-01-31 09:26:37.820192624 +0000 UTC m=+0.472609974 container died 605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:26:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 31 04:26:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:26:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:37 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:26:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-a8ad1830bba572cef3d63ea3cb20526090b70df617f34c5c4137f3ce54a426d8-merged.mount: Deactivated successfully.
Jan 31 04:26:38 np0005603621 nova_compute[247399]: 2026-01-31 09:26:38.206 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:38 np0005603621 podman[431663]: 2026-01-31 09:26:38.322893074 +0000 UTC m=+0.975310424 container remove 605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:26:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:38 np0005603621 systemd[1]: libpod-conmon-605ed960e7da3b35493535c1d97ddecb5a0b33750e4f5d83f4e9cff2a4c83357.scope: Deactivated successfully.
Jan 31 04:26:38 np0005603621 podman[431706]: 2026-01-31 09:26:38.487174497 +0000 UTC m=+0.087292241 container create d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:26:38 np0005603621 podman[431706]: 2026-01-31 09:26:38.421974748 +0000 UTC m=+0.022092522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:26:38 np0005603621 systemd[1]: Started libpod-conmon-d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891.scope.
Jan 31 04:26:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:26:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752bffbccbfc53ac7199386f40f6b87a921d9875b3e26b7b664bc310f45af0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752bffbccbfc53ac7199386f40f6b87a921d9875b3e26b7b664bc310f45af0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752bffbccbfc53ac7199386f40f6b87a921d9875b3e26b7b664bc310f45af0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752bffbccbfc53ac7199386f40f6b87a921d9875b3e26b7b664bc310f45af0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d752bffbccbfc53ac7199386f40f6b87a921d9875b3e26b7b664bc310f45af0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:26:38 np0005603621 podman[431706]: 2026-01-31 09:26:38.702318147 +0000 UTC m=+0.302435901 container init d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:26:38 np0005603621 podman[431706]: 2026-01-31 09:26:38.707429744 +0000 UTC m=+0.307547488 container start d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:26:38
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Jan 31 04:26:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:26:38 np0005603621 podman[431706]: 2026-01-31 09:26:38.74917408 +0000 UTC m=+0.349291844 container attach d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:26:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:39.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:26:39 np0005603621 nova_compute[247399]: 2026-01-31 09:26:39.336 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:39 np0005603621 brave_napier[431722]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:26:39 np0005603621 brave_napier[431722]: --> relative data size: 1.0
Jan 31 04:26:39 np0005603621 brave_napier[431722]: --> All data devices are unavailable
Jan 31 04:26:39 np0005603621 systemd[1]: libpod-d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891.scope: Deactivated successfully.
Jan 31 04:26:39 np0005603621 podman[431706]: 2026-01-31 09:26:39.468180256 +0000 UTC m=+1.068298000 container died d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:26:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d752bffbccbfc53ac7199386f40f6b87a921d9875b3e26b7b664bc310f45af0c-merged.mount: Deactivated successfully.
Jan 31 04:26:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:39 np0005603621 podman[431706]: 2026-01-31 09:26:39.593644312 +0000 UTC m=+1.193762056 container remove d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_napier, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:26:39 np0005603621 systemd[1]: libpod-conmon-d736497fceb7966a5bba461ebb7b32f2e58a79a9c0e08a31306b7fd4d7fe6891.scope: Deactivated successfully.
Jan 31 04:26:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.143451415 +0000 UTC m=+0.085703652 container create 6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.079478653 +0000 UTC m=+0.021730920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:26:40 np0005603621 systemd[1]: Started libpod-conmon-6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9.scope.
Jan 31 04:26:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.219943842 +0000 UTC m=+0.162196089 container init 6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.225439561 +0000 UTC m=+0.167691798 container start 6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.228349671 +0000 UTC m=+0.170601938 container attach 6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:26:40 np0005603621 musing_elgamal[431908]: 167 167
Jan 31 04:26:40 np0005603621 systemd[1]: libpod-6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9.scope: Deactivated successfully.
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.22929708 +0000 UTC m=+0.171549317 container died 6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 31 04:26:40 np0005603621 systemd[1]: var-lib-containers-storage-overlay-ba7fd18c158c9af122f1ccbe6ccf9c75e136493c97ff7080e7eba2fa1b14076e-merged.mount: Deactivated successfully.
Jan 31 04:26:40 np0005603621 podman[431891]: 2026-01-31 09:26:40.260013546 +0000 UTC m=+0.202265783 container remove 6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:26:40 np0005603621 systemd[1]: libpod-conmon-6ed6a7b0d9ab9dded048b459e6af8aadb2c9881fcc633afe60198dad40c51aa9.scope: Deactivated successfully.
Jan 31 04:26:40 np0005603621 podman[431935]: 2026-01-31 09:26:40.376820876 +0000 UTC m=+0.035100063 container create cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:26:40 np0005603621 systemd[1]: Started libpod-conmon-cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe.scope.
Jan 31 04:26:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:26:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21da9359d0fca35a6baae25ae7c4f9357e11f2f615afe51324086adf136001fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21da9359d0fca35a6baae25ae7c4f9357e11f2f615afe51324086adf136001fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21da9359d0fca35a6baae25ae7c4f9357e11f2f615afe51324086adf136001fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21da9359d0fca35a6baae25ae7c4f9357e11f2f615afe51324086adf136001fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:40 np0005603621 podman[431935]: 2026-01-31 09:26:40.451341142 +0000 UTC m=+0.109620339 container init cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 04:26:40 np0005603621 podman[431935]: 2026-01-31 09:26:40.45807104 +0000 UTC m=+0.116350217 container start cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_booth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 31 04:26:40 np0005603621 podman[431935]: 2026-01-31 09:26:40.361502594 +0000 UTC m=+0.019781771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:26:40 np0005603621 podman[431935]: 2026-01-31 09:26:40.461298139 +0000 UTC m=+0.119577396 container attach cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:26:41 np0005603621 epic_booth[431951]: {
Jan 31 04:26:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:41.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:41 np0005603621 epic_booth[431951]:    "0": [
Jan 31 04:26:41 np0005603621 epic_booth[431951]:        {
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "devices": [
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "/dev/loop3"
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            ],
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "lv_name": "ceph_lv0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "lv_size": "7511998464",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "name": "ceph_lv0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "tags": {
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.cluster_name": "ceph",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.crush_device_class": "",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.encrypted": "0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.osd_id": "0",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.type": "block",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:                "ceph.vdo": "0"
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            },
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "type": "block",
Jan 31 04:26:41 np0005603621 epic_booth[431951]:            "vg_name": "ceph_vg0"
Jan 31 04:26:41 np0005603621 epic_booth[431951]:        }
Jan 31 04:26:41 np0005603621 epic_booth[431951]:    ]
Jan 31 04:26:41 np0005603621 epic_booth[431951]: }
Jan 31 04:26:41 np0005603621 systemd[1]: libpod-cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe.scope: Deactivated successfully.
Jan 31 04:26:41 np0005603621 podman[431935]: 2026-01-31 09:26:41.151459886 +0000 UTC m=+0.809739053 container died cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:26:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-21da9359d0fca35a6baae25ae7c4f9357e11f2f615afe51324086adf136001fd-merged.mount: Deactivated successfully.
Jan 31 04:26:41 np0005603621 podman[431935]: 2026-01-31 09:26:41.198543887 +0000 UTC m=+0.856823064 container remove cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_booth, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 04:26:41 np0005603621 systemd[1]: libpod-conmon-cd93f4cd3f85135252bc7ec07f2c029c129fda1b56a7a0a4557bd0a930e4dfbe.scope: Deactivated successfully.
Jan 31 04:26:41 np0005603621 nova_compute[247399]: 2026-01-31 09:26:41.283 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:41.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:41 np0005603621 podman[432113]: 2026-01-31 09:26:41.789087654 +0000 UTC m=+0.102626773 container create f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:26:41 np0005603621 podman[432113]: 2026-01-31 09:26:41.716111125 +0000 UTC m=+0.029650244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:26:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:41 np0005603621 systemd[1]: Started libpod-conmon-f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955.scope.
Jan 31 04:26:41 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:26:41 np0005603621 podman[432113]: 2026-01-31 09:26:41.962289691 +0000 UTC m=+0.275828800 container init f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:26:41 np0005603621 podman[432113]: 2026-01-31 09:26:41.968742021 +0000 UTC m=+0.282281110 container start f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 31 04:26:41 np0005603621 epic_rosalind[432129]: 167 167
Jan 31 04:26:41 np0005603621 systemd[1]: libpod-f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955.scope: Deactivated successfully.
Jan 31 04:26:42 np0005603621 podman[432113]: 2026-01-31 09:26:42.046886348 +0000 UTC m=+0.360425427 container attach f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:26:42 np0005603621 podman[432113]: 2026-01-31 09:26:42.047746215 +0000 UTC m=+0.361285314 container died f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:26:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-3e98bae6a5f618c2c0d3fd54529c8ba49d017ff4e0285fcd0e183cecd6a4b4e0-merged.mount: Deactivated successfully.
Jan 31 04:26:42 np0005603621 podman[432113]: 2026-01-31 09:26:42.609208236 +0000 UTC m=+0.922747315 container remove f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_rosalind, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:26:42 np0005603621 systemd[1]: libpod-conmon-f585226d791ea8fef6539cd40b1dba145b8594f4000be1baa02068deff3e8955.scope: Deactivated successfully.
Jan 31 04:26:42 np0005603621 podman[432153]: 2026-01-31 09:26:42.778897766 +0000 UTC m=+0.054616405 container create d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:26:42 np0005603621 systemd[1]: Started libpod-conmon-d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791.scope.
Jan 31 04:26:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:26:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb2cbfa8914f8072428660a19fc6027cd590bf4b8fa670098ff0fe8ac9318ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb2cbfa8914f8072428660a19fc6027cd590bf4b8fa670098ff0fe8ac9318ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb2cbfa8914f8072428660a19fc6027cd590bf4b8fa670098ff0fe8ac9318ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb2cbfa8914f8072428660a19fc6027cd590bf4b8fa670098ff0fe8ac9318ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:26:42 np0005603621 podman[432153]: 2026-01-31 09:26:42.754134043 +0000 UTC m=+0.029852742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:26:42 np0005603621 podman[432153]: 2026-01-31 09:26:42.862824402 +0000 UTC m=+0.138543031 container init d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 31 04:26:42 np0005603621 podman[432153]: 2026-01-31 09:26:42.870197539 +0000 UTC m=+0.145916148 container start d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 04:26:42 np0005603621 podman[432153]: 2026-01-31 09:26:42.874897713 +0000 UTC m=+0.150616322 container attach d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:26:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:43.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]: {
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:        "osd_id": 0,
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:        "type": "bluestore"
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]:    }
Jan 31 04:26:43 np0005603621 elegant_faraday[432170]: }
Jan 31 04:26:43 np0005603621 systemd[1]: libpod-d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791.scope: Deactivated successfully.
Jan 31 04:26:43 np0005603621 podman[432153]: 2026-01-31 09:26:43.652450334 +0000 UTC m=+0.928168943 container died d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:26:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4fb2cbfa8914f8072428660a19fc6027cd590bf4b8fa670098ff0fe8ac9318ea-merged.mount: Deactivated successfully.
Jan 31 04:26:43 np0005603621 podman[432153]: 2026-01-31 09:26:43.691873289 +0000 UTC m=+0.967591888 container remove d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 04:26:43 np0005603621 systemd[1]: libpod-conmon-d49bdf0cf32e80baeb07d658d4b3c0fff8572a0e16505840ac0014a2bfd15791.scope: Deactivated successfully.
Jan 31 04:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:26:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:26:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d42ee54e-b058-4bd1-8541-bf4eb4998413 does not exist
Jan 31 04:26:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 74e5d053-c284-47e8-b815-5e1cfaa7dbf1 does not exist
Jan 31 04:26:43 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3255ec67-990b-4cf1-92c4-7c810f5ea963 does not exist
Jan 31 04:26:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:44 np0005603621 nova_compute[247399]: 2026-01-31 09:26:44.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:44 np0005603621 nova_compute[247399]: 2026-01-31 09:26:44.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:44 np0005603621 nova_compute[247399]: 2026-01-31 09:26:44.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:26:44 np0005603621 nova_compute[247399]: 2026-01-31 09:26:44.339 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:44 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:26:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:45.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.287 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.288 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.288 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.288 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.288 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.307 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:46 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:26:46 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971905874' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.686 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.815 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.816 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4021MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.816 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.816 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.893 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.893 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:26:46 np0005603621 nova_compute[247399]: 2026-01-31 09:26:46.910 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:26:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:47.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:26:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417016882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:26:47 np0005603621 nova_compute[247399]: 2026-01-31 09:26:47.312 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:26:47 np0005603621 nova_compute[247399]: 2026-01-31 09:26:47.315 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:26:47 np0005603621 nova_compute[247399]: 2026-01-31 09:26:47.334 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:26:47 np0005603621 nova_compute[247399]: 2026-01-31 09:26:47.335 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:26:47 np0005603621 nova_compute[247399]: 2026-01-31 09:26:47.336 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:26:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:26:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:47.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:26:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:49 np0005603621 nova_compute[247399]: 2026-01-31 09:26:49.341 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:26:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:26:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:51.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:51 np0005603621 nova_compute[247399]: 2026-01-31 09:26:51.309 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:51 np0005603621 nova_compute[247399]: 2026-01-31 09:26:51.336 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:51 np0005603621 nova_compute[247399]: 2026-01-31 09:26:51.336 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:26:51 np0005603621 nova_compute[247399]: 2026-01-31 09:26:51.336 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:26:51 np0005603621 nova_compute[247399]: 2026-01-31 09:26:51.408 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:26:51 np0005603621 nova_compute[247399]: 2026-01-31 09:26:51.409 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:52 np0005603621 nova_compute[247399]: 2026-01-31 09:26:52.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:52 np0005603621 nova_compute[247399]: 2026-01-31 09:26:52.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.306772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851612306807, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2106, "num_deletes": 251, "total_data_size": 4067871, "memory_usage": 4127472, "flush_reason": "Manual Compaction"}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851612350332, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 3939867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90381, "largest_seqno": 92486, "table_properties": {"data_size": 3930184, "index_size": 6176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19324, "raw_average_key_size": 20, "raw_value_size": 3911098, "raw_average_value_size": 4108, "num_data_blocks": 270, "num_entries": 952, "num_filter_entries": 952, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851397, "oldest_key_time": 1769851397, "file_creation_time": 1769851612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 43601 microseconds, and 5369 cpu microseconds.
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.350371) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 3939867 bytes OK
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.350390) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.351865) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.351877) EVENT_LOG_v1 {"time_micros": 1769851612351873, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.351895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 4059286, prev total WAL file size 4059286, number of live WAL files 2.
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.352428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(3847KB)], [209(12MB)]
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851612352478, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 17159537, "oldest_snapshot_seqno": -1}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 12254 keys, 15188238 bytes, temperature: kUnknown
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851612461304, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 15188238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15109896, "index_size": 46613, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30661, "raw_key_size": 323268, "raw_average_key_size": 26, "raw_value_size": 14897059, "raw_average_value_size": 1215, "num_data_blocks": 1775, "num_entries": 12254, "num_filter_entries": 12254, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.461619) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 15188238 bytes
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.470206) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.6 rd, 139.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.6 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(8.2) write-amplify(3.9) OK, records in: 12773, records dropped: 519 output_compression: NoCompression
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.470249) EVENT_LOG_v1 {"time_micros": 1769851612470230, "job": 132, "event": "compaction_finished", "compaction_time_micros": 108894, "compaction_time_cpu_micros": 26088, "output_level": 6, "num_output_files": 1, "total_output_size": 15188238, "num_input_records": 12773, "num_output_records": 12254, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851612471036, "job": 132, "event": "table_file_deletion", "file_number": 211}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851612473316, "job": 132, "event": "table_file_deletion", "file_number": 209}
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.352345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.473410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.473417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.473420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.473424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:26:52 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:26:52.473427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:26:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:53.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:53.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:54 np0005603621 nova_compute[247399]: 2026-01-31 09:26:54.344 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:55.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:55.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:56 np0005603621 nova_compute[247399]: 2026-01-31 09:26:56.311 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:57.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:26:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:57.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:26:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:26:58 np0005603621 nova_compute[247399]: 2026-01-31 09:26:58.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:58 np0005603621 nova_compute[247399]: 2026-01-31 09:26:58.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:26:58 np0005603621 nova_compute[247399]: 2026-01-31 09:26:58.217 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:26:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:26:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:26:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:59 np0005603621 nova_compute[247399]: 2026-01-31 09:26:59.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:26:59 np0005603621 nova_compute[247399]: 2026-01-31 09:26:59.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:26:59 np0005603621 nova_compute[247399]: 2026-01-31 09:26:59.345 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:26:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:26:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:26:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:26:59.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:26:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:01.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:01 np0005603621 nova_compute[247399]: 2026-01-31 09:27:01.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:01.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:03 np0005603621 podman[432356]: 2026-01-31 09:27:03.489455357 +0000 UTC m=+0.046283948 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:27:03 np0005603621 podman[432357]: 2026-01-31 09:27:03.540571152 +0000 UTC m=+0.097112184 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 04:27:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:03.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:04 np0005603621 nova_compute[247399]: 2026-01-31 09:27:04.349 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:05.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:05.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:06 np0005603621 nova_compute[247399]: 2026-01-31 09:27:06.318 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:07.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:27:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:27:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:09 np0005603621 nova_compute[247399]: 2026-01-31 09:27:09.350 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:09.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:11.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:11 np0005603621 nova_compute[247399]: 2026-01-31 09:27:11.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:11.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:13.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:14 np0005603621 nova_compute[247399]: 2026-01-31 09:27:14.351 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:15.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:16 np0005603621 nova_compute[247399]: 2026-01-31 09:27:16.321 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:17.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:17.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:19.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:19 np0005603621 nova_compute[247399]: 2026-01-31 09:27:19.353 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:27:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:19.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:27:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4226: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:21.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:21 np0005603621 nova_compute[247399]: 2026-01-31 09:27:21.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:21.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:23 np0005603621 nova_compute[247399]: 2026-01-31 09:27:23.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:23.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:24 np0005603621 nova_compute[247399]: 2026-01-31 09:27:24.355 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:27:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:25.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:27:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:26 np0005603621 nova_compute[247399]: 2026-01-31 09:27:26.326 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:27.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:29.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:29 np0005603621 nova_compute[247399]: 2026-01-31 09:27:29.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:29.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:27:30.571 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:27:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:27:30.572 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:27:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:27:30.572 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:27:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:31.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:31 np0005603621 nova_compute[247399]: 2026-01-31 09:27:31.328 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:31.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:33.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:33.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4233: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:34 np0005603621 nova_compute[247399]: 2026-01-31 09:27:34.358 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:34 np0005603621 podman[432516]: 2026-01-31 09:27:34.485513074 +0000 UTC m=+0.047012480 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true)
Jan 31 04:27:34 np0005603621 podman[432517]: 2026-01-31 09:27:34.512151505 +0000 UTC m=+0.070658178 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:27:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:35.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:36 np0005603621 nova_compute[247399]: 2026-01-31 09:27:36.330 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:37 np0005603621 nova_compute[247399]: 2026-01-31 09:27:37.332 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:37.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:27:38
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Jan 31 04:27:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:27:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:39.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:27:39 np0005603621 nova_compute[247399]: 2026-01-31 09:27:39.360 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:39.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:40 np0005603621 nova_compute[247399]: 2026-01-31 09:27:40.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:41 np0005603621 nova_compute[247399]: 2026-01-31 09:27:41.332 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:41.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4237: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:43.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:43.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:44 np0005603621 nova_compute[247399]: 2026-01-31 09:27:44.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:44 np0005603621 nova_compute[247399]: 2026-01-31 09:27:44.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:27:44 np0005603621 nova_compute[247399]: 2026-01-31 09:27:44.361 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:27:44 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3601dbe4-6348-4fb0-960e-25cc5255d54c does not exist
Jan 31 04:27:44 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3a82d808-b59b-4fee-ab3e-f3e736c57659 does not exist
Jan 31 04:27:44 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 102a3dcd-3fae-474e-b3ed-e452970c9605 does not exist
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:27:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:27:45 np0005603621 nova_compute[247399]: 2026-01-31 09:27:45.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.23313898 +0000 UTC m=+0.038467436 container create 93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:27:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:27:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:27:45 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:27:45 np0005603621 systemd[1]: Started libpod-conmon-93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a.scope.
Jan 31 04:27:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.212155914 +0000 UTC m=+0.017484400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.312896448 +0000 UTC m=+0.118224944 container init 93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_vaughan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.317874162 +0000 UTC m=+0.123202628 container start 93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.321604317 +0000 UTC m=+0.126932783 container attach 93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_vaughan, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 04:27:45 np0005603621 goofy_vaughan[432854]: 167 167
Jan 31 04:27:45 np0005603621 systemd[1]: libpod-93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a.scope: Deactivated successfully.
Jan 31 04:27:45 np0005603621 conmon[432854]: conmon 93def75bde5119908820 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a.scope/container/memory.events
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.323827455 +0000 UTC m=+0.129155921 container died 93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:27:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e5c36a231567993eb64c6d747eecf4074e625870dcc0c5698df5ccef1de341dd-merged.mount: Deactivated successfully.
Jan 31 04:27:45 np0005603621 podman[432838]: 2026-01-31 09:27:45.38207439 +0000 UTC m=+0.187402856 container remove 93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:27:45 np0005603621 systemd[1]: libpod-conmon-93def75bde5119908820a3f706f7c377453498ed86afd19f28d3cbd0db56002a.scope: Deactivated successfully.
Jan 31 04:27:45 np0005603621 podman[432877]: 2026-01-31 09:27:45.496693572 +0000 UTC m=+0.036138025 container create d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:27:45 np0005603621 systemd[1]: Started libpod-conmon-d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769.scope.
Jan 31 04:27:45 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:27:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307dc1226712824a70023a7d1785cbf6992b9ce945d9a0cf64ca5508bb1004e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307dc1226712824a70023a7d1785cbf6992b9ce945d9a0cf64ca5508bb1004e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307dc1226712824a70023a7d1785cbf6992b9ce945d9a0cf64ca5508bb1004e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307dc1226712824a70023a7d1785cbf6992b9ce945d9a0cf64ca5508bb1004e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:45 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307dc1226712824a70023a7d1785cbf6992b9ce945d9a0cf64ca5508bb1004e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:45 np0005603621 podman[432877]: 2026-01-31 09:27:45.575920513 +0000 UTC m=+0.115364996 container init d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 04:27:45 np0005603621 podman[432877]: 2026-01-31 09:27:45.48072237 +0000 UTC m=+0.020166843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:27:45 np0005603621 podman[432877]: 2026-01-31 09:27:45.581803515 +0000 UTC m=+0.121247968 container start d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 04:27:45 np0005603621 podman[432877]: 2026-01-31 09:27:45.587866241 +0000 UTC m=+0.127310694 container attach d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 31 04:27:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:45.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:46 np0005603621 nova_compute[247399]: 2026-01-31 09:27:46.335 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:46 np0005603621 friendly_sanderson[432893]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:27:46 np0005603621 friendly_sanderson[432893]: --> relative data size: 1.0
Jan 31 04:27:46 np0005603621 friendly_sanderson[432893]: --> All data devices are unavailable
Jan 31 04:27:46 np0005603621 systemd[1]: libpod-d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769.scope: Deactivated successfully.
Jan 31 04:27:46 np0005603621 podman[432877]: 2026-01-31 09:27:46.392687982 +0000 UTC m=+0.932132435 container died d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:27:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-307dc1226712824a70023a7d1785cbf6992b9ce945d9a0cf64ca5508bb1004e3-merged.mount: Deactivated successfully.
Jan 31 04:27:47 np0005603621 podman[432877]: 2026-01-31 09:27:47.10625573 +0000 UTC m=+1.645700223 container remove d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:27:47 np0005603621 systemd[1]: libpod-conmon-d28e0ffb31a7d4df0e821a4164c95aa677f9b7a25c2163c31cfe1f56abc97769.scope: Deactivated successfully.
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.225 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.226 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.226 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.226 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:27:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:27:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759384977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:27:47 np0005603621 podman[433084]: 2026-01-31 09:27:47.639588045 +0000 UTC m=+0.106863224 container create 2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_borg, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.643 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:27:47 np0005603621 podman[433084]: 2026-01-31 09:27:47.553118751 +0000 UTC m=+0.020393960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:27:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:47.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:47 np0005603621 systemd[1]: Started libpod-conmon-2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5.scope.
Jan 31 04:27:47 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.780 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.781 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.781 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.781 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:27:47 np0005603621 podman[433084]: 2026-01-31 09:27:47.825499453 +0000 UTC m=+0.292774642 container init 2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_borg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 31 04:27:47 np0005603621 podman[433084]: 2026-01-31 09:27:47.831055865 +0000 UTC m=+0.298331044 container start 2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_borg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:27:47 np0005603621 wonderful_borg[433102]: 167 167
Jan 31 04:27:47 np0005603621 systemd[1]: libpod-2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5.scope: Deactivated successfully.
Jan 31 04:27:47 np0005603621 conmon[433102]: conmon 2bd6fae3d8580d08af19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5.scope/container/memory.events
Jan 31 04:27:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:47 np0005603621 podman[433084]: 2026-01-31 09:27:47.8675728 +0000 UTC m=+0.334847979 container attach 2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:27:47 np0005603621 podman[433084]: 2026-01-31 09:27:47.868016844 +0000 UTC m=+0.335292023 container died 2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.880 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.881 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:27:47 np0005603621 nova_compute[247399]: 2026-01-31 09:27:47.905 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:27:48 np0005603621 systemd[1]: var-lib-containers-storage-overlay-13206155c677eff2bad1d85c05be75a38131c4a68ae39deabdd3a3e44832cce2-merged.mount: Deactivated successfully.
Jan 31 04:27:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:27:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561736694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:27:48 np0005603621 nova_compute[247399]: 2026-01-31 09:27:48.332 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:27:48 np0005603621 nova_compute[247399]: 2026-01-31 09:27:48.336 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:27:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:48 np0005603621 podman[433084]: 2026-01-31 09:27:48.353870945 +0000 UTC m=+0.821146124 container remove 2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:27:48 np0005603621 systemd[1]: libpod-conmon-2bd6fae3d8580d08af19ac8b08ef98af906c0f5e2dd1a023c875f313115195f5.scope: Deactivated successfully.
Jan 31 04:27:48 np0005603621 podman[433148]: 2026-01-31 09:27:48.457788877 +0000 UTC m=+0.018896433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:27:48 np0005603621 podman[433148]: 2026-01-31 09:27:48.555731835 +0000 UTC m=+0.116839361 container create 2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:27:48 np0005603621 systemd[1]: Started libpod-conmon-2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98.scope.
Jan 31 04:27:48 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:27:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091059bc953ea23b20e560a6ea9e0740a7224e7b34ddc744274160431c1acb79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091059bc953ea23b20e560a6ea9e0740a7224e7b34ddc744274160431c1acb79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091059bc953ea23b20e560a6ea9e0740a7224e7b34ddc744274160431c1acb79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:48 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091059bc953ea23b20e560a6ea9e0740a7224e7b34ddc744274160431c1acb79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:48 np0005603621 podman[433148]: 2026-01-31 09:27:48.759293249 +0000 UTC m=+0.320400795 container init 2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hypatia, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:27:48 np0005603621 podman[433148]: 2026-01-31 09:27:48.764331884 +0000 UTC m=+0.325439420 container start 2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hypatia, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:27:48 np0005603621 podman[433148]: 2026-01-31 09:27:48.820190885 +0000 UTC m=+0.381298411 container attach 2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:27:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:49.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:49 np0005603621 nova_compute[247399]: 2026-01-31 09:27:49.364 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]: {
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:    "0": [
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:        {
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "devices": [
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "/dev/loop3"
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            ],
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "lv_name": "ceph_lv0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "lv_size": "7511998464",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "name": "ceph_lv0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "tags": {
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.cluster_name": "ceph",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.crush_device_class": "",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.encrypted": "0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.osd_id": "0",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.type": "block",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:                "ceph.vdo": "0"
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            },
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "type": "block",
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:            "vg_name": "ceph_vg0"
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:        }
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]:    ]
Jan 31 04:27:49 np0005603621 pedantic_hypatia[433164]: }
Jan 31 04:27:49 np0005603621 systemd[1]: libpod-2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98.scope: Deactivated successfully.
Jan 31 04:27:49 np0005603621 podman[433148]: 2026-01-31 09:27:49.498065774 +0000 UTC m=+1.059173360 container died 2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hypatia, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:27:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:49 np0005603621 systemd[1]: var-lib-containers-storage-overlay-091059bc953ea23b20e560a6ea9e0740a7224e7b34ddc744274160431c1acb79-merged.mount: Deactivated successfully.
Jan 31 04:27:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:49 np0005603621 podman[433148]: 2026-01-31 09:27:49.935212204 +0000 UTC m=+1.496319730 container remove 2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:27:49 np0005603621 systemd[1]: libpod-conmon-2080db145da674465d9ac5f0a985751356ba00ea5f9fc5e2ca94e50269c17e98.scope: Deactivated successfully.
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:27:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:27:50 np0005603621 podman[433377]: 2026-01-31 09:27:50.421427937 +0000 UTC m=+0.020805813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:27:50 np0005603621 podman[433377]: 2026-01-31 09:27:50.536527803 +0000 UTC m=+0.135905649 container create df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hofstadter, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 31 04:27:50 np0005603621 systemd[1]: Started libpod-conmon-df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0.scope.
Jan 31 04:27:50 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:27:50 np0005603621 nova_compute[247399]: 2026-01-31 09:27:50.730 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:27:50 np0005603621 podman[433377]: 2026-01-31 09:27:50.732478072 +0000 UTC m=+0.331855948 container init df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:27:50 np0005603621 nova_compute[247399]: 2026-01-31 09:27:50.732 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:27:50 np0005603621 nova_compute[247399]: 2026-01-31 09:27:50.733 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:27:50 np0005603621 podman[433377]: 2026-01-31 09:27:50.737401053 +0000 UTC m=+0.336778899 container start df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:27:50 np0005603621 vibrant_hofstadter[433394]: 167 167
Jan 31 04:27:50 np0005603621 systemd[1]: libpod-df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0.scope: Deactivated successfully.
Jan 31 04:27:50 np0005603621 podman[433377]: 2026-01-31 09:27:50.776712704 +0000 UTC m=+0.376090570 container attach df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:27:50 np0005603621 podman[433377]: 2026-01-31 09:27:50.777038035 +0000 UTC m=+0.376415881 container died df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 31 04:27:51 np0005603621 systemd[1]: var-lib-containers-storage-overlay-789467b17f2f401eb709a0267e5f454c9d9add06795fd7369c1cd41763038fde-merged.mount: Deactivated successfully.
Jan 31 04:27:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:51.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:51 np0005603621 podman[433377]: 2026-01-31 09:27:51.300101893 +0000 UTC m=+0.899479739 container remove df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hofstadter, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:27:51 np0005603621 nova_compute[247399]: 2026-01-31 09:27:51.338 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:51 np0005603621 systemd[1]: libpod-conmon-df4e0864f3d7e98b1b12a4c2f0e5eef690549dc7d11f450cf140331e0ef4eee0.scope: Deactivated successfully.
Jan 31 04:27:51 np0005603621 podman[433419]: 2026-01-31 09:27:51.433415121 +0000 UTC m=+0.032425080 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:27:51 np0005603621 podman[433419]: 2026-01-31 09:27:51.660017754 +0000 UTC m=+0.259027693 container create a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:27:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:51 np0005603621 systemd[1]: Started libpod-conmon-a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149.scope.
Jan 31 04:27:51 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:27:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dff113d5d464d02a1ba17a2fefbbe5e13753c1ca9d68b0b0f2b3fb361a24d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dff113d5d464d02a1ba17a2fefbbe5e13753c1ca9d68b0b0f2b3fb361a24d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dff113d5d464d02a1ba17a2fefbbe5e13753c1ca9d68b0b0f2b3fb361a24d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:51 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65dff113d5d464d02a1ba17a2fefbbe5e13753c1ca9d68b0b0f2b3fb361a24d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:27:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:51 np0005603621 podman[433419]: 2026-01-31 09:27:51.929806616 +0000 UTC m=+0.528816575 container init a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:27:51 np0005603621 podman[433419]: 2026-01-31 09:27:51.937840124 +0000 UTC m=+0.536850063 container start a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:27:51 np0005603621 podman[433419]: 2026-01-31 09:27:51.974583406 +0000 UTC m=+0.573593345 container attach a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:27:52 np0005603621 sweet_easley[433435]: {
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:        "osd_id": 0,
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:        "type": "bluestore"
Jan 31 04:27:52 np0005603621 sweet_easley[433435]:    }
Jan 31 04:27:52 np0005603621 sweet_easley[433435]: }
Jan 31 04:27:52 np0005603621 systemd[1]: libpod-a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149.scope: Deactivated successfully.
Jan 31 04:27:52 np0005603621 podman[433419]: 2026-01-31 09:27:52.723579077 +0000 UTC m=+1.322589016 container died a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 31 04:27:52 np0005603621 systemd[1]: var-lib-containers-storage-overlay-65dff113d5d464d02a1ba17a2fefbbe5e13753c1ca9d68b0b0f2b3fb361a24d5-merged.mount: Deactivated successfully.
Jan 31 04:27:53 np0005603621 podman[433419]: 2026-01-31 09:27:53.017793773 +0000 UTC m=+1.616803702 container remove a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 31 04:27:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:27:53 np0005603621 systemd[1]: libpod-conmon-a7ecc1576fc1e9e06dd06b5639bb350e5d40410e45520b2430c6e26940e89149.scope: Deactivated successfully.
Jan 31 04:27:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:27:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:27:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:53.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:53.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:53 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:27:53 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 0ef5b4fb-2425-4780-8305-184a35552470 does not exist
Jan 31 04:27:53 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c06a51d1-1b30-49d8-9b5f-950624a91723 does not exist
Jan 31 04:27:53 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev feacbc30-bdcb-4988-a49e-534bf26f693a does not exist
Jan 31 04:27:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.365 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:27:54 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.734 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.735 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.735 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.824 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.824 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.825 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:54 np0005603621 nova_compute[247399]: 2026-01-31 09:27:54.826 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:27:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:27:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:27:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:56 np0005603621 nova_compute[247399]: 2026-01-31 09:27:56.380 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:57.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:57.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:27:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:27:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:27:59.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:59 np0005603621 nova_compute[247399]: 2026-01-31 09:27:59.369 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:27:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:27:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:27:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:27:59.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:27:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4246: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:01.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:01 np0005603621 nova_compute[247399]: 2026-01-31 09:28:01.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:28:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:01.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:28:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:03.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:03.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:04 np0005603621 nova_compute[247399]: 2026-01-31 09:28:04.285 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:28:04 np0005603621 nova_compute[247399]: 2026-01-31 09:28:04.370 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:05 np0005603621 podman[433525]: 2026-01-31 09:28:05.488384729 +0000 UTC m=+0.047319779 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:28:05 np0005603621 podman[433526]: 2026-01-31 09:28:05.546726617 +0000 UTC m=+0.105466981 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:28:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:05.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:06 np0005603621 nova_compute[247399]: 2026-01-31 09:28:06.384 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:07.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:07.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:28:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:28:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:09 np0005603621 nova_compute[247399]: 2026-01-31 09:28:09.372 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:09.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:11.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:11 np0005603621 nova_compute[247399]: 2026-01-31 09:28:11.386 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:28:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:28:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4252: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:13.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:14 np0005603621 nova_compute[247399]: 2026-01-31 09:28:14.374 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:28:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112294893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:28:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:28:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2112294893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:28:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:15.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:15.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:16 np0005603621 nova_compute[247399]: 2026-01-31 09:28:16.389 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:28:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:28:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:28:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:17.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:28:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.606026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851698606058, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 916, "num_deletes": 250, "total_data_size": 1460705, "memory_usage": 1488376, "flush_reason": "Manual Compaction"}
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851698681605, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 880301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92487, "largest_seqno": 93402, "table_properties": {"data_size": 876638, "index_size": 1378, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9702, "raw_average_key_size": 20, "raw_value_size": 868855, "raw_average_value_size": 1856, "num_data_blocks": 62, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851613, "oldest_key_time": 1769851613, "file_creation_time": 1769851698, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 75904 microseconds, and 2490 cpu microseconds.
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.681916) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 880301 bytes OK
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.681952) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.723182) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.723228) EVENT_LOG_v1 {"time_micros": 1769851698723219, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.723251) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 1456383, prev total WAL file size 1456383, number of live WAL files 2.
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.723809) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353130' seq:72057594037927935, type:22 .. '6D6772737461740033373631' seq:0, type:0; will stop at (end)
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(859KB)], [212(14MB)]
Jan 31 04:28:18 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851698723835, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 16068539, "oldest_snapshot_seqno": -1}
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 12246 keys, 12785929 bytes, temperature: kUnknown
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851699106636, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 12785929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12711315, "index_size": 42885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30661, "raw_key_size": 323282, "raw_average_key_size": 26, "raw_value_size": 12502227, "raw_average_value_size": 1020, "num_data_blocks": 1622, "num_entries": 12246, "num_filter_entries": 12246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851698, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.106965) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12785929 bytes
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.119478) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.0 rd, 33.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(32.8) write-amplify(14.5) OK, records in: 12722, records dropped: 476 output_compression: NoCompression
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.119515) EVENT_LOG_v1 {"time_micros": 1769851699119502, "job": 134, "event": "compaction_finished", "compaction_time_micros": 382937, "compaction_time_cpu_micros": 26637, "output_level": 6, "num_output_files": 1, "total_output_size": 12785929, "num_input_records": 12722, "num_output_records": 12246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851699119730, "job": 134, "event": "table_file_deletion", "file_number": 214}
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851699120834, "job": 134, "event": "table_file_deletion", "file_number": 212}
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:18.723680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.120859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.120863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.120864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.120866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:19 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:19.120867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:19.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:19 np0005603621 nova_compute[247399]: 2026-01-31 09:28:19.376 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:19.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:21.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:21 np0005603621 nova_compute[247399]: 2026-01-31 09:28:21.390 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:21.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:23.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.472688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851703472717, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 297, "num_deletes": 257, "total_data_size": 78364, "memory_usage": 84312, "flush_reason": "Manual Compaction"}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851703504112, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 78009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93403, "largest_seqno": 93699, "table_properties": {"data_size": 76085, "index_size": 151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4862, "raw_average_key_size": 17, "raw_value_size": 72259, "raw_average_value_size": 259, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851699, "oldest_key_time": 1769851699, "file_creation_time": 1769851703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 31478 microseconds, and 923 cpu microseconds.
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.504160) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 78009 bytes OK
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.504182) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.537136) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.537182) EVENT_LOG_v1 {"time_micros": 1769851703537173, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.537206) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 76172, prev total WAL file size 76172, number of live WAL files 2.
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.537545) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303136' seq:72057594037927935, type:22 .. '6C6F676D0034323639' seq:0, type:0; will stop at (end)
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(76KB)], [215(12MB)]
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851703537573, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 12863938, "oldest_snapshot_seqno": -1}
Jan 31 04:28:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:23.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 12003 keys, 12746862 bytes, temperature: kUnknown
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851703771909, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 12746862, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12673323, "index_size": 42450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30021, "raw_key_size": 319174, "raw_average_key_size": 26, "raw_value_size": 12467838, "raw_average_value_size": 1038, "num_data_blocks": 1601, "num_entries": 12003, "num_filter_entries": 12003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.772134) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12746862 bytes
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.798117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.9 rd, 54.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.2 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(328.3) write-amplify(163.4) OK, records in: 12524, records dropped: 521 output_compression: NoCompression
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.798153) EVENT_LOG_v1 {"time_micros": 1769851703798141, "job": 136, "event": "compaction_finished", "compaction_time_micros": 234410, "compaction_time_cpu_micros": 22863, "output_level": 6, "num_output_files": 1, "total_output_size": 12746862, "num_input_records": 12524, "num_output_records": 12003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851703798344, "job": 136, "event": "table_file_deletion", "file_number": 217}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851703799479, "job": 136, "event": "table_file_deletion", "file_number": 215}
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.537514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.799559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.799565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.799566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.799568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:23 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:28:23.799570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:28:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:24 np0005603621 nova_compute[247399]: 2026-01-31 09:28:24.378 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:25.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:25.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:26 np0005603621 nova_compute[247399]: 2026-01-31 09:28:26.393 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:27.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:27.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:29.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:29 np0005603621 nova_compute[247399]: 2026-01-31 09:28:29.379 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:29.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:28:30.573 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:28:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:28:30.573 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:28:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:28:30.573 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:28:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:31 np0005603621 nova_compute[247399]: 2026-01-31 09:28:31.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:31.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:33.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:34 np0005603621 nova_compute[247399]: 2026-01-31 09:28:34.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:28:34 np0005603621 nova_compute[247399]: 2026-01-31 09:28:34.381 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:36 np0005603621 nova_compute[247399]: 2026-01-31 09:28:36.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:36 np0005603621 podman[433687]: 2026-01-31 09:28:36.499552057 +0000 UTC m=+0.051256020 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:28:36 np0005603621 podman[433688]: 2026-01-31 09:28:36.525691843 +0000 UTC m=+0.070261326 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:28:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:37.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:28:38
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.rgw.root', '.mgr', 'volumes', 'vms', 'default.rgw.meta', 'images', 'default.rgw.control']
Jan 31 04:28:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:28:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:28:39 np0005603621 nova_compute[247399]: 2026-01-31 09:28:39.382 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:28:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:28:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:40 np0005603621 nova_compute[247399]: 2026-01-31 09:28:40.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:28:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:41.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:41 np0005603621 nova_compute[247399]: 2026-01-31 09:28:41.400 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:41.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:43.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:43.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:44 np0005603621 nova_compute[247399]: 2026-01-31 09:28:44.383 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:46 np0005603621 nova_compute[247399]: 2026-01-31 09:28:46.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:28:46 np0005603621 nova_compute[247399]: 2026-01-31 09:28:46.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:28:46 np0005603621 nova_compute[247399]: 2026-01-31 09:28:46.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.247 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.248 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.248 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.248 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.248 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:28:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:28:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977587436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.693 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:28:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:47.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.829 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.830 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4054MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.830 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.831 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:28:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.995 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:28:47 np0005603621 nova_compute[247399]: 2026-01-31 09:28:47.995 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:28:48 np0005603621 nova_compute[247399]: 2026-01-31 09:28:48.016 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:28:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:28:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1409151371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:28:48 np0005603621 nova_compute[247399]: 2026-01-31 09:28:48.446 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:28:48 np0005603621 nova_compute[247399]: 2026-01-31 09:28:48.451 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:28:48 np0005603621 nova_compute[247399]: 2026-01-31 09:28:48.468 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:28:48 np0005603621 nova_compute[247399]: 2026-01-31 09:28:48.470 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:28:48 np0005603621 nova_compute[247399]: 2026-01-31 09:28:48.470 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:28:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:49 np0005603621 nova_compute[247399]: 2026-01-31 09:28:49.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:28:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:49.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:28:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:28:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:51 np0005603621 nova_compute[247399]: 2026-01-31 09:28:51.405 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:28:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:51.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:53 np0005603621 nova_compute[247399]: 2026-01-31 09:28:53.472 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:28:53 np0005603621 nova_compute[247399]: 2026-01-31 09:28:53.472 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 31 04:28:53 np0005603621 nova_compute[247399]: 2026-01-31 09:28:53.472 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 31 04:28:53 np0005603621 nova_compute[247399]: 2026-01-31 09:28:53.494 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 31 04:28:53 np0005603621 nova_compute[247399]: 2026-01-31 09:28:53.494 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:28:53 np0005603621 nova_compute[247399]: 2026-01-31 09:28:53.494 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:28:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:54 np0005603621 nova_compute[247399]: 2026-01-31 09:28:54.387 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:28:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 3d3fd0c5-5db6-4eb2-9d30-a76a1cfd2c3a does not exist
Jan 31 04:28:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 16ae3e30-520b-43fe-8f4f-e6277bd563a3 does not exist
Jan 31 04:28:54 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 69e14849-7bb9-4490-beb9-af9961ff1216 does not exist
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:28:54 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:28:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:28:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:55.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:28:55 np0005603621 podman[434106]: 2026-01-31 09:28:55.193120016 +0000 UTC m=+0.017934644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:28:55 np0005603621 podman[434106]: 2026-01-31 09:28:55.491470049 +0000 UTC m=+0.316284657 container create 1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 31 04:28:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:28:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:28:55 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:28:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:55 np0005603621 systemd[1]: Started libpod-conmon-1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb.scope.
Jan 31 04:28:55 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:28:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:56 np0005603621 podman[434106]: 2026-01-31 09:28:56.111721552 +0000 UTC m=+0.936536160 container init 1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_archimedes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:28:56 np0005603621 podman[434106]: 2026-01-31 09:28:56.118436788 +0000 UTC m=+0.943251396 container start 1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:28:56 np0005603621 kind_archimedes[434122]: 167 167
Jan 31 04:28:56 np0005603621 systemd[1]: libpod-1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb.scope: Deactivated successfully.
Jan 31 04:28:56 np0005603621 conmon[434122]: conmon 1c428191bb73b2e334c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb.scope/container/memory.events
Jan 31 04:28:56 np0005603621 podman[434106]: 2026-01-31 09:28:56.183282557 +0000 UTC m=+1.008097195 container attach 1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:28:56 np0005603621 podman[434106]: 2026-01-31 09:28:56.184863995 +0000 UTC m=+1.009678623 container died 1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:28:56 np0005603621 nova_compute[247399]: 2026-01-31 09:28:56.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:28:56 np0005603621 nova_compute[247399]: 2026-01-31 09:28:56.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:28:56 np0005603621 systemd[1]: var-lib-containers-storage-overlay-317791d281ce9621d2600dd191aa2f5bb86ced7fc7ffd87a38919fd1f2b46df5-merged.mount: Deactivated successfully.
Jan 31 04:28:56 np0005603621 podman[434106]: 2026-01-31 09:28:56.577322648 +0000 UTC m=+1.402137246 container remove 1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_archimedes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 04:28:56 np0005603621 systemd[1]: libpod-conmon-1c428191bb73b2e334c1a59a846f3476838cbdcb9ffdd0652547077d590d6efb.scope: Deactivated successfully.
Jan 31 04:28:56 np0005603621 podman[434146]: 2026-01-31 09:28:56.695384106 +0000 UTC m=+0.039484918 container create 09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:28:56 np0005603621 systemd[1]: Started libpod-conmon-09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9.scope.
Jan 31 04:28:56 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:28:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13eb464ae6f4c2216bc0166a6009101b30e60b1406e8826568b21792d45a57e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13eb464ae6f4c2216bc0166a6009101b30e60b1406e8826568b21792d45a57e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13eb464ae6f4c2216bc0166a6009101b30e60b1406e8826568b21792d45a57e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13eb464ae6f4c2216bc0166a6009101b30e60b1406e8826568b21792d45a57e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:56 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b13eb464ae6f4c2216bc0166a6009101b30e60b1406e8826568b21792d45a57e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:56 np0005603621 podman[434146]: 2026-01-31 09:28:56.675187104 +0000 UTC m=+0.019287946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:28:56 np0005603621 podman[434146]: 2026-01-31 09:28:56.772460792 +0000 UTC m=+0.116561624 container init 09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:28:56 np0005603621 podman[434146]: 2026-01-31 09:28:56.776988061 +0000 UTC m=+0.121088873 container start 09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:28:56 np0005603621 podman[434146]: 2026-01-31 09:28:56.791195209 +0000 UTC m=+0.135296051 container attach 09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:28:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:57.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:57 np0005603621 objective_bell[434163]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:28:57 np0005603621 objective_bell[434163]: --> relative data size: 1.0
Jan 31 04:28:57 np0005603621 objective_bell[434163]: --> All data devices are unavailable
Jan 31 04:28:57 np0005603621 systemd[1]: libpod-09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9.scope: Deactivated successfully.
Jan 31 04:28:57 np0005603621 podman[434146]: 2026-01-31 09:28:57.5603552 +0000 UTC m=+0.904456012 container died 09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:28:57 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b13eb464ae6f4c2216bc0166a6009101b30e60b1406e8826568b21792d45a57e-merged.mount: Deactivated successfully.
Jan 31 04:28:57 np0005603621 podman[434146]: 2026-01-31 09:28:57.704283526 +0000 UTC m=+1.048384338 container remove 09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_bell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:28:57 np0005603621 systemd[1]: libpod-conmon-09393c0cb123d6ee452e197029be545758dd027753eb3fad85cff8203b591fa9.scope: Deactivated successfully.
Jan 31 04:28:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:57.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.322862997 +0000 UTC m=+0.044542024 container create 53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elgamal, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:28:58 np0005603621 systemd[1]: Started libpod-conmon-53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297.scope.
Jan 31 04:28:58 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.301409445 +0000 UTC m=+0.023088502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:28:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.436016203 +0000 UTC m=+0.157695250 container init 53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elgamal, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.446192517 +0000 UTC m=+0.167871554 container start 53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:28:58 np0005603621 systemd[1]: libpod-53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297.scope: Deactivated successfully.
Jan 31 04:28:58 np0005603621 flamboyant_elgamal[434347]: 167 167
Jan 31 04:28:58 np0005603621 conmon[434347]: conmon 53694bfd9d4a60745a41 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297.scope/container/memory.events
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.476262374 +0000 UTC m=+0.197941401 container attach 53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.477277585 +0000 UTC m=+0.198956612 container died 53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:28:58 np0005603621 systemd[1]: var-lib-containers-storage-overlay-933f774715910b1f23aa8423bcdd3ac3c6de4e970e31efb98a478d236b1a88e5-merged.mount: Deactivated successfully.
Jan 31 04:28:58 np0005603621 podman[434330]: 2026-01-31 09:28:58.784642506 +0000 UTC m=+0.506321533 container remove 53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:28:58 np0005603621 systemd[1]: libpod-conmon-53694bfd9d4a60745a41ee7505665f798282e12ad2a1e669062d5ce6d26e8297.scope: Deactivated successfully.
Jan 31 04:28:58 np0005603621 podman[434373]: 2026-01-31 09:28:58.92857506 +0000 UTC m=+0.057458210 container create b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:28:58 np0005603621 podman[434373]: 2026-01-31 09:28:58.895079618 +0000 UTC m=+0.023962798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:28:59 np0005603621 systemd[1]: Started libpod-conmon-b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e.scope.
Jan 31 04:28:59 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:28:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde9bf306856f5f614f331c31671404717904cb39197412b46bb42deb51a3f7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde9bf306856f5f614f331c31671404717904cb39197412b46bb42deb51a3f7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde9bf306856f5f614f331c31671404717904cb39197412b46bb42deb51a3f7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:59 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde9bf306856f5f614f331c31671404717904cb39197412b46bb42deb51a3f7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:28:59 np0005603621 podman[434373]: 2026-01-31 09:28:59.045224636 +0000 UTC m=+0.174107826 container init b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mayer, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:28:59 np0005603621 podman[434373]: 2026-01-31 09:28:59.051515789 +0000 UTC m=+0.180398949 container start b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:28:59 np0005603621 podman[434373]: 2026-01-31 09:28:59.128077538 +0000 UTC m=+0.256960698 container attach b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mayer, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:28:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:28:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:28:59.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:28:59 np0005603621 nova_compute[247399]: 2026-01-31 09:28:59.389 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]: {
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:    "0": [
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:        {
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "devices": [
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "/dev/loop3"
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            ],
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "lv_name": "ceph_lv0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "lv_size": "7511998464",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "name": "ceph_lv0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "tags": {
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.cluster_name": "ceph",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.crush_device_class": "",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.encrypted": "0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.osd_id": "0",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.type": "block",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:                "ceph.vdo": "0"
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            },
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "type": "block",
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:            "vg_name": "ceph_vg0"
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:        }
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]:    ]
Jan 31 04:28:59 np0005603621 laughing_mayer[434390]: }
Jan 31 04:28:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:28:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:28:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:28:59.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:28:59 np0005603621 systemd[1]: libpod-b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e.scope: Deactivated successfully.
Jan 31 04:28:59 np0005603621 podman[434399]: 2026-01-31 09:28:59.825829978 +0000 UTC m=+0.020572615 container died b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 31 04:28:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:00 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cde9bf306856f5f614f331c31671404717904cb39197412b46bb42deb51a3f7e-merged.mount: Deactivated successfully.
Jan 31 04:29:00 np0005603621 podman[434399]: 2026-01-31 09:29:00.611002883 +0000 UTC m=+0.805745520 container remove b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mayer, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 31 04:29:00 np0005603621 systemd[1]: libpod-conmon-b45f74a81f6ac324f32ebb66d46ab1944b07274889667eb738551e485e53ee8e.scope: Deactivated successfully.
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.17106512 +0000 UTC m=+0.085629639 container create c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.103654183 +0000 UTC m=+0.018218722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:29:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:01.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:01 np0005603621 nova_compute[247399]: 2026-01-31 09:29:01.408 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:01 np0005603621 systemd[1]: Started libpod-conmon-c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639.scope.
Jan 31 04:29:01 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.492083994 +0000 UTC m=+0.406648533 container init c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.498272984 +0000 UTC m=+0.412837513 container start c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 31 04:29:01 np0005603621 busy_lalande[434570]: 167 167
Jan 31 04:29:01 np0005603621 systemd[1]: libpod-c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639.scope: Deactivated successfully.
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.575248526 +0000 UTC m=+0.489813045 container attach c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.575668429 +0000 UTC m=+0.490232948 container died c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:29:01 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f1e6f9decb90136292b159d9f35a211f25f60be11280c3e4bbfe0892fa7e60b8-merged.mount: Deactivated successfully.
Jan 31 04:29:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:29:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:01.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:29:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:01 np0005603621 podman[434554]: 2026-01-31 09:29:01.954646667 +0000 UTC m=+0.869211186 container remove c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 31 04:29:01 np0005603621 systemd[1]: libpod-conmon-c663ca2e271ad9206b922bd7487a0ad3325823b96fae9ef656b1b16aa2edc639.scope: Deactivated successfully.
Jan 31 04:29:02 np0005603621 podman[434594]: 2026-01-31 09:29:02.058793377 +0000 UTC m=+0.019454051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:29:02 np0005603621 podman[434594]: 2026-01-31 09:29:02.250732202 +0000 UTC m=+0.211392846 container create 8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_franklin, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:29:02 np0005603621 systemd[1]: Started libpod-conmon-8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7.scope.
Jan 31 04:29:02 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:29:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6f2172e93f6bc662d51e9735b7eb32f46334f98ca8163483d55bcff0f06fc0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:29:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6f2172e93f6bc662d51e9735b7eb32f46334f98ca8163483d55bcff0f06fc0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:29:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6f2172e93f6bc662d51e9735b7eb32f46334f98ca8163483d55bcff0f06fc0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:29:02 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc6f2172e93f6bc662d51e9735b7eb32f46334f98ca8163483d55bcff0f06fc0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:29:02 np0005603621 podman[434594]: 2026-01-31 09:29:02.544645939 +0000 UTC m=+0.505306603 container init 8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_franklin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:29:02 np0005603621 podman[434594]: 2026-01-31 09:29:02.55020521 +0000 UTC m=+0.510865864 container start 8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_franklin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 31 04:29:02 np0005603621 podman[434594]: 2026-01-31 09:29:02.569648279 +0000 UTC m=+0.530308923 container attach 8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]: {
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:        "osd_id": 0,
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:        "type": "bluestore"
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]:    }
Jan 31 04:29:03 np0005603621 vibrant_franklin[434611]: }
Jan 31 04:29:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:03 np0005603621 systemd[1]: libpod-8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7.scope: Deactivated successfully.
Jan 31 04:29:03 np0005603621 podman[434594]: 2026-01-31 09:29:03.315881404 +0000 UTC m=+1.276542068 container died 8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 31 04:29:03 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cc6f2172e93f6bc662d51e9735b7eb32f46334f98ca8163483d55bcff0f06fc0-merged.mount: Deactivated successfully.
Jan 31 04:29:03 np0005603621 podman[434594]: 2026-01-31 09:29:03.356680761 +0000 UTC m=+1.317341405 container remove 8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_franklin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:29:03 np0005603621 systemd[1]: libpod-conmon-8ca7b568f08e9709114b53fd913edf9c10a73cc56ed2896429abc102d39452c7.scope: Deactivated successfully.
Jan 31 04:29:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:29:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:29:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:29:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:03 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:29:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6e16d86e-2231-49b9-802a-786f5d03603d does not exist
Jan 31 04:29:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 24ecc7cb-d5d7-4c55-a893-de74b8bd6320 does not exist
Jan 31 04:29:03 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 444aed2d-ff84-4c67-99a3-aaf8a71c567e does not exist
Jan 31 04:29:04 np0005603621 nova_compute[247399]: 2026-01-31 09:29:04.392 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:05.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:06 np0005603621 nova_compute[247399]: 2026-01-31 09:29:06.428 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:07 np0005603621 podman[434695]: 2026-01-31 09:29:07.494901219 +0000 UTC m=+0.056063638 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 04:29:07 np0005603621 podman[434696]: 2026-01-31 09:29:07.58123158 +0000 UTC m=+0.142387759 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:29:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:29:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:07.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:29:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:29:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:29:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:29:08 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:29:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:09.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:09 np0005603621 nova_compute[247399]: 2026-01-31 09:29:09.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:09.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:11 np0005603621 nova_compute[247399]: 2026-01-31 09:29:11.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:11.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:13.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:13.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:14 np0005603621 nova_compute[247399]: 2026-01-31 09:29:14.396 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:29:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945565630' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:29:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:29:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1945565630' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:29:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:15.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:16 np0005603621 nova_compute[247399]: 2026-01-31 09:29:16.431 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:17.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:19 np0005603621 nova_compute[247399]: 2026-01-31 09:29:19.397 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:19.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:21.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:21 np0005603621 nova_compute[247399]: 2026-01-31 09:29:21.433 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:21.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:23.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4288: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:24 np0005603621 nova_compute[247399]: 2026-01-31 09:29:24.399 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4289: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:26 np0005603621 nova_compute[247399]: 2026-01-31 09:29:26.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:27.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:27.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:29 np0005603621 nova_compute[247399]: 2026-01-31 09:29:29.400 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:29.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:29:30.574 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:29:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:29:30.574 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:29:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:29:30.575 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:29:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:31 np0005603621 nova_compute[247399]: 2026-01-31 09:29:31.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:29:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:31.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:29:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:33.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:33.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:34 np0005603621 nova_compute[247399]: 2026-01-31 09:29:34.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:35.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:35.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:36 np0005603621 nova_compute[247399]: 2026-01-31 09:29:36.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:36 np0005603621 nova_compute[247399]: 2026-01-31 09:29:36.439 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:37.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:38 np0005603621 podman[434859]: 2026-01-31 09:29:38.49151125 +0000 UTC m=+0.048665231 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 31 04:29:38 np0005603621 podman[434860]: 2026-01-31 09:29:38.537579139 +0000 UTC m=+0.094147071 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:29:38
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'default.rgw.meta']
Jan 31 04:29:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:29:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:39 np0005603621 nova_compute[247399]: 2026-01-31 09:29:39.402 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:39.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:41 np0005603621 nova_compute[247399]: 2026-01-31 09:29:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:41 np0005603621 nova_compute[247399]: 2026-01-31 09:29:41.442 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:41.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:44 np0005603621 nova_compute[247399]: 2026-01-31 09:29:44.404 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:45.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:46 np0005603621 nova_compute[247399]: 2026-01-31 09:29:46.443 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.197 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.256 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.257 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.257 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.257 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.257 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:29:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:29:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1752424501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.805 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:29:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:47.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.910 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.911 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4046MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.911 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:29:47 np0005603621 nova_compute[247399]: 2026-01-31 09:29:47.911 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:29:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.030 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.031 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.065 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:29:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:29:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2619438656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.517 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.523 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:29:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:29:48 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.0 total, 600.0 interval#012Cumulative writes: 21K writes, 94K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1472 writes, 7209 keys, 1472 commit groups, 1.0 writes per commit group, ingest: 10.63 MB, 0.02 MB/s#012Interval WAL: 1472 writes, 1472 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     31.4      4.11              0.30        68    0.060       0      0       0.0       0.0#012  L6      1/0   12.16 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.6     57.5     49.5     14.50              1.57        67    0.216    565K    35K       0.0       0.0#012 Sum      1/0   12.16 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.6     44.8     45.5     18.61              1.87       135    0.138    565K    35K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.3     70.2     69.8      1.45              0.22        14    0.103     86K   3595       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     57.5     49.5     14.50              1.57        67    0.216    565K    35K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     31.4      4.10              0.30        67    0.061       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.4      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.0 total, 600.0 interval#012Flush(GB): cumulative 0.126, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.83 GB write, 0.11 MB/s write, 0.81 GB read, 0.11 MB/s read, 18.6 seconds#012Interval compaction: 0.10 GB write, 0.17 MB/s write, 0.10 GB read, 0.17 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f82bbcb1f0#2 capacity: 304.00 MB usage: 91.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000513 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(5654,87.30 MB,28.7186%) FilterBlock(136,1.53 MB,0.503254%) IndexBlock(136,2.46 MB,0.80847%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.543 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.545 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:29:48 np0005603621 nova_compute[247399]: 2026-01-31 09:29:48.545 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:29:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:49.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:49 np0005603621 nova_compute[247399]: 2026-01-31 09:29:49.406 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:49 np0005603621 nova_compute[247399]: 2026-01-31 09:29:49.541 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:49.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:29:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:29:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:51 np0005603621 nova_compute[247399]: 2026-01-31 09:29:51.479 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:29:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:29:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:52 np0005603621 nova_compute[247399]: 2026-01-31 09:29:52.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:53.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:53.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:54 np0005603621 nova_compute[247399]: 2026-01-31 09:29:54.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:54 np0005603621 nova_compute[247399]: 2026-01-31 09:29:54.407 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:55 np0005603621 nova_compute[247399]: 2026-01-31 09:29:55.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:55 np0005603621 nova_compute[247399]: 2026-01-31 09:29:55.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:29:55 np0005603621 nova_compute[247399]: 2026-01-31 09:29:55.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:29:55 np0005603621 nova_compute[247399]: 2026-01-31 09:29:55.215 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:29:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:55.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:55.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:56 np0005603621 nova_compute[247399]: 2026-01-31 09:29:56.481 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:57.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:57.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:29:58 np0005603621 nova_compute[247399]: 2026-01-31 09:29:58.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:29:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:29:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:29:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:29:59.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:29:59 np0005603621 nova_compute[247399]: 2026-01-31 09:29:59.408 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:29:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:29:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:29:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:29:59.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:29:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:00 np0005603621 ceph-mon[74394]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 31 04:30:00 np0005603621 ceph-mon[74394]: overall HEALTH_OK
Jan 31 04:30:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:01.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:01 np0005603621 nova_compute[247399]: 2026-01-31 09:30:01.483 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:01.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:01 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:03.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:04 np0005603621 nova_compute[247399]: 2026-01-31 09:30:04.410 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 31 04:30:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:05 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 31 04:30:05 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 2f20180e-918c-46c7-b22f-f88b0b607938 does not exist
Jan 31 04:30:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 30f078e7-4e18-49d3-816f-229b84be5fd7 does not exist
Jan 31 04:30:06 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4f553023-6708-47e0-8175-12ddc491d376 does not exist
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:30:06 np0005603621 nova_compute[247399]: 2026-01-31 09:30:06.485 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:06 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:30:06 np0005603621 podman[435283]: 2026-01-31 09:30:06.575903909 +0000 UTC m=+0.062838268 container create db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 31 04:30:06 np0005603621 podman[435283]: 2026-01-31 09:30:06.530640564 +0000 UTC m=+0.017574943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:30:06 np0005603621 systemd[1]: Started libpod-conmon-db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a.scope.
Jan 31 04:30:06 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:30:06 np0005603621 podman[435283]: 2026-01-31 09:30:06.814514542 +0000 UTC m=+0.301448931 container init db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:30:06 np0005603621 podman[435283]: 2026-01-31 09:30:06.823442836 +0000 UTC m=+0.310377215 container start db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:30:06 np0005603621 beautiful_golick[435300]: 167 167
Jan 31 04:30:06 np0005603621 systemd[1]: libpod-db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a.scope: Deactivated successfully.
Jan 31 04:30:06 np0005603621 podman[435283]: 2026-01-31 09:30:06.951428531 +0000 UTC m=+0.438362890 container attach db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:30:06 np0005603621 podman[435283]: 2026-01-31 09:30:06.952973818 +0000 UTC m=+0.439908267 container died db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_golick, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:30:07 np0005603621 systemd[1]: var-lib-containers-storage-overlay-420e923af5e70dd423f0f53f4156e4f37814afd5cf21c6d8ee78f4754ba37a66-merged.mount: Deactivated successfully.
Jan 31 04:30:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:07.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:07 np0005603621 podman[435283]: 2026-01-31 09:30:07.560494237 +0000 UTC m=+1.047428596 container remove db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_golick, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:30:07 np0005603621 systemd[1]: libpod-conmon-db49fef24676940494444613a6009426e6cf3701f517c43521c052eb6d920a6a.scope: Deactivated successfully.
Jan 31 04:30:07 np0005603621 podman[435324]: 2026-01-31 09:30:07.66214527 +0000 UTC m=+0.019688547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:30:07 np0005603621 podman[435324]: 2026-01-31 09:30:07.763818524 +0000 UTC m=+0.121361771 container create 6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:30:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:30:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:07.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:30:07 np0005603621 systemd[1]: Started libpod-conmon-6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976.scope.
Jan 31 04:30:07 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:30:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f696431f8c69ea4f88b1c8518d631d49fb4e46d40fa82d2852582f58d9cb45c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f696431f8c69ea4f88b1c8518d631d49fb4e46d40fa82d2852582f58d9cb45c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f696431f8c69ea4f88b1c8518d631d49fb4e46d40fa82d2852582f58d9cb45c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f696431f8c69ea4f88b1c8518d631d49fb4e46d40fa82d2852582f58d9cb45c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:07 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f696431f8c69ea4f88b1c8518d631d49fb4e46d40fa82d2852582f58d9cb45c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:07 np0005603621 podman[435324]: 2026-01-31 09:30:07.960494234 +0000 UTC m=+0.318037511 container init 6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 31 04:30:07 np0005603621 podman[435324]: 2026-01-31 09:30:07.965842688 +0000 UTC m=+0.323385935 container start 6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gagarin, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:30:07 np0005603621 podman[435324]: 2026-01-31 09:30:07.973252937 +0000 UTC m=+0.330796204 container attach 6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gagarin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:30:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:30:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:30:08 np0005603621 sleepy_gagarin[435340]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:30:08 np0005603621 sleepy_gagarin[435340]: --> relative data size: 1.0
Jan 31 04:30:08 np0005603621 sleepy_gagarin[435340]: --> All data devices are unavailable
Jan 31 04:30:08 np0005603621 systemd[1]: libpod-6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976.scope: Deactivated successfully.
Jan 31 04:30:08 np0005603621 podman[435324]: 2026-01-31 09:30:08.710926888 +0000 UTC m=+1.068470165 container died 6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gagarin, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 31 04:30:08 np0005603621 systemd[1]: var-lib-containers-storage-overlay-9f696431f8c69ea4f88b1c8518d631d49fb4e46d40fa82d2852582f58d9cb45c-merged.mount: Deactivated successfully.
Jan 31 04:30:08 np0005603621 podman[435324]: 2026-01-31 09:30:08.805815952 +0000 UTC m=+1.163359199 container remove 6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_gagarin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:30:08 np0005603621 systemd[1]: libpod-conmon-6617c59d374ae5bfd209c7c6703584d3f6bd785487310a39cd52cc8225b53976.scope: Deactivated successfully.
Jan 31 04:30:08 np0005603621 podman[435356]: 2026-01-31 09:30:08.838994925 +0000 UTC m=+0.102478179 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Jan 31 04:30:08 np0005603621 podman[435363]: 2026-01-31 09:30:08.862954423 +0000 UTC m=+0.126255922 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:30:09 np0005603621 nova_compute[247399]: 2026-01-31 09:30:09.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.321306817 +0000 UTC m=+0.040743706 container create 44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:30:09 np0005603621 systemd[1]: Started libpod-conmon-44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11.scope.
Jan 31 04:30:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:09.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.304972833 +0000 UTC m=+0.024409742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.414446977 +0000 UTC m=+0.133883886 container init 44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:30:09 np0005603621 nova_compute[247399]: 2026-01-31 09:30:09.411 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.420412221 +0000 UTC m=+0.139849100 container start 44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 04:30:09 np0005603621 relaxed_torvalds[435572]: 167 167
Jan 31 04:30:09 np0005603621 systemd[1]: libpod-44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11.scope: Deactivated successfully.
Jan 31 04:30:09 np0005603621 conmon[435572]: conmon 44780849fb815be56722 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11.scope/container/memory.events
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.424991402 +0000 UTC m=+0.144428281 container attach 44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.425936471 +0000 UTC m=+0.145373350 container died 44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 04:30:09 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e166676b4df7ecf67651f1d3458e6bab126d040e2e63a426a2a7d8beb68dc3b6-merged.mount: Deactivated successfully.
Jan 31 04:30:09 np0005603621 podman[435555]: 2026-01-31 09:30:09.473245389 +0000 UTC m=+0.192682278 container remove 44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_torvalds, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 31 04:30:09 np0005603621 systemd[1]: libpod-conmon-44780849fb815be56722a0d53ba3851ba3eda5b2452b2fdbd8f27a4d24a59b11.scope: Deactivated successfully.
Jan 31 04:30:09 np0005603621 podman[435595]: 2026-01-31 09:30:09.583702773 +0000 UTC m=+0.035454324 container create 0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cannon, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:30:09 np0005603621 systemd[1]: Started libpod-conmon-0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0.scope.
Jan 31 04:30:09 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:30:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a44f1c156041c506154aba01efb98bd1f001b1d3d646047d1a91e317cb6a7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a44f1c156041c506154aba01efb98bd1f001b1d3d646047d1a91e317cb6a7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a44f1c156041c506154aba01efb98bd1f001b1d3d646047d1a91e317cb6a7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:09 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24a44f1c156041c506154aba01efb98bd1f001b1d3d646047d1a91e317cb6a7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:09 np0005603621 podman[435595]: 2026-01-31 09:30:09.655810945 +0000 UTC m=+0.107562506 container init 0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 04:30:09 np0005603621 podman[435595]: 2026-01-31 09:30:09.662007176 +0000 UTC m=+0.113758717 container start 0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cannon, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:30:09 np0005603621 podman[435595]: 2026-01-31 09:30:09.567345388 +0000 UTC m=+0.019096949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:30:09 np0005603621 podman[435595]: 2026-01-31 09:30:09.664808122 +0000 UTC m=+0.116559683 container attach 0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:30:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:09.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:10 np0005603621 keen_cannon[435612]: {
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:    "0": [
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:        {
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "devices": [
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "/dev/loop3"
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            ],
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "lv_name": "ceph_lv0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "lv_size": "7511998464",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "name": "ceph_lv0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "tags": {
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.cluster_name": "ceph",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.crush_device_class": "",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.encrypted": "0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.osd_id": "0",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.type": "block",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:                "ceph.vdo": "0"
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            },
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "type": "block",
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:            "vg_name": "ceph_vg0"
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:        }
Jan 31 04:30:10 np0005603621 keen_cannon[435612]:    ]
Jan 31 04:30:10 np0005603621 keen_cannon[435612]: }
Jan 31 04:30:10 np0005603621 systemd[1]: libpod-0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0.scope: Deactivated successfully.
Jan 31 04:30:10 np0005603621 podman[435595]: 2026-01-31 09:30:10.408702485 +0000 UTC m=+0.860454026 container died 0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:30:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-24a44f1c156041c506154aba01efb98bd1f001b1d3d646047d1a91e317cb6a7f-merged.mount: Deactivated successfully.
Jan 31 04:30:10 np0005603621 podman[435595]: 2026-01-31 09:30:10.455426704 +0000 UTC m=+0.907178245 container remove 0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:30:10 np0005603621 systemd[1]: libpod-conmon-0dc1eabbf0804795ff1b1e8b665960840f2e098456fa3ece4c94e1f9cfd798c0.scope: Deactivated successfully.
Jan 31 04:30:10 np0005603621 podman[435824]: 2026-01-31 09:30:10.893968917 +0000 UTC m=+0.033783191 container create 73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:30:10 np0005603621 systemd[1]: Started libpod-conmon-73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07.scope.
Jan 31 04:30:10 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:30:10 np0005603621 podman[435824]: 2026-01-31 09:30:10.961828939 +0000 UTC m=+0.101643243 container init 73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hamilton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:30:10 np0005603621 podman[435824]: 2026-01-31 09:30:10.967247336 +0000 UTC m=+0.107061610 container start 73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 31 04:30:10 np0005603621 podman[435824]: 2026-01-31 09:30:10.969880887 +0000 UTC m=+0.109695181 container attach 73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 31 04:30:10 np0005603621 lucid_hamilton[435841]: 167 167
Jan 31 04:30:10 np0005603621 systemd[1]: libpod-73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07.scope: Deactivated successfully.
Jan 31 04:30:10 np0005603621 podman[435824]: 2026-01-31 09:30:10.972297061 +0000 UTC m=+0.112111335 container died 73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:30:10 np0005603621 podman[435824]: 2026-01-31 09:30:10.878393118 +0000 UTC m=+0.018207422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:30:10 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8ed0cf14f59dfa85b4f74299dbbf6769213f646b3ee72ce5cea28c60499b4a45-merged.mount: Deactivated successfully.
Jan 31 04:30:11 np0005603621 podman[435824]: 2026-01-31 09:30:11.008233529 +0000 UTC m=+0.148047803 container remove 73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:30:11 np0005603621 systemd[1]: libpod-conmon-73dc53152d67e142b7a0af3d4f6de7f8f7bf7d5e513fc882d20ea98219343e07.scope: Deactivated successfully.
Jan 31 04:30:11 np0005603621 podman[435867]: 2026-01-31 09:30:11.104518786 +0000 UTC m=+0.022092722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:30:11 np0005603621 podman[435867]: 2026-01-31 09:30:11.278956171 +0000 UTC m=+0.196530077 container create 5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:30:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:11.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:11 np0005603621 systemd[1]: Started libpod-conmon-5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1.scope.
Jan 31 04:30:11 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:30:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719a245520ae5003af48b2cf7dd5264544917e08e7583ac7642919fb2606b6c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719a245520ae5003af48b2cf7dd5264544917e08e7583ac7642919fb2606b6c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719a245520ae5003af48b2cf7dd5264544917e08e7583ac7642919fb2606b6c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:11 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719a245520ae5003af48b2cf7dd5264544917e08e7583ac7642919fb2606b6c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:30:11 np0005603621 nova_compute[247399]: 2026-01-31 09:30:11.486 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:11 np0005603621 podman[435867]: 2026-01-31 09:30:11.534339311 +0000 UTC m=+0.451913257 container init 5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:30:11 np0005603621 podman[435867]: 2026-01-31 09:30:11.541184922 +0000 UTC m=+0.458758838 container start 5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cori, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:30:11 np0005603621 podman[435867]: 2026-01-31 09:30:11.583861837 +0000 UTC m=+0.501435773 container attach 5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cori, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:30:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:11.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]: {
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:        "osd_id": 0,
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:        "type": "bluestore"
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]:    }
Jan 31 04:30:12 np0005603621 vigilant_cori[435884]: }
Jan 31 04:30:12 np0005603621 systemd[1]: libpod-5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1.scope: Deactivated successfully.
Jan 31 04:30:12 np0005603621 podman[435867]: 2026-01-31 09:30:12.297786156 +0000 UTC m=+1.215360162 container died 5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cori, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:30:12 np0005603621 systemd[1]: var-lib-containers-storage-overlay-719a245520ae5003af48b2cf7dd5264544917e08e7583ac7642919fb2606b6c9-merged.mount: Deactivated successfully.
Jan 31 04:30:12 np0005603621 podman[435867]: 2026-01-31 09:30:12.342935618 +0000 UTC m=+1.260509534 container remove 5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:30:12 np0005603621 systemd[1]: libpod-conmon-5c04c7838bb7c718c59846fa1e113edbec7a69f7ec828e6cb22b054e7a7846c1.scope: Deactivated successfully.
Jan 31 04:30:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:30:12 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:12 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:30:13 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6a7b328e-545d-4926-83d8-ddd30e722d7c does not exist
Jan 31 04:30:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 331e8843-2434-4a2a-b7af-364474e2f29f does not exist
Jan 31 04:30:13 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 950f5c68-4b9c-42c5-a163-7dbd9ecfac7f does not exist
Jan 31 04:30:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:13.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:14 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:30:14 np0005603621 nova_compute[247399]: 2026-01-31 09:30:14.421 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 31 04:30:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920769072' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 31 04:30:14 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 31 04:30:14 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920769072' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 31 04:30:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:15.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:16 np0005603621 nova_compute[247399]: 2026-01-31 09:30:16.489 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:17.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:17.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:19.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:19 np0005603621 nova_compute[247399]: 2026-01-31 09:30:19.423 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:19.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:21 np0005603621 nova_compute[247399]: 2026-01-31 09:30:21.525 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:21.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:23.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:24 np0005603621 nova_compute[247399]: 2026-01-31 09:30:24.425 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:25.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:25.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:26 np0005603621 nova_compute[247399]: 2026-01-31 09:30:26.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:30:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:27.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:30:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:30:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:27.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:30:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:29.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:29 np0005603621 nova_compute[247399]: 2026-01-31 09:30:29.427 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:29.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:30:30.575 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:30:30.576 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:30:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:30:30.576 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:30:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:31.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:31 np0005603621 nova_compute[247399]: 2026-01-31 09:30:31.528 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:31.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:33.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:34 np0005603621 nova_compute[247399]: 2026-01-31 09:30:34.427 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:36 np0005603621 nova_compute[247399]: 2026-01-31 09:30:36.531 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:37 np0005603621 nova_compute[247399]: 2026-01-31 09:30:37.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:37.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:37.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:30:38
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'images']
Jan 31 04:30:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:30:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:39 np0005603621 nova_compute[247399]: 2026-01-31 09:30:39.429 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:39 np0005603621 podman[436032]: 2026-01-31 09:30:39.489588863 +0000 UTC m=+0.047429013 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:30:39 np0005603621 podman[436033]: 2026-01-31 09:30:39.514555552 +0000 UTC m=+0.068202072 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller)
Jan 31 04:30:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:41.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:41 np0005603621 nova_compute[247399]: 2026-01-31 09:30:41.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:41.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4327: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:43 np0005603621 nova_compute[247399]: 2026-01-31 09:30:43.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:43.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:43.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:44 np0005603621 nova_compute[247399]: 2026-01-31 09:30:44.432 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:45.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.306492) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851846306521, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 1385, "num_deletes": 251, "total_data_size": 2429694, "memory_usage": 2472440, "flush_reason": "Manual Compaction"}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851846326450, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 2392709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93701, "largest_seqno": 95084, "table_properties": {"data_size": 2386157, "index_size": 3750, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13683, "raw_average_key_size": 20, "raw_value_size": 2373102, "raw_average_value_size": 3479, "num_data_blocks": 166, "num_entries": 682, "num_filter_entries": 682, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851704, "oldest_key_time": 1769851704, "file_creation_time": 1769851846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 20015 microseconds, and 3692 cpu microseconds.
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.326500) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 2392709 bytes OK
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.326522) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.330286) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.330339) EVENT_LOG_v1 {"time_micros": 1769851846330329, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.330367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 2423708, prev total WAL file size 2423708, number of live WAL files 2.
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.330962) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(2336KB)], [218(12MB)]
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851846331022, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 15139571, "oldest_snapshot_seqno": -1}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 12168 keys, 13038267 bytes, temperature: kUnknown
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851846466143, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 13038267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12963783, "index_size": 42947, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 323287, "raw_average_key_size": 26, "raw_value_size": 12755573, "raw_average_value_size": 1048, "num_data_blocks": 1614, "num_entries": 12168, "num_filter_entries": 12168, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769851846, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.466360) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 13038267 bytes
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.495663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.0 rd, 96.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.2 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 12685, records dropped: 517 output_compression: NoCompression
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.495709) EVENT_LOG_v1 {"time_micros": 1769851846495694, "job": 138, "event": "compaction_finished", "compaction_time_micros": 135185, "compaction_time_cpu_micros": 24788, "output_level": 6, "num_output_files": 1, "total_output_size": 13038267, "num_input_records": 12685, "num_output_records": 12168, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851846496126, "job": 138, "event": "table_file_deletion", "file_number": 220}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769851846496963, "job": 138, "event": "table_file_deletion", "file_number": 218}
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.330850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.497012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.497017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.497019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.497021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:30:46 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:30:46.497023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:30:46 np0005603621 nova_compute[247399]: 2026-01-31 09:30:46.536 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.222 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.223 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.223 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.223 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:30:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:47.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:30:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891088182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.647 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.764 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.766 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4025MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.766 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.766 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:30:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:47.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.949 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:30:47 np0005603621 nova_compute[247399]: 2026-01-31 09:30:47.950 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:30:47 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.071 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.164 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.164 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.186 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.207 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.230 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:30:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:30:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329720458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.670 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.675 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.693 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.694 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:30:48 np0005603621 nova_compute[247399]: 2026-01-31 09:30:48.694 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:30:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:49.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:49 np0005603621 nova_compute[247399]: 2026-01-31 09:30:49.433 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:49.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:49 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:30:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:30:50 np0005603621 nova_compute[247399]: 2026-01-31 09:30:50.695 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:50 np0005603621 nova_compute[247399]: 2026-01-31 09:30:50.696 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:50 np0005603621 nova_compute[247399]: 2026-01-31 09:30:50.696 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:30:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:51.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:51 np0005603621 nova_compute[247399]: 2026-01-31 09:30:51.579 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:30:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:51.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:30:51 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4332: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:53 np0005603621 nova_compute[247399]: 2026-01-31 09:30:53.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:53.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:53 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:54 np0005603621 nova_compute[247399]: 2026-01-31 09:30:54.435 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:55 np0005603621 nova_compute[247399]: 2026-01-31 09:30:55.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:55.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:55 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:56 np0005603621 nova_compute[247399]: 2026-01-31 09:30:56.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:30:56 np0005603621 nova_compute[247399]: 2026-01-31 09:30:56.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:30:56 np0005603621 nova_compute[247399]: 2026-01-31 09:30:56.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:30:56 np0005603621 nova_compute[247399]: 2026-01-31 09:30:56.225 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:30:56 np0005603621 nova_compute[247399]: 2026-01-31 09:30:56.629 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:57.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:57 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4335: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:30:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:30:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:30:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:30:59.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:30:59 np0005603621 nova_compute[247399]: 2026-01-31 09:30:59.437 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:30:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:30:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:30:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:30:59.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:30:59 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:00 np0005603621 nova_compute[247399]: 2026-01-31 09:31:00.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:01 np0005603621 nova_compute[247399]: 2026-01-31 09:31:01.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:01.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:03.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:03.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:03 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:04 np0005603621 nova_compute[247399]: 2026-01-31 09:31:04.438 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:05.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:05 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4339: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:06 np0005603621 nova_compute[247399]: 2026-01-31 09:31:06.635 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:07.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:07.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:07 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:31:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:31:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:09.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:09 np0005603621 nova_compute[247399]: 2026-01-31 09:31:09.440 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:09.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:09 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:10 np0005603621 podman[436186]: 2026-01-31 09:31:10.514834294 +0000 UTC m=+0.072145584 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:31:10 np0005603621 podman[436187]: 2026-01-31 09:31:10.547034937 +0000 UTC m=+0.103751388 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:31:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:11.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:11 np0005603621 nova_compute[247399]: 2026-01-31 09:31:11.638 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:11 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:13.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:13.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:13 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:14 np0005603621 podman[436453]: 2026-01-31 09:31:14.08506054 +0000 UTC m=+0.051043244 container exec 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:31:14 np0005603621 podman[436453]: 2026-01-31 09:31:14.167635304 +0000 UTC m=+0.133617988 container exec_died 8a056797e460928a5aeedd3e2298bafe24e7f52c20f4a7487148750cf8988d86 (image=quay.io/ceph/ceph:v18, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:31:14 np0005603621 nova_compute[247399]: 2026-01-31 09:31:14.440 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:14 np0005603621 podman[436609]: 2026-01-31 09:31:14.609232613 +0000 UTC m=+0.048877958 container exec e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:31:14 np0005603621 podman[436631]: 2026-01-31 09:31:14.673892254 +0000 UTC m=+0.052596071 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:31:14 np0005603621 podman[436609]: 2026-01-31 09:31:14.894596566 +0000 UTC m=+0.334241861 container exec_died e85bd24af30cc47f8586afc6e524047e12ba657e07d03a65812cdd8f8d12c0fe (image=quay.io/ceph/haproxy:2.3, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-haproxy-rgw-default-compute-0-evwczw)
Jan 31 04:31:15 np0005603621 podman[436676]: 2026-01-31 09:31:15.187596234 +0000 UTC m=+0.053421727 container exec 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, vcs-type=git, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container)
Jan 31 04:31:15 np0005603621 podman[436696]: 2026-01-31 09:31:15.250977487 +0000 UTC m=+0.049749134 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, distribution-scope=public, com.redhat.component=keepalived-container, release=1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 31 04:31:15 np0005603621 podman[436676]: 2026-01-31 09:31:15.261057218 +0000 UTC m=+0.126882691 container exec_died 9f037d7b549dfea18e07bcd207334bdb6be640ac888fb6a410ce9d0db364b790 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-keepalived-rgw-default-compute-0-wujrgc, io.openshift.expose-services=, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:15.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:15 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-crash-compute-0[81764]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4c888882-8210-4185-9987-52bd4b0eab27 does not exist
Jan 31 04:31:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e32fd419-0ccc-47e1-b109-844f79b77fcc does not exist
Jan 31 04:31:15 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev fe8b807e-b109-4ec0-99e8-7b061548ca36 does not exist
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:31:15 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:31:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:15 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:31:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:16 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.413299374 +0000 UTC m=+0.033165433 container create ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:31:16 np0005603621 systemd[1]: Started libpod-conmon-ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773.scope.
Jan 31 04:31:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.481398823 +0000 UTC m=+0.101264902 container init ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.486641474 +0000 UTC m=+0.106507533 container start ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.489927775 +0000 UTC m=+0.109793864 container attach ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 04:31:16 np0005603621 optimistic_bartik[436999]: 167 167
Jan 31 04:31:16 np0005603621 systemd[1]: libpod-ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773.scope: Deactivated successfully.
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.493227137 +0000 UTC m=+0.113093196 container died ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.397416135 +0000 UTC m=+0.017282194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:31:16 np0005603621 systemd[1]: var-lib-containers-storage-overlay-80f199a360eeaf9e93d43ccd13bf6da1e8efb956bea0505333adcb15e3611609-merged.mount: Deactivated successfully.
Jan 31 04:31:16 np0005603621 podman[436982]: 2026-01-31 09:31:16.534242091 +0000 UTC m=+0.154108150 container remove ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_bartik, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:31:16 np0005603621 systemd[1]: libpod-conmon-ab5eb3ad01b709ce8207de5b891b72ae3bb491e4dc49b81860c85da804711773.scope: Deactivated successfully.
Jan 31 04:31:16 np0005603621 nova_compute[247399]: 2026-01-31 09:31:16.640 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:16 np0005603621 podman[437023]: 2026-01-31 09:31:16.662877195 +0000 UTC m=+0.044148292 container create 5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:31:16 np0005603621 systemd[1]: Started libpod-conmon-5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc.scope.
Jan 31 04:31:16 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:31:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170c7c125d2445cf578015ec23966ad2e9c2beda7a13c0d5ef2d41559f6765f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170c7c125d2445cf578015ec23966ad2e9c2beda7a13c0d5ef2d41559f6765f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170c7c125d2445cf578015ec23966ad2e9c2beda7a13c0d5ef2d41559f6765f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170c7c125d2445cf578015ec23966ad2e9c2beda7a13c0d5ef2d41559f6765f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:16 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/170c7c125d2445cf578015ec23966ad2e9c2beda7a13c0d5ef2d41559f6765f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:16 np0005603621 podman[437023]: 2026-01-31 09:31:16.723903005 +0000 UTC m=+0.105174122 container init 5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:31:16 np0005603621 podman[437023]: 2026-01-31 09:31:16.730614282 +0000 UTC m=+0.111885389 container start 5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:31:16 np0005603621 podman[437023]: 2026-01-31 09:31:16.639283857 +0000 UTC m=+0.020554974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:31:16 np0005603621 podman[437023]: 2026-01-31 09:31:16.735965177 +0000 UTC m=+0.117236274 container attach 5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:31:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:17.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:17 np0005603621 optimistic_williamson[437040]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:31:17 np0005603621 optimistic_williamson[437040]: --> relative data size: 1.0
Jan 31 04:31:17 np0005603621 optimistic_williamson[437040]: --> All data devices are unavailable
Jan 31 04:31:17 np0005603621 systemd[1]: libpod-5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc.scope: Deactivated successfully.
Jan 31 04:31:17 np0005603621 podman[437055]: 2026-01-31 09:31:17.570565905 +0000 UTC m=+0.021270897 container died 5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:31:17 np0005603621 systemd[1]: var-lib-containers-storage-overlay-170c7c125d2445cf578015ec23966ad2e9c2beda7a13c0d5ef2d41559f6765f6-merged.mount: Deactivated successfully.
Jan 31 04:31:17 np0005603621 podman[437055]: 2026-01-31 09:31:17.621362681 +0000 UTC m=+0.072067643 container remove 5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williamson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:31:17 np0005603621 systemd[1]: libpod-conmon-5d55b0af91a174104373fa23accebf01edecf8a5011b2624203f443ca32db4dc.scope: Deactivated successfully.
Jan 31 04:31:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:17 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:18 np0005603621 podman[437210]: 2026-01-31 09:31:18.178587831 +0000 UTC m=+0.038108496 container create 0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:31:18 np0005603621 systemd[1]: Started libpod-conmon-0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7.scope.
Jan 31 04:31:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:31:18 np0005603621 podman[437210]: 2026-01-31 09:31:18.244674207 +0000 UTC m=+0.104194892 container init 0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 31 04:31:18 np0005603621 podman[437210]: 2026-01-31 09:31:18.248588168 +0000 UTC m=+0.108108833 container start 0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 31 04:31:18 np0005603621 podman[437210]: 2026-01-31 09:31:18.251562159 +0000 UTC m=+0.111082854 container attach 0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 31 04:31:18 np0005603621 dazzling_kirch[437227]: 167 167
Jan 31 04:31:18 np0005603621 systemd[1]: libpod-0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7.scope: Deactivated successfully.
Jan 31 04:31:18 np0005603621 podman[437210]: 2026-01-31 09:31:18.162804314 +0000 UTC m=+0.022324999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:31:18 np0005603621 podman[437232]: 2026-01-31 09:31:18.287182487 +0000 UTC m=+0.021287357 container died 0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 31 04:31:18 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e5305955a6895c1a25b11af821701f7de5e0cba46c362aa67931231858448abe-merged.mount: Deactivated successfully.
Jan 31 04:31:18 np0005603621 podman[437232]: 2026-01-31 09:31:18.317583944 +0000 UTC m=+0.051688784 container remove 0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kirch, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 04:31:18 np0005603621 systemd[1]: libpod-conmon-0ca79f48f63fdc168856d0e96405fc32e7095815888c27a6e4ae6157db8145c7.scope: Deactivated successfully.
Jan 31 04:31:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:18 np0005603621 podman[437255]: 2026-01-31 09:31:18.431713321 +0000 UTC m=+0.033076770 container create de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:31:18 np0005603621 systemd[1]: Started libpod-conmon-de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591.scope.
Jan 31 04:31:18 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:31:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd6a8cefcca7d8292e459ec8f743f2ceb6211d7513a08ba69fc0ed24f3c8b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd6a8cefcca7d8292e459ec8f743f2ceb6211d7513a08ba69fc0ed24f3c8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd6a8cefcca7d8292e459ec8f743f2ceb6211d7513a08ba69fc0ed24f3c8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:18 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cd6a8cefcca7d8292e459ec8f743f2ceb6211d7513a08ba69fc0ed24f3c8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:18 np0005603621 podman[437255]: 2026-01-31 09:31:18.510143858 +0000 UTC m=+0.111507337 container init de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 04:31:18 np0005603621 podman[437255]: 2026-01-31 09:31:18.416331627 +0000 UTC m=+0.017695106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:31:18 np0005603621 podman[437255]: 2026-01-31 09:31:18.516501954 +0000 UTC m=+0.117865403 container start de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:31:18 np0005603621 podman[437255]: 2026-01-31 09:31:18.51994922 +0000 UTC m=+0.121312699 container attach de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]: {
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:    "0": [
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:        {
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "devices": [
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "/dev/loop3"
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            ],
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "lv_name": "ceph_lv0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "lv_size": "7511998464",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "name": "ceph_lv0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "tags": {
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.cluster_name": "ceph",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.crush_device_class": "",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.encrypted": "0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.osd_id": "0",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.type": "block",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:                "ceph.vdo": "0"
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            },
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "type": "block",
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:            "vg_name": "ceph_vg0"
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:        }
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]:    ]
Jan 31 04:31:19 np0005603621 keen_hodgkin[437271]: }
Jan 31 04:31:19 np0005603621 systemd[1]: libpod-de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591.scope: Deactivated successfully.
Jan 31 04:31:19 np0005603621 podman[437255]: 2026-01-31 09:31:19.268190896 +0000 UTC m=+0.869554345 container died de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:31:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-40cd6a8cefcca7d8292e459ec8f743f2ceb6211d7513a08ba69fc0ed24f3c8b9-merged.mount: Deactivated successfully.
Jan 31 04:31:19 np0005603621 podman[437255]: 2026-01-31 09:31:19.315711581 +0000 UTC m=+0.917075030 container remove de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:31:19 np0005603621 systemd[1]: libpod-conmon-de773286b9f0adb27734153960591de753ecac1c21cc243ec97ba54fd9b4f591.scope: Deactivated successfully.
Jan 31 04:31:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:19.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:19 np0005603621 nova_compute[247399]: 2026-01-31 09:31:19.441 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.756398641 +0000 UTC m=+0.034320869 container create d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_joliot, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:31:19 np0005603621 systemd[1]: Started libpod-conmon-d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216.scope.
Jan 31 04:31:19 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.828344957 +0000 UTC m=+0.106267205 container init d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.834755035 +0000 UTC m=+0.112677263 container start d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.73918827 +0000 UTC m=+0.017110548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.837616803 +0000 UTC m=+0.115539051 container attach d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_joliot, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 31 04:31:19 np0005603621 hardcore_joliot[437451]: 167 167
Jan 31 04:31:19 np0005603621 systemd[1]: libpod-d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216.scope: Deactivated successfully.
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.840251375 +0000 UTC m=+0.118173603 container died d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 31 04:31:19 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b4ced08d572d780eb93f15069aca6cbf23516abdb3fd7f767979234369eb71f4-merged.mount: Deactivated successfully.
Jan 31 04:31:19 np0005603621 podman[437435]: 2026-01-31 09:31:19.86928814 +0000 UTC m=+0.147210368 container remove d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:31:19 np0005603621 systemd[1]: libpod-conmon-d7ca8ecf5f1ea0362c44175cf1e015039139107986cfaf3f94d97f1a18467216.scope: Deactivated successfully.
Jan 31 04:31:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:19 np0005603621 podman[437475]: 2026-01-31 09:31:19.982043283 +0000 UTC m=+0.042008515 container create c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:31:19 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:20 np0005603621 systemd[1]: Started libpod-conmon-c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c.scope.
Jan 31 04:31:20 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:31:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1e41d7ef2a53c13b6cd95735b4e8e85030185d418b444a1fa0c24479fb3fa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1e41d7ef2a53c13b6cd95735b4e8e85030185d418b444a1fa0c24479fb3fa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1e41d7ef2a53c13b6cd95735b4e8e85030185d418b444a1fa0c24479fb3fa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:20 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb1e41d7ef2a53c13b6cd95735b4e8e85030185d418b444a1fa0c24479fb3fa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:31:20 np0005603621 podman[437475]: 2026-01-31 09:31:19.962009347 +0000 UTC m=+0.021974609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:31:20 np0005603621 podman[437475]: 2026-01-31 09:31:20.060937085 +0000 UTC m=+0.120902357 container init c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:31:20 np0005603621 podman[437475]: 2026-01-31 09:31:20.067151837 +0000 UTC m=+0.127117099 container start c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:31:20 np0005603621 podman[437475]: 2026-01-31 09:31:20.070767888 +0000 UTC m=+0.130733140 container attach c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]: {
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:        "osd_id": 0,
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:        "type": "bluestore"
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]:    }
Jan 31 04:31:20 np0005603621 pensive_khayyam[437491]: }
Jan 31 04:31:20 np0005603621 systemd[1]: libpod-c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c.scope: Deactivated successfully.
Jan 31 04:31:20 np0005603621 podman[437475]: 2026-01-31 09:31:20.84191826 +0000 UTC m=+0.901883532 container died c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:31:20 np0005603621 systemd[1]: var-lib-containers-storage-overlay-cb1e41d7ef2a53c13b6cd95735b4e8e85030185d418b444a1fa0c24479fb3fa1-merged.mount: Deactivated successfully.
Jan 31 04:31:20 np0005603621 podman[437475]: 2026-01-31 09:31:20.886640819 +0000 UTC m=+0.946606061 container remove c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:31:20 np0005603621 systemd[1]: libpod-conmon-c88b37f679dcc2b4f30d32e980b2910d428b3e27b66aaf2ffeeb110748352b9c.scope: Deactivated successfully.
Jan 31 04:31:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:31:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:20 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:31:20 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 48953269-764c-4b63-88d7-b343c38af6b6 does not exist
Jan 31 04:31:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c8bbbd16-7c2d-4521-ad7e-39e5eb717d7d does not exist
Jan 31 04:31:20 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev df28b78b-267b-4940-aae2-db561fb39dd7 does not exist
Jan 31 04:31:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:21 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:31:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:21.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:21 np0005603621 nova_compute[247399]: 2026-01-31 09:31:21.642 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:31:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:21.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:31:21 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:23.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:23 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:24 np0005603621 nova_compute[247399]: 2026-01-31 09:31:24.443 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:31:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:25.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:31:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:31:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:31:25 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:26 np0005603621 nova_compute[247399]: 2026-01-31 09:31:26.644 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:27.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:27 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:27.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:29.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:29 np0005603621 nova_compute[247399]: 2026-01-31 09:31:29.445 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:29 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:29.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:31:30.576 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:31:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:31:30.577 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:31:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:31:30.577 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:31:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:31.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:31 np0005603621 nova_compute[247399]: 2026-01-31 09:31:31.645 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:31 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:31.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:33.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:33 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:33.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:34 np0005603621 nova_compute[247399]: 2026-01-31 09:31:34.446 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:35.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:35 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:36 np0005603621 nova_compute[247399]: 2026-01-31 09:31:36.690 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:37 np0005603621 nova_compute[247399]: 2026-01-31 09:31:37.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:31:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:37.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:37 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:31:38
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'images', '.mgr', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta']
Jan 31 04:31:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:31:39 np0005603621 nova_compute[247399]: 2026-01-31 09:31:39.447 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:39.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:39 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:41.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:41 np0005603621 podman[437638]: 2026-01-31 09:31:41.517496024 +0000 UTC m=+0.075070124 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:31:41 np0005603621 podman[437639]: 2026-01-31 09:31:41.560466679 +0000 UTC m=+0.118013378 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 31 04:31:41 np0005603621 nova_compute[247399]: 2026-01-31 09:31:41.691 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:41 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:42.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:43.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:43 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:44.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:44 np0005603621 nova_compute[247399]: 2026-01-31 09:31:44.451 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:45 np0005603621 nova_compute[247399]: 2026-01-31 09:31:45.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:31:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:45.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:45 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:46.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:46 np0005603621 nova_compute[247399]: 2026-01-31 09:31:46.693 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.236 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.236 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.236 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.236 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.237 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:31:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:31:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:47.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:31:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:31:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971646173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.690 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.860 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.861 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4063MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.862 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.862 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.933 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.934 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 31 04:31:47 np0005603621 nova_compute[247399]: 2026-01-31 09:31:47.964 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 31 04:31:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:48.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:31:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730818640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:31:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:48 np0005603621 nova_compute[247399]: 2026-01-31 09:31:48.456 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 31 04:31:48 np0005603621 nova_compute[247399]: 2026-01-31 09:31:48.463 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 31 04:31:48 np0005603621 nova_compute[247399]: 2026-01-31 09:31:48.486 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 31 04:31:48 np0005603621 nova_compute[247399]: 2026-01-31 09:31:48.490 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 31 04:31:48 np0005603621 nova_compute[247399]: 2026-01-31 09:31:48.490 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 31 04:31:49 np0005603621 nova_compute[247399]: 2026-01-31 09:31:49.452 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:31:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:49.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:31:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:31:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.002000062s ======
Jan 31 04:31:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:51.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Jan 31 04:31:51 np0005603621 nova_compute[247399]: 2026-01-31 09:31:51.492 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:51 np0005603621 nova_compute[247399]: 2026-01-31 09:31:51.492 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:51 np0005603621 nova_compute[247399]: 2026-01-31 09:31:51.493 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:31:51 np0005603621 nova_compute[247399]: 2026-01-31 09:31:51.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:52.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:31:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:53.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:31:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:54.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:54 np0005603621 nova_compute[247399]: 2026-01-31 09:31:54.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:54 np0005603621 nova_compute[247399]: 2026-01-31 09:31:54.453 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:55 np0005603621 nova_compute[247399]: 2026-01-31 09:31:55.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:31:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:55.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:31:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:56.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:56 np0005603621 nova_compute[247399]: 2026-01-31 09:31:56.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:56 np0005603621 nova_compute[247399]: 2026-01-31 09:31:56.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:31:56 np0005603621 nova_compute[247399]: 2026-01-31 09:31:56.200 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:31:56 np0005603621 nova_compute[247399]: 2026-01-31 09:31:56.217 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:31:56 np0005603621 nova_compute[247399]: 2026-01-31 09:31:56.699 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:57.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:31:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:31:58.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:31:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:31:59 np0005603621 nova_compute[247399]: 2026-01-31 09:31:59.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:31:59 np0005603621 nova_compute[247399]: 2026-01-31 09:31:59.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 31 04:31:59 np0005603621 nova_compute[247399]: 2026-01-31 09:31:59.242 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 31 04:31:59 np0005603621 nova_compute[247399]: 2026-01-31 09:31:59.473 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:31:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:31:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:31:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:31:59.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:00.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:00 np0005603621 nova_compute[247399]: 2026-01-31 09:32:00.243 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:32:01 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.3 total, 600.0 interval#012Cumulative writes: 62K writes, 235K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 62K writes, 23K syncs, 2.73 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 820 writes, 1255 keys, 820 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 820 writes, 406 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:32:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:01.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:01 np0005603621 nova_compute[247399]: 2026-01-31 09:32:01.735 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:02.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:03.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:04.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:04 np0005603621 nova_compute[247399]: 2026-01-31 09:32:04.475 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:05.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:06.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:06 np0005603621 nova_compute[247399]: 2026-01-31 09:32:06.739 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:07.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:08.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:08 np0005603621 nova_compute[247399]: 2026-01-31 09:32:08.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:08 np0005603621 nova_compute[247399]: 2026-01-31 09:32:08.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 31 04:32:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:32:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:32:09 np0005603621 nova_compute[247399]: 2026-01-31 09:32:09.476 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:09.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:10.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:11 np0005603621 nova_compute[247399]: 2026-01-31 09:32:11.222 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:11 np0005603621 ceph-mgr[74689]: [devicehealth INFO root] Check health
Jan 31 04:32:11 np0005603621 nova_compute[247399]: 2026-01-31 09:32:11.741 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:12 np0005603621 podman[437843]: 2026-01-31 09:32:12.484662545 +0000 UTC m=+0.042972375 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:32:12 np0005603621 podman[437844]: 2026-01-31 09:32:12.503339661 +0000 UTC m=+0.061649361 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:32:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:13.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:14 np0005603621 nova_compute[247399]: 2026-01-31 09:32:14.527 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:32:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:16.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:32:16 np0005603621 nova_compute[247399]: 2026-01-31 09:32:16.743 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:17.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:18.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:19 np0005603621 nova_compute[247399]: 2026-01-31 09:32:19.529 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:20.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:32:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:21.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:32:21 np0005603621 nova_compute[247399]: 2026-01-31 09:32:21.778 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:32:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:32:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:32:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:32:21 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:32:21 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:32:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 02f31c8e-b46f-42d2-83d7-43aa056cea3c does not exist
Jan 31 04:32:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c16ca9fe-0fff-41f3-a8e9-d1feb59fe97a does not exist
Jan 31 04:32:21 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1e85478b-5deb-4bec-82cd-b9934fbd0a29 does not exist
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:32:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:32:22 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:32:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:22.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.519853028 +0000 UTC m=+0.040312754 container create 9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:22 np0005603621 systemd[1]: Started libpod-conmon-9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341.scope.
Jan 31 04:32:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.503275427 +0000 UTC m=+0.023735183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.603096593 +0000 UTC m=+0.123556359 container init 9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.610160491 +0000 UTC m=+0.130620227 container start 9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.61438641 +0000 UTC m=+0.134846296 container attach 9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:32:22 np0005603621 jovial_zhukovsky[438182]: 167 167
Jan 31 04:32:22 np0005603621 systemd[1]: libpod-9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341.scope: Deactivated successfully.
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.616165986 +0000 UTC m=+0.136625732 container died 9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 31 04:32:22 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b5544f0345aa6425bb620e35f113ff39fd747fbe9ca259b1e09c80b3a91e64e8-merged.mount: Deactivated successfully.
Jan 31 04:32:22 np0005603621 podman[438166]: 2026-01-31 09:32:22.667950721 +0000 UTC m=+0.188410447 container remove 9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:22 np0005603621 systemd[1]: libpod-conmon-9a66c147c5bd13064e7d87398e7ce5a7712d775f012212804545808cbc20c341.scope: Deactivated successfully.
Jan 31 04:32:22 np0005603621 podman[438205]: 2026-01-31 09:32:22.809417611 +0000 UTC m=+0.042361076 container create 07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:32:22 np0005603621 systemd[1]: Started libpod-conmon-07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb.scope.
Jan 31 04:32:22 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:32:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac779ded33ac238fc1f2e4075084a49041e30cba4b6cc6eeeb228c618d9f609/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:22 np0005603621 podman[438205]: 2026-01-31 09:32:22.789817377 +0000 UTC m=+0.022760872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:32:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac779ded33ac238fc1f2e4075084a49041e30cba4b6cc6eeeb228c618d9f609/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac779ded33ac238fc1f2e4075084a49041e30cba4b6cc6eeeb228c618d9f609/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac779ded33ac238fc1f2e4075084a49041e30cba4b6cc6eeeb228c618d9f609/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:22 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac779ded33ac238fc1f2e4075084a49041e30cba4b6cc6eeeb228c618d9f609/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:22 np0005603621 podman[438205]: 2026-01-31 09:32:22.904604304 +0000 UTC m=+0.137547799 container init 07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:32:22 np0005603621 podman[438205]: 2026-01-31 09:32:22.909847886 +0000 UTC m=+0.142791361 container start 07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:32:22 np0005603621 podman[438205]: 2026-01-31 09:32:22.91357142 +0000 UTC m=+0.146514915 container attach 07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:32:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:23.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:23 np0005603621 condescending_goldwasser[438222]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:32:23 np0005603621 condescending_goldwasser[438222]: --> relative data size: 1.0
Jan 31 04:32:23 np0005603621 condescending_goldwasser[438222]: --> All data devices are unavailable
Jan 31 04:32:23 np0005603621 systemd[1]: libpod-07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb.scope: Deactivated successfully.
Jan 31 04:32:23 np0005603621 podman[438205]: 2026-01-31 09:32:23.625217259 +0000 UTC m=+0.858160744 container died 07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:32:23 np0005603621 systemd[1]: var-lib-containers-storage-overlay-7ac779ded33ac238fc1f2e4075084a49041e30cba4b6cc6eeeb228c618d9f609-merged.mount: Deactivated successfully.
Jan 31 04:32:23 np0005603621 podman[438205]: 2026-01-31 09:32:23.670138253 +0000 UTC m=+0.903081728 container remove 07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 31 04:32:23 np0005603621 systemd[1]: libpod-conmon-07fb7d86575feafb96ea617684658f90c751517d035baa1a0c7de0b9fe2a6fdb.scope: Deactivated successfully.
Jan 31 04:32:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:24.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.123882286 +0000 UTC m=+0.037188548 container create c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 31 04:32:24 np0005603621 systemd[1]: Started libpod-conmon-c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d.scope.
Jan 31 04:32:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.108214953 +0000 UTC m=+0.021521195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.203171049 +0000 UTC m=+0.116477301 container init c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.211539717 +0000 UTC m=+0.124845939 container start c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.214175378 +0000 UTC m=+0.127481640 container attach c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:32:24 np0005603621 goofy_wu[438407]: 167 167
Jan 31 04:32:24 np0005603621 systemd[1]: libpod-c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d.scope: Deactivated successfully.
Jan 31 04:32:24 np0005603621 conmon[438407]: conmon c827c61cad597be57192 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d.scope/container/memory.events
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.21715101 +0000 UTC m=+0.130457232 container died c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 31 04:32:24 np0005603621 systemd[1]: var-lib-containers-storage-overlay-14abf694e5ad896e6bcb1b2f71a95dd7f27a8f397c7ed8a05ea6b95d1d19be68-merged.mount: Deactivated successfully.
Jan 31 04:32:24 np0005603621 podman[438391]: 2026-01-31 09:32:24.254656015 +0000 UTC m=+0.167962237 container remove c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wu, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:32:24 np0005603621 systemd[1]: libpod-conmon-c827c61cad597be5719261469b11f6b36bd97940a1a2b9e9d79aa5145bb0dd2d.scope: Deactivated successfully.
Jan 31 04:32:24 np0005603621 podman[438431]: 2026-01-31 09:32:24.385019533 +0000 UTC m=+0.057405471 container create 14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:32:24 np0005603621 systemd[1]: Started libpod-conmon-14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10.scope.
Jan 31 04:32:24 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:32:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176b8e117e07411deb573ed5fc188632a6adb70f7aaf73b995cb8333729f31ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176b8e117e07411deb573ed5fc188632a6adb70f7aaf73b995cb8333729f31ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176b8e117e07411deb573ed5fc188632a6adb70f7aaf73b995cb8333729f31ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:24 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176b8e117e07411deb573ed5fc188632a6adb70f7aaf73b995cb8333729f31ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:24 np0005603621 podman[438431]: 2026-01-31 09:32:24.359889858 +0000 UTC m=+0.032275846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:32:24 np0005603621 podman[438431]: 2026-01-31 09:32:24.478532454 +0000 UTC m=+0.150918382 container init 14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jepsen, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:24 np0005603621 podman[438431]: 2026-01-31 09:32:24.484283731 +0000 UTC m=+0.156669639 container start 14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jepsen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:24 np0005603621 podman[438431]: 2026-01-31 09:32:24.488000806 +0000 UTC m=+0.160386764 container attach 14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:24 np0005603621 nova_compute[247399]: 2026-01-31 09:32:24.530 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]: {
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:    "0": [
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:        {
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "devices": [
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "/dev/loop3"
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            ],
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "lv_name": "ceph_lv0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "lv_size": "7511998464",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "name": "ceph_lv0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "tags": {
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.cluster_name": "ceph",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.crush_device_class": "",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.encrypted": "0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.osd_id": "0",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.type": "block",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:                "ceph.vdo": "0"
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            },
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "type": "block",
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:            "vg_name": "ceph_vg0"
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:        }
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]:    ]
Jan 31 04:32:25 np0005603621 jolly_jepsen[438447]: }
Jan 31 04:32:25 np0005603621 systemd[1]: libpod-14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10.scope: Deactivated successfully.
Jan 31 04:32:25 np0005603621 podman[438431]: 2026-01-31 09:32:25.255326571 +0000 UTC m=+0.927712499 container died 14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:32:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-176b8e117e07411deb573ed5fc188632a6adb70f7aaf73b995cb8333729f31ac-merged.mount: Deactivated successfully.
Jan 31 04:32:25 np0005603621 podman[438431]: 2026-01-31 09:32:25.305835067 +0000 UTC m=+0.978220995 container remove 14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_jepsen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:25 np0005603621 systemd[1]: libpod-conmon-14a62a1dd0144565a3591f3575f05b3b2364cfc2e8ad5fa8c04d6e0a8380ef10.scope: Deactivated successfully.
Jan 31 04:32:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:25.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:25 np0005603621 podman[438610]: 2026-01-31 09:32:25.882579069 +0000 UTC m=+0.045614646 container create eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:32:25 np0005603621 systemd[1]: Started libpod-conmon-eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf.scope.
Jan 31 04:32:25 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:32:25 np0005603621 podman[438610]: 2026-01-31 09:32:25.864201163 +0000 UTC m=+0.027236730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:32:25 np0005603621 podman[438610]: 2026-01-31 09:32:25.960114718 +0000 UTC m=+0.123150295 container init eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 31 04:32:25 np0005603621 podman[438610]: 2026-01-31 09:32:25.965869386 +0000 UTC m=+0.128904933 container start eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 31 04:32:25 np0005603621 podman[438610]: 2026-01-31 09:32:25.969327613 +0000 UTC m=+0.132363180 container attach eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:32:25 np0005603621 systemd[1]: libpod-eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf.scope: Deactivated successfully.
Jan 31 04:32:25 np0005603621 competent_mirzakhani[438627]: 167 167
Jan 31 04:32:25 np0005603621 conmon[438627]: conmon eb1e1d33d628b88d4fbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf.scope/container/memory.events
Jan 31 04:32:25 np0005603621 podman[438610]: 2026-01-31 09:32:25.972014985 +0000 UTC m=+0.135050552 container died eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:32:25 np0005603621 systemd[1]: var-lib-containers-storage-overlay-5daee4359d1ce785ecc345a2670f51be97eecc115bfebf43c15baf4abdf6d7d2-merged.mount: Deactivated successfully.
Jan 31 04:32:26 np0005603621 podman[438610]: 2026-01-31 09:32:26.012440771 +0000 UTC m=+0.175476318 container remove eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 31 04:32:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:26 np0005603621 systemd[1]: libpod-conmon-eb1e1d33d628b88d4fbf39e4f8958a477ff36c61230f916a6dfdaafb935a6eaf.scope: Deactivated successfully.
Jan 31 04:32:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:26.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:26 np0005603621 podman[438649]: 2026-01-31 09:32:26.134790771 +0000 UTC m=+0.040634153 container create 9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 04:32:26 np0005603621 systemd[1]: Started libpod-conmon-9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d.scope.
Jan 31 04:32:26 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:32:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b47b3ab040528d4f6e67cc7225ce86495420d1102a2f636268bb1bc2fccfc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b47b3ab040528d4f6e67cc7225ce86495420d1102a2f636268bb1bc2fccfc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b47b3ab040528d4f6e67cc7225ce86495420d1102a2f636268bb1bc2fccfc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:26 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99b47b3ab040528d4f6e67cc7225ce86495420d1102a2f636268bb1bc2fccfc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:32:26 np0005603621 podman[438649]: 2026-01-31 09:32:26.21263834 +0000 UTC m=+0.118481762 container init 9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tharp, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:32:26 np0005603621 podman[438649]: 2026-01-31 09:32:26.117997854 +0000 UTC m=+0.023841236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:32:26 np0005603621 podman[438649]: 2026-01-31 09:32:26.218189531 +0000 UTC m=+0.124032903 container start 9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:32:26 np0005603621 podman[438649]: 2026-01-31 09:32:26.222203385 +0000 UTC m=+0.128046817 container attach 9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:32:26 np0005603621 nova_compute[247399]: 2026-01-31 09:32:26.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]: {
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:        "osd_id": 0,
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:        "type": "bluestore"
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]:    }
Jan 31 04:32:27 np0005603621 thirsty_tharp[438665]: }
Jan 31 04:32:27 np0005603621 systemd[1]: libpod-9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d.scope: Deactivated successfully.
Jan 31 04:32:27 np0005603621 podman[438649]: 2026-01-31 09:32:27.059561287 +0000 UTC m=+0.965404709 container died 9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tharp, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 31 04:32:27 np0005603621 systemd[1]: var-lib-containers-storage-overlay-99b47b3ab040528d4f6e67cc7225ce86495420d1102a2f636268bb1bc2fccfc6-merged.mount: Deactivated successfully.
Jan 31 04:32:27 np0005603621 podman[438649]: 2026-01-31 09:32:27.109015002 +0000 UTC m=+1.014858374 container remove 9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:32:27 np0005603621 systemd[1]: libpod-conmon-9424e9a834de41ce76c6ded7307e3cc26619eaa3b534a415726c1c78f3b17d6d.scope: Deactivated successfully.
Jan 31 04:32:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:32:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:32:27 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:32:27 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:32:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev d9728cc2-6e6f-49fa-87b6-577be18db1fb does not exist
Jan 31 04:32:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev b1a21a83-3373-43fd-9980-f8b4e7a43b36 does not exist
Jan 31 04:32:27 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c2f3859c-36d8-46e2-8287-0be047effe91 does not exist
Jan 31 04:32:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:27.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:28.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:32:28 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:32:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:29.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:29 np0005603621 nova_compute[247399]: 2026-01-31 09:32:29.533 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:30.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:32:30.578 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:32:30.578 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:32:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:32:30.578 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:32:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:31.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:31 np0005603621 nova_compute[247399]: 2026-01-31 09:32:31.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:32.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:32:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:33.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:32:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:32:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:34.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:32:34 np0005603621 nova_compute[247399]: 2026-01-31 09:32:34.200 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:34 np0005603621 nova_compute[247399]: 2026-01-31 09:32:34.534 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:35.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:32:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:36.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:32:36 np0005603621 nova_compute[247399]: 2026-01-31 09:32:36.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:32:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:37.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:38.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:32:38
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', '.mgr']
Jan 31 04:32:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:32:39 np0005603621 nova_compute[247399]: 2026-01-31 09:32:39.254 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:32:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:32:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:39.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:39 np0005603621 nova_compute[247399]: 2026-01-31 09:32:39.537 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:40.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:41.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:41 np0005603621 nova_compute[247399]: 2026-01-31 09:32:41.789 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:42.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:43 np0005603621 podman[438809]: 2026-01-31 09:32:43.490635367 +0000 UTC m=+0.049442405 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Jan 31 04:32:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:43.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:43 np0005603621 podman[438810]: 2026-01-31 09:32:43.532456775 +0000 UTC m=+0.091039686 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Jan 31 04:32:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:44 np0005603621 nova_compute[247399]: 2026-01-31 09:32:44.574 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:44 np0005603621 nova_compute[247399]: 2026-01-31 09:32:44.953 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:45.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:46 np0005603621 nova_compute[247399]: 2026-01-31 09:32:46.820 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.241 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.242 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.242 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.242 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.242 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:32:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:47.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:32:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648844090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.643 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.753 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.754 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4041MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.754 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.754 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.966 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.966 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:32:47 np0005603621 nova_compute[247399]: 2026-01-31 09:32:47.983 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:32:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:48.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:32:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253745775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:32:48 np0005603621 nova_compute[247399]: 2026-01-31 09:32:48.396 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:32:48 np0005603621 nova_compute[247399]: 2026-01-31 09:32:48.401 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:32:48 np0005603621 nova_compute[247399]: 2026-01-31 09:32:48.420 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:32:48 np0005603621 nova_compute[247399]: 2026-01-31 09:32:48.423 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:32:48 np0005603621 nova_compute[247399]: 2026-01-31 09:32:48.424 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:32:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:49.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:49 np0005603621 nova_compute[247399]: 2026-01-31 09:32:49.577 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:32:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:32:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:50.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:51.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:51 np0005603621 nova_compute[247399]: 2026-01-31 09:32:51.823 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:52.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:52 np0005603621 nova_compute[247399]: 2026-01-31 09:32:52.425 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:52 np0005603621 nova_compute[247399]: 2026-01-31 09:32:52.425 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:52 np0005603621 nova_compute[247399]: 2026-01-31 09:32:52.425 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:32:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:53.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:54.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:54 np0005603621 nova_compute[247399]: 2026-01-31 09:32:54.578 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:55 np0005603621 nova_compute[247399]: 2026-01-31 09:32:55.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:55.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:56.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:56 np0005603621 nova_compute[247399]: 2026-01-31 09:32:56.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:56 np0005603621 nova_compute[247399]: 2026-01-31 09:32:56.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:32:57 np0005603621 nova_compute[247399]: 2026-01-31 09:32:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:32:57 np0005603621 nova_compute[247399]: 2026-01-31 09:32:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:32:57 np0005603621 nova_compute[247399]: 2026-01-31 09:32:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:32:57 np0005603621 nova_compute[247399]: 2026-01-31 09:32:57.215 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:32:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:57.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:32:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:32:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:32:58.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:32:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:32:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:32:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:32:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:32:59.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:32:59 np0005603621 nova_compute[247399]: 2026-01-31 09:32:59.580 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:00.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:01.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:01 np0005603621 nova_compute[247399]: 2026-01-31 09:33:01.871 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:02.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:02 np0005603621 nova_compute[247399]: 2026-01-31 09:33:02.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:03.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:04.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:04 np0005603621 nova_compute[247399]: 2026-01-31 09:33:04.582 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:33:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:05.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:33:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:06.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:06 np0005603621 nova_compute[247399]: 2026-01-31 09:33:06.873 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:07.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:33:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:33:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:09.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:09 np0005603621 nova_compute[247399]: 2026-01-31 09:33:09.584 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:10.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:11.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:11 np0005603621 nova_compute[247399]: 2026-01-31 09:33:11.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:13.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:14.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:14 np0005603621 podman[439013]: 2026-01-31 09:33:14.529925466 +0000 UTC m=+0.087715264 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 31 04:33:14 np0005603621 podman[439014]: 2026-01-31 09:33:14.530493874 +0000 UTC m=+0.085154736 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 31 04:33:14 np0005603621 nova_compute[247399]: 2026-01-31 09:33:14.632 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:15.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:16.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:16 np0005603621 nova_compute[247399]: 2026-01-31 09:33:16.935 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:17.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:18.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:19.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:19 np0005603621 nova_compute[247399]: 2026-01-31 09:33:19.633 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4406: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:20.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:21.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:21 np0005603621 nova_compute[247399]: 2026-01-31 09:33:21.938 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:22.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:23.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:24.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:24 np0005603621 nova_compute[247399]: 2026-01-31 09:33:24.660 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:25.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:26 np0005603621 nova_compute[247399]: 2026-01-31 09:33:26.970 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:27.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:28.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:29 np0005603621 nova_compute[247399]: 2026-01-31 09:33:29.662 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 8d79e6c8-c9d0-491d-a7d9-e8797add1272 does not exist
Jan 31 04:33:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev c6b607b3-4b3b-4326-a152-f8b777bf9039 does not exist
Jan 31 04:33:29 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 7dcb350a-871d-4493-914d-57471d021cf4 does not exist
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:33:29 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:33:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:30.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:33:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:30 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.36281649 +0000 UTC m=+0.044897884 container create d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:33:30 np0005603621 systemd[1]: Started libpod-conmon-d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d.scope.
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.341646927 +0000 UTC m=+0.023728351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:33:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.508270114 +0000 UTC m=+0.190351538 container init d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shtern, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.516279741 +0000 UTC m=+0.198361135 container start d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shtern, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.521085638 +0000 UTC m=+0.203167032 container attach d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:33:30 np0005603621 suspicious_shtern[439352]: 167 167
Jan 31 04:33:30 np0005603621 systemd[1]: libpod-d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d.scope: Deactivated successfully.
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.522295896 +0000 UTC m=+0.204377290 container died d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shtern, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 31 04:33:30 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d95b0c7d95526b182f24b197127489904e8b7f71c842583d7139d264ee2bc149-merged.mount: Deactivated successfully.
Jan 31 04:33:30 np0005603621 podman[439336]: 2026-01-31 09:33:30.567379324 +0000 UTC m=+0.249460718 container remove d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:33:30 np0005603621 systemd[1]: libpod-conmon-d993fec6bc8baf84e560b8edc7ed71037bb7750ec0cc21b29c89e246943ca62d.scope: Deactivated successfully.
Jan 31 04:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:33:30.578 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:33:30.579 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:33:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:33:30.580 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:33:30 np0005603621 podman[439377]: 2026-01-31 09:33:30.703633292 +0000 UTC m=+0.041897762 container create 2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 31 04:33:30 np0005603621 systemd[1]: Started libpod-conmon-2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b.scope.
Jan 31 04:33:30 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:33:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cf93ebbe76ce7a28d7ec2b77cd95e14c8d79c33f0bb255b4e30670b1f70b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cf93ebbe76ce7a28d7ec2b77cd95e14c8d79c33f0bb255b4e30670b1f70b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cf93ebbe76ce7a28d7ec2b77cd95e14c8d79c33f0bb255b4e30670b1f70b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cf93ebbe76ce7a28d7ec2b77cd95e14c8d79c33f0bb255b4e30670b1f70b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:30 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65cf93ebbe76ce7a28d7ec2b77cd95e14c8d79c33f0bb255b4e30670b1f70b7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:30 np0005603621 podman[439377]: 2026-01-31 09:33:30.773321539 +0000 UTC m=+0.111586019 container init 2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:33:30 np0005603621 podman[439377]: 2026-01-31 09:33:30.780081018 +0000 UTC m=+0.118345478 container start 2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 31 04:33:30 np0005603621 podman[439377]: 2026-01-31 09:33:30.685994298 +0000 UTC m=+0.024258778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:33:30 np0005603621 podman[439377]: 2026-01-31 09:33:30.783583185 +0000 UTC m=+0.121847645 container attach 2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:33:31 np0005603621 quirky_hermann[439393]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:33:31 np0005603621 quirky_hermann[439393]: --> relative data size: 1.0
Jan 31 04:33:31 np0005603621 quirky_hermann[439393]: --> All data devices are unavailable
Jan 31 04:33:31 np0005603621 systemd[1]: libpod-2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b.scope: Deactivated successfully.
Jan 31 04:33:31 np0005603621 podman[439377]: 2026-01-31 09:33:31.557652779 +0000 UTC m=+0.895917269 container died 2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:33:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:31 np0005603621 systemd[1]: var-lib-containers-storage-overlay-65cf93ebbe76ce7a28d7ec2b77cd95e14c8d79c33f0bb255b4e30670b1f70b7f-merged.mount: Deactivated successfully.
Jan 31 04:33:31 np0005603621 podman[439377]: 2026-01-31 09:33:31.60800428 +0000 UTC m=+0.946268740 container remove 2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_hermann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:33:31 np0005603621 systemd[1]: libpod-conmon-2ffc0d38bbf3c835564604ef035332f0c66f4505eb49338d6e07bfdc25958d5b.scope: Deactivated successfully.
Jan 31 04:33:31 np0005603621 nova_compute[247399]: 2026-01-31 09:33:31.972 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.047923946 +0000 UTC m=+0.035935028 container create c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 31 04:33:32 np0005603621 systemd[1]: Started libpod-conmon-c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a.scope.
Jan 31 04:33:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.107347578 +0000 UTC m=+0.095358680 container init c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.113511737 +0000 UTC m=+0.101522819 container start c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:33:32 np0005603621 amazing_golick[439626]: 167 167
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.116665874 +0000 UTC m=+0.104676976 container attach c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 31 04:33:32 np0005603621 systemd[1]: libpod-c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a.scope: Deactivated successfully.
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.116905602 +0000 UTC m=+0.104916684 container died c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.031932093 +0000 UTC m=+0.019943205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:33:32 np0005603621 systemd[1]: var-lib-containers-storage-overlay-23e9755193cbd52877e016e8f64982e8ce42b31c9082f7ff660903e9885ec405-merged.mount: Deactivated successfully.
Jan 31 04:33:32 np0005603621 podman[439610]: 2026-01-31 09:33:32.149240458 +0000 UTC m=+0.137251540 container remove c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_golick, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 31 04:33:32 np0005603621 systemd[1]: libpod-conmon-c2d37cea26fa81c785ac488669da28da1b1992c711b2996e73c603e7f18fae2a.scope: Deactivated successfully.
Jan 31 04:33:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:32.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:32 np0005603621 podman[439651]: 2026-01-31 09:33:32.278597784 +0000 UTC m=+0.037420814 container create 61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:33:32 np0005603621 systemd[1]: Started libpod-conmon-61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5.scope.
Jan 31 04:33:32 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:33:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f604eb19ca077f6d65c56b3373a8c1437a11eedcc2fa5457226386d96e8741aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f604eb19ca077f6d65c56b3373a8c1437a11eedcc2fa5457226386d96e8741aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f604eb19ca077f6d65c56b3373a8c1437a11eedcc2fa5457226386d96e8741aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:32 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f604eb19ca077f6d65c56b3373a8c1437a11eedcc2fa5457226386d96e8741aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:32 np0005603621 podman[439651]: 2026-01-31 09:33:32.265251493 +0000 UTC m=+0.024074543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:33:32 np0005603621 podman[439651]: 2026-01-31 09:33:32.359681322 +0000 UTC m=+0.118504362 container init 61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcclintock, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:33:32 np0005603621 podman[439651]: 2026-01-31 09:33:32.366000217 +0000 UTC m=+0.124823247 container start 61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:33:32 np0005603621 podman[439651]: 2026-01-31 09:33:32.368825714 +0000 UTC m=+0.127648774 container attach 61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcclintock, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]: {
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:    "0": [
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:        {
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "devices": [
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "/dev/loop3"
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            ],
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "lv_name": "ceph_lv0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "lv_size": "7511998464",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "name": "ceph_lv0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "tags": {
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.cluster_name": "ceph",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.crush_device_class": "",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.encrypted": "0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.osd_id": "0",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.type": "block",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:                "ceph.vdo": "0"
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            },
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "type": "block",
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:            "vg_name": "ceph_vg0"
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:        }
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]:    ]
Jan 31 04:33:33 np0005603621 wonderful_mcclintock[439668]: }
Jan 31 04:33:33 np0005603621 systemd[1]: libpod-61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5.scope: Deactivated successfully.
Jan 31 04:33:33 np0005603621 podman[439651]: 2026-01-31 09:33:33.099546581 +0000 UTC m=+0.858369631 container died 61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcclintock, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 31 04:33:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-f604eb19ca077f6d65c56b3373a8c1437a11eedcc2fa5457226386d96e8741aa-merged.mount: Deactivated successfully.
Jan 31 04:33:33 np0005603621 podman[439651]: 2026-01-31 09:33:33.150980147 +0000 UTC m=+0.909803177 container remove 61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mcclintock, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:33:33 np0005603621 systemd[1]: libpod-conmon-61e5eb00048461ca989070e602a9b79a2d990fbec49286ee1b5a346fadf0e4f5.scope: Deactivated successfully.
Jan 31 04:33:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:33.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.761049626 +0000 UTC m=+0.055134680 container create 97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 31 04:33:33 np0005603621 systemd[1]: Started libpod-conmon-97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126.scope.
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.739712578 +0000 UTC m=+0.033797682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:33:33 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.854859786 +0000 UTC m=+0.148944860 container init 97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.864281027 +0000 UTC m=+0.158366081 container start 97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.868656211 +0000 UTC m=+0.162741265 container attach 97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:33:33 np0005603621 systemd[1]: libpod-97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126.scope: Deactivated successfully.
Jan 31 04:33:33 np0005603621 competent_elion[439848]: 167 167
Jan 31 04:33:33 np0005603621 conmon[439848]: conmon 97136336c909cb33f063 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126.scope/container/memory.events
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.870141007 +0000 UTC m=+0.164226051 container died 97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:33:33 np0005603621 systemd[1]: var-lib-containers-storage-overlay-b5212e398658cbb7b83ae632ae7e8f0f862c4a6f510f72edfab0615fdc0df17c-merged.mount: Deactivated successfully.
Jan 31 04:33:33 np0005603621 podman[439831]: 2026-01-31 09:33:33.915071412 +0000 UTC m=+0.209156466 container remove 97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elion, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:33:33 np0005603621 systemd[1]: libpod-conmon-97136336c909cb33f063ef335baf66b446d809b07706d3b6d3313e89b86ba126.scope: Deactivated successfully.
Jan 31 04:33:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Jan 31 04:33:34 np0005603621 podman[439870]: 2026-01-31 09:33:34.083324107 +0000 UTC m=+0.052330485 container create 6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:33:34 np0005603621 systemd[1]: Started libpod-conmon-6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043.scope.
Jan 31 04:33:34 np0005603621 podman[439870]: 2026-01-31 09:33:34.061095532 +0000 UTC m=+0.030101900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:33:34 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:33:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1c41572f4ebea16ebb63b6c162c15c68e0a0792c535243a9ea4bf4d6e30706/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1c41572f4ebea16ebb63b6c162c15c68e0a0792c535243a9ea4bf4d6e30706/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1c41572f4ebea16ebb63b6c162c15c68e0a0792c535243a9ea4bf4d6e30706/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:34 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1c41572f4ebea16ebb63b6c162c15c68e0a0792c535243a9ea4bf4d6e30706/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:33:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:34.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:34 np0005603621 podman[439870]: 2026-01-31 09:33:34.188130006 +0000 UTC m=+0.157136414 container init 6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_feynman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:33:34 np0005603621 podman[439870]: 2026-01-31 09:33:34.195681139 +0000 UTC m=+0.164687497 container start 6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:33:34 np0005603621 podman[439870]: 2026-01-31 09:33:34.199703292 +0000 UTC m=+0.168709690 container attach 6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:33:34 np0005603621 nova_compute[247399]: 2026-01-31 09:33:34.693 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]: {
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:        "osd_id": 0,
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:        "type": "bluestore"
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]:    }
Jan 31 04:33:34 np0005603621 sharp_feynman[439887]: }
Jan 31 04:33:35 np0005603621 systemd[1]: libpod-6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043.scope: Deactivated successfully.
Jan 31 04:33:35 np0005603621 podman[439870]: 2026-01-31 09:33:35.013892961 +0000 UTC m=+0.982899309 container died 6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:33:35 np0005603621 systemd[1]: var-lib-containers-storage-overlay-bd1c41572f4ebea16ebb63b6c162c15c68e0a0792c535243a9ea4bf4d6e30706-merged.mount: Deactivated successfully.
Jan 31 04:33:35 np0005603621 podman[439870]: 2026-01-31 09:33:35.062899512 +0000 UTC m=+1.031905870 container remove 6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_feynman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:33:35 np0005603621 systemd[1]: libpod-conmon-6b19b9f2c28f9ab46d062f5b18b6d52d41a1b7438aef065687341a3120eec043.scope: Deactivated successfully.
Jan 31 04:33:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:33:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:33:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 4cce684b-a666-44a2-b3be-ba4a5fcd2515 does not exist
Jan 31 04:33:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev da7ce902-b078-49ac-8ca7-843c5b07776b does not exist
Jan 31 04:33:35 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 5ee724cf-468b-45f8-a350-f6da3e35b2c5 does not exist
Jan 31 04:33:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Jan 31 04:33:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:33:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:36.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:37 np0005603621 nova_compute[247399]: 2026-01-31 09:33:37.011 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 KiB/s rd, 0 B/s wr, 141 op/s
Jan 31 04:33:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:38.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:33:38
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['.mgr', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Jan 31 04:33:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:33:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:33:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:39.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:39 np0005603621 nova_compute[247399]: 2026-01-31 09:33:39.692 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 31 04:33:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:40.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:41 np0005603621 nova_compute[247399]: 2026-01-31 09:33:41.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:41.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:42 np0005603621 nova_compute[247399]: 2026-01-31 09:33:42.013 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 31 04:33:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:42.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:33:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:43.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:33:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 178 op/s
Jan 31 04:33:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:44.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:44 np0005603621 nova_compute[247399]: 2026-01-31 09:33:44.695 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:45 np0005603621 podman[439979]: 2026-01-31 09:33:45.491558889 +0000 UTC m=+0.049773815 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:33:45 np0005603621 podman[439980]: 2026-01-31 09:33:45.512637118 +0000 UTC m=+0.070620677 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:33:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:45.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Jan 31 04:33:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:46.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.014 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.234 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.235 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.235 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.235 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:33:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:33:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590740182' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.644 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.783 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.784 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4051MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.784 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.784 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.842 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.843 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:33:47 np0005603621 nova_compute[247399]: 2026-01-31 09:33:47.858 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:33:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 156 op/s
Jan 31 04:33:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:48.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:33:48 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4004956576' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:33:48 np0005603621 nova_compute[247399]: 2026-01-31 09:33:48.286 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:33:48 np0005603621 nova_compute[247399]: 2026-01-31 09:33:48.290 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:33:48 np0005603621 nova_compute[247399]: 2026-01-31 09:33:48.305 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:33:48 np0005603621 nova_compute[247399]: 2026-01-31 09:33:48.306 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:33:48 np0005603621 nova_compute[247399]: 2026-01-31 09:33:48.307 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:33:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:49.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:49 np0005603621 nova_compute[247399]: 2026-01-31 09:33:49.697 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:33:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:33:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:50 np0005603621 nova_compute[247399]: 2026-01-31 09:33:50.307 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:51 np0005603621 nova_compute[247399]: 2026-01-31 09:33:51.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:51 np0005603621 nova_compute[247399]: 2026-01-31 09:33:51.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:33:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:51.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:52 np0005603621 nova_compute[247399]: 2026-01-31 09:33:52.017 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:52.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:53.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:54 np0005603621 nova_compute[247399]: 2026-01-31 09:33:54.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:54.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:54 np0005603621 nova_compute[247399]: 2026-01-31 09:33:54.700 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:55.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:56 np0005603621 nova_compute[247399]: 2026-01-31 09:33:56.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:57 np0005603621 nova_compute[247399]: 2026-01-31 09:33:57.019 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:33:57 np0005603621 nova_compute[247399]: 2026-01-31 09:33:57.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:57 np0005603621 nova_compute[247399]: 2026-01-31 09:33:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:33:57 np0005603621 nova_compute[247399]: 2026-01-31 09:33:57.199 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:33:57 np0005603621 nova_compute[247399]: 2026-01-31 09:33:57.217 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:33:57 np0005603621 nova_compute[247399]: 2026-01-31 09:33:57.218 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:33:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:57.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:33:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:33:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:33:58.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:33:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:33:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:33:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:33:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:33:59.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:33:59 np0005603621 nova_compute[247399]: 2026-01-31 09:33:59.737 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:34:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:01.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:34:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:02 np0005603621 nova_compute[247399]: 2026-01-31 09:34:02.061 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:34:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:02.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:34:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:03.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:04 np0005603621 nova_compute[247399]: 2026-01-31 09:34:04.199 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:04 np0005603621 nova_compute[247399]: 2026-01-31 09:34:04.738 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:05.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:07 np0005603621 nova_compute[247399]: 2026-01-31 09:34:07.064 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:07.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:34:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:34:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:09.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:09 np0005603621 nova_compute[247399]: 2026-01-31 09:34:09.779 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:11.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:12 np0005603621 nova_compute[247399]: 2026-01-31 09:34:12.113 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:13.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4433: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:14 np0005603621 nova_compute[247399]: 2026-01-31 09:34:14.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:14.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:14 np0005603621 nova_compute[247399]: 2026-01-31 09:34:14.780 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:15.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:16 np0005603621 podman[440185]: 2026-01-31 09:34:16.541087014 +0000 UTC m=+0.080629326 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 31 04:34:16 np0005603621 podman[440186]: 2026-01-31 09:34:16.544926883 +0000 UTC m=+0.086532699 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:34:17 np0005603621 nova_compute[247399]: 2026-01-31 09:34:17.115 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:19 np0005603621 nova_compute[247399]: 2026-01-31 09:34:19.783 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:34:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:21.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:34:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:22 np0005603621 nova_compute[247399]: 2026-01-31 09:34:22.118 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:24 np0005603621 nova_compute[247399]: 2026-01-31 09:34:24.782 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:25.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:26.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:27 np0005603621 nova_compute[247399]: 2026-01-31 09:34:27.121 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:29 np0005603621 nova_compute[247399]: 2026-01-31 09:34:29.784 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:30.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:34:30.580 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:34:30.580 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:34:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:34:30.581 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:34:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:32 np0005603621 nova_compute[247399]: 2026-01-31 09:34:32.124 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:32.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.322691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852072322712, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 2094, "num_deletes": 251, "total_data_size": 3940684, "memory_usage": 4017952, "flush_reason": "Manual Compaction"}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852072391680, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 3865367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 95085, "largest_seqno": 97178, "table_properties": {"data_size": 3855846, "index_size": 6078, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19097, "raw_average_key_size": 20, "raw_value_size": 3836977, "raw_average_value_size": 4073, "num_data_blocks": 266, "num_entries": 942, "num_filter_entries": 942, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769851847, "oldest_key_time": 1769851847, "file_creation_time": 1769852072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 69037 microseconds, and 5286 cpu microseconds.
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.391723) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 3865367 bytes OK
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.391767) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.397126) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.397140) EVENT_LOG_v1 {"time_micros": 1769852072397134, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.397157) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 3932207, prev total WAL file size 3932207, number of live WAL files 2.
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.397827) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(3774KB)], [221(12MB)]
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852072397852, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 16903634, "oldest_snapshot_seqno": -1}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 12593 keys, 14843055 bytes, temperature: kUnknown
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852072584280, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 14843055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14764229, "index_size": 46265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31493, "raw_key_size": 332814, "raw_average_key_size": 26, "raw_value_size": 14546989, "raw_average_value_size": 1155, "num_data_blocks": 1752, "num_entries": 12593, "num_filter_entries": 12593, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769852072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.584526) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 14843055 bytes
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.585782) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.6 rd, 79.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.4 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 13110, records dropped: 517 output_compression: NoCompression
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.585798) EVENT_LOG_v1 {"time_micros": 1769852072585791, "job": 140, "event": "compaction_finished", "compaction_time_micros": 186500, "compaction_time_cpu_micros": 26101, "output_level": 6, "num_output_files": 1, "total_output_size": 14843055, "num_input_records": 13110, "num_output_records": 12593, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852072586247, "job": 140, "event": "table_file_deletion", "file_number": 223}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852072587423, "job": 140, "event": "table_file_deletion", "file_number": 221}
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.397761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.587533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.587540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.587542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.587544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:34:32 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:34:32.587545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:34:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:33.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:34.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:34 np0005603621 nova_compute[247399]: 2026-01-31 09:34:34.785 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:35.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:34:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 058c1159-4ccf-46f9-9f71-0948f64e2fd4 does not exist
Jan 31 04:34:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 04520f82-107f-4ac8-9a20-555ba553f067 does not exist
Jan 31 04:34:36 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 12466e29-b837-483b-9ca6-756ca79a9039 does not exist
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:34:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:36.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:34:36 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.622887804 +0000 UTC m=+0.043523442 container create a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 31 04:34:36 np0005603621 systemd[1]: Started libpod-conmon-a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe.scope.
Jan 31 04:34:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.690438636 +0000 UTC m=+0.111074284 container init a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.696980697 +0000 UTC m=+0.117616335 container start a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.604234579 +0000 UTC m=+0.024870257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.700674021 +0000 UTC m=+0.121309679 container attach a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 31 04:34:36 np0005603621 systemd[1]: libpod-a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe.scope: Deactivated successfully.
Jan 31 04:34:36 np0005603621 peaceful_thompson[440574]: 167 167
Jan 31 04:34:36 np0005603621 conmon[440574]: conmon a827210083aaa7d425e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe.scope/container/memory.events
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.702787436 +0000 UTC m=+0.123423074 container died a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:34:36 np0005603621 systemd[1]: var-lib-containers-storage-overlay-4e23fa2b13450a888363bd6a06eb281f11c7e31e0f3b5a9d5f1af4fa6efbbcfd-merged.mount: Deactivated successfully.
Jan 31 04:34:36 np0005603621 podman[440559]: 2026-01-31 09:34:36.743862071 +0000 UTC m=+0.164497709 container remove a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:34:36 np0005603621 systemd[1]: libpod-conmon-a827210083aaa7d425e58247702e12e4f7272a7bb746734b69010e0bf1fbf3fe.scope: Deactivated successfully.
Jan 31 04:34:36 np0005603621 podman[440598]: 2026-01-31 09:34:36.882312888 +0000 UTC m=+0.044751440 container create 9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chaplygin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 31 04:34:36 np0005603621 systemd[1]: Started libpod-conmon-9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603.scope.
Jan 31 04:34:36 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:34:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0628362521456965f4db02ed58d68540265dddf27946f68e93c0e92d9e5a0a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0628362521456965f4db02ed58d68540265dddf27946f68e93c0e92d9e5a0a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0628362521456965f4db02ed58d68540265dddf27946f68e93c0e92d9e5a0a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0628362521456965f4db02ed58d68540265dddf27946f68e93c0e92d9e5a0a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:36 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0628362521456965f4db02ed58d68540265dddf27946f68e93c0e92d9e5a0a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:36 np0005603621 podman[440598]: 2026-01-31 09:34:36.857195684 +0000 UTC m=+0.019634256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:34:36 np0005603621 podman[440598]: 2026-01-31 09:34:36.968945278 +0000 UTC m=+0.131383830 container init 9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chaplygin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:34:36 np0005603621 podman[440598]: 2026-01-31 09:34:36.973993283 +0000 UTC m=+0.136431825 container start 9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 31 04:34:36 np0005603621 podman[440598]: 2026-01-31 09:34:36.992033169 +0000 UTC m=+0.154471711 container attach 9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 31 04:34:37 np0005603621 nova_compute[247399]: 2026-01-31 09:34:37.126 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:37.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:37 np0005603621 funny_chaplygin[440614]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:34:37 np0005603621 funny_chaplygin[440614]: --> relative data size: 1.0
Jan 31 04:34:37 np0005603621 funny_chaplygin[440614]: --> All data devices are unavailable
Jan 31 04:34:37 np0005603621 systemd[1]: libpod-9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603.scope: Deactivated successfully.
Jan 31 04:34:37 np0005603621 podman[440598]: 2026-01-31 09:34:37.703924696 +0000 UTC m=+0.866363238 container died 9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 31 04:34:37 np0005603621 systemd[1]: var-lib-containers-storage-overlay-d0628362521456965f4db02ed58d68540265dddf27946f68e93c0e92d9e5a0a7-merged.mount: Deactivated successfully.
Jan 31 04:34:37 np0005603621 podman[440598]: 2026-01-31 09:34:37.75400798 +0000 UTC m=+0.916446522 container remove 9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:34:37 np0005603621 systemd[1]: libpod-conmon-9e60c1265dc7ede86fc0e349dc03371e3aff240d98cfc844d96b9fc52ca90603.scope: Deactivated successfully.
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.200696414 +0000 UTC m=+0.032742460 container create b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:34:38 np0005603621 systemd[1]: Started libpod-conmon-b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666.scope.
Jan 31 04:34:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.25056249 +0000 UTC m=+0.082608556 container init b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.256504073 +0000 UTC m=+0.088550119 container start b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.258941078 +0000 UTC m=+0.090987124 container attach b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:34:38 np0005603621 lucid_goldwasser[440800]: 167 167
Jan 31 04:34:38 np0005603621 systemd[1]: libpod-b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666.scope: Deactivated successfully.
Jan 31 04:34:38 np0005603621 conmon[440800]: conmon b52d33bc12aa0cc939b5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666.scope/container/memory.events
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.260699303 +0000 UTC m=+0.092745369 container died b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:34:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:38.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:38 np0005603621 systemd[1]: var-lib-containers-storage-overlay-445446db12aa4986a7c9e085b94c3e06286502fc336f422195eca7f227d44546-merged.mount: Deactivated successfully.
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.186902379 +0000 UTC m=+0.018948445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:34:38 np0005603621 podman[440784]: 2026-01-31 09:34:38.290320295 +0000 UTC m=+0.122366341 container remove b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 31 04:34:38 np0005603621 systemd[1]: libpod-conmon-b52d33bc12aa0cc939b58aea338389e7af79f650d009bcf632046d5464821666.scope: Deactivated successfully.
Jan 31 04:34:38 np0005603621 podman[440824]: 2026-01-31 09:34:38.399724577 +0000 UTC m=+0.036677442 container create 00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 31 04:34:38 np0005603621 systemd[1]: Started libpod-conmon-00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a.scope.
Jan 31 04:34:38 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:34:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c761be88be4fa439f1b04b136119ffc411422aafba95017ddf211255a18827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c761be88be4fa439f1b04b136119ffc411422aafba95017ddf211255a18827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c761be88be4fa439f1b04b136119ffc411422aafba95017ddf211255a18827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:38 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0c761be88be4fa439f1b04b136119ffc411422aafba95017ddf211255a18827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:38 np0005603621 podman[440824]: 2026-01-31 09:34:38.476918435 +0000 UTC m=+0.113871390 container init 00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:34:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:38 np0005603621 podman[440824]: 2026-01-31 09:34:38.386314064 +0000 UTC m=+0.023266939 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:34:38 np0005603621 podman[440824]: 2026-01-31 09:34:38.483035404 +0000 UTC m=+0.119988269 container start 00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_engelbart, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:34:38 np0005603621 podman[440824]: 2026-01-31 09:34:38.486037916 +0000 UTC m=+0.122990811 container attach 00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:34:38
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['backups', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'vms']
Jan 31 04:34:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]: {
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:    "0": [
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:        {
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "devices": [
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "/dev/loop3"
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            ],
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "lv_name": "ceph_lv0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "lv_size": "7511998464",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "name": "ceph_lv0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "tags": {
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.cluster_name": "ceph",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.crush_device_class": "",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.encrypted": "0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.osd_id": "0",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.type": "block",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:                "ceph.vdo": "0"
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            },
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "type": "block",
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:            "vg_name": "ceph_vg0"
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:        }
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]:    ]
Jan 31 04:34:39 np0005603621 sleepy_engelbart[440841]: }
Jan 31 04:34:39 np0005603621 systemd[1]: libpod-00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a.scope: Deactivated successfully.
Jan 31 04:34:39 np0005603621 conmon[440841]: conmon 00f423bd1dc1ce071ec1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a.scope/container/memory.events
Jan 31 04:34:39 np0005603621 podman[440824]: 2026-01-31 09:34:39.206184928 +0000 UTC m=+0.843137793 container died 00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:34:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e0c761be88be4fa439f1b04b136119ffc411422aafba95017ddf211255a18827-merged.mount: Deactivated successfully.
Jan 31 04:34:39 np0005603621 podman[440824]: 2026-01-31 09:34:39.24939061 +0000 UTC m=+0.886343485 container remove 00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 31 04:34:39 np0005603621 systemd[1]: libpod-conmon-00f423bd1dc1ce071ec1a7167dabd563611623c5b2eb10d225f265f20b164a1a.scope: Deactivated successfully.
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:34:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:34:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.761130448 +0000 UTC m=+0.036494115 container create 2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 04:34:39 np0005603621 nova_compute[247399]: 2026-01-31 09:34:39.787 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:39 np0005603621 systemd[1]: Started libpod-conmon-2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e.scope.
Jan 31 04:34:39 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.745862208 +0000 UTC m=+0.021225895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.842228467 +0000 UTC m=+0.117592174 container init 2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.84783977 +0000 UTC m=+0.123203457 container start 2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.851205854 +0000 UTC m=+0.126569531 container attach 2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:34:39 np0005603621 peaceful_newton[441024]: 167 167
Jan 31 04:34:39 np0005603621 systemd[1]: libpod-2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e.scope: Deactivated successfully.
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.856312251 +0000 UTC m=+0.131675918 container died 2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 31 04:34:39 np0005603621 systemd[1]: var-lib-containers-storage-overlay-56bb00a817824c40c70b85c9c1cf20567c091ea1cbe99760d06e1786a0a75a28-merged.mount: Deactivated successfully.
Jan 31 04:34:39 np0005603621 podman[441008]: 2026-01-31 09:34:39.889469883 +0000 UTC m=+0.164833550 container remove 2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 31 04:34:39 np0005603621 systemd[1]: libpod-conmon-2c60ad4e9464b8f86797e3347cc0238c0b5927f90500b9aee993b6331b7b4e0e.scope: Deactivated successfully.
Jan 31 04:34:40 np0005603621 podman[441048]: 2026-01-31 09:34:40.025810294 +0000 UTC m=+0.041437128 container create ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 31 04:34:40 np0005603621 systemd[1]: Started libpod-conmon-ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56.scope.
Jan 31 04:34:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:40 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:34:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40e826458ccf4923a44924bbccca38ed808372957dba9c6f2f17e9bbd52a384/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40e826458ccf4923a44924bbccca38ed808372957dba9c6f2f17e9bbd52a384/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40e826458ccf4923a44924bbccca38ed808372957dba9c6f2f17e9bbd52a384/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:40 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40e826458ccf4923a44924bbccca38ed808372957dba9c6f2f17e9bbd52a384/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:34:40 np0005603621 podman[441048]: 2026-01-31 09:34:40.009084249 +0000 UTC m=+0.024711103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:34:40 np0005603621 podman[441048]: 2026-01-31 09:34:40.126196047 +0000 UTC m=+0.141822921 container init ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 31 04:34:40 np0005603621 podman[441048]: 2026-01-31 09:34:40.135853415 +0000 UTC m=+0.151480289 container start ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 31 04:34:40 np0005603621 podman[441048]: 2026-01-31 09:34:40.139440326 +0000 UTC m=+0.155067200 container attach ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:34:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:40.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]: {
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:        "osd_id": 0,
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:        "type": "bluestore"
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]:    }
Jan 31 04:34:40 np0005603621 competent_heisenberg[441064]: }
Jan 31 04:34:40 np0005603621 systemd[1]: libpod-ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56.scope: Deactivated successfully.
Jan 31 04:34:41 np0005603621 podman[441086]: 2026-01-31 09:34:41.011460327 +0000 UTC m=+0.030262474 container died ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 31 04:34:41 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e40e826458ccf4923a44924bbccca38ed808372957dba9c6f2f17e9bbd52a384-merged.mount: Deactivated successfully.
Jan 31 04:34:41 np0005603621 podman[441086]: 2026-01-31 09:34:41.053313156 +0000 UTC m=+0.072115283 container remove ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:34:41 np0005603621 systemd[1]: libpod-conmon-ad709d97222fa5179b4c74864a588ee10ba8ee9e6846f1eaff65a8df1bf09f56.scope: Deactivated successfully.
Jan 31 04:34:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:34:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:34:41 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:34:41 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:34:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 6fa5f7b9-66e1-4c17-8d4f-3d8595f2f97e does not exist
Jan 31 04:34:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev bbaa37d8-859f-4c6a-b2a1-b9fa1448881c does not exist
Jan 31 04:34:41 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev e40484d5-dc3e-41c0-b86e-fc90ffde380a does not exist
Jan 31 04:34:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:34:41 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:34:41 np0005603621 nova_compute[247399]: 2026-01-31 09:34:41.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:41.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:42 np0005603621 nova_compute[247399]: 2026-01-31 09:34:42.129 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:42.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:43.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:44.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:44 np0005603621 nova_compute[247399]: 2026-01-31 09:34:44.831 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:45.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:46.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:47 np0005603621 nova_compute[247399]: 2026-01-31 09:34:47.131 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:47 np0005603621 podman[441154]: 2026-01-31 09:34:47.508613093 +0000 UTC m=+0.059963388 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:34:47 np0005603621 podman[441155]: 2026-01-31 09:34:47.562807824 +0000 UTC m=+0.114156579 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 31 04:34:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:47.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:48.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.219 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.219 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.220 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.220 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.220 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:34:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:49.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:34:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031701541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.705 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.831 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.868 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.869 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4060MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.869 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.869 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.977 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.978 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:34:49 np0005603621 nova_compute[247399]: 2026-01-31 09:34:49.996 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:34:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 31 04:34:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:50.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:34:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/57911992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:34:50 np0005603621 nova_compute[247399]: 2026-01-31 09:34:50.454 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:34:50 np0005603621 nova_compute[247399]: 2026-01-31 09:34:50.460 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:34:50 np0005603621 nova_compute[247399]: 2026-01-31 09:34:50.481 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:34:50 np0005603621 nova_compute[247399]: 2026-01-31 09:34:50.482 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:34:50 np0005603621 nova_compute[247399]: 2026-01-31 09:34:50.483 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:34:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:51.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:52 np0005603621 nova_compute[247399]: 2026-01-31 09:34:52.134 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:52.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:52 np0005603621 nova_compute[247399]: 2026-01-31 09:34:52.484 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:52 np0005603621 nova_compute[247399]: 2026-01-31 09:34:52.485 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:52 np0005603621 nova_compute[247399]: 2026-01-31 09:34:52.485 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:34:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:53.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:54.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:54 np0005603621 nova_compute[247399]: 2026-01-31 09:34:54.866 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:55 np0005603621 nova_compute[247399]: 2026-01-31 09:34:55.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:34:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:55.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:34:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:56.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:57 np0005603621 nova_compute[247399]: 2026-01-31 09:34:57.176 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:34:57 np0005603621 nova_compute[247399]: 2026-01-31 09:34:57.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:57.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:34:58 np0005603621 nova_compute[247399]: 2026-01-31 09:34:58.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:58 np0005603621 nova_compute[247399]: 2026-01-31 09:34:58.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:34:58 np0005603621 nova_compute[247399]: 2026-01-31 09:34:58.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:34:58 np0005603621 nova_compute[247399]: 2026-01-31 09:34:58.221 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:34:58 np0005603621 nova_compute[247399]: 2026-01-31 09:34:58.222 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:34:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:34:58.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:34:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:34:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:34:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:34:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:34:59 np0005603621 nova_compute[247399]: 2026-01-31 09:34:59.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:00.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:01.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:02 np0005603621 nova_compute[247399]: 2026-01-31 09:35:02.217 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:02.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:04 np0005603621 nova_compute[247399]: 2026-01-31 09:35:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:04.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:04 np0005603621 nova_compute[247399]: 2026-01-31 09:35:04.868 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:05.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:06.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:07 np0005603621 nova_compute[247399]: 2026-01-31 09:35:07.220 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:07.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:08.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:35:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:35:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:09.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:09 np0005603621 nova_compute[247399]: 2026-01-31 09:35:09.870 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:10.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:11.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:12 np0005603621 nova_compute[247399]: 2026-01-31 09:35:12.261 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:12.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:13.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:14.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:14 np0005603621 nova_compute[247399]: 2026-01-31 09:35:14.872 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:15.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:16.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:17 np0005603621 nova_compute[247399]: 2026-01-31 09:35:17.263 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:17.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:18.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:18 np0005603621 podman[441360]: 2026-01-31 09:35:18.509778494 +0000 UTC m=+0.071307359 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 31 04:35:18 np0005603621 podman[441361]: 2026-01-31 09:35:18.527138849 +0000 UTC m=+0.087571150 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 31 04:35:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:19.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:19 np0005603621 nova_compute[247399]: 2026-01-31 09:35:19.874 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:20.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:21.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:22 np0005603621 nova_compute[247399]: 2026-01-31 09:35:22.266 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:22.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:23.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:24 np0005603621 nova_compute[247399]: 2026-01-31 09:35:24.877 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:26.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:27 np0005603621 nova_compute[247399]: 2026-01-31 09:35:27.269 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:27.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:28.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:29.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:29 np0005603621 nova_compute[247399]: 2026-01-31 09:35:29.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:30.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:35:30.581 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:35:30.582 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:35:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:35:30.582 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:35:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:32 np0005603621 nova_compute[247399]: 2026-01-31 09:35:32.270 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:32.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:34.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:34 np0005603621 nova_compute[247399]: 2026-01-31 09:35:34.878 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:35.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:36.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:37 np0005603621 nova_compute[247399]: 2026-01-31 09:35:37.271 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:37.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:38.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:35:38
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes']
Jan 31 04:35:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:35:39 np0005603621 ceph-mgr[74689]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3835187053
Jan 31 04:35:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:39.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:39 np0005603621 nova_compute[247399]: 2026-01-31 09:35:39.880 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:40.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:41.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:35:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 42cf6991-c9a5-4757-80a0-779991f75a48 does not exist
Jan 31 04:35:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev ab294b99-9334-44b2-aee5-861175b4612c does not exist
Jan 31 04:35:42 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev cf7d23f4-48a1-407e-9b58-bf122e3ea2cb does not exist
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:35:42 np0005603621 nova_compute[247399]: 2026-01-31 09:35:42.304 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:35:42 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 31 04:35:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.67990631 +0000 UTC m=+0.041641841 container create ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_northcutt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 31 04:35:42 np0005603621 systemd[1]: Started libpod-conmon-ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572.scope.
Jan 31 04:35:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.7510835 +0000 UTC m=+0.112819051 container init ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.659456847 +0000 UTC m=+0.021192428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.759101102 +0000 UTC m=+0.120836633 container start ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.762424757 +0000 UTC m=+0.124160298 container attach ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 31 04:35:42 np0005603621 intelligent_northcutt[441752]: 167 167
Jan 31 04:35:42 np0005603621 systemd[1]: libpod-ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572.scope: Deactivated successfully.
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.766465713 +0000 UTC m=+0.128201254 container died ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:35:42 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e74e3fc266f74bf53a8a31e61edef21bf90793537aaf558c32c19b9e3a9fa41f-merged.mount: Deactivated successfully.
Jan 31 04:35:42 np0005603621 podman[441735]: 2026-01-31 09:35:42.804692537 +0000 UTC m=+0.166428058 container remove ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 31 04:35:42 np0005603621 systemd[1]: libpod-conmon-ecd1bb4a6379a8743d08413edfac38262f2d8f9f0933be713342e0491fc62572.scope: Deactivated successfully.
Jan 31 04:35:42 np0005603621 podman[441778]: 2026-01-31 09:35:42.92206155 +0000 UTC m=+0.041483626 container create 5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 31 04:35:42 np0005603621 systemd[1]: Started libpod-conmon-5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd.scope.
Jan 31 04:35:42 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:35:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337b80244e18038cab0f5169018f0b6a7ce102b18b7b9aee351db27d6e8cfa53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337b80244e18038cab0f5169018f0b6a7ce102b18b7b9aee351db27d6e8cfa53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337b80244e18038cab0f5169018f0b6a7ce102b18b7b9aee351db27d6e8cfa53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337b80244e18038cab0f5169018f0b6a7ce102b18b7b9aee351db27d6e8cfa53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:42 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/337b80244e18038cab0f5169018f0b6a7ce102b18b7b9aee351db27d6e8cfa53/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:42 np0005603621 podman[441778]: 2026-01-31 09:35:42.90080518 +0000 UTC m=+0.020227296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:35:43 np0005603621 podman[441778]: 2026-01-31 09:35:43.016985947 +0000 UTC m=+0.136408043 container init 5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 31 04:35:43 np0005603621 podman[441778]: 2026-01-31 09:35:43.023029257 +0000 UTC m=+0.142451343 container start 5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 31 04:35:43 np0005603621 podman[441778]: 2026-01-31 09:35:43.026682991 +0000 UTC m=+0.146105107 container attach 5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:35:43 np0005603621 nova_compute[247399]: 2026-01-31 09:35:43.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:43.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:43 np0005603621 fervent_curran[441794]: --> passed data devices: 0 physical, 1 LVM
Jan 31 04:35:43 np0005603621 fervent_curran[441794]: --> relative data size: 1.0
Jan 31 04:35:43 np0005603621 fervent_curran[441794]: --> All data devices are unavailable
Jan 31 04:35:43 np0005603621 systemd[1]: libpod-5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd.scope: Deactivated successfully.
Jan 31 04:35:43 np0005603621 podman[441809]: 2026-01-31 09:35:43.882665365 +0000 UTC m=+0.024974596 container died 5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:35:43 np0005603621 systemd[1]: var-lib-containers-storage-overlay-337b80244e18038cab0f5169018f0b6a7ce102b18b7b9aee351db27d6e8cfa53-merged.mount: Deactivated successfully.
Jan 31 04:35:43 np0005603621 podman[441809]: 2026-01-31 09:35:43.927240188 +0000 UTC m=+0.069549409 container remove 5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_curran, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 31 04:35:43 np0005603621 systemd[1]: libpod-conmon-5c78ff50c87576f112916dfbb6d03614ec1481768a7cdb998897b9e6d7e9facd.scope: Deactivated successfully.
Jan 31 04:35:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.521356922 +0000 UTC m=+0.045150912 container create e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 31 04:35:44 np0005603621 systemd[1]: Started libpod-conmon-e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148.scope.
Jan 31 04:35:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.582767134 +0000 UTC m=+0.106561154 container init e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.589634319 +0000 UTC m=+0.113428309 container start e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.496990844 +0000 UTC m=+0.020784914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.593658787 +0000 UTC m=+0.117452797 container attach e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:35:44 np0005603621 sad_brown[441982]: 167 167
Jan 31 04:35:44 np0005603621 systemd[1]: libpod-e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148.scope: Deactivated successfully.
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.594874364 +0000 UTC m=+0.118668354 container died e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 31 04:35:44 np0005603621 systemd[1]: var-lib-containers-storage-overlay-2d19232b4037ccefa86b7c6d029927399789f3e381d986c6803d9c81f86b90c6-merged.mount: Deactivated successfully.
Jan 31 04:35:44 np0005603621 podman[441966]: 2026-01-31 09:35:44.62681376 +0000 UTC m=+0.150607750 container remove e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 31 04:35:44 np0005603621 systemd[1]: libpod-conmon-e51ff24444a7d1b348a67cec6ddef60a6fd61457ce344c4eb64f6d67d869f148.scope: Deactivated successfully.
Jan 31 04:35:44 np0005603621 podman[442007]: 2026-01-31 09:35:44.776570631 +0000 UTC m=+0.039811493 container create d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 31 04:35:44 np0005603621 systemd[1]: Started libpod-conmon-d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e.scope.
Jan 31 04:35:44 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:35:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636ef163a99084c00b8917479421286d4bd4282d16240a8b75c3fee69d54f5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636ef163a99084c00b8917479421286d4bd4282d16240a8b75c3fee69d54f5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636ef163a99084c00b8917479421286d4bd4282d16240a8b75c3fee69d54f5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:44 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e636ef163a99084c00b8917479421286d4bd4282d16240a8b75c3fee69d54f5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:44 np0005603621 podman[442007]: 2026-01-31 09:35:44.835455254 +0000 UTC m=+0.098696126 container init d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:35:44 np0005603621 podman[442007]: 2026-01-31 09:35:44.840997938 +0000 UTC m=+0.104238800 container start d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 31 04:35:44 np0005603621 podman[442007]: 2026-01-31 09:35:44.844710686 +0000 UTC m=+0.107951548 container attach d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 31 04:35:44 np0005603621 podman[442007]: 2026-01-31 09:35:44.757285064 +0000 UTC m=+0.020525946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:35:44 np0005603621 nova_compute[247399]: 2026-01-31 09:35:44.882 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]: {
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:    "0": [
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:        {
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "devices": [
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "/dev/loop3"
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            ],
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "lv_name": "ceph_lv0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "lv_size": "7511998464",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2f5ab832-5f2e-5a84-bd93-cf8bab960ee2,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=69ce1ba1-37ea-44ee-8e02-ae107b60d956,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "lv_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "name": "ceph_lv0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "tags": {
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.block_uuid": "V17ZLe-6JhH-tcmO-njuA-y31P-d8En-Sx6fGh",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.cephx_lockbox_secret": "",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.cluster_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.cluster_name": "ceph",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.crush_device_class": "",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.encrypted": "0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.osd_fsid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.osd_id": "0",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.type": "block",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:                "ceph.vdo": "0"
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            },
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "type": "block",
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:            "vg_name": "ceph_vg0"
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:        }
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]:    ]
Jan 31 04:35:45 np0005603621 vibrant_nobel[442023]: }
Jan 31 04:35:45 np0005603621 systemd[1]: libpod-d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e.scope: Deactivated successfully.
Jan 31 04:35:45 np0005603621 podman[442007]: 2026-01-31 09:35:45.565178315 +0000 UTC m=+0.828419187 container died d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:35:45 np0005603621 systemd[1]: var-lib-containers-storage-overlay-e636ef163a99084c00b8917479421286d4bd4282d16240a8b75c3fee69d54f5c-merged.mount: Deactivated successfully.
Jan 31 04:35:45 np0005603621 podman[442007]: 2026-01-31 09:35:45.610544112 +0000 UTC m=+0.873784974 container remove d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 31 04:35:45 np0005603621 systemd[1]: libpod-conmon-d0c5189fe137f288277d75fe668fae87758b7558ffbe59806ed3f0341cdbd65e.scope: Deactivated successfully.
Jan 31 04:35:45 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:45 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:45 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:45.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.076845935 +0000 UTC m=+0.039419812 container create c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_thompson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:35:46 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:46 np0005603621 systemd[1]: Started libpod-conmon-c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354.scope.
Jan 31 04:35:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.058861429 +0000 UTC m=+0.021435336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.15393834 +0000 UTC m=+0.116512227 container init c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_thompson, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.163884324 +0000 UTC m=+0.126458231 container start c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.168111156 +0000 UTC m=+0.130685023 container attach c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 31 04:35:46 np0005603621 happy_thompson[442198]: 167 167
Jan 31 04:35:46 np0005603621 systemd[1]: libpod-c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354.scope: Deactivated successfully.
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.172537226 +0000 UTC m=+0.135111093 container died c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_thompson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 31 04:35:46 np0005603621 systemd[1]: var-lib-containers-storage-overlay-8b08e6a6b3969da3698b01284aa9313303c2316beb65acfdb4aae1a75574222d-merged.mount: Deactivated successfully.
Jan 31 04:35:46 np0005603621 podman[442181]: 2026-01-31 09:35:46.205076659 +0000 UTC m=+0.167650526 container remove c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_thompson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 31 04:35:46 np0005603621 systemd[1]: libpod-conmon-c5bf815695161be3ba021ffd478fb6240b200b7ccb38d3b501b19b7a02a0a354.scope: Deactivated successfully.
Jan 31 04:35:46 np0005603621 podman[442223]: 2026-01-31 09:35:46.320165311 +0000 UTC m=+0.040344681 container create 067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 31 04:35:46 np0005603621 systemd[1]: Started libpod-conmon-067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f.scope.
Jan 31 04:35:46 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:46 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:46 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:46.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:46 np0005603621 systemd[1]: Started libcrun container.
Jan 31 04:35:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a549ed10f1880f4f559b66c36b40624086b7447ef597d95dc45d48a5ff95d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a549ed10f1880f4f559b66c36b40624086b7447ef597d95dc45d48a5ff95d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a549ed10f1880f4f559b66c36b40624086b7447ef597d95dc45d48a5ff95d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:46 np0005603621 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a549ed10f1880f4f559b66c36b40624086b7447ef597d95dc45d48a5ff95d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 31 04:35:46 np0005603621 podman[442223]: 2026-01-31 09:35:46.302427822 +0000 UTC m=+0.022607202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 31 04:35:46 np0005603621 podman[442223]: 2026-01-31 09:35:46.399663292 +0000 UTC m=+0.119842682 container init 067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 31 04:35:46 np0005603621 podman[442223]: 2026-01-31 09:35:46.404880547 +0000 UTC m=+0.125059927 container start 067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 31 04:35:46 np0005603621 podman[442223]: 2026-01-31 09:35:46.407465498 +0000 UTC m=+0.127644868 container attach 067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]: {
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:    "69ce1ba1-37ea-44ee-8e02-ae107b60d956": {
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:        "ceph_fsid": "2f5ab832-5f2e-5a84-bd93-cf8bab960ee2",
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:        "osd_id": 0,
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:        "osd_uuid": "69ce1ba1-37ea-44ee-8e02-ae107b60d956",
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:        "type": "bluestore"
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]:    }
Jan 31 04:35:47 np0005603621 eloquent_jepsen[442242]: }
Jan 31 04:35:47 np0005603621 systemd[1]: libpod-067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f.scope: Deactivated successfully.
Jan 31 04:35:47 np0005603621 podman[442223]: 2026-01-31 09:35:47.13725616 +0000 UTC m=+0.857435540 container died 067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jepsen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 31 04:35:47 np0005603621 systemd[1]: var-lib-containers-storage-overlay-22a549ed10f1880f4f559b66c36b40624086b7447ef597d95dc45d48a5ff95d3-merged.mount: Deactivated successfully.
Jan 31 04:35:47 np0005603621 podman[442223]: 2026-01-31 09:35:47.177845738 +0000 UTC m=+0.898025118 container remove 067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 31 04:35:47 np0005603621 systemd[1]: libpod-conmon-067b7fd1c01adbbc553b3f7565af7386a5106cadda8a68ec8894be995324451f.scope: Deactivated successfully.
Jan 31 04:35:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 31 04:35:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:35:47 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 31 04:35:47 np0005603621 ceph-mon[74394]: log_channel(audit) log [INF] : from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:35:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 44006e81-0d57-403f-9230-68b06ff2ea8e does not exist
Jan 31 04:35:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 1eb0c4e7-9661-4a09-bd81-5408c1ff24a0 does not exist
Jan 31 04:35:47 np0005603621 ceph-mgr[74689]: [progress WARNING root] complete: ev 626fb343-c7d7-40d8-b792-64b972cda209 does not exist
Jan 31 04:35:47 np0005603621 nova_compute[247399]: 2026-01-31 09:35:47.306 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:47 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:47 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:47 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:47.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:48 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:35:48 np0005603621 ceph-mon[74394]: from='mgr.14134 192.168.122.100:0/2753939032' entity='mgr.compute-0.ddmhwk' 
Jan 31 04:35:48 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:48 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:35:48 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:48.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:35:48 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.223 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.223 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.224 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.224 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.224 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:35:49 np0005603621 podman[442345]: 2026-01-31 09:35:49.499443466 +0000 UTC m=+0.054107782 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 31 04:35:49 np0005603621 podman[442346]: 2026-01-31 09:35:49.521398198 +0000 UTC m=+0.076322783 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 31 04:35:49 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:35:49 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332919159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.620 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
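The resource audit above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to learn cluster capacity. A sketch of parsing that output into GiB totals; the `stats.total_bytes` / `stats.total_avail_bytes` key names follow the usual `ceph df --format=json` schema and should be treated as an assumption to verify against your Ceph release (the sample payload below is fabricated to match the 21 GiB / 19 GiB cluster reported in the pgmap lines):

```python
import json

GIB = 1024 ** 3

def ceph_capacity_gib(df_json: str):
    """Parse `ceph df --format=json` output into (total GiB, avail GiB).

    Key names are the commonly documented ceph df schema (assumption).
    """
    stats = json.loads(df_json)["stats"]
    return stats["total_bytes"] / GIB, stats["total_avail_bytes"] / GIB

# Fabricated sample shaped like the schema above, sized to match the
# "19 GiB / 21 GiB avail" figures in the ceph-mgr pgmap lines:
sample = json.dumps({"stats": {"total_bytes": 21 * GIB,
                               "total_avail_bytes": 19 * GIB}})
total, avail = ceph_capacity_gib(sample)  # (21.0, 19.0)
```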
Jan 31 04:35:49 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:49 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:49 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:49.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.732 247403 WARNING nova.virt.libvirt.driver [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.733 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4037MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.734 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.734 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.883 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.925 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 31 04:35:49 np0005603621 nova_compute[247399]: 2026-01-31 09:35:49.925 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.015 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing inventories for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.126 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating ProviderTree inventory for provider d7116329-87c2-469a-b33a-1e01daf74ceb from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.126 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Updating inventory in ProviderTree for provider d7116329-87c2-469a-b33a-1e01daf74ceb with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
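The ProviderTree inventory above is what placement uses to bound scheduling: for each resource class, consumable capacity is `(total - reserved) * allocation_ratio`. A small sketch applying that standard formula to the exact inventory values from the log:

```python
def placement_capacity(inv: dict) -> dict:
    """Consumable capacity per resource class, using placement's
    (total - reserved) * allocation_ratio formula."""
    return {rc: int((v["total"] - v["reserved"]) * v["allocation_ratio"])
            for rc, v in inv.items()}

# Values copied from the ProviderTree inventory update in the log:
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
capacity = placement_capacity(inventory)
# → {'VCPU': 32, 'MEMORY_MB': 7167, 'DISK_GB': 17}
```

So this 8-vCPU, 7679 MB host can be scheduled up to 32 vCPUs (4x overcommit), 7167 MB of RAM, and 17 GB of disk (0.9x with 1 GB reserved).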
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] _maybe_adjust
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021635957565605806 of space, bias 1.0, pg target 0.6490787269681741 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.0003271566164154104 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 31 04:35:50 np0005603621 ceph-mgr[74689]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
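Each pg_autoscaler line above reports a pool's share of cluster capacity, its bias, and a raw "pg target" that is then quantized to a power of two. The raw targets in these lines are consistent with `ratio * bias * (pg_per_osd * osd_count)` with `100 * 3 = 300` here; the 3-OSD/100-pg factor is inferred from the logged numbers, and the quantization step (power-of-two rounding, pool minimums, and the 3x change threshold) is deliberately not modeled:

```python
def raw_pg_target(capacity_ratio: float, bias: float,
                  osd_count: int = 3, pg_per_osd: int = 100) -> float:
    """Raw (pre-quantization) pg target as printed by the autoscaler lines.

    osd_count=3 and pg_per_osd=100 are inferred from this cluster's logs.
    """
    return capacity_ratio * bias * pg_per_osd * osd_count

# Check the formula against two lines from the log:
t_mgr = raw_pg_target(2.0538165363856318e-05, 1.0)   # ≈ 0.006161449609
t_meta = raw_pg_target(1.4540294062907128e-06, 4.0)  # ≈ 0.001744835288
```

Both values reproduce the "pg target" figures logged for `.mgr` and `cephfs.cephfs.meta`, which is why every pool here stays at its current quantized pg_num.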
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.142 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing aggregate associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.163 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Refreshing trait associations for resource provider d7116329-87c2-469a-b33a-1e01daf74ceb, traits: COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE42,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.178 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 31 04:35:50 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:50 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:50 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:50.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:50 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 31 04:35:50 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626056718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.610 247403 DEBUG oslo_concurrency.processutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.615 247403 DEBUG nova.compute.provider_tree [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed in ProviderTree for provider: d7116329-87c2-469a-b33a-1e01daf74ceb update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.656 247403 DEBUG nova.scheduler.client.report [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Inventory has not changed for provider d7116329-87c2-469a-b33a-1e01daf74ceb based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.658 247403 DEBUG nova.compute.resource_tracker [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 31 04:35:50 np0005603621 nova_compute[247399]: 2026-01-31 09:35:50.659 247403 DEBUG oslo_concurrency.lockutils [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:35:51 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:51 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:51 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:51.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:52 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:52 np0005603621 nova_compute[247399]: 2026-01-31 09:35:52.308 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:52 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:52 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:52 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:52.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:53 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:53 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:53 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:53 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:53.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:54 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:54 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:54 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:54 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:54.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:54 np0005603621 nova_compute[247399]: 2026-01-31 09:35:54.659 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:54 np0005603621 nova_compute[247399]: 2026-01-31 09:35:54.660 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:54 np0005603621 nova_compute[247399]: 2026-01-31 09:35:54.660 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 31 04:35:54 np0005603621 nova_compute[247399]: 2026-01-31 09:35:54.884 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:55 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:55 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:55 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:55.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:56 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:56 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:56 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:35:56 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:56.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:35:57 np0005603621 nova_compute[247399]: 2026-01-31 09:35:57.193 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:57 np0005603621 nova_compute[247399]: 2026-01-31 09:35:57.312 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:35:57 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:57 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:57 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:58 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:35:58 np0005603621 nova_compute[247399]: 2026-01-31 09:35:58.197 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:58 np0005603621 nova_compute[247399]: 2026-01-31 09:35:58.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 31 04:35:58 np0005603621 nova_compute[247399]: 2026-01-31 09:35:58.198 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 31 04:35:58 np0005603621 nova_compute[247399]: 2026-01-31 09:35:58.217 247403 DEBUG nova.compute.manager [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 31 04:35:58 np0005603621 nova_compute[247399]: 2026-01-31 09:35:58.218 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:58 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:58 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:58 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:35:58.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:58 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:35:59 np0005603621 nova_compute[247399]: 2026-01-31 09:35:59.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:35:59 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:35:59 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:35:59 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:35:59.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:35:59 np0005603621 nova_compute[247399]: 2026-01-31 09:35:59.886 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:00 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:00 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:00 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:00 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:00.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:01 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:01 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:01 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:01.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:02 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:02 np0005603621 nova_compute[247399]: 2026-01-31 09:36:02.315 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:02 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:02 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:02 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #225. Immutable memtables: 0.
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.550578) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 225
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852163550640, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 1015, "num_deletes": 256, "total_data_size": 1585296, "memory_usage": 1616056, "flush_reason": "Manual Compaction"}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #226: started
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852163558898, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 226, "file_size": 1557690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97179, "largest_seqno": 98193, "table_properties": {"data_size": 1552709, "index_size": 2504, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10776, "raw_average_key_size": 19, "raw_value_size": 1542693, "raw_average_value_size": 2799, "num_data_blocks": 109, "num_entries": 551, "num_filter_entries": 551, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769852073, "oldest_key_time": 1769852073, "file_creation_time": 1769852163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 226, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 8352 microseconds, and 3254 cpu microseconds.
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.558936) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #226: 1557690 bytes OK
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.558952) [db/memtable_list.cc:519] [default] Level-0 commit table #226 started
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.661419) [db/memtable_list.cc:722] [default] Level-0 commit table #226: memtable #1 done
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.661495) EVENT_LOG_v1 {"time_micros": 1769852163661479, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.661532) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 1580563, prev total WAL file size 1580563, number of live WAL files 2.
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000222.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.662438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323638' seq:72057594037927935, type:22 .. '6C6F676D0034353230' seq:0, type:0; will stop at (end)
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [226(1521KB)], [224(14MB)]
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852163662513, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [226], "files_L6": [224], "score": -1, "input_data_size": 16400745, "oldest_snapshot_seqno": -1}
Jan 31 04:36:03 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:03 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:03 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #227: 12619 keys, 16202976 bytes, temperature: kUnknown
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852163785345, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 227, "file_size": 16202976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16122423, "index_size": 47928, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31557, "raw_key_size": 334411, "raw_average_key_size": 26, "raw_value_size": 15903044, "raw_average_value_size": 1260, "num_data_blocks": 1818, "num_entries": 12619, "num_filter_entries": 12619, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769852163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.785627) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 16202976 bytes
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.787067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 131.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 14.2 +0.0 blob) out(15.5 +0.0 blob), read-write-amplify(20.9) write-amplify(10.4) OK, records in: 13144, records dropped: 525 output_compression: NoCompression
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.787087) EVENT_LOG_v1 {"time_micros": 1769852163787077, "job": 142, "event": "compaction_finished", "compaction_time_micros": 122915, "compaction_time_cpu_micros": 27886, "output_level": 6, "num_output_files": 1, "total_output_size": 16202976, "num_input_records": 13144, "num_output_records": 12619, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000226.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852163787347, "job": 142, "event": "table_file_deletion", "file_number": 226}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852163788889, "job": 142, "event": "table_file_deletion", "file_number": 224}
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.662254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.789113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.789130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.789132) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.789137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:03 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:03.789139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:04 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:04 np0005603621 nova_compute[247399]: 2026-01-31 09:36:04.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:36:04 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:04 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:04 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:04.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:04 np0005603621 nova_compute[247399]: 2026-01-31 09:36:04.886 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:05 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:05 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:05 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:05.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:06 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:06 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:06 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:06 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:06.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:07 np0005603621 nova_compute[247399]: 2026-01-31 09:36:07.316 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:07 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:07 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:36:07 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:07.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:08 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:08 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:36:08 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:08.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:36:08 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:36:08 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:36:09 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:09 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:09 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:09.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:09 np0005603621 nova_compute[247399]: 2026-01-31 09:36:09.888 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:10 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:10 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:10 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:10 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:10.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:11 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:11 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:11 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:12 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:12 np0005603621 nova_compute[247399]: 2026-01-31 09:36:12.318 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:12 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:12 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:12 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:12.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:13 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:13 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:13 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:13 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:13.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:14 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:14 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:14 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:14 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:14.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:14 np0005603621 nova_compute[247399]: 2026-01-31 09:36:14.889 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:15 np0005603621 nova_compute[247399]: 2026-01-31 09:36:15.194 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 31 04:36:15 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:15 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:15 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:15.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:16 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4494: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:16 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:16 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:16 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:16.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:17 np0005603621 nova_compute[247399]: 2026-01-31 09:36:17.320 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:17 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:17 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:17 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:17.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:18 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4495: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:18 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:18 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:18 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:18.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:18 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:19 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:19 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:19 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:19.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:19 np0005603621 nova_compute[247399]: 2026-01-31 09:36:19.891 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:20 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4496: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:20 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:20 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:20 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:20.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:20 np0005603621 podman[442531]: 2026-01-31 09:36:20.51757544 +0000 UTC m=+0.076787917 container health_status 1ae13fee90b46f5eb19bd9030483f3244c23c433b8f4e46eb5cc0bd2c6a35d52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 31 04:36:20 np0005603621 podman[442532]: 2026-01-31 09:36:20.53759105 +0000 UTC m=+0.095165215 container health_status d068e3ccc9a9ec2248f269659070439009ec5c33951d360f88b13647cd275613 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c9c07335481e70451acb503caf3b3b3a05811a07f9fde1e24aebece19089a266-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e-25bdd24b66af043d77baba2a46a2d5dc0c63491fff70f82946d87e0106b3878e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 31 04:36:21 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:21 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:21 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:21.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:22 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:22 np0005603621 nova_compute[247399]: 2026-01-31 09:36:22.323 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:22 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:22 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:22 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:22.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:23 np0005603621 systemd-logind[818]: New session 78 of user zuul.
Jan 31 04:36:23 np0005603621 systemd[1]: Started Session 78 of User zuul.
Jan 31 04:36:23 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:23 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:23 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:23 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:23.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:24 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:24 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:24 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:24 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:24.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:24 np0005603621 nova_compute[247399]: 2026-01-31 09:36:24.893 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42357 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45098 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42363 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:25 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45104 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:25 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:25 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:25 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:25.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:25 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 31 04:36:25 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476451071' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 31 04:36:26 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52126 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:26 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:26 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:26 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:26 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:26.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:26 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52132 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:27 np0005603621 nova_compute[247399]: 2026-01-31 09:36:27.327 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:27 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:27 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:27 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:27.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:28 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4500: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:28 np0005603621 ovs-vsctl[442867]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 31 04:36:28 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:28 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:28 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:28.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:28 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:29 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 31 04:36:29 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 31 04:36:29 np0005603621 virtqemud[247123]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 31 04:36:29 np0005603621 lvm[443179]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 31 04:36:29 np0005603621 lvm[443179]: VG ceph_vg0 finished
Jan 31 04:36:29 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: cache status {prefix=cache status} (starting...)
Jan 31 04:36:29 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:29 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: client ls {prefix=client ls} (starting...)
Jan 31 04:36:29 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:29 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:29 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:29 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:29.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:29 np0005603621 nova_compute[247399]: 2026-01-31 09:36:29.896 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42381 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:30 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:36:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984188900' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: damage ls {prefix=damage ls} (starting...)
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42393 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:30 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:30 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:30 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:30.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump loads {prefix=dump loads} (starting...)
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:36:30.583 159734 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 31 04:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:36:30.583 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 31 04:36:30 np0005603621 ovn_metadata_agent[159729]: 2026-01-31 09:36:30.584 159734 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 31 04:36:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 31 04:36:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614417429' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:30 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45122 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:30 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 31 04:36:30 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2525584106' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 31 04:36:30 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52165 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 31 04:36:31 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42420 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:31 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:31.240+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:31 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 31 04:36:31 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45134 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1804257378' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42438 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 31 04:36:31 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: ops {prefix=ops} (starting...)
Jan 31 04:36:31 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 31 04:36:31 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166737999' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 31 04:36:31 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:31 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:31 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:31.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:31 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42450 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1345065818' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:32 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: session ls {prefix=session ls} (starting...)
Jan 31 04:36:32 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh Can't run that command on an inactive MDS!
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52204 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45164 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:32.211+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:32 np0005603621 ceph-mds[94918]: mds.cephfs.compute-0.jroeqh asok_command: status {prefix=status} (starting...)
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52210 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:32.316+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:32 np0005603621 nova_compute[247399]: 2026-01-31 09:36:32.330 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:32 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:32 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:32 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:32.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3782128328' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:36:32 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2809213277' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:36:32 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45188 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1162383296' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52246 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363660065' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45200 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42525 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:33.407+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:36:33 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:36:33 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52261 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434300348' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 31 04:36:33 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:33 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:33 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:33.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690481493' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 31 04:36:33 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:36:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154267586' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:36:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 31 04:36:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200570735' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 31 04:36:34 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:34 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:34 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:34.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42588 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45260 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:36:34 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:34.579+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:36:34 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 31 04:36:34 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/18235691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52315 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:34 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:34.815+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 31 04:36:34 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42612 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:34 np0005603621 nova_compute[247399]: 2026-01-31 09:36:34.899 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 31 04:36:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556196205' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403841024 unmapped: 51200000 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403849216 unmapped: 51191808 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 400 heartbeat osd_stat(store_statfs(0x1a383b000/0x0/0x1bfc00000, data 0x4b99cac/0x48f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 400 ms_handle_reset con 0x558e1bf3e400 session 0x558e177fe3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4505540 data_alloc: 234881024 data_used: 27582464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403881984 unmapped: 51159040 heap: 455041024 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 410468352 unmapped: 52969472 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 401 ms_handle_reset con 0x558e1807ac00 session 0x558e175f0000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e1558b000 session 0x558e169141e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400965632 unmapped: 62472192 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e18506000 session 0x558e15dada40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e14cb3800 session 0x558e1780af00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400973824 unmapped: 62464000 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400982016 unmapped: 62455808 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a2d0c000/0x0/0x1bfc00000, data 0x56c18fb/0x5420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4608576 data_alloc: 234881024 data_used: 29917184
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399712256 unmapped: 63725568 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 heartbeat osd_stat(store_statfs(0x1a2d0c000/0x0/0x1bfc00000, data 0x56c18fb/0x5420000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e1807ac00 session 0x558e175a72c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397410304 unmapped: 66027520 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 ms_handle_reset con 0x558e1bf3e400 session 0x558e1769c1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397713408 unmapped: 65724416 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.380999565s of 10.683680534s, submitted: 62
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e1769c960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18632c00 session 0x558e17b0bc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397713408 unmapped: 65724416 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a221e000/0x0/0x1bfc00000, data 0x61af58c/0x5f0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e1769d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e175a6f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397713408 unmapped: 65724416 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1558b000 session 0x558e175a6780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a221e000/0x0/0x1bfc00000, data 0x61af58c/0x5f0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4627232 data_alloc: 234881024 data_used: 24453120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e18494f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x5a8f58c/0x57ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396255232 unmapped: 67182592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4627232 data_alloc: 234881024 data_used: 24453120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396263424 unmapped: 67174400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396263424 unmapped: 67174400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e181d01e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1807ac00 session 0x558e1780bc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1807ac00 session 0x558e175b74a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e183150e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 396263424 unmapped: 67174400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1558b000 session 0x558e156a2000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e17b03680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e17b001e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a293f000/0x0/0x1bfc00000, data 0x5a8f58c/0x57ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e176cd860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e175a6780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674058 data_alloc: 234881024 data_used: 24453120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a2390000/0x0/0x1bfc00000, data 0x603d59c/0x5d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a2390000/0x0/0x1bfc00000, data 0x603d59c/0x5d9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1558b000 session 0x558e17b0bc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e1807ac00 session 0x558e1769c960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4674058 data_alloc: 234881024 data_used: 24453120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e184f2000 session 0x558e1769c1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.228857040s of 18.564140320s, submitted: 26
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e1780af00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 69615616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18634000 session 0x558e1766f860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18506000 session 0x558e156a3c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395018240 unmapped: 68419584 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a238f000/0x0/0x1bfc00000, data 0x603d5ac/0x5d9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395018240 unmapped: 68419584 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e18632c00 session 0x558e17b014a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4718203 data_alloc: 234881024 data_used: 30314496
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 395026432 unmapped: 68411392 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 394797056 unmapped: 68640768 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e17e7cc00 session 0x558e1601d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e17e7cc00 session 0x558e180e7e00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 heartbeat osd_stat(store_statfs(0x1a238e000/0x0/0x1bfc00000, data 0x603d5ac/0x5d9f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397090816 unmapped: 66347008 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 ms_handle_reset con 0x558e15556c00 session 0x558e17661680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397090816 unmapped: 66347008 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397090816 unmapped: 66347008 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e18506000 session 0x558e1706b860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a2390000/0x0/0x1bfc00000, data 0x5d3253a/0x5a92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a2698000/0x0/0x1bfc00000, data 0x5d341e7/0x5a95000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4738301 data_alloc: 251658240 data_used: 38334464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397123584 unmapped: 66314240 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e17732800 session 0x558e17b0d4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e18fcb800 session 0x558e178203c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e15556c00 session 0x558e17820780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 66297856 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 397139968 unmapped: 66297856 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.787518501s of 11.968016624s, submitted: 61
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 400990208 unmapped: 62447616 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e17732800 session 0x558e180541e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401440768 unmapped: 61997056 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 ms_handle_reset con 0x558e17e7cc00 session 0x558e152af680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4645895 data_alloc: 234881024 data_used: 33972224
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a1e15000/0x0/0x1bfc00000, data 0x60fc1d7/0x6319000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399474688 unmapped: 63963136 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399474688 unmapped: 63963136 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 401874944 unmapped: 61562880 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 405 heartbeat osd_stat(store_statfs(0x1a216d000/0x0/0x1bfc00000, data 0x6470cb4/0x5fc0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402350080 unmapped: 61087744 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e18506000 session 0x558e17570000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 61046784 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4804117 data_alloc: 234881024 data_used: 34136064
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 61046784 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402391040 unmapped: 61046784 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 61038592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402399232 unmapped: 61038592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.950796127s of 10.631396294s, submitted: 212
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a20d1000/0x0/0x1bfc00000, data 0x64fe943/0x604e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e18fcb800 session 0x558e1766f2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403775488 unmapped: 59662336 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e15556c00 session 0x558e1601c780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e17732800 session 0x558e176cc5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e17e7cc00 session 0x558e176cda40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e18506000 session 0x558e15ea0d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4815156 data_alloc: 234881024 data_used: 34136064
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a1f67000/0x0/0x1bfc00000, data 0x6676943/0x61c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403783680 unmapped: 59654144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e1bf3e400 session 0x558e175a70e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 heartbeat osd_stat(store_statfs(0x1a1f67000/0x0/0x1bfc00000, data 0x6676943/0x61c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403791872 unmapped: 59645952 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 ms_handle_reset con 0x558e15556c00 session 0x558e18495c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403791872 unmapped: 59645952 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403808256 unmapped: 59629568 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 407 ms_handle_reset con 0x558e17732800 session 0x558e175a74a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a1f65000/0x0/0x1bfc00000, data 0x667845f/0x61c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 407 ms_handle_reset con 0x558e17e7cc00 session 0x558e1809b4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4812011 data_alloc: 234881024 data_used: 34336768
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 407 ms_handle_reset con 0x558e1558b400 session 0x558e177ff680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a1f65000/0x0/0x1bfc00000, data 0x667845f/0x61c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 402759680 unmapped: 60678144 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403832832 unmapped: 59604992 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 408 ms_handle_reset con 0x558e180ce400 session 0x558e1645ab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 408 ms_handle_reset con 0x558e15556c00 session 0x558e15137e00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.398662567s of 10.007455826s, submitted: 102
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e170eb800 session 0x558e178203c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e18635400 session 0x558e1766eb40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a4019000/0x0/0x1bfc00000, data 0x3e29240/0x4046000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399327232 unmapped: 64110592 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e17732800 session 0x558e170ba3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e1558b400 session 0x558e180b2b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e15556c00 session 0x558e17b01860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4360264 data_alloc: 234881024 data_used: 23638016
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399335424 unmapped: 64102400 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a44ef000/0x0/0x1bfc00000, data 0x3a2188a/0x3c3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e170eb800 session 0x558e15ea0780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399409152 unmapped: 64028672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 ms_handle_reset con 0x558e17732800 session 0x558e152afc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4439666 data_alloc: 234881024 data_used: 23601152
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 heartbeat osd_stat(store_statfs(0x1a446c000/0x0/0x1bfc00000, data 0x3aa48c3/0x3cc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399425536 unmapped: 64012288 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 399433728 unmapped: 64004096 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.361456871s of 10.004527092s, submitted: 161
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403292160 unmapped: 60145664 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524226 data_alloc: 234881024 data_used: 23879680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403292160 unmapped: 60145664 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18635400 session 0x558e177fe3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32d9000/0x0/0x1bfc00000, data 0x4c3643a/0x4e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32d9000/0x0/0x1bfc00000, data 0x4c3643a/0x4e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4585704 data_alloc: 234881024 data_used: 23875584
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7cc00 session 0x558e175b7680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e1766e1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32d9000/0x0/0x1bfc00000, data 0x4c3643a/0x4e55000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403095552 unmapped: 60342272 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403103744 unmapped: 60334080 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e170eb800 session 0x558e186ad860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.963036537s of 10.694633484s, submitted: 58
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403111936 unmapped: 60325888 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e18494000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7cc00 session 0x558e1645a1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4588309 data_alloc: 234881024 data_used: 23883776
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32b1000/0x0/0x1bfc00000, data 0x4c5d45d/0x4e7d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 403111936 unmapped: 60325888 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e17b01e00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404037632 unmapped: 59400192 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e1766e3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e1809ba40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404529152 unmapped: 58908672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 404529152 unmapped: 58908672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32af000/0x0/0x1bfc00000, data 0x4c5f45d/0x4e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408477696 unmapped: 54960128 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4709181 data_alloc: 251658240 data_used: 40263680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408477696 unmapped: 54960128 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e15ea0d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408477696 unmapped: 54960128 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a32af000/0x0/0x1bfc00000, data 0x4c5f45d/0x4e7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e17b030e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4604751 data_alloc: 251658240 data_used: 37675008
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408502272 unmapped: 54935552 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3e13000/0x0/0x1bfc00000, data 0x40fb45d/0x431b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.932298660s of 12.038512230s, submitted: 32
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 409591808 unmapped: 53846016 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 411590656 unmapped: 51847168 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 411680768 unmapped: 51757056 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a349d000/0x0/0x1bfc00000, data 0x4a7045d/0x4c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 50176000 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770651 data_alloc: 251658240 data_used: 39198720
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2be3000/0x0/0x1bfc00000, data 0x531d45d/0x553d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 48914432 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2be3000/0x0/0x1bfc00000, data 0x531d45d/0x553d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4773601 data_alloc: 251658240 data_used: 39391232
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2bee000/0x0/0x1bfc00000, data 0x532045d/0x5540000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4767745 data_alloc: 251658240 data_used: 39415808
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414392320 unmapped: 49045504 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18635400 session 0x558e17b01680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.476836205s of 14.158192635s, submitted: 166
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18504000 session 0x558e1845c000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e1809b680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 49037312 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 49037312 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2c12000/0x0/0x1bfc00000, data 0x52fc43a/0x551b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 49037312 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4759909 data_alloc: 251658240 data_used: 39301120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2c12000/0x0/0x1bfc00000, data 0x52fc43a/0x551b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a2c11000/0x0/0x1bfc00000, data 0x52fd43a/0x551c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e170eb800 session 0x558e1601dc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414408704 unmapped: 49029120 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e18494f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414425088 unmapped: 49012736 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4584221 data_alloc: 234881024 data_used: 30871552
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414425088 unmapped: 49012736 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414425088 unmapped: 49012736 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a360f000/0x0/0x1bfc00000, data 0x42c443a/0x44e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4584221 data_alloc: 234881024 data_used: 30871552
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414433280 unmapped: 49004544 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc ms_handle_reset ms_handle_reset con 0x558e18542c00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3835187053
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3835187053,v1:192.168.122.100:6801/3835187053]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc handle_mgr_configure stats_period=5
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a360f000/0x0/0x1bfc00000, data 0x42c443a/0x44e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a360f000/0x0/0x1bfc00000, data 0x42c443a/0x44e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 48865280 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4584221 data_alloc: 234881024 data_used: 30871552
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e15556c00 session 0x558e181d0f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 48857088 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7dc00 session 0x558e17b092c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e175b7c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 48848896 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18504000 session 0x558e176ccf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.935745239s of 22.065441132s, submitted: 24
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e177fe5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414597120 unmapped: 48840704 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1558b000 session 0x558e15dada40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1807ac00 session 0x558e177ff4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e1848c3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e175f3860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e17571c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e7cc00 session 0x558e18054b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17e87c00 session 0x558e1601dc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1807ac00 session 0x558e1845c000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e18501400 session 0x558e17b01680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1807a800 session 0x558e17b030e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e1c8c8800 session 0x558e15ea0d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414441472 unmapped: 48996352 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 ms_handle_reset con 0x558e17732800 session 0x558e17b0d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a383a000/0x0/0x1bfc00000, data 0x42c444a/0x44e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414457856 unmapped: 48979968 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4491086 data_alloc: 234881024 data_used: 26501120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4491086 data_alloc: 234881024 data_used: 26501120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414466048 unmapped: 48971776 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a3fdc000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [1])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 47644672 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.544863701s of 12.705204010s, submitted: 25
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4532618 data_alloc: 234881024 data_used: 32337920
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 47603712 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4092000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4092000/0x0/0x1bfc00000, data 0x3a6d43a/0x3c8c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4537202 data_alloc: 234881024 data_used: 33103872
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 47587328 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a4086000/0x0/0x1bfc00000, data 0x3a7943a/0x3c98000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 47529984 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4624536 data_alloc: 234881024 data_used: 33644544
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416890880 unmapped: 46546944 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.343315125s of 10.646893501s, submitted: 73
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a34d7000/0x0/0x1bfc00000, data 0x462043a/0x483f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4640986 data_alloc: 234881024 data_used: 33513472
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418185216 unmapped: 45252608 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 410 handle_osd_map epochs [411,411], i have 411, src has [1,411]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e18494000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a34de000/0x0/0x1bfc00000, data 0x462143a/0x4840000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18501400 session 0x558e17b0c5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e15ea0780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649804 data_alloc: 234881024 data_used: 34615296
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e70000 session 0x558e1766eb40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e1809b4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e18495c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.740987301s of 10.010063171s, submitted: 57
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e1601e000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18501400 session 0x558e17b00f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a33b5000/0x0/0x1bfc00000, data 0x4747157/0x4969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a33b5000/0x0/0x1bfc00000, data 0x4747157/0x4969000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418209792 unmapped: 45228032 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e170eb400 session 0x558e185ac5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4662115 data_alloc: 234881024 data_used: 34615296
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e1780b0e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 43065344 heap: 463437824 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e165ae960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2a22000/0x0/0x1bfc00000, data 0x50da157/0x52fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e15692d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18501400 session 0x558e1769de00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 53583872 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 53583872 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 53583872 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418258944 unmapped: 53575680 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e18f0e400 session 0x558e1645b860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742155 data_alloc: 251658240 data_used: 35667968
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2a1e000/0x0/0x1bfc00000, data 0x50dd167/0x5300000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e169141e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.706860542s of 10.017317772s, submitted: 15
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418267136 unmapped: 53567488 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e186ac3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e17b0a1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 53420032 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418414592 unmapped: 53420032 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422379520 unmapped: 49455104 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422879232 unmapped: 48955392 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a29f9000/0x0/0x1bfc00000, data 0x5170167/0x5325000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4824333 data_alloc: 251658240 data_used: 45694976
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422895616 unmapped: 48939008 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 48128000 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4899649 data_alloc: 251658240 data_used: 46161920
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423772160 unmapped: 48062464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a208e000/0x0/0x1bfc00000, data 0x5adb167/0x5c90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423772160 unmapped: 48062464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.691255569s of 11.111913681s, submitted: 56
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423780352 unmapped: 48054272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a208a000/0x0/0x1bfc00000, data 0x5ade167/0x5c93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423780352 unmapped: 48054272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 47505408 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4947467 data_alloc: 251658240 data_used: 46370816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428957696 unmapped: 42876928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428965888 unmapped: 42868736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428998656 unmapped: 42835968 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429015040 unmapped: 42819584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429015040 unmapped: 42819584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5002446 data_alloc: 251658240 data_used: 46809088
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429015040 unmapped: 42819584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a160b000/0x0/0x1bfc00000, data 0x655e167/0x6713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 46014464 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4992478 data_alloc: 251658240 data_used: 46952448
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 46006272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 46006272 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e7c400 session 0x558e17b04b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.185502052s of 15.360220909s, submitted: 50
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 45998080 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e155d3800 session 0x558e183150e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17ca0800 session 0x558e17b0cd20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425844736 unmapped: 45989888 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e155d3800 session 0x558e15dacd20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2075000/0x0/0x1bfc00000, data 0x5af4167/0x5ca9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 45981696 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4901853 data_alloc: 251658240 data_used: 45346816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2076000/0x0/0x1bfc00000, data 0x5af4157/0x5ca8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 45981696 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17732800 session 0x558e15136b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 45948928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 45948928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 45948928 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e87c00 session 0x558e1848da40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4939429 data_alloc: 251658240 data_used: 45342720
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e17b0b2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d7b000/0x0/0x1bfc00000, data 0x4df0147/0x4fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d7b000/0x0/0x1bfc00000, data 0x4df0147/0x4fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4764191 data_alloc: 251658240 data_used: 39112704
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.408885002s of 13.657489777s, submitted: 65
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 45940736 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 45932544 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807ac00 session 0x558e17b09e00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d4f000/0x0/0x1bfc00000, data 0x4e14147/0x4fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 45932544 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 45932544 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e180cf400 session 0x558e17b001e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a2d4f000/0x0/0x1bfc00000, data 0x4e14147/0x4fc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e87c00 session 0x558e177ff680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4805083 data_alloc: 251658240 data_used: 44142592
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e1807a800 session 0x558e175b7680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 45916160 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e191fdc00 session 0x558e180e72c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 ms_handle_reset con 0x558e17e70c00 session 0x558e17b04000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426967040 unmapped: 44867584 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 411 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e17e87c00 session 0x558e17b041e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4763347 data_alloc: 251658240 data_used: 44044288
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x48c5d30/0x4ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e15556c00 session 0x558e1766e780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.172450066s of 10.508795738s, submitted: 49
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e18504000 session 0x558e17b045a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a3235000/0x0/0x1bfc00000, data 0x48c5d30/0x4ae6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e1807a800 session 0x558e186ad860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 45899776 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430637056 unmapped: 41197568 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430964736 unmapped: 40869888 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4848018 data_alloc: 251658240 data_used: 45154304
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430972928 unmapped: 40861696 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e180cf400 session 0x558e175f01e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a29a6000/0x0/0x1bfc00000, data 0x5158d20/0x5378000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [0,0,1])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 ms_handle_reset con 0x558e15556c00 session 0x558e175f34a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420356096 unmapped: 51478528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4599958 data_alloc: 234881024 data_used: 32751616
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e1848cf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e170bb4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420364288 unmapped: 51470336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.573250771s of 10.048701286s, submitted: 158
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17e87c00 session 0x558e1809be00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a3cd6000/0x0/0x1bfc00000, data 0x3e287fd/0x4048000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4717000/0x0/0x1bfc00000, data 0x30c07fd/0x32e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4717000/0x0/0x1bfc00000, data 0x30c07fd/0x32e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4455360 data_alloc: 234881024 data_used: 26558464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e186acd20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e170bb2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420372480 unmapped: 51462144 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e1780ab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e1601e000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292332 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e170bb0e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e170ba1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412229632 unmapped: 59604992 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4292332 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e180e70e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.756290436s of 14.266667366s, submitted: 15
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e175f14a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412164096 unmapped: 59670528 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316473 data_alloc: 234881024 data_used: 19558400
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4316473 data_alloc: 234881024 data_used: 19558400
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412172288 unmapped: 59662336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.683190346s of 11.701642036s, submitted: 6
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a564c000/0x0/0x1bfc00000, data 0x24b27fd/0x26d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 59490304 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412753920 unmapped: 59080704 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4329939 data_alloc: 234881024 data_used: 19558400
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5501000/0x0/0x1bfc00000, data 0x25fd7fd/0x281d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 59547648 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 59547648 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a546b000/0x0/0x1bfc00000, data 0x26937fd/0x28b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412286976 unmapped: 59547648 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e180b3680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17e87c00 session 0x558e1848d680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e1807a800 session 0x558e17571680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e1807a800 session 0x558e1706ab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e156a2d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52cc000/0x0/0x1bfc00000, data 0x28327fd/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4358266 data_alloc: 234881024 data_used: 19656704
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412319744 unmapped: 59514880 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52cc000/0x0/0x1bfc00000, data 0x28327fd/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412327936 unmapped: 59506688 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.042664528s of 12.211009979s, submitted: 45
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412467200 unmapped: 59367424 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4357298 data_alloc: 234881024 data_used: 19656704
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412467200 unmapped: 59367424 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412475392 unmapped: 59359232 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412475392 unmapped: 59359232 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a52ab000/0x0/0x1bfc00000, data 0x28537fd/0x2a73000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412475392 unmapped: 59359232 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e175a6000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e17571860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17e87c00 session 0x558e17b045a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296288 data_alloc: 218103808 data_used: 17399808
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5873000/0x0/0x1bfc00000, data 0x228b7fd/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296288 data_alloc: 218103808 data_used: 17399808
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5873000/0x0/0x1bfc00000, data 0x228b7fd/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5873000/0x0/0x1bfc00000, data 0x228b7fd/0x24ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.162653923s of 12.348175049s, submitted: 25
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 408616960 unmapped: 63217664 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 58638336 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4370226 data_alloc: 234881024 data_used: 17797120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4370226 data_alloc: 234881024 data_used: 17797120
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4fc8000/0x0/0x1bfc00000, data 0x2b367fd/0x2d56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.642455101s of 12.881525040s, submitted: 62
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e1848cf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412352512 unmapped: 59482112 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4281318 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e177ff680
Jan 31 04:36:35 np0005603621 rsyslogd[998]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 59457536 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 59449344 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 59449344 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4280503 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412393472 unmapped: 59441152 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a5994000/0x0/0x1bfc00000, data 0x216a7fd/0x238a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 59432960 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e1645ab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17b054a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e1807a800 session 0x558e156a2000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412401664 unmapped: 59432960 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e1848d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.243667603s of 24.796049118s, submitted: 26
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e1601d4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4328887 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a541b000/0x0/0x1bfc00000, data 0x26e37fd/0x2903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17b01c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e175a72c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e1601d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412557312 unmapped: 59277312 heap: 471834624 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e1706b2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412655616 unmapped: 62849024 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417073 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e17b0a1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e186acb40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17820b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e1780a5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e1645ad20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a48ba000/0x0/0x1bfc00000, data 0x324380d/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e17732800 session 0x558e17b001e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15556c00 session 0x558e15136780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a48ba000/0x0/0x1bfc00000, data 0x324380d/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417073 data_alloc: 218103808 data_used: 16318464
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a48ba000/0x0/0x1bfc00000, data 0x324380d/0x3464000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412663808 unmapped: 62840832 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.153007507s of 14.410977364s, submitted: 25
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e18501400 session 0x558e17b041e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e155d3800 session 0x558e17b04000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 ms_handle_reset con 0x558e15272000 session 0x558e18054b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 62685184 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412819456 unmapped: 62685184 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422911 data_alloc: 218103808 data_used: 16326656
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412631040 unmapped: 62873600 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540191 data_alloc: 234881024 data_used: 31350784
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a4894000/0x0/0x1bfc00000, data 0x3267853/0x348a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416366592 unmapped: 59138048 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4540191 data_alloc: 234881024 data_used: 31350784
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.293484688s of 12.319557190s, submitted: 9
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 417054720 unmapped: 58449920 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422313984 unmapped: 53190656 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2f08000/0x0/0x1bfc00000, data 0x3a53853/0x3c76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421281792 unmapped: 54222848 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 54124544 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a28a4000/0x0/0x1bfc00000, data 0x40b0853/0x42d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421601280 unmapped: 53903360 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2888000/0x0/0x1bfc00000, data 0x40c4853/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4663159 data_alloc: 234881024 data_used: 32256000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2888000/0x0/0x1bfc00000, data 0x40c4853/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2888000/0x0/0x1bfc00000, data 0x40c4853/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4654851 data_alloc: 234881024 data_used: 32256000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a2895000/0x0/0x1bfc00000, data 0x40c6853/0x42e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421609472 unmapped: 53895168 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.770472527s of 13.373343468s, submitted: 145
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421625856 unmapped: 53878784 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4658235 data_alloc: 234881024 data_used: 32243712
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421625856 unmapped: 53878784 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a288f000/0x0/0x1bfc00000, data 0x40ca4ac/0x42ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421625856 unmapped: 53878784 heap: 475504640 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a288f000/0x0/0x1bfc00000, data 0x40ca4ac/0x42ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 414 ms_handle_reset con 0x558e18504000 session 0x558e1766f860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423665664 unmapped: 63881216 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 415 ms_handle_reset con 0x558e191fdc00 session 0x558e16915a40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423673856 unmapped: 63873024 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a19e8000/0x0/0x1bfc00000, data 0x4f70159/0x5195000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423698432 unmapped: 63848448 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4760629 data_alloc: 234881024 data_used: 35393536
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423698432 unmapped: 63848448 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a19e4000/0x0/0x1bfc00000, data 0x4f71dce/0x5198000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 63840256 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184fec00 session 0x558e1601cf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e15e4f800 session 0x558e180b34a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e15272000 session 0x558e175f1c20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184fec00 session 0x558e15ea0780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e18504000 session 0x558e175714a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e191fdc00 session 0x558e1706b860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184f2800 session 0x558e1601cf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e15272000 session 0x558e1766f860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 ms_handle_reset con 0x558e184fec00 session 0x558e18054b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4790537 data_alloc: 234881024 data_used: 35393536
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a1610000/0x0/0x1bfc00000, data 0x5346dde/0x556e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.951803207s of 12.451093674s, submitted: 51
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423878656 unmapped: 63668224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423886848 unmapped: 63660032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a160c000/0x0/0x1bfc00000, data 0x534891d/0x5571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423886848 unmapped: 63660032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a160c000/0x0/0x1bfc00000, data 0x534891d/0x5571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423886848 unmapped: 63660032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4794535 data_alloc: 234881024 data_used: 35401728
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a160c000/0x0/0x1bfc00000, data 0x534891d/0x5571000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e18504000 session 0x558e17b04000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 63512576 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 63512576 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 63504384 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 63504384 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e18f0fc00 session 0x558e1769c1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424050688 unmapped: 63496192 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4826520 data_alloc: 251658240 data_used: 36204544
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424099840 unmapped: 63447040 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4879000 data_alloc: 251658240 data_used: 43540480
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 62627840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a15e9000/0x0/0x1bfc00000, data 0x536c91d/0x5595000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1907f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.388605118s of 16.425638199s, submitted: 16
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 61751296 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428785664 unmapped: 58761216 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428826624 unmapped: 58720256 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4936172 data_alloc: 251658240 data_used: 44589056
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428851200 unmapped: 58695680 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 428851200 unmapped: 58695680 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x19fdc7000/0x0/0x1bfc00000, data 0x59ee91d/0x5c17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4956704 data_alloc: 251658240 data_used: 48934912
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 430686208 unmapped: 56860672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e191fdc00 session 0x558e15136780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e17aee400 session 0x558e180b23c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.634708405s of 10.053680420s, submitted: 54
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431161344 unmapped: 56385536 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x19fdc2000/0x0/0x1bfc00000, data 0x59f391d/0x5c1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e15272000 session 0x558e186ac3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4850348 data_alloc: 251658240 data_used: 43851776
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a041f000/0x0/0x1bfc00000, data 0x4fe690d/0x520e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431169536 unmapped: 56377344 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e15556c00 session 0x558e15ea0d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e184fec00 session 0x558e175a6780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4684822 data_alloc: 251658240 data_used: 35962880
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a167a000/0x0/0x1bfc00000, data 0x413d8da/0x4363000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.669324875s of 11.818543434s, submitted: 33
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4685082 data_alloc: 251658240 data_used: 35962880
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a1679000/0x0/0x1bfc00000, data 0x413e8da/0x4364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427696128 unmapped: 59850752 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427704320 unmapped: 59842560 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.3 total, 600.0 interval#012Cumulative writes: 59K writes, 227K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 21K syncs, 2.76 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5202 writes, 20K keys, 5202 commit groups, 1.0 writes per commit group, ingest: 20.24 MB, 0.03 MB/s#012Interval WAL: 5202 writes, 2078 syncs, 2.50 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427704320 unmapped: 59842560 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a1679000/0x0/0x1bfc00000, data 0x413e8da/0x4364000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4689562 data_alloc: 251658240 data_used: 37027840
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 ms_handle_reset con 0x558e18504000 session 0x558e156a25a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 427704320 unmapped: 59842560 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 418 ms_handle_reset con 0x558e15556c00 session 0x558e156a2b40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 418 ms_handle_reset con 0x558e15272000 session 0x558e177ff2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a1674000/0x0/0x1bfc00000, data 0x41405f7/0x4369000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [1,2])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 418 ms_handle_reset con 0x558e17aee400 session 0x558e177fef00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429170688 unmapped: 58376192 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 418 heartbeat osd_stat(store_statfs(0x19fcaf000/0x0/0x1bfc00000, data 0x5b05595/0x5d2d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429170688 unmapped: 58376192 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 419 ms_handle_reset con 0x558e184fec00 session 0x558e1645b0e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429178880 unmapped: 58368000 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e18f0fc00 session 0x558e177ff4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.062422752s of 10.363031387s, submitted: 101
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e15272000 session 0x558e175f1a40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429203456 unmapped: 58343424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e15556c00 session 0x558e1809b0e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 ms_handle_reset con 0x558e17aee400 session 0x558e175f2d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4895946 data_alloc: 251658240 data_used: 40210432
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 heartbeat osd_stat(store_statfs(0x19fcaa000/0x0/0x1bfc00000, data 0x5b08f19/0x5d34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429203456 unmapped: 58343424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e184fec00 session 0x558e170ba3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429228032 unmapped: 58318848 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e193c6000 session 0x558e1706ab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e191fd400 session 0x558e1601d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429228032 unmapped: 58318848 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 421 ms_handle_reset con 0x558e15272000 session 0x558e165af4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a16e2000/0x0/0x1bfc00000, data 0x40d1b1e/0x42fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712034 data_alloc: 251658240 data_used: 39739392
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a16de000/0x0/0x1bfc00000, data 0x40d3679/0x42ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 58294272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e17aee400 session 0x558e176ccf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e184fec00 session 0x558e17b0d4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e156bc400 session 0x558e17570000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 422 ms_handle_reset con 0x558e15272000 session 0x558e180e7860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e17aee400 session 0x558e180e7e00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e184fec00 session 0x558e151372c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e191fd400 session 0x558e18495a40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e156bc800 session 0x558e1809ba40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e15556c00 session 0x558e17b0bc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e15272000 session 0x558e17b0c5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 63897600 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a16da000/0x0/0x1bfc00000, data 0x40d5326/0x4302000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 63897600 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e18501400 session 0x558e17b00f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e17732800 session 0x558e17570d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4589346 data_alloc: 234881024 data_used: 24395776
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.267575264s of 10.683115959s, submitted: 138
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 ms_handle_reset con 0x558e17aee400 session 0x558e180e7860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a1f92000/0x0/0x1bfc00000, data 0x381c336/0x3a4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3045000/0x0/0x1bfc00000, data 0x2768e6e/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4417822 data_alloc: 218103808 data_used: 16379904
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 424 ms_handle_reset con 0x558e15272000 session 0x558e170ba3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3045000/0x0/0x1bfc00000, data 0x2768e6e/0x2997000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1809b0e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e177ff4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e1645b0e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420757504 unmapped: 66789376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420782080 unmapped: 66764800 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457687 data_alloc: 234881024 data_used: 19607552
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4457687 data_alloc: 234881024 data_used: 19607552
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3042000/0x0/0x1bfc00000, data 0x276a9bd/0x299b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 420798464 unmapped: 66748416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.997537613s of 19.108545303s, submitted: 49
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 63299584 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4530919 data_alloc: 234881024 data_used: 19644416
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2617000/0x0/0x1bfc00000, data 0x31969bd/0x33c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2617000/0x0/0x1bfc00000, data 0x31969bd/0x33c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424239104 unmapped: 63307776 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 63299584 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 63283200 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4549795 data_alloc: 234881024 data_used: 20557824
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4549795 data_alloc: 234881024 data_used: 20557824
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32229bd/0x3453000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 424271872 unmapped: 63275008 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4549795 data_alloc: 234881024 data_used: 20557824
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.564064026s of 16.081941605s, submitted: 84
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x37719bd/0x39a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e185fc400 session 0x558e1601f680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e1601d860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1769c1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e175a7680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e1601e3c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 62201856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x37719bd/0x39a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4593282 data_alloc: 234881024 data_used: 20557824
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184eec00 session 0x558e17b01680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e175f21e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1845cf00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e152aed20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203c000/0x0/0x1bfc00000, data 0x37719bd/0x39a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4595120 data_alloc: 234881024 data_used: 20557824
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x37719cd/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x37719cd/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4634388 data_alloc: 234881024 data_used: 26116096
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a203b000/0x0/0x1bfc00000, data 0x37719cd/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.197906494s of 17.336410522s, submitted: 21
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x37729cd/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4634872 data_alloc: 234881024 data_used: 26116096
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2039000/0x0/0x1bfc00000, data 0x37729cd/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 62193664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426369024 unmapped: 61177856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426369024 unmapped: 61177856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426369024 unmapped: 61177856 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4697934 data_alloc: 234881024 data_used: 26128384
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b8000/0x0/0x1bfc00000, data 0x3ff49cd/0x4226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426377216 unmapped: 61169664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426377216 unmapped: 61169664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b8000/0x0/0x1bfc00000, data 0x3ff49cd/0x4226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426377216 unmapped: 61169664 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.118430138s of 10.285791397s, submitted: 43
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b8000/0x0/0x1bfc00000, data 0x3ff49cd/0x4226000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4702580 data_alloc: 234881024 data_used: 26468352
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b2000/0x0/0x1bfc00000, data 0x3ffa9cd/0x422c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e16915a40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18089000 session 0x558e17571680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426385408 unmapped: 61161472 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a17b2000/0x0/0x1bfc00000, data 0x3ffa9cd/0x422c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [1])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555081 data_alloc: 234881024 data_used: 20561920
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32239bd/0x3454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e17b04960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 66428928 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.955152512s of 11.067607880s, submitted: 30
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a258a000/0x0/0x1bfc00000, data 0x32239bd/0x3454000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a21f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421126144 unmapped: 66420736 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4555081 data_alloc: 234881024 data_used: 20561920
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184fec00 session 0x558e175a6780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421158912 unmapped: 66387968 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e191fd400 session 0x558e175f0960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421216256 unmapped: 66330624 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e18315680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 66256896 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421314560 unmapped: 66232320 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383870 data_alloc: 218103808 data_used: 15134720
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4383870 data_alloc: 218103808 data_used: 15134720
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421322752 unmapped: 66224128 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421330944 unmapped: 66215936 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421339136 unmapped: 66207744 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421347328 unmapped: 66199552 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421355520 unmapped: 66191360 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421363712 unmapped: 66183168 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421371904 unmapped: 66174976 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421380096 unmapped: 66166784 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421388288 unmapped: 66158592 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421396480 unmapped: 66150400 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421404672 unmapped: 66142208 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 66134016 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 66134016 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421412864 unmapped: 66134016 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421421056 unmapped: 66125824 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421429248 unmapped: 66117632 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421437440 unmapped: 66109440 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421445632 unmapped: 66101248 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421453824 unmapped: 66093056 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421462016 unmapped: 66084864 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421462016 unmapped: 66084864 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421462016 unmapped: 66084864 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421470208 unmapped: 66076672 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4384030 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421478400 unmapped: 66068480 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e17b0dc20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e17b0c780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e1809a000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1809ad20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 108.594985962s of 109.833038330s, submitted: 382
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3220000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184fec00 session 0x558e1809b4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e191fd400 session 0x558e1780a5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e17570f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e1848da40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e18494000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0f000/0x0/0x1bfc00000, data 0x268f9ad/0x28bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422270 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0f000/0x0/0x1bfc00000, data 0x268f9ad/0x28bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422270 data_alloc: 218103808 data_used: 15138816
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0f000/0x0/0x1bfc00000, data 0x268f9ad/0x28bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e184fec00 session 0x558e18495860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0e000/0x0/0x1bfc00000, data 0x268f9d0/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462992 data_alloc: 234881024 data_used: 20385792
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0e000/0x0/0x1bfc00000, data 0x268f9d0/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462992 data_alloc: 234881024 data_used: 20385792
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2d0e000/0x0/0x1bfc00000, data 0x268f9d0/0x28c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 421486592 unmapped: 66060288 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.986009598s of 20.119245529s, submitted: 13
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422076416 unmapped: 65470464 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503560 data_alloc: 234881024 data_used: 20520960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503560 data_alloc: 234881024 data_used: 20520960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422084608 unmapped: 65462272 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422092800 unmapped: 65454080 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422092800 unmapped: 65454080 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422092800 unmapped: 65454080 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422100992 unmapped: 65445888 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503560 data_alloc: 234881024 data_used: 20520960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422100992 unmapped: 65445888 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28a2000/0x0/0x1bfc00000, data 0x2afb9d0/0x2d2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e185ac960
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.309407234s of 13.431710243s, submitted: 27
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17fd0800 session 0x558e1706a1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422100992 unmapped: 65445888 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e18501400 session 0x558e175a6000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418742272 unmapped: 68804608 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418750464 unmapped: 68796416 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418758656 unmapped: 68788224 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392758 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f99d/0x23ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.220516205s of 42.306221008s, submitted: 33
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e15dad2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418766848 unmapped: 68780032 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395882 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 68771840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 68771840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f9ac/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418775040 unmapped: 68771840 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 68763648 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15556c00 session 0x558e169152c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17732800 session 0x558e181d0d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a321f000/0x0/0x1bfc00000, data 0x217f9ac/0x23af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449739 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418955264 unmapped: 68591616 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449739 data_alloc: 218103808 data_used: 15073280
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6f000/0x0/0x1bfc00000, data 0x282ea0e/0x2a5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.493515968s of 12.869886398s, submitted: 31
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e15272000 session 0x558e180545a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418963456 unmapped: 68583424 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x282ea31/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4500510 data_alloc: 234881024 data_used: 21995520
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x282ea31/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4500510 data_alloc: 234881024 data_used: 21995520
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x282ea31/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 418971648 unmapped: 68575232 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.652256012s of 11.668172836s, submitted: 4
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 65101824 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2100000/0x0/0x1bfc00000, data 0x329ca31/0x34ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422445056 unmapped: 65101824 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579302 data_alloc: 234881024 data_used: 22003712
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2100000/0x0/0x1bfc00000, data 0x329ca31/0x34ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422707200 unmapped: 64839680 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2088000/0x0/0x1bfc00000, data 0x3314a31/0x3546000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207d000/0x0/0x1bfc00000, data 0x331fa31/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4590494 data_alloc: 234881024 data_used: 23314432
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207d000/0x0/0x1bfc00000, data 0x331fa31/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422772736 unmapped: 64774144 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.939406395s of 11.124873161s, submitted: 85
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207d000/0x0/0x1bfc00000, data 0x331fa31/0x3551000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4590786 data_alloc: 234881024 data_used: 23318528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422780928 unmapped: 64765952 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422789120 unmapped: 64757760 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422789120 unmapped: 64757760 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 ms_handle_reset con 0x558e17fd0800 session 0x558e176cde00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a207c000/0x0/0x1bfc00000, data 0x3320a31/0x3552000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422797312 unmapped: 64749568 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422805504 unmapped: 64741376 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4591130 data_alloc: 234881024 data_used: 23334912
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 426 ms_handle_reset con 0x558e18501400 session 0x558e175a74a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422813696 unmapped: 64733184 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422813696 unmapped: 64733184 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 422821888 unmapped: 64724992 heap: 487546880 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 427 ms_handle_reset con 0x558e184fec00 session 0x558e15dacd20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2078000/0x0/0x1bfc00000, data 0x332268a/0x3555000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 427 ms_handle_reset con 0x558e17fd0400 session 0x558e152ae000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 59637760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 431759360 unmapped: 59637760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.998736382s of 10.688847542s, submitted: 23
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 428 ms_handle_reset con 0x558e15272000 session 0x558e156a2f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4690667 data_alloc: 234881024 data_used: 33574912
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a180b000/0x0/0x1bfc00000, data 0x3b8e2e3/0x3dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423329792 unmapped: 68067328 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e17fd0800 session 0x558e175712c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 68042752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e184fec00 session 0x558e15ea1860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e18501400 session 0x558e17b0a5a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e14cb3400 session 0x558e181d1680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 68042752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 429 ms_handle_reset con 0x558e14cb3400 session 0x558e1766ed20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423354368 unmapped: 68042752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423362560 unmapped: 68034560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698255 data_alloc: 234881024 data_used: 33574912
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e1780ab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e184fec00 session 0x558e17b08d20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17fd0800 session 0x558e1766eb40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e18501400 session 0x558e175b6f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e14cb3400 session 0x558e1601ef00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e180e72c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4700791 data_alloc: 234881024 data_used: 33579008
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 68026368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b93808/0x3dcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17fd0800 session 0x558e152af4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b93808/0x3dcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4700791 data_alloc: 234881024 data_used: 33579008
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e184fec00 session 0x558e1601c1e0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e18501400 session 0x558e165aef00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.898597717s of 15.987887383s, submitted: 28
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e14cb3400 session 0x558e176cc780
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423378944 unmapped: 68018176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4702905 data_alloc: 234881024 data_used: 33599488
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4702905 data_alloc: 234881024 data_used: 33599488
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423387136 unmapped: 68009984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.751194000s of 13.756506920s, submitted: 2
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735891 data_alloc: 234881024 data_used: 33701888
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4742611 data_alloc: 234881024 data_used: 34357248
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426745856 unmapped: 64651264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4743763 data_alloc: 234881024 data_used: 34758656
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 64643072 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4743763 data_alloc: 234881024 data_used: 34758656
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.341286659s of 15.488368034s, submitted: 11
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735603 data_alloc: 234881024 data_used: 34758656
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423559168 unmapped: 67837952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4735603 data_alloc: 234881024 data_used: 34758656
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.959305763s of 11.974246025s, submitted: 4
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e184fec00 session 0x558e1809af00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4741043 data_alloc: 234881024 data_used: 35516416
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423567360 unmapped: 67829760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a14f4000/0x0/0x1bfc00000, data 0x3e9f818/0x40da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423575552 unmapped: 67821568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423575552 unmapped: 67821568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e170bab40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17fd0800 session 0x558e152af4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4717131 data_alloc: 234881024 data_used: 35495936
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b93818/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1800000/0x0/0x1bfc00000, data 0x3b937b6/0x3dcd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e17e7d000 session 0x558e175b7e00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423583744 unmapped: 67813376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4716406 data_alloc: 234881024 data_used: 35495936
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e14cb3400 session 0x558e175a7a40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.583814621s of 13.691693306s, submitted: 31
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423591936 unmapped: 67805184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a1801000/0x0/0x1bfc00000, data 0x3b937a6/0x3dcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 ms_handle_reset con 0x558e15272000 session 0x558e15dada40
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423591936 unmapped: 67805184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423591936 unmapped: 67805184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423600128 unmapped: 67796992 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 430 handle_osd_map epochs [431,431], i have 431, src has [1,431]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4719530 data_alloc: 234881024 data_used: 35500032
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a17ff000/0x0/0x1bfc00000, data 0x3b953f1/0x3dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423616512 unmapped: 67780608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423624704 unmapped: 67772416 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 67747840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4642462 data_alloc: 234881024 data_used: 33583104
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 431 ms_handle_reset con 0x558e17e7d000 session 0x558e17b0a000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 67747840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.550810814s of 10.401992798s, submitted: 24
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 431 ms_handle_reset con 0x558e17fd0800 session 0x558e175b63c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423649280 unmapped: 67747840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a206a000/0x0/0x1bfc00000, data 0x332b3f1/0x3564000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423665664 unmapped: 67731456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 432 ms_handle_reset con 0x558e184fec00 session 0x558e165af4a0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423665664 unmapped: 67731456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2066000/0x0/0x1bfc00000, data 0x332d0ba/0x3567000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423690240 unmapped: 67706880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649610 data_alloc: 234881024 data_used: 33591296
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423690240 unmapped: 67706880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2063000/0x0/0x1bfc00000, data 0x332ec15/0x356a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 423706624 unmapped: 67690496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 433 ms_handle_reset con 0x558e15556c00 session 0x558e186ad2c0
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 433 ms_handle_reset con 0x558e14cb3400 session 0x558e156a2f00
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4439599 data_alloc: 218103808 data_used: 15101952
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a31a3000/0x0/0x1bfc00000, data 0x218db90/0x23c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.949651718s of 10.408130646s, submitted: 76
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 ms_handle_reset con 0x558e15272000 session 0x558e175a6000
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 ms_handle_reset con 0x558e17e7d000 session 0x558e18495860
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414121984 unmapped: 77275136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414130176 unmapped: 77266944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414138368 unmapped: 77258752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414146560 unmapped: 77250560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414146560 unmapped: 77250560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414146560 unmapped: 77250560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414154752 unmapped: 77242368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414154752 unmapped: 77242368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414154752 unmapped: 77242368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414162944 unmapped: 77234176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 77225984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 77225984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414171136 unmapped: 77225984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 77217792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 77217792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414179328 unmapped: 77217792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414187520 unmapped: 77209600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 77201408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 77201408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 77193216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 77193216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414203904 unmapped: 77193216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 77201408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config diff' '{prefix=config diff}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config show' '{prefix=config show}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413917184 unmapped: 77479936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 77692928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'log dump' '{prefix=log dump}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413704192 unmapped: 77692928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'perf dump' '{prefix=perf dump}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'perf schema' '{prefix=perf schema}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412893184 unmapped: 78503936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412909568 unmapped: 78487552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 78479360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 78479360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412917760 unmapped: 78479360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 78471168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 78471168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 78471168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 78471168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412925952 unmapped: 78471168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412934144 unmapped: 78462976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412942336 unmapped: 78454784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412958720 unmapped: 78438400 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412958720 unmapped: 78438400 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412958720 unmapped: 78438400 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412958720 unmapped: 78438400 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412966912 unmapped: 78430208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412975104 unmapped: 78422016 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412975104 unmapped: 78422016 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412975104 unmapped: 78422016 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 78405632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 78389248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 78381056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 78381056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 78381056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 78381056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 78381056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 78381056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413024256 unmapped: 78372864 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413024256 unmapped: 78372864 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413024256 unmapped: 78372864 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413024256 unmapped: 78372864 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 78364672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 78364672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 78364672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 78364672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 78364672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 78364672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413048832 unmapped: 78348288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 78340096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 78340096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 78340096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 78340096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 78340096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 78340096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413065216 unmapped: 78331904 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413073408 unmapped: 78323712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413073408 unmapped: 78323712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413073408 unmapped: 78323712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413073408 unmapped: 78323712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413073408 unmapped: 78323712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413073408 unmapped: 78323712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413081600 unmapped: 78315520 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413081600 unmapped: 78315520 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 78307328 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 78307328 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 78299136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.3 total, 600.0 interval#012Cumulative writes: 62K writes, 234K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 62K writes, 22K syncs, 2.75 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2220 writes, 6987 keys, 2220 commit groups, 1.0 writes per commit group, ingest: 5.66 MB, 0.01 MB/s#012Interval WAL: 2220 writes, 963 syncs, 2.31 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 78299136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 78299136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 78299136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 78299136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 78299136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc ms_handle_reset ms_handle_reset con 0x558e18635400
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3835187053
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3835187053,v1:192.168.122.100:6801/3835187053]
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: mgrc handle_mgr_configure stats_period=5
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 ms_handle_reset con 0x558e170eb800 session 0x558e165af680
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413106176 unmapped: 78290944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413114368 unmapped: 78282752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413114368 unmapped: 78282752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413122560 unmapped: 78274560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413130752 unmapped: 78266368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413130752 unmapped: 78266368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 78258176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 78258176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 78258176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 78258176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413147136 unmapped: 78249984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413155328 unmapped: 78241792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413155328 unmapped: 78241792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 78233600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 78233600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 78233600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 78233600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 78233600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413171712 unmapped: 78225408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413171712 unmapped: 78225408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413171712 unmapped: 78225408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413179904 unmapped: 78217216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413188096 unmapped: 78209024 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413196288 unmapped: 78200832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413204480 unmapped: 78192640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413212672 unmapped: 78184448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413220864 unmapped: 78176256 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413220864 unmapped: 78176256 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413220864 unmapped: 78176256 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413229056 unmapped: 78168064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 ms_handle_reset con 0x558e1d957400 session 0x558e1769cd20
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413229056 unmapped: 78168064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441950 data_alloc: 218103808 data_used: 15106048
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413229056 unmapped: 78168064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413229056 unmapped: 78168064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3204000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413229056 unmapped: 78168064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413237248 unmapped: 78159872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 272.774078369s of 272.857147217s, submitted: 18
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413261824 unmapped: 78135296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 78127104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 78127104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413270016 unmapped: 78127104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413278208 unmapped: 78118912 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 413368320 unmapped: 78028800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414490624 unmapped: 76906496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414539776 unmapped: 76857344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414547968 unmapped: 76849152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414556160 unmapped: 76840960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414564352 unmapped: 76832768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 76824576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 76824576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 76824576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414572544 unmapped: 76824576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414580736 unmapped: 76816384 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414588928 unmapped: 76808192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414597120 unmapped: 76800000 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414597120 unmapped: 76800000 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414605312 unmapped: 76791808 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414605312 unmapped: 76791808 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414605312 unmapped: 76791808 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414605312 unmapped: 76791808 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414605312 unmapped: 76791808 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414605312 unmapped: 76791808 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 76783616 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 76783616 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 76783616 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 76783616 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414613504 unmapped: 76783616 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 76775424 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 76775424 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 76775424 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414621696 unmapped: 76775424 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 76767232 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 76767232 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 76767232 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414629888 unmapped: 76767232 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 76759040 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 76759040 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 76759040 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 76759040 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414638080 unmapped: 76759040 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 76750848 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414654464 unmapped: 76742656 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414662656 unmapped: 76734464 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414662656 unmapped: 76734464 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414662656 unmapped: 76734464 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414662656 unmapped: 76734464 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414662656 unmapped: 76734464 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414662656 unmapped: 76734464 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414670848 unmapped: 76726272 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414670848 unmapped: 76726272 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414670848 unmapped: 76726272 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414670848 unmapped: 76726272 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414679040 unmapped: 76718080 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414687232 unmapped: 76709888 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414695424 unmapped: 76701696 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414695424 unmapped: 76701696 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414695424 unmapped: 76701696 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414695424 unmapped: 76701696 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414695424 unmapped: 76701696 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414695424 unmapped: 76701696 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414703616 unmapped: 76693504 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414711808 unmapped: 76685312 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414711808 unmapped: 76685312 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414711808 unmapped: 76685312 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 31 04:36:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367217795' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414711808 unmapped: 76685312 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414720000 unmapped: 76677120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414728192 unmapped: 76668928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414728192 unmapped: 76668928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414728192 unmapped: 76668928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414728192 unmapped: 76668928 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414736384 unmapped: 76660736 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 76652544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 76652544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 76652544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414744576 unmapped: 76652544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 76644352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 76644352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 76644352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414752768 unmapped: 76644352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 76636160 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 76636160 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414760960 unmapped: 76636160 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414769152 unmapped: 76627968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414777344 unmapped: 76619776 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414777344 unmapped: 76619776 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 76611584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 76611584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414785536 unmapped: 76611584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 76603392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 76603392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 76603392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 76595200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 76595200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 76595200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 76595200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 76595200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414801920 unmapped: 76595200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414810112 unmapped: 76587008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414810112 unmapped: 76587008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414810112 unmapped: 76587008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414810112 unmapped: 76587008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 76578816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 76578816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 76578816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42651 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 76578816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 76570624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 76570624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 76570624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414826496 unmapped: 76570624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414834688 unmapped: 76562432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414834688 unmapped: 76562432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414834688 unmapped: 76562432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414834688 unmapped: 76562432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414834688 unmapped: 76562432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414834688 unmapped: 76562432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 76554240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 76554240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 76554240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 76554240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 76554240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414842880 unmapped: 76554240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414859264 unmapped: 76537856 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414859264 unmapped: 76537856 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414859264 unmapped: 76537856 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414867456 unmapped: 76529664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 76521472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 76521472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 76521472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 76521472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 76521472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414875648 unmapped: 76521472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414883840 unmapped: 76513280 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414883840 unmapped: 76513280 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414892032 unmapped: 76505088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414900224 unmapped: 76496896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414908416 unmapped: 76488704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414924800 unmapped: 76472320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414924800 unmapped: 76472320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414924800 unmapped: 76472320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414924800 unmapped: 76472320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414924800 unmapped: 76472320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414924800 unmapped: 76472320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414932992 unmapped: 76464128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414932992 unmapped: 76464128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414932992 unmapped: 76464128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414949376 unmapped: 76447744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414949376 unmapped: 76447744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414957568 unmapped: 76439552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414957568 unmapped: 76439552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414957568 unmapped: 76439552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414957568 unmapped: 76439552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414965760 unmapped: 76431360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414973952 unmapped: 76423168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414982144 unmapped: 76414976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414990336 unmapped: 76406784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414990336 unmapped: 76406784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 76398592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 76398592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 76398592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 76398592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 76398592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 414998528 unmapped: 76398592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415006720 unmapped: 76390400 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415006720 unmapped: 76390400 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415014912 unmapped: 76382208 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415023104 unmapped: 76374016 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415023104 unmapped: 76374016 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415031296 unmapped: 76365824 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415031296 unmapped: 76365824 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415031296 unmapped: 76365824 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415039488 unmapped: 76357632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415039488 unmapped: 76357632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415039488 unmapped: 76357632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415039488 unmapped: 76357632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415039488 unmapped: 76357632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415039488 unmapped: 76357632 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415055872 unmapped: 76341248 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415064064 unmapped: 76333056 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415080448 unmapped: 76316672 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415096832 unmapped: 76300288 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 76292096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 76292096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 76292096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 76292096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 76292096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 76292096 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415113216 unmapped: 76283904 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415113216 unmapped: 76283904 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415121408 unmapped: 76275712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415121408 unmapped: 76275712 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 76267520 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 76267520 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 76267520 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 76267520 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415137792 unmapped: 76259328 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415137792 unmapped: 76259328 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 76251136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 76251136 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 76242944 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 76234752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 76234752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 76234752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 76234752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 76234752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415162368 unmapped: 76234752 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415170560 unmapped: 76226560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415170560 unmapped: 76226560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415170560 unmapped: 76226560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415170560 unmapped: 76226560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415170560 unmapped: 76226560 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415178752 unmapped: 76218368 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 76210176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 76210176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 76210176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415186944 unmapped: 76210176 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415195136 unmapped: 76201984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415195136 unmapped: 76201984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415195136 unmapped: 76201984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415195136 unmapped: 76201984 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415203328 unmapped: 76193792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415203328 unmapped: 76193792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415203328 unmapped: 76193792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415203328 unmapped: 76193792 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415211520 unmapped: 76185600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415211520 unmapped: 76185600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415211520 unmapped: 76185600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415211520 unmapped: 76185600 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415219712 unmapped: 76177408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415219712 unmapped: 76177408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415219712 unmapped: 76177408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415219712 unmapped: 76177408 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415227904 unmapped: 76169216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415227904 unmapped: 76169216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415227904 unmapped: 76169216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415227904 unmapped: 76169216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415227904 unmapped: 76169216 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415236096 unmapped: 76161024 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415244288 unmapped: 76152832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415244288 unmapped: 76152832 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415252480 unmapped: 76144640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415252480 unmapped: 76144640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415252480 unmapped: 76144640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415252480 unmapped: 76144640 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415260672 unmapped: 76136448 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415277056 unmapped: 76120064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415277056 unmapped: 76120064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415277056 unmapped: 76120064 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415285248 unmapped: 76111872 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415293440 unmapped: 76103680 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415293440 unmapped: 76103680 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415293440 unmapped: 76103680 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415293440 unmapped: 76103680 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415293440 unmapped: 76103680 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415293440 unmapped: 76103680 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415309824 unmapped: 76087296 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415318016 unmapped: 76079104 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415326208 unmapped: 76070912 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415326208 unmapped: 76070912 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 76046336 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 76046336 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 76046336 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 76046336 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 76046336 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415350784 unmapped: 76046336 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 76038144 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 76038144 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 76038144 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 76038144 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 76038144 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415358976 unmapped: 76038144 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415367168 unmapped: 76029952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415367168 unmapped: 76029952 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415375360 unmapped: 76021760 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415383552 unmapped: 76013568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415383552 unmapped: 76013568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415383552 unmapped: 76013568 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415391744 unmapped: 76005376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415391744 unmapped: 76005376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415391744 unmapped: 76005376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415391744 unmapped: 76005376 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415399936 unmapped: 75997184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415399936 unmapped: 75997184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415399936 unmapped: 75997184 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415408128 unmapped: 75988992 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415408128 unmapped: 75988992 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415416320 unmapped: 75980800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415416320 unmapped: 75980800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415416320 unmapped: 75980800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415416320 unmapped: 75980800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415416320 unmapped: 75980800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415416320 unmapped: 75980800 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415424512 unmapped: 75972608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.3 total, 600.0 interval#012Cumulative writes: 62K writes, 235K keys, 62K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 62K writes, 23K syncs, 2.73 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 820 writes, 1255 keys, 820 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 820 writes, 406 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415424512 unmapped: 75972608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415424512 unmapped: 75972608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415424512 unmapped: 75972608 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415432704 unmapped: 75964416 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415440896 unmapped: 75956224 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415440896 unmapped: 75956224 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415440896 unmapped: 75956224 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415449088 unmapped: 75948032 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415457280 unmapped: 75939840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415457280 unmapped: 75939840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415457280 unmapped: 75939840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415457280 unmapped: 75939840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415457280 unmapped: 75939840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415457280 unmapped: 75939840 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415465472 unmapped: 75931648 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415473664 unmapped: 75923456 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415481856 unmapped: 75915264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415481856 unmapped: 75915264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415481856 unmapped: 75915264 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415498240 unmapped: 75898880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415498240 unmapped: 75898880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415498240 unmapped: 75898880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415498240 unmapped: 75898880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415498240 unmapped: 75898880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415498240 unmapped: 75898880 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415506432 unmapped: 75890688 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415514624 unmapped: 75882496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415514624 unmapped: 75882496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415514624 unmapped: 75882496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415514624 unmapped: 75882496 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415522816 unmapped: 75874304 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415522816 unmapped: 75874304 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415522816 unmapped: 75874304 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415522816 unmapped: 75874304 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415539200 unmapped: 75857920 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415547392 unmapped: 75849728 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415547392 unmapped: 75849728 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415547392 unmapped: 75849728 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415547392 unmapped: 75849728 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415547392 unmapped: 75849728 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415547392 unmapped: 75849728 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415555584 unmapped: 75841536 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415563776 unmapped: 75833344 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415571968 unmapped: 75825152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415571968 unmapped: 75825152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415571968 unmapped: 75825152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415571968 unmapped: 75825152 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415580160 unmapped: 75816960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415580160 unmapped: 75816960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415580160 unmapped: 75816960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415580160 unmapped: 75816960 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415588352 unmapped: 75808768 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415596544 unmapped: 75800576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415596544 unmapped: 75800576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415596544 unmapped: 75800576 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 75792384 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 75792384 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415612928 unmapped: 75784192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a3205000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1a62f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415612928 unmapped: 75784192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 602.574829102s of 604.190185547s, submitted: 414
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415612928 unmapped: 75784192 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415653888 unmapped: 75743232 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [0,1])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415744000 unmapped: 75653120 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415768576 unmapped: 75628544 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415776768 unmapped: 75620352 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415784960 unmapped: 75612160 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415784960 unmapped: 75612160 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415784960 unmapped: 75612160 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 75603968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415793152 unmapped: 75603968 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415801344 unmapped: 75595776 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415801344 unmapped: 75595776 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415801344 unmapped: 75595776 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415801344 unmapped: 75595776 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415809536 unmapped: 75587584 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45305 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 75579392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 75579392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 75579392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 75579392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 75579392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 75579392 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 75571200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 75571200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 75571200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 75571200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 75571200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415825920 unmapped: 75571200 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415834112 unmapped: 75563008 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 75554816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 75554816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 75554816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 75554816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 75554816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 75554816 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 75546624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 75546624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 75546624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415850496 unmapped: 75546624 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 75538432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 75538432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 75538432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 75538432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 75538432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415858688 unmapped: 75538432 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415866880 unmapped: 75530240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415866880 unmapped: 75530240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415866880 unmapped: 75530240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415866880 unmapped: 75530240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415866880 unmapped: 75530240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415866880 unmapped: 75530240 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415875072 unmapped: 75522048 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415883264 unmapped: 75513856 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415883264 unmapped: 75513856 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415883264 unmapped: 75513856 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 75505664 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 75497472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 75497472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 75497472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 75497472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 75497472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415899648 unmapped: 75497472 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 75489280 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 75489280 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 75489280 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415907840 unmapped: 75489280 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 75481088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 75481088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 75481088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 75481088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 75481088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 75481088 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 75472896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 75472896 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 75464704 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 75456512 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 75456512 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 75456512 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 75456512 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 75456512 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 75456512 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 75448320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 75448320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 75448320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 75448320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 75448320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 75448320 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 75440128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 75440128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 75440128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 75440128 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415965184 unmapped: 75431936 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 75423744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 75423744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 75423744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 75423744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 75423744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 75423744 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 75415552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 75415552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 75415552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 75415552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415981568 unmapped: 75415552 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 75407360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415989760 unmapped: 75407360 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 415997952 unmapped: 75399168 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: osd.0 434 heartbeat osd_stat(store_statfs(0x1a2df5000/0x0/0x1bfc00000, data 0x218f6c0/0x23c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1aa3f9c6), peers [1,2] op hist [])
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 75390976 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416014336 unmapped: 75382784 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: bluestore.MempoolThread(0x558e13d95b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441870 data_alloc: 218103808 data_used: 15126528
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416022528 unmapped: 75374592 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config diff' '{prefix=config diff}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config show' '{prefix=config show}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter dump' '{prefix=counter dump}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter schema' '{prefix=counter schema}'
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 75046912 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: prioritycache tune_memory target: 4294967296 mapped: 416350208 unmapped: 75046912 heap: 491397120 old mem: 2845415832 new mem: 2845415832
Jan 31 04:36:35 np0005603621 ceph-osd[84880]: do_command 'log dump' '{prefix=log dump}'
Jan 31 04:36:35 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:35 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 31 04:36:35 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:35.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 31 04:36:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52363 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45323 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42666 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:35 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 31 04:36:35 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268189899' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52384 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42684 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 31 04:36:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743026141' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45335 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:36 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:36 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:36 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:36.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52411 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45353 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 31 04:36:36 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3721288603' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 31 04:36:36 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52438 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42738 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45368 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52459 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:37 np0005603621 nova_compute[247399]: 2026-01-31 09:36:37.356 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42756 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45383 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 31 04:36:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3210994024' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42780 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:37 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:37 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:37 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:37.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:37 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45395 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:37 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 31 04:36:37 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047360054' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4505: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52504 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42807 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:38.150+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45407 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326975859' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 31 04:36:38 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:38 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:38 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:38.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52528 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1741121419' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] scanning for idle connections..
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [volumes INFO mgr_util] cleaning up connections: []
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45419 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2987898052' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Optimize plan auto_2026-01-31_09:36:38
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] do_upmap
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] pools ['default.rgw.log', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data']
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: [balancer INFO root] prepared 0/10 changes
Jan 31 04:36:38 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52546 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 31 04:36:38 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547760860' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124360122' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647365598' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209302913' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45452 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:39 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:39.493+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52579 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:39 np0005603621 ceph-2f5ab832-5f2e-5a84-bd93-cf8bab960ee2-mgr-compute-0-ddmhwk[74685]: 2026-01-31T09:36:39.602+0000 7f6435e43640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:39 np0005603621 ceph-mgr[74689]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 31 04:36:39 np0005603621 systemd[1]: Starting Hostname Service...
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602299944' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 31 04:36:39 np0005603621 systemd[1]: Started Hostname Service.
Jan 31 04:36:39 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:39 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:39 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:39.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 31 04:36:39 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290304083' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 31 04:36:39 np0005603621 nova_compute[247399]: 2026-01-31 09:36:39.900 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:36:40 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4506: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:40 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:40 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 31 04:36:40 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:40.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 31 04:36:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 31 04:36:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3509183412' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 31 04:36:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 31 04:36:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589881867' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 31 04:36:40 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 31 04:36:40 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2529977383' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 31 04:36:40 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42942 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42954 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42960 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:41 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42969 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:41 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:41 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:41 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.42999 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45569 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4507: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 31 04:36:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1926151857' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 31 04:36:42 np0005603621 nova_compute[247399]: 2026-01-31 09:36:42.386 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 31 04:36:42 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:42 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:42 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52705 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43008 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45587 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 31 04:36:42 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1150377185' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45596 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43026 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52717 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:42 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52723 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3275772900' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45608 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 nova_compute[247399]: 2026-01-31 09:36:43.198 247403 DEBUG oslo_service.periodic_task [None req-85be7847-7ccd-4892-a7aa-161de3959a02 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43041 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52741 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649928242' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45623 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #228. Immutable memtables: 0.
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.569138) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 228
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852203569185, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 764, "num_deletes": 250, "total_data_size": 849152, "memory_usage": 862512, "flush_reason": "Manual Compaction"}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #229: started
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852203575879, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 229, "file_size": 626441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98195, "largest_seqno": 98957, "table_properties": {"data_size": 622473, "index_size": 1491, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 12178, "raw_average_key_size": 22, "raw_value_size": 613722, "raw_average_value_size": 1136, "num_data_blocks": 63, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769852163, "oldest_key_time": 1769852163, "file_creation_time": 1769852203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 229, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 6777 microseconds, and 1895 cpu microseconds.
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.575913) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #229: 626441 bytes OK
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.575936) [db/memtable_list.cc:519] [default] Level-0 commit table #229 started
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.578880) [db/memtable_list.cc:722] [default] Level-0 commit table #229: memtable #1 done
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.578894) EVENT_LOG_v1 {"time_micros": 1769852203578890, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.578912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 844778, prev total WAL file size 844778, number of live WAL files 2.
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000225.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.579356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373630' seq:72057594037927935, type:22 .. '6D6772737461740034303131' seq:0, type:0; will stop at (end)
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [229(611KB)], [227(15MB)]
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852203579386, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [229], "files_L6": [227], "score": -1, "input_data_size": 16829417, "oldest_snapshot_seqno": -1}
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43059 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #230: 12665 keys, 13220186 bytes, temperature: kUnknown
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852203716800, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 230, "file_size": 13220186, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13143599, "index_size": 43834, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31685, "raw_key_size": 336649, "raw_average_key_size": 26, "raw_value_size": 12927806, "raw_average_value_size": 1020, "num_data_blocks": 1645, "num_entries": 12665, "num_filter_entries": 12665, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769843985, "oldest_key_time": 0, "file_creation_time": 1769852203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b1c0121d-64f9-45ea-8a37-6ea14a60eed8", "db_session_id": "H7FZQV5I20IRLGUCDO1Y", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52753 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.717116) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 13220186 bytes
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.718848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.4 rd, 96.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 15.5 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(48.0) write-amplify(21.1) OK, records in: 13159, records dropped: 494 output_compression: NoCompression
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.718887) EVENT_LOG_v1 {"time_micros": 1769852203718871, "job": 144, "event": "compaction_finished", "compaction_time_micros": 137501, "compaction_time_cpu_micros": 27162, "output_level": 6, "num_output_files": 1, "total_output_size": 13220186, "num_input_records": 13159, "num_output_records": 12665, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000229.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852203719129, "job": 144, "event": "table_file_deletion", "file_number": 229}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769852203720734, "job": 144, "event": "table_file_deletion", "file_number": 227}
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.579282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.720818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.720824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.720826) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.720829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:43 np0005603621 ceph-mon[74394]: rocksdb: (Original Log Time 2026/01/31-09:36:43.720831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 31 04:36:43 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:43 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:43 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.100 - anonymous [31/Jan/2026:09:36:43.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:43 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45647 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52786 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(cluster) log [DBG] : pgmap v4508: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45662 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52807 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:44 np0005603621 radosgw[94351]: ====== starting new request req=0x7fb0bb49a6f0 =====
Jan 31 04:36:44 np0005603621 radosgw[94351]: ====== req done req=0x7fb0bb49a6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 31 04:36:44 np0005603621 radosgw[94351]: beast: 0x7fb0bb49a6f0: 192.168.122.102 - anonymous [31/Jan/2026:09:36:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 31 04:36:44 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 31 04:36:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1122249837' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45692 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52828 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:44 np0005603621 nova_compute[247399]: 2026-01-31 09:36:44.902 247403 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 31 04:36:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:36:44 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 31 04:36:44 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.43143 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 31 04:36:45 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.45707 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:45 np0005603621 ceph-mgr[74689]: log_channel(audit) log [DBG] : from='client.52849 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 31 04:36:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 31 04:36:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 31 04:36:45 np0005603621 ceph-mon[74394]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 31 04:36:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/736061557' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 31 04:36:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 31 04:36:45 np0005603621 ceph-mon[74394]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
